This makes it possible for a def_iterator to stop scanning at the first
use, which makes def_empty() and getUniqueVRegDef() much faster when a
register has many uses.
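As a rough sketch of the idea (hypothetical types and names, not the
actual MachineRegisterInfo code): if the defs are kept at the front of a
register's use-def chain, an emptiness query only has to look at the
first entry instead of walking every use.

  // Illustration only; Operand and the def_empty_* helpers are made-up
  // stand-ins for the real use-def chain.
  struct Operand {
    bool IsDef;      // true if this operand writes the register
    Operand *Next;   // next operand referencing the same register
  };

  // Old behavior: walk the entire chain looking for a def.
  static bool def_empty_slow(const Operand *Head) {
    for (const Operand *O = Head; O; O = O->Next)
      if (O->IsDef)
        return false;
    return true;
  }

  // With defs sorted to the front, the query can stop at the first use:
  // if the head operand is not a def, no later operand is either.
  static bool def_empty_fast(const Operand *Head) {
    return Head == nullptr || !Head->IsDef;
  }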
In a +Asserts build, LiveVariables is 100x faster in one case because
getVRegDef() has an assertion that would scan to the end of a
def_iterator chain.
Spill weight calculation is significantly faster (300x in one case)
because isTriviallyReMaterializable() calls MRI->isConstantPhysReg(%RIP),
which in turn calls def_empty(%RIP).
llvm-svn: 161634
The xorps instruction has a smaller encoding than pxor (it does not need
the 0x66 operand-size prefix), so prefer that encoding.
The ExecutionDepsFix pass will switch the encoding to pxor or xorpd
when appropriate.
llvm-svn: 143996
Instcombine does this, but apparently there are situations where this
pattern can escape the optimizer and/or be created by isel. Here is a
case seen in JavaScriptCore:
%t1 = sub i32 0, %a
%t2 = add i32 %t1, -1
The dag combiner pattern ((c1-A)+c2) -> (c1+c2)-A
will fold it to -1 - %a (here c1 = 0 and c2 = -1, so (0 - %a) + -1
becomes (0 + -1) - %a).
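Written out as IR (hand-constructed here for illustration, not actual
compiler output), the folded form is a single subtract from the constant:
%t2 = sub i32 -1, %a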
llvm-svn: 93773
allows matching and remembering a string and then matching and
verifying that the string occurs later in the file.
Change X86/xor.ll to use this in some cases where the test was
checking for an arbitrary register allocation decision.
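As an illustrative sketch of the new syntax (hand-written here, not
copied from the actual test), a variable captures a register the first
time it appears and then requires the same register on a later line:
; CHECK: notl [[REG:%[a-z]+]]
; CHECK: andl {{.*}}[[REG]]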
llvm-svn: 82891
regex and matching it instead of trying to match chunks at a time.
Matching chunks at a time broke with check lines like
CHECK: foo {{.*}}bar
because the .* would eat the entire rest of the line and bar would
never match.
Now we just escape the fixed strings for the user, so that something
like:
CHECK: a() {{.*}}???
is matched as:
CHECK: {{a\(\) .*\?\?\?}}
transparently "under the covers".
llvm-svn: 82779