register coalescing. This fixes many crashes and
places where debug info affects codegen (when
dbg.value is lowered to machine instructions, which
it isn't yet in TOT).
llvm-svn: 95739
The major win of this is that the code is simpler and the comments
print on the same line as the instruction again:
movl %eax, 96(%esp) ## 4-byte Spill
movl 96(%esp), %eax ## 4-byte Reload
cmpl 92(%esp), %eax ## 4-byte Folded Reload
jl LBB7_86
llvm-svn: 95738
Enhance the x86 backend to show the hex values of immediates in
comments when they are large. For example:
movl $1072693248, 4(%esp) ## imm = 0x3FF00000
llvm-svn: 95728
Move some utility TableGen defs, classes, etc. into a common file so
they may be used by multiple pattern files. We will use this for
the AVX specification to help with the transition from the current
SSE specification.
llvm-svn: 95727
in X86-32 mode. This is still required in x86-64 mode to avoid
forming a [disp+rip] encoding. Rewrite the SIB byte decision logic
to be actually understandable.
llvm-svn: 95693
into TargetOpcodes.h. #include the new TargetOpcodes.h
into MachineInstr. Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the
codebase.
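As an illustration of the accessor style (the body shown here is the obvious
one-liner, not necessarily the exact code added):

  // MachineInstr.h: query the target-independent PHI opcode directly,
  // replacing open-coded getOpcode() == TargetOpcode::PHI checks.
  bool isPHI() const { return getOpcode() == TargetOpcode::PHI; }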
llvm-svn: 95687
tMOVCCi pattern only valid for low registers, as the Thumb1 mov immediate to
register instruction only works with low registers. Allowing high registers
for the instruction resulted in the assembler choosing the wide (32-bit)
encoding for the mov, but LLVM thought the instruction was only 16 bits wide,
so offset calculations for constant pools became incorrect, leading to
out of range constant pool entries.
llvm-svn: 95686
Previously, spill registers, whose def indexes are not defined, would sometimes
be improperly marked as coalescable with conflicting registers. The new
findCoalesces routine conservatively assumes that any register with at least
one undefined def is not coalescable with any register it interferes with.
llvm-svn: 95636
Initial skeleton and SCEVUnknown lowering implemented; the rest
should come relatively quickly. Move testcase
to new directory.
Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.
llvm-svn: 95628
Document that MCExpr::Div, Mod, and the comparison operators are all
signed operators.
Document that the comparison operators' results are target-dependent.
Document that the behavior of shr is target-dependent.
llvm-svn: 95619
in global initializers. Instead of aborting, attempt to fold them on the
spot. If folding succeeds, emit the folded expression instead.
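A minimal sketch of the fallback, assuming the standard folding entry point
and re-entering the existing emission path (variable names and the exact call
site are illustrative, not verbatim from the patch):

  // Attempt to fold the unsupported constant expression before bailing out.
  if (Constant *Folded = ConstantFoldConstantExpression(CE, TD))
    if (Folded != CE)
      return EmitGlobalConstant(Folded);  // emit the folded form instead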
This fixes PR6255.
llvm-svn: 95583
warns about this base class not having a virtual destructor, but since
this class has no virtual methods and neither it nor the types derived
from it have a destructor, a protected trivial destructor will do the
trick (and shuts cppcheck up) without the cost of introducing a vtable.
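The pattern in question, shown as a generic illustration rather than the class
from this commit:

  class Base {
    // No virtual methods, so no vtable is needed.  A protected,
    // non-virtual destructor still prevents deletion through a
    // pointer to Base, which is what the warning is really about.
  protected:
    ~Base() {}
  };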
llvm-svn: 95526
- o32 cc must pass all arguments in A0...A3 and on the stack regardless
of their type (but respecting the alignment).
- Store all variable arguments back to the caller's stack.
llvm-svn: 95500
only run for x86 with fastisel. I've found it to be very effective in
eliminating some obvious dead code resulting from formal parameter lowering,
especially when tail call optimization eliminated the need for some of the loads
from fixed frame objects. It also shrinks a number of the tests. A couple of
tests no longer make sense and are now eliminated.
llvm-svn: 95493
This time it's for real! I am going to hook this up in the frontends as well.
The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.
We need some experiments to determine if that is the right thing to do.
llvm-svn: 95466
xform it is checking to actually pass. There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).
Add matches for sext(not(x)) in addition to not(sext(x)).
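For reference, the two shapes now being matched look roughly like this with
the PatternMatch helpers (a sketch of the shape check only, not the actual
instcombine code):

  #include "llvm/Support/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  static bool isSExtOfComplement(Value *V, Value *&X) {
    return match(V, m_Not(m_SExt(m_Value(X)))) ||   // not(sext(x))
           match(V, m_SExt(m_Not(m_Value(X))));     // sext(not(x))
  }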
llvm-svn: 95420
llc.cpp also defined these flags, meaning that when I linked all of LLVM's
libraries into a single shared library, llc crashed on startup with duplicate
flag definitions. This patch passes them through the EngineBuilder into
JIT::selectTarget().
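A rough sketch of the intended usage in llc-like tools (the setter names and
the M/MArch/MCPU/MAttrs variables are assumptions for illustration, not
verbatim from the patch):

  // The tool keeps its own -march/-mcpu/-mattr option values and hands
  // them to the builder, instead of defining global cl::opt flags that
  // clash with llc's.
  llvm::ExecutionEngine *EE = llvm::EngineBuilder(M)
                                  .setEngineKind(llvm::EngineKind::JIT)
                                  .setMArch(MArch)
                                  .setMCPU(MCPU)
                                  .setMAttrs(MAttrs)
                                  .create();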
llvm-svn: 95390
following it. However, the EmitGlobalConstant method wasn't emitting a body for
the constant. The assembler doesn't like that. Before, we were generating this:
.zerofill __DATA, __common, __cmd, 1, 3
This fix puts us back to that semantic.
llvm-svn: 95336
short-circuited conditions to AND/OR expressions, and those expressions
are often converted back to a short-circuited form in code gen. The
original source order may have been optimized to take advantage of the
expected values, and if we reassociate them, we change the order and
subvert that optimization. Radar 7497329.
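An illustrative C example (names invented) of the kind of code this protects:

  // The programmer ordered the tests so the cheap, usually-false check
  // guards the expensive one.  Reassociating the equivalent AND expression
  // can swap the operands, and code gen may then re-expand it with
  // expensive_check() evaluated first.
  if (rarely_true(x) && expensive_check(x))
    do_work(x);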
llvm-svn: 95333
Instruction selection for X86 can now choose an instruction
sequence that will fit any address of any symbol, no matter
the pointer width. X86-64 uses a mov+call-via-reg sequence
for this.
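For a callee whose address may not fit in 32 bits, the emitted code is roughly
of the form (register choice and symbol name illustrative):

  movabsq $_callee, %r11
  callq   *%r11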
llvm-svn: 95323
This makes the inliner about as aggressive as it was before my changes to the
inliner cost calculations. These levels give the same performance and slightly
smaller code than before.
llvm-svn: 95320
"Attached patch removes the extra NUL bytes from the output and changes
test/Archive/MacOSX.toc from a binary to a text file (removes
svn:mime-type=application/octet-stream and adds svn:eol-style=native). I can't
figure out how to get SVN to include the new contents of the file in the patch
so I'm attaching it separately."
Patch by James Abbatiello!
llvm-svn: 95292
Fix bugs where we would compute out of bounds as in bounds, and where
we couldn't know that the linker could override the size of an array.
Add a few new testcases, change existing testcase to use a private
global array instead of extern.
llvm-svn: 95283
Lock prefix, Repeat string operation prefixes and the Segment override prefixes.
Also added versions of the move string and store string instructions without the
repeat prefixes to X86InstrInfo.td. And finally marked the rep versions of
move/store string records in X86InstrInfo.td as isCodeGenOnly = 1 so tblgen is
happy building the disassembler files.
llvm-svn: 95252
1-argument ExecutionEngine::create(Module*) ambiguous with the signature that
used to be ExecutionEngine::create(ModuleProvider*, defaulted_params). Fixed
by removing the 1-argument create(). Fixes PR6221.
llvm-svn: 95236
The SRThreshold value makes perfect sense for checking if an entire aggregate
should be promoted to a scalar integer, but it is not so good for splitting
an aggregate into its separate elements. A struct may contain a large embedded
array along with some scalar fields that would benefit from being split apart
by SROA. Even if the total aggregate size is large, it may still be good to
perform SROA. Thus, the most important piece of this patch is simply moving
the aggregate size comparison vs. SRThreshold so that it guards only the
aggregate promotion.
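As a concrete (invented) example of the case this enables:

  // sizeof(S) is far above SRThreshold because of the embedded array,
  // but the scalar fields i and f still benefit from being split out.
  struct S {
    int   i;
    float f;
    char  buf[4096];
  };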
We have also been checking the number of elements to decide if an aggregate
should be split up. The limit of "SRThreshold/4" seemed rather arbitrary,
and I don't think it's very useful to derive this limit from SRThreshold
anyway. I've collected some data showing that the current default limit of
32 (since SRThreshold defaults to 128) is a reasonable cutoff for struct
types. One thing suggested by the data is that distinguishing between structs
and arrays might be useful. There are (obviously) a lot more large arrays
than large structs (as measured by the number of elements and not the total
size -- a large array inside a struct still counts as a single element given
the way we do SROA right now). Out of 8377 arrays where we successfully
performed SROA while compiling a large set of benchmarks, only 16 of them had
more than 8 elements. And, for those 16 arrays, it's not at all clear that
SROA was actually beneficial. So, to offset the compile time cost of
investigating more large structs for SROA, the patch lowers the limit on array
elements to 8.
This fixes Apple Radar 7563690.
llvm-svn: 95224