operand is a signed 32-bit immediate. Unlike with the 8-bit
signed immediate case, it isn't actually smaller to fold a
32-bit signed immediate instead of a load. In fact, it's
larger in the case of 32-bit unsigned immediates, because
they can be materialized with movl instead of movq.
llvm-svn: 67001
if FPConstant is legal because if the FPConstant doesn't need to be stored
in a constant pool, the transformation is unlikely to be profitable.
llvm-svn: 66994
ptrtoint and inttoptr in X86FastISel. These casts aren't always
handled in the generic FastISel code because X86 sometimes needs
custom code to do truncation and zero-extension.
llvm-svn: 66988
large for the testsuite) took over six minutes to compile on my Mac.
The patched LLVM-GCC compiles that testcase in three seconds (GCC
takes less than one second). This hash function is more complex
(about 35 instructions on x86) than what Chris wanted, but I expect it
will be well-behaved with arbitrary inputs.
Thank you to everyone who responded to my previous request for advice.
llvm-svn: 66962
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.
llvm-svn: 66941
changes.
For InvokeInst, all arguments now begin at op_begin().
The Callee, Cont and Fail operands are now faster to get,
being accessed at fixed offsets relative to op_end().
This patch introduces some temporary ugliness in CallSite.
Next I'll bring CallInst up to a similar scheme and then
the ugliness will magically vanish.
This patch also exposes all the places where the libraries
rely on InvokeInst's operand ordering. I am thinking of
taking care of that too.
llvm-svn: 66920
1. ConstantPoolSDNode's alignment field holds the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. MachineConstantPool's alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with actual alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created, but the asm printer groups them by section. That means the offsets are no longer valid, yet the asm printer still uses them to determine the size of the padding between entries.
5. The asm printer uses an expensive data structure, a multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.
Solutions:
1. ConstantPoolSDNode's alignment field is changed to keep the non-log2 value.
2. MachineConstantPool's alignment field is likewise changed to keep the non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it's replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done (see the sketch after this list).
7. Change JIT code to compute entry offsets on the fly.
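For illustration, a standalone sketch of the on-the-fly offset computation
(hypothetical types, not this patch's actual code):

  #include <cstdio>
  #include <vector>

  // Each entry records its size and its byte (non-log2) alignment; offsets
  // are derived by padding as we walk the pool, rather than being cached
  // when the entries are created.
  struct Entry { unsigned Size, Align; };

  int main() {
    std::vector<Entry> Pool = {{4, 4}, {16, 16}, {8, 8}};
    unsigned Offset = 0;
    for (const Entry &E : Pool) {
      Offset = (Offset + E.Align - 1) & ~(E.Align - 1); // pad up to Align
      std::printf("entry at offset %u\n", Offset);
      Offset += E.Size;
    }
    return 0;
  }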
llvm-svn: 66875
for i32/i64 expressions (we could also do i16 on cpus where
i16 lea is fast, but I didn't add this). On the example, we now
generate:
_test:
movl 4(%esp), %eax
cmpl $42, (%eax)
setl %al
movzbl %al, %eax
leal 4(%eax,%eax,8), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
cmpl $41, (%eax)
movl $4, %ecx
movl $13, %eax
cmovg %ecx, %eax
ret
llvm-svn: 66869
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.
llvm-svn: 66865
right; did the wrong thing when there are exactly 11
non-debug instructions, followed by debug info.
Remove a FIXME since it's apparently been fixed along the way.
llvm-svn: 66840
in the Ada testcase. Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block which is
too late... See PR3784.
llvm-svn: 66826
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
is not necessary: Uses corresponding to the successors are at fixed
offset to "this".
The price we pay is slightly more complicated logic in User's
operator delete, as it has to determine whether it must free the
memory of an originally unconditional BranchInst or of a BranchInst
that was created conditional but has since been shortened to
unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
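A minimal standalone sketch of the layout idea (hypothetical types, not
LLVM's actual classes):

  struct BB {};  // stand-in for BasicBlock

  // Successors sit at fixed negative offsets from op_end(), so
  // getSuccessor() is plain pointer arithmetic with no OperandList hop.
  struct Branch {
    void *Ops[3];      // conditional: {Cond, FalseBB, TrueBB}
    unsigned NumOps;   // 3 if conditional, 1 if unconditional
    void **op_end() { return Ops + NumOps; }
    BB *getSuccessor(unsigned i) {
      return static_cast<BB *>(*(op_end() - i - 1)); // successor 0 is last
    }
  };

  int main() {
    BB T, F, C;
    Branch B = {{&C, &F, &T}, 3};
    return B.getSuccessor(0) == &T ? 0 : 1; // successor 0 is the true dest
  }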
llvm-svn: 66815
related transformations out of target-specific dag combine into the
ARM backend. These were added by Evan in r37685 with no testcases
and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).
Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently
with the recently added cp constant select optimization, but is a
very general xform. For example, we now compile the second example
in const-select.ll to:
_test:
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
seta %al
movzbl %al, %eax
movl 4(%esp), %ecx
movsbl (%ecx,%eax,4), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
leal 4(%eax), %ecx
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
cmovbe %eax, %ecx
movsbl (%ecx), %eax
ret
This passes multisource and dejagnu.
llvm-svn: 66779
from a switch table. Multiple table entries that
branch to the same place were being sorted by the
pointer value of the ConstantInt*; changed to sort
by the actual value of the ConstantInt.
llvm-svn: 66749
allocations. Apparently the assumption is that there is an
instruction (terminator?) following the allocation, so I
am making the same assumption.
llvm-svn: 66716
alignment of the generated constant pool entry to the
desired alignment of a type. If we don't do this, we end up
trying to do movsd from 4-byte aligned memory. This fixes
450.soplex and 456.hmmer.
llvm-svn: 66641
1. Use the same value# to represent unknown values being merged into sub-registers.
2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.
llvm-svn: 66610
the untimed version of getOrCreateSourceID. getOrCreateSourceID calls
GetOrCreateSourceID, of course.
- Move some methods into the "private" section. Constify at least one method.
- General clean-ups.
llvm-svn: 66582
scheduled in multiple regions, liveness data used by the
anti-dependence breaker is carried from one region to the next, however
the information reflects the state of the instructions before scheduling.
After scheduling, there may be new live range overlaps. Handle this by
pessimizing the liveness data carried between regions to the point where
it will be conservatively correct no matter how the earlier region is
scheduled. This fixes a miscompilation in 176.gcc with the post-RA
scheduler enabled.
llvm-svn: 66558
to obtain debug info about them.
Introduce helpers to access debug info for global variables. Also introduce a
helper that works for both local and global variables.
llvm-svn: 66541
allocating memory in the JIT. This is insanely inefficient, but
hey, most people implement their own memory managers anyway.
Patch by Eric Yew!
llvm-svn: 66472
whether a global is dead or not. This should fix PR3749 - linker adds
spurious use to appending globals. I can't reasonably add a testcase
for this, because the bc writer/reader strip dead constant users.
llvm-svn: 66404
validate an invariant so that the asmparser, rather than the
verifier, rejects a bad construct. Before:
llvm-as: assembly parsed, but does not verify as correct!
Invalid struct return type!
i64 (%struct.Type*, %struct.Type*)* @foo
after:
llvm-as: t.ll:5:8: functions with 'sret' argument must return void
define i64 @foo(%struct.Type* noalias nocapture sret %agg.result, %struct.Type* nocapture byval %t) nounwind {
^
Second, check that void is only used where allowed (in function return types), not in
arbitrary places, fixing PR3747 - Crash in llvm-as with void field in struct. We
now reject that example with:
$ llvm-as t.ll
llvm-as: t.ll:1:12: struct element can not have void type
%x = type {void}
^
llvm-svn: 66394
by checking that the top-level type of a gep is sized. This
causes us to reject the example with:
llvm-as: t2.ll:2:16: invalid getelementptr indices
getelementptr i32()* null, i32 1
^
llvm-svn: 66393
and extern_weak_odr. These are the same as the non-odr versions,
except that they indicate that the global will only be overridden
by an *equivalent* global. In C, a function with weak linkage can
be overridden by a function which behaves completely differently.
This means that IP passes have to skip weak functions, since any
deductions made from the function definition might be wrong, since
the definition could be replaced by something completely different
at link time. This is not allowed in C++, thanks to the ODR
(One-Definition-Rule): if a function is replaced by another at
link-time, then the new function must be the same as the original
function. If a language knows that a function or other global can
only be overridden by an equivalent global, it can give it the
weak_odr linkage type, and the optimizers will understand that it
is alright to make deductions based on the function body. The
code generators on the other hand map weak and weak_odr linkage
to the same thing.
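For example (an illustration, not from this commit), C++ source that a
compiler will typically give weak_odr linkage, since every copy of the
symbol that other translation units emit must be equivalent:

  // The linker may replace this explicit instantiation with another object
  // file's copy, but the ODR guarantees the replacement is equivalent, so
  // optimizers may still reason from this body.
  template <typename T> T square(T x) { return x * x; }
  template int square<int>(int);  // typically lowered as weak_odr

  int main() { return square(7) == 49 ? 0 : 1; }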
llvm-svn: 66339
signal handlers to prevent reentrance on unrelated things (e.g. a
sigabort arriving while we handle bus errors). Also, clear the signal
mask so that the signal doesn't infinitely reissue. This fixes rdar://6654827 -
Crash causes clang to loop
llvm-svn: 66330
1. When the JIT is asked to remove a function, updating its
mapping to 0, we invalidate any function stubs used only
by that function. Now, also invalidate the JIT's mapping
from the GV the stub pointed to, to the address of the GV.
2. When dlsym stubs for cross-process JIT are enabled, do not
abort just because a named function cannot be found in the
JIT's process.
3. Fix various assumptions about when it is ok to use the lazy
resolver when non-lazy JITing is enabled.
llvm-svn: 66324
the same way the "test" instruction does in overflow cases,
so eliminating the test is only safe when those bits aren't
needed, as is the case for COND_E and COND_NE, or if it
can be proven that no overflow will occur. For now, just
restrict the optimization to COND_E and COND_NE and don't
do any overflow analysis.
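An illustration (standalone, not this patch's code) of the overflow case
that makes the non-ZF conditions unsafe:

  #include <cstdint>
  #include <cstdio>

  int main() {
    // a == b always agrees with (a - b) == 0, which is why COND_E and
    // COND_NE are safe, but a < b disagrees with (a - b) < 0 once the
    // subtraction overflows (wrapped in unsigned math to stay defined).
    int32_t a = INT32_MIN, b = 1;
    int32_t diff = (int32_t)((uint32_t)a - (uint32_t)b);
    std::printf("%d %d\n", a < b, diff < 0); // prints: 1 0
  }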
llvm-svn: 66318
to find a tiny mouse hole to squeeze through, it struck
me that globals without a name can be considered internal
since they can't be referenced from outside the current
module. This patch makes GlobalOpt give them internal
linkage. Also done for aliases even though they always
have names, since in my opinion anonymous aliases should
be allowed for consistency with global variables and
functions. So if that happens one day, this code is ready!
llvm-svn: 66267
get nice and happy stack traces when we crash in an optimizer or codegen. For
example, an abort put in UnswitchLoops now looks like this:
Stack dump:
0. Program arguments: clang pr3399.c -S -O3
1. <eof> parser at end of file
2. per-module optimization passes
3. Running pass 'CallGraph Pass Manager' on module 'pr3399.c'.
4. Running pass 'Loop Pass Manager' on function '@foo'
5. Running pass 'Unswitch loops' on basic block '%for.inc'
Abort
llvm-svn: 66260
with multiple chain operands. This can occur when the scheduler
has added chain operands to a node that already has a chain
operand, in order to handle physical register dependencies.
This fixes an llvm-gcc bootstrap failure on x86-64 introduced
in r66058.
llvm-svn: 66240
INC64_32r and INC64_16r, because these instructions are encoded
differently on x86-64. This fixes JIT regressions on x86-64 in
kimwitu++ and others.
llvm-svn: 66207
If a non-constant local GV named A is used by a constant local GV named B (e.g. llvm.dbg.variable), and B is not used by anyone else, then eliminate A as well as B.
In other words, debug info should not interfere with the removal of unused GVs.
llvm-svn: 66167
feels a kinship to machine stacks that grow down. Now we get
stuff like this:
Stack dump:
0. Program arguments: clang clang_crash_Iw2Osj.mi
1. /Developer/SDKs/MacOSX10.5.sdk/usr/lib/gcc/i686-apple-darwin9/4.0.1/include/xmmintrin.h:624:1: parsing function body '_mm_cvtpi16_ps'
2. /Developer/SDKs/MacOSX10.5.sdk/usr/lib/gcc/i686-apple-darwin9/4.0.1/include/xmmintrin.h:624:1: in compound statement ('{}')
Abort
llvm-svn: 66145
This invalidates the stubs in the resolver map when they are no longer referenced,
and should the JIT memory manager ever pick up a deallocateStub interface, the
JIT could reclaim the memory for unused stubs as well.
llvm-svn: 66141
arbitrary functions to be run when a crash happens. Delete
RemoveDirectoryOnSignal as it is dead and has never had clients.
Change PrintStackTraceOnErrorSignal to be implemented in terms of
AddSignalHandler.
I updated the Win32 versions of these APIs, but can't test them.
If there are any problems, I'd be happy to fix them as well.
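A usage sketch, assuming the 2009-era header location and an
AddSignalHandler(void (*)(void *), void *) signature (not spelled out in
this message):

  #include "llvm/System/Signals.h" // assumed path; later llvm/Support/
  #include <cstdio>

  // Called only when a crash signal is delivered; Cookie is caller data.
  static void onCrash(void *Cookie) {
    std::fprintf(stderr, "crashed in %s\n",
                 static_cast<const char *>(Cookie));
  }

  int main() {
    llvm::sys::AddSignalHandler(onCrash, (void *)"my-tool");
    // ... normal work; the handler fires only on a crash ...
    return 0;
  }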
llvm-svn: 66072
of MachineInstr def operands must be subtracted out. This bug
was uncovered by the recent x86 EFLAGS optimization. Before
that, the only instructions that ever needed unfolding were
things like CMP32rm, where NumDefs is zero.
llvm-svn: 66056
on failure to resolve it.
Do not abort on failure to resolve an external symbol when using dlsym stubs,
since the symbol may not be in the JIT's address space. Just use 0.
Allow dlsym stubs to differentiate between GlobalVars and Functions.
llvm-svn: 66050
so it changed it into a 31 via the TLO.ShrinkDemandedConstant() call. Then it
would go through the DAG combiner again. This time it had a value of 31, which
was turned into a -1 by TLI.SimplifyDemandedBits(). This would ping-pong
forever.
Teach the TLO.ShrinkDemandedConstant() call not to lower a value if the demanded
value is an XOR of all ones.
llvm-svn: 65985
use, check also for the case where it has two uses,
the other being a llvm.dbg.declare. This is needed so
debug info doesn't affect codegen.
llvm-svn: 65970
arbitrary vector sizes. Add an optional MinSplatBits parameter to specify
a minimum for the splat element size. Update the PPC target to use the
revised interface.
llvm-svn: 65899
Move the code from 'llvmc/driver' into a new CompilerDriver library, and change
the build system accordingly. Makes it easier for projects using LLVM to build
their own llvmc-based drivers.
Tested with objdir != srcdir.
llvm-svn: 65821
extracts + build_vector into a shuffle would fail, because the
type of the new build_vector would not be legal. Try harder to
create a legal build_vector type. Note: this will be totally
irrelevant once vector_shuffle no longer takes a build_vector for
shuffle mask.
New:
_foo:
xorps %xmm0, %xmm0
xorps %xmm1, %xmm1
subps %xmm1, %xmm1
mulps %xmm0, %xmm1
addps %xmm0, %xmm1
movaps %xmm1, 0
Old:
_foo:
xorps %xmm0, %xmm0
movss %xmm0, %xmm1
xorps %xmm2, %xmm2
unpcklps %xmm1, %xmm2
pshufd $80, %xmm1, %xmm1
unpcklps %xmm1, %xmm2
pslldq $16, %xmm2
pshufd $57, %xmm2, %xmm1
subps %xmm0, %xmm1
mulps %xmm0, %xmm1
addps %xmm0, %xmm1
movaps %xmm1, 0
llvm-svn: 65791
its sentinel. This is quite a win when a function really has a basic block.
When the function is just a declaration (and stays so) the old way did not
allocate a sentinel. So this change is most beneficial when the ratio of
function definition to declaration is high. I.e. linkers etc. Incidentally
these are the most resource demanding applications, so I expect that the
reduced malloc traffic, locality and space savings outweigh the cost of
addition of two pointers to Function.
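One plausible reading is that the sentinel is now embedded in the Function
object itself; a rough standalone sketch under that assumption
(hypothetical types):

  // An embedded sentinel costs two pointers per Function up front, but a
  // definition no longer needs a separate allocation for its block list;
  // a declaration simply never links anything into it.
  struct NodeBase { NodeBase *Prev, *Next; };

  struct FunctionStub {
    NodeBase Sentinel; // embedded instead of heap-allocated
    FunctionStub() { Sentinel.Prev = Sentinel.Next = &Sentinel; }
  };

  int main() {
    FunctionStub F; // empty list: sentinel points at itself
    return F.Sentinel.Next == &F.Sentinel ? 0 : 1;
  }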
llvm-svn: 65776
testsuite:
Running /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore/test/CodeGen/X86/dg.exp ...
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore/test/CodeGen/X86/nancvt.ll
Failed with exit(1) at line 2
while running: grep 2147027116 nancvt.ll.tmp | count 3
count: expected 3 lines and got 0.
child process exited abnormally
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore/test/CodeGen/X86/vec_ins_extract.ll
Failed with exit(1) at line 1
while running: llvm-as < /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore/test/CodeGen/X86/vec_ins_extract.ll | opt -scalarrepl -instcombine | llc -march=x86 -mcpu=yonah | not /usr/bin/grep sub.*esp
subl $28, %esp
subl $28, %esp
child process exited abnormally
And more.
llvm-svn: 65758
stripped .bc file, it didn't make any attempt to try to reuse anonymous types.
This causes an amazing type explosion due to types getting duplicated everywhere
they are referenced and other problems.
This also caused correctness issues, because opaque types are unique for each time
they are uttered in the file. This means that stripping a .bc file could produce
a .ll file that could not be assembled (e.g. 2009-02-28-StripOpaqueName.ll).
This patch fixes both of these issues.
llvm-svn: 65738
This looks dangerous, but isn't, because the sentinel is accessed only in a special way,
namely through its Next and Prev fields, and these are guaranteed to exist.
llvm-svn: 65626
overly long ints, e.g. i96, into pieces at PHIs
and the nodes that feed into them; however, big-endian
targets reverse the order of the pieces (for some reason), and
the code wasn't doing it the same way on both sides, so
the pieces didn't match and runtime failures ensued.
Fixes 188.ammp and sqlite3 on ppc32.
llvm-svn: 65481
to more accurately describe what it does. Expand its doxygen comment
to describe what the backedge-taken count is and how it differs
from the actual iteration count of the loop. Adjust names and
comments in associated code accordingly.
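A concrete example of the distinction (an illustration, not LLVM code):

  void zero(int *a, int n) {
    // For n > 0 the body runs n times (the iteration count), but the
    // branch back to the loop header is taken only n - 1 times (the
    // backedge-taken count); the two differ by exactly one.
    for (int i = 0; i < n; ++i)
      a[i] = 0;
  }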
llvm-svn: 65382
them are generic changes.
- Use the "fast" flag that's already being passed into the asm printers instead
of shoving it into the DwarfWriter.
- Instead of calling "MI->getParent()->getParent()" for every MI, set the
machine function when calling "runOnMachineFunction" in the asm printers.
llvm-svn: 65379
a DBG_LABEL or not. We want to fall back to the original way of emitting debug
info when we're in -O0/-fast mode.
- Add plumbing in to pass the "Fast" flag to places that need it.
- XFAIL DebugInfo/deaddebuglabel.ll. This is finding 11 labels instead of 8. I
still need to investigate.
llvm-svn: 65367
trip counts that use signed comparisons. It's not obviously the best
approach for preserving trip count information, and at any rate there
isn't anything in the tree right now that makes use of that, so for
now always using zero-extensions is preferable.
llvm-svn: 65347
so that ScalarEvolution doesn't hang onto a dangling Loop*, which
could be a problem if another Loop happens to get allocated at the
same address.
llvm-svn: 65323
instruction. The class also consolidates the code for detecting constant
splats that's shared across PowerPC and the CellSPU backends (and might be
useful for other backends). Also introduces SelectionDAG::getBUILD_VECTOR() for
generating new BUILD_VECTOR nodes.
llvm-svn: 65296
memcpy to match the alignment of the destination. It isn't necessary
for getting loads and stores handled like the SSE loadu/storeu
intrinsics, and it was causing a performance regression in
MultiSource/Applications/JM/lencod.
The problem appears to have been a memcpy that copies from some
highly aligned array into an alloca; the alloca was then being
assigned a large alignment, which required codegen to perform
dynamic stack-pointer re-alignment, which forced the enclosing
function to have a frame pointer, which led to increased spilling.
llvm-svn: 65289
Now we're using one gross, but quite robust hack :) (previous ones
did not work, for example, when an ext_weak symbol was used deep inside
a constant expression in the initializer).
The proper fix for this problem will require some quite huge asmprinter
changes, and that's why it was postponed. This fixes PR3629 by the way :)
llvm-svn: 65230
as legality. Make load sinking and gep sinking more careful: we only
do it when it won't pessimize loads from the stack. This has the added
benefit of not producing code that is unanalyzable to SROA.
llvm-svn: 65209
addresses, part 1. This fixes an obvious logic bug. Previously if the only
in-loop use is a PHI, it would return AllUsesAreAddresses as true.
llvm-svn: 65178
Currently this pass deletes the variable declaration info
and keeps the line number info. But the kept line number info is not updated,
and some of it is redundant or incorrect; this patch just updates that info.
llvm-svn: 65123
Ideally these would never get created in the first place, but until we enhance the spiller to have a more
global picture of what's happening, this is necessary for code quality in some circumstances.
llvm-svn: 65120
reduction of address calculations down to basic pointer arithmetic.
This is currently off by default, as it needs a few other features
before it becomes generally useful. And even when enabled, full
strength reduction is only performed when it doesn't increase
register pressure, and when several other conditions are true.
This also factors a bunch of existing LSR code out of
StrengthReduceStridedIVUsers into separate functions, and tidies
up IV insertion. This actually decreases register pressure even
in non-superhero mode. The change in iv-users-in-other-loops.ll
is an example of this; there are two more adds because there are
two fewer leas, and there is less spilling.
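A source-level illustration of what full strength reduction of addressing
means (not this patch's code):

  void scale(float *a, int n) {
    // Instead of recomputing the address a + i (base + 4*i) on every
    // iteration, the calculation is reduced to one pointer that simply
    // advances by the stride.
    for (float *p = a, *e = a + n; p != e; ++p)
      *p *= 2.0f;
  }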
llvm-svn: 65108
symlink. We really want the ultimate executable being run, not
the symlink. This lets clang find its headers when invoked through
a symlink. rdar://6602012
llvm-svn: 65017
here. Since we only do the transform if there is
one use, strip off any such users in the hope of
making the transform fire more often.
llvm-svn: 64926
trip count value when the original loop iteration condition is
signed and the canonical induction variable won't undergo signed
overflow. This isn't required for correctness; it just preserves
more information about original loop iteration values.
Add a getTruncateOrSignExtend method to ScalarEvolution,
following getTruncateOrZeroExtend.
llvm-svn: 64918
that has not been JIT'd yet, the callee is put on a list of pending functions
to JIT. The call is directed through a stub, which is updated with the address
of the function after it has been JIT'd. A new interface for allocating and
updating empty stubs is provided.
Add support for removing the ModuleProvider the JIT was created with, which
would otherwise invalidate the JIT's PassManager, which is initialized with the
ModuleProvider's Module.
Add support under a new ExecutionEngine flag for emitting the infomration
necessary to update Function and GlobalVariable stubs after JITing them, by
recording the address of the stub and the name of the GlobalValue. This allows
code to be copied from one address space to another, where libraries may live
at different virtual addresses, and have the stubs updated with their new
correct target addresses.
llvm-svn: 64906
are multiple IVs in a loop, some of them may undergo signed
or unsigned wrapping even if the IV that's used in the loop
exit condition doesn't. Restrict sign-extension-elimination
and zero-extension-elimination to only those that operate on
the original loop-controlling IV.
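For instance (illustration only):

  #include <cstdint>

  int sum(int n) {
    // The loop-controlling IV i never wraps, but the secondary 8-bit IV j
    // wraps every 256 iterations, so extensions of j cannot be eliminated
    // the way extensions of i can.
    int s = 0;
    uint8_t j = 0;
    for (int i = 0; i < n; ++i, ++j)
      s += j;
    return s;
  }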
llvm-svn: 64866
(Note: Eventually, commits like this will be handled via a pre-commit hook that
does this automagically, as well as expand tabs to spaces and look for 80-col
violations.)
llvm-svn: 64827
modified in a way that may affect the trip count calculation. Change
IndVars to use this method when it rewrites pointer or floating-point
induction variables instead of using a doInitialization method to
sneak these changes in before ScalarEvolution has a chance to see
the loop. This eliminates the need for LoopPass to depend on
ScalarEvolution.
llvm-svn: 64810
Enable debug location generation at -Os. This goes with the reapplication of the
r63639 patch.
llvm-svn: 64715
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.
llvm-svn: 64623
one bit set, because the bit may be shifted off the end. Instead,
just check for a constant 1 being shifted. This is still sufficient
to handle all the cases in test/CodeGen/X86/bt.ll. This fixes PR3583.
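The pattern that is still recognized, in source form (an illustration):

  // Test a single bit by shifting a constant 1; codegen can select BT for
  // this without having to prove anything about an arbitrary shifted value.
  bool isBitSet(unsigned x, unsigned n) { return (x & (1u << n)) != 0; }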
llvm-svn: 64622
when I was looking at functions used by python.
Highlights include better largefile support (64-bit file sizes on 32-bit
systems), fputs string is nocapture, popen/pclose added (popen being noalias
return), modf and frexp and friends. Also added some missing 'break' statements
and combined identical sections.
llvm-svn: 64615