This was only needed to locate llvm-gcc's installation directory when clang
falls back to run llvm-gcc for i386 kexts. As of clang svn r140187, we're
now just searching paths with several different Darwin versions on either
side of the current version, so this is no longer needed.
llvm-svn: 140188
extract its associated landing pad block as well. However, that landing pad
block may have more than one predecessor. So split the landing pad block so that
individual landing pads have only one predecessor.
This type of transformation may produce a false positive with bugpoint.
llvm-svn: 140173
No functionality change. The hook makes it explicit which patterns
require "special" handling, i.e. it self-documents tblgen
deficiencies. I plan to add verification in ExpandISelPseudos and
Thumb2SizeReduce to catch any missing hasPostISelHooks. Otherwise it's
too fragile.
llvm-svn: 140160
Modified ARMISelLowering::AdjustInstrPostInstrSelection to handle the
full gamut of CPSR defs/uses, including instructions whose "optional"
cc_out operand is not really optional. This allowed removal of the
hasPostISelHook to simplify the .td files and make the implementation
more robust.
Fixes rdar://10137436: sqlite3 miscompile
llvm-svn: 140134
extract the landing pad block. Otherwise, there will be a situation where the
invoke's unwind edge lands on a non-landing pad.
We also forbid the user from extracting the landing pad block by itself. Again,
this is not a valid transformation.
llvm-svn: 140083
yet legal according to comments in LegalizeDAG.cpp:227.
Memcpy nodes created for copying byval arguments are inserted before
CALLSEQ_START.
The two failing tests reported in PR10876 pass after applying this patch.
llvm-svn: 140046
dag-combine optimization to implement the ext-load efficiently (using shuffles).
For example the type <4 x i8> is stored in memory as i32, but it needs to
find its way into a <4 x i32> register. Previously we scalarized the memory
access, now we use shuffles.
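As a rough scalar model of what the shuffle-based lowering buys (an illustrative sketch, not the actual dag-combine code; extLoad4xi8 is an invented name, and little-endian lane order is assumed):

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  // The whole <4 x i8> vector is fetched with a single i32 load and the
  // lanes are unpacked afterwards, instead of four separate byte loads.
  void extLoad4xi8(const uint8_t *Mem, uint32_t Out[4]) {
    uint32_t Word;
    std::memcpy(&Word, Mem, 4);  // one i32 load of the whole vector
    for (int i = 0; i < 4; ++i)  // the shuffles play this unpacking role
      Out[i] = (Word >> (8 * i)) & 0xFF;
  }

  int main() {
    const uint8_t Mem[4] = {1, 2, 3, 4};
    uint32_t Out[4];
    extLoad4xi8(Mem, Out);
    assert(Out[0] == 1 && Out[3] == 4);
  }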
llvm-svn: 139995
maxps and maxpd). This broke the sse41-blend.ll testcase by causing
maxpd to be produced rather than a cmp+blend pair, which is the reason
I tweaked it. Gives a small speedup on doduc with dragonegg when the
GCC vectorizer is used.
llvm-svn: 139986
are declared with load patterns. This fixes the crash in PR10941. No testcases,
since a fold is triggered and then converted back to the register form
afterwards.
llvm-svn: 139953
This PR basically reports a problem where a crash in generated code
happened due to %rbp being clobbered:
pushq %rbp
movq %rsp, %rbp
....
vmovmskps %ymm12, %ebp
....
movq %rbp, %rsp
popq %rbp
ret
Since Eric's r123367 commit, the default stack alignment for 32-bit x86
changed to 16 bytes. Since then, the MaxStackAlignmentHeuristicPass
hasn't really been used, but with AVX it becomes useful again, since for
ABI compliance we don't always align the stack to 256 bits, but only
when there are 256-bit incoming arguments.
ReserveFP was only used by this pass, but there's no RA target hook that
uses getReserveFP() to check for the presence of FP (since nothing was
triggering the pass to run, the uses of getReserveFP() were removed
over time without being noticed). Change this pass to use
setForceFramePointer, which is properly checked by the target's
hasFP method.
The testcase is very big and dependent on RA, not sure if it's worth
adding to test/CodeGen/X86.
llvm-svn: 139939
The leaveIntvAfter() function normally inserts a back-copy after the
requested instruction, making the back-copy kill the live range.
In spill mode, try to insert the back-copy before the last use instead.
That means the last use becomes the kill instead of the back-copy. This
lowers the register pressure because the last use can now redefine the
same register it was reading.
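Abstracting away SplitKit's data structures, the placement choice is roughly this (a sketch; the index-based model and names are invented):

  #include <cassert>
  #include <vector>

  // Positions are instruction indices within a block; Uses holds the
  // indices of instructions that read the value, in ascending order.
  unsigned backCopyInsertPoint(unsigned RequestedIdx,
                               const std::vector<unsigned> &Uses,
                               bool SpillMode) {
    // In spill mode, place the back-copy before the last use so that the
    // use becomes the kill and may redefine the same register.
    if (SpillMode && !Uses.empty())
      return Uses.back();
    // Default: place the back-copy right after the requested instruction.
    return RequestedIdx + 1;
  }

  int main() {
    std::vector<unsigned> Uses = {3, 5, 9};
    assert(backCopyInsertPoint(7, Uses, true) == 9);  // before the last use
    assert(backCopyInsertPoint(7, Uses, false) == 8); // after instruction 7
  }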
This will also improve compile time: The back-copy isn't a kill, so
hoisting it in hoistCopiesForSize() won't force a recomputation of the
source live range. Similarly, if the back-copy isn't hoisted by the
splitter, the spiller will not attempt hoisting it locally.
llvm-svn: 139883
If the source register is live after the copy being spilled, there is no
point to hoisting it. Hoisting inside a basic block only serves to
resolve interferences by shortening the live range of the source.
llvm-svn: 139882
gold plugin is built with Large File Support (64-bit off_t on i686)
and the rest of LLVM is built without Large File Support
(32-bit off_t on i686), which corrupts the stack.
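A sketch of one way to catch such a mismatch at compile time instead of as stack corruption (the actual fix is consistent Large File Support flags across the build, not this guard):

  #include <sys/types.h>

  // Fails to compile in any translation unit built without
  // _FILE_OFFSET_BITS=64, rather than silently corrupting the stack at
  // the plugin boundary.
  static_assert(sizeof(off_t) == 8,
                "all of LLVM and the gold plugin must agree on off_t");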
llvm-svn: 139873
When -split-spill-mode is enabled, spill hoisting is performed by
SplitKit instead of by InlineSpiller. This hidden command line option
is for testing the splitter spill mode.
llvm-svn: 139845
take into consideration the presence of AVX. This change, together with
the SSEDomainFix enabled for AVX, makes AVX codegen always (hopefully)
emit the same code as SSE for 128-bit vector ops. I don't
have a testcase for this, but AVX now beats SSE in performance for
128-bit ops in the majority of programs in the llvm testsuite.
llvm-svn: 139817
Assembler private local symbols aren't legal targets of symbol attributes,
so issue a diagnostic for them.
Based on patch by Stepan Dyatkovskiy.
llvm-svn: 139807
If we see an EOF without a preceding end-of-line, return an EndOfStatement
token before returning the Eof token.
Based on patch by Stepan Dyatkovskiy.
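The resulting behavior, sketched outside the real AsmLexer (token kinds and helper names here are invented):

  #include <cassert>
  #include <cctype>
  #include <string>
  #include <vector>

  enum class TokKind { Identifier, EndOfStatement, Eof };
  struct Token { TokKind Kind; std::string Text; };

  std::vector<Token> lex(const std::string &Buf) {
    std::vector<Token> Toks;
    size_t I = 0;
    while (I < Buf.size()) {
      if (Buf[I] == '\n') {
        Toks.push_back({TokKind::EndOfStatement, "\n"});
        ++I;
      } else if (std::isspace((unsigned char)Buf[I])) {
        ++I;
      } else {
        size_t Start = I;
        while (I < Buf.size() && !std::isspace((unsigned char)Buf[I]))
          ++I;
        Toks.push_back({TokKind::Identifier, Buf.substr(Start, I - Start)});
      }
    }
    // The fix: EOF without a trailing end-of-line still ends the statement.
    if (Toks.empty() || Toks.back().Kind != TokKind::EndOfStatement)
      Toks.push_back({TokKind::EndOfStatement, ""});
    Toks.push_back({TokKind::Eof, ""});
    return Toks;
  }

  int main() {
    std::vector<Token> Toks = lex("mov r0, r1"); // no trailing newline
    assert(Toks[Toks.size() - 2].Kind == TokKind::EndOfStatement);
    assert(Toks.back().Kind == TokKind::Eof);
  }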
llvm-svn: 139798
When traceSiblingValue() encounters a PHI-def value created by live
range splitting, don't look at all the predecessor blocks. That can be
very expensive in a complicated CFG.
Instead, consider that all the non-PHI defs jointly dominate all the
PHI-defs. Tracing directly to all the non-PHI defs is much faster than
zipping around in the CFG when there are many PHIs with many
predecessors.
This significantly improves compile time for indirectbr interpreters.
llvm-svn: 139797
Blocks with multiple PHI successors only need to go on the worklist
once. Use a SmallPtrSet to track the live-out blocks that have already
been handled. This is a lot faster than the two live range checks we
would otherwise do.
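This is the usual visited-set worklist pattern; a sketch with std::unordered_set standing in for llvm::SmallPtrSet:

  #include <queue>
  #include <unordered_set>
  #include <vector>

  struct Block { std::vector<Block *> Preds; };

  // Each block is pushed at most once, no matter how many PHI operands
  // refer to it.
  void processLiveOutBlocks(Block *Start) {
    std::queue<Block *> Worklist;
    std::unordered_set<Block *> Seen;
    Worklist.push(Start);
    Seen.insert(Start);
    while (!Worklist.empty()) {
      Block *B = Worklist.front();
      Worklist.pop();
      // ... extend the live range through B here ...
      for (Block *P : B->Preds)
        if (Seen.insert(P).second) // true only the first time P is seen
          Worklist.push(P);
    }
  }

  int main() {
    Block A, B;
    A.Preds = {&B, &B}; // duplicate predecessor: B is still visited once
    processLiveOutBlocks(&A);
  }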
Also stop recomputing hasPHIKill flags. Like RenumberValues(), it is
conservatively correct to leave them in, and they are not used for
anything important.
llvm-svn: 139792
It does, after all.
RemoveCopyByCommutingDef rewrites the uses of one particular value
number in A. It doesn't know how to rewrite phi uses, so there can't be
any.
llvm-svn: 139787
There is only one legitimate use remaining, in addIntervalsForSpills().
All other calls to hasPHIKill() are only used to update PHIKill flags.
The addIntervalsForSpills() function is part of the old spilling
framework, only used by linearscan.
llvm-svn: 139783
Instead, let HasOtherReachingDefs() test for defs in B that overlap any
phi-defs in A as well. This test is slightly different, but almost
identical.
A perfectly precise test would only check those phi-defs in A that are
reachable from AValNo.
llvm-svn: 139782
The source live range is recomputed using shrinkToUses() which does
handle phis correctly. The hasPHIKill() condition was relevant in the
old days when ReMaterializeTrivialDef() tried to recompute the live
range itself.
The shrinkToUses() function will mark the original def as dead when no
more uses and phi kills remain. It is then removed by
runOnMachineFunction().
llvm-svn: 139781
It is conservatively correct to keep the hasPHIKill flags, even after
deleting PHI-defs.
The calculation can be very expensive after taildup has created a
quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag
isn't used for anything after RenumberValues().
llvm-svn: 139780
Clean up register list handling in general a bit to explicitly check things
like all the registers being from the same register class.
rdar://8883573
llvm-svn: 139707
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet. In that case, there are no
data structures to update.
llvm-svn: 139702
When a back-copy is hoisted to the nearest common dominator, keep
looking up the dominator tree for a less loopy dominator, and place the
back-copy there instead.
Don't do this when a single existing back-copy dominates all the others.
Assume the client knows what he is doing, and keep the dominating
back-copy.
This prevents us from hoisting back-copies into loops in most cases. If
a value is defined in a loop with multiple exits, we may still hoist
back-copies into that loop. That is the speed/size tradeoff.
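The search is a walk up the dominator tree keeping the candidate with the smallest loop depth; a sketch with invented types, not SplitKit's actual code:

  #include <cassert>

  struct DomNode {
    DomNode *IDom;      // immediate dominator; null at the root
    unsigned LoopDepth; // loop nesting depth of the corresponding block
  };

  // Starting from the nearest common dominator of all back-copies,
  // prefer a strictly less loopy ancestor so the copy is not placed
  // inside a loop.
  DomNode *pickHoistBlock(DomNode *NCD) {
    DomNode *Best = NCD;
    for (DomNode *N = NCD; N; N = N->IDom)
      if (N->LoopDepth < Best->LoopDepth)
        Best = N;
    return Best;
  }

  int main() {
    DomNode Root{nullptr, 0}, Loop{&Root, 1}, Body{&Loop, 1};
    assert(pickHoistBlock(&Body) == &Root); // hoisted out of the loop
  }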
llvm-svn: 139698
- Add TSFlags for the instruction formats. The idea here is to use
as much encoding as possible from getBinaryCodeForInstr, and having
TSFlags formats for that would make it easier to encode most parts
of the instructions (since Mips encodings are pretty straightforward)
- Improve the mips mechanism for compilation callback
- Add Mips specific code for invalidating the instruction cache
- Next patch will address wrong tablegen encoding
Commit message written by me, but the patch is from Sasa Stankovic.
llvm-svn: 139688
alignment check for 256-bit classes more strict. There are no testcases,
but we catch more folding cases for AVX while running the single- and
multi-source tests in the llvm testsuite.
Since some 128-bit AVX instructions have different number of operands
than their SSE counterparts, they are placed in different tables.
256-bit AVX instructions should also be added to the table soon. And
there are a few more 128-bit versions to handle, which should come in
the following commits.
llvm-svn: 139687
- Add enum SymbolType and function getSymbolType().
- Add function isGlobal(): it returns true for symbols that can be used from other objects, such as library functions.
- Rename function getAddress() to getOffset() and add a new getAddress(), because the current getAddress() returns the section offset of the symbol's first byte, while the new getAddress() returns the symbol's address.
- Change uses of SymbolRef::getAddress() to getOffset() in tools/llvm-nm and tools/llvm-objdump.
Patch by Danil Malyshev!
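The distinction being drawn, in illustrative form (not the actual object::SymbolRef interface):

  #include <cassert>
  #include <cstdint>

  struct SymbolInfo {
    uint64_t SectionAddress; // address of the containing section
    uint64_t SectionOffset;  // offset of the symbol's first byte in it

    uint64_t getOffset() const { return SectionOffset; }
    uint64_t getAddress() const { return SectionAddress + SectionOffset; }
  };

  int main() {
    SymbolInfo S{0x400000, 0x20};
    assert(S.getOffset() == 0x20 && S.getAddress() == 0x400020);
  }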
llvm-svn: 139683
This is only one half of it, the part that caches address ranges from the DIEs when .debug_aranges is
not available will be ported soon.
llvm-svn: 139680
#line directives with the needed support in the lexer. Next will be to build
a simple file/line# table mapping source SMLoc's for later use by diagnostics.
And the last step will be to get the diagnostics to use the mapping for file
and line numbers.
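Such a table can be as simple as a sorted list of (offset, file, line) records with a binary-search lookup; a sketch with invented names, modeling SMLoc as a plain source offset:

  #include <algorithm>
  #include <cassert>
  #include <string>
  #include <vector>

  struct LineEntry {
    size_t Offset;    // first source offset the entry applies to
    std::string File; // file name from the #line directive
    unsigned Line;    // line number the directive establishes
  };

  struct FileLineTable {
    std::vector<LineEntry> Entries; // kept sorted by Offset

    void noteLineDirective(size_t Offset, std::string File, unsigned Line) {
      Entries.push_back({Offset, std::move(File), Line});
    }

    // Find the last directive at or before Offset, if any.
    const LineEntry *lookup(size_t Offset) const {
      auto It = std::upper_bound(
          Entries.begin(), Entries.end(), Offset,
          [](size_t O, const LineEntry &E) { return O < E.Offset; });
      return It == Entries.begin() ? nullptr : &*(It - 1);
    }
  };

  int main() {
    FileLineTable T;
    T.noteLineDirective(100, "a.s", 1);
    T.noteLineDirective(500, "b.s", 10);
    assert(T.lookup(50) == nullptr);
    assert(T.lookup(200)->File == "a.s");
    assert(T.lookup(500)->Line == 10);
  }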
llvm-svn: 139669
When a ParentVNI maps to multiple defs in a new interval, its live range
may still be derived directly from RegAssign by transferValues().
On the other hand, when instructions have been rematerialized or
hoisted, it may be necessary to completely recompute live ranges using
LiveRangeCalc::extend() to all uses.
Use a bit in the value map to indicate that a live range must be
recomputed. Rename markComplexMapped() to forceRecompute().
This fixes some live range verification errors when
-split-spill-mode=size hoists back-copies by recomputing source ranges
when RegAssign kills can't be moved.
llvm-svn: 139660
Whenever the complement interval is defined by multiple copies of the
same value, hoist those back-copies to the nearest common dominator.
This ensures that at most one copy is inserted per value in the
complement interval, and no phi-defs are needed.
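Finding that block is the textbook nearest-common-dominator walk; sketched here over a minimal dominator-tree node, not SplitKit's actual types:

  #include <cassert>

  struct DomNode {
    DomNode *IDom;  // immediate dominator; null at the root
    unsigned Depth; // depth in the dominator tree
  };

  // Lift the deeper node until the two walks meet; the meeting point
  // dominates both inputs.
  DomNode *nearestCommonDominator(DomNode *A, DomNode *B) {
    while (A != B) {
      if (A->Depth >= B->Depth)
        A = A->IDom;
      else
        B = B->IDom;
    }
    return A;
  }

  int main() {
    DomNode R{nullptr, 0}, X{&R, 1}, Y{&X, 2}, Z{&X, 2};
    assert(nearestCommonDominator(&Y, &Z) == &X);
    assert(nearestCommonDominator(&Y, &R) == &R);
  }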
llvm-svn: 139651
This introduces a new library to LLVM: libDebugInfo. It will provide debug information
parsing to LLVM. Much of the design and some of the code is taken from the LLDB project.
It also contains an llvm-dwarfdump tool that can dump the abbrevs and DIEs from an
object file. It can be used to write tests for DWARF input and output easily.
llvm-svn: 139627
It is an endian-aware helper that can read data from a StringRef. It will
come in handy for DWARF parsing. This class is inspired by LLDB's
DataExtractor, but is stripped down to the bare minimum needed for DWARF.
Comes with unit tests!
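A minimal sketch of such an endian-aware reader (inspired by the description above; not the class's actual interface):

  #include <cassert>
  #include <cstdint>
  #include <string>

  class ByteReader {
    std::string Data;
    bool IsLittleEndian;

  public:
    ByteReader(std::string D, bool LE)
        : Data(std::move(D)), IsLittleEndian(LE) {}

    // Read an unsigned integer of Size bytes at *Offset, honoring the
    // configured byte order, and advance the offset on success.
    uint64_t getUnsigned(size_t *Offset, unsigned Size) const {
      if (*Offset + Size > Data.size() || Size > 8)
        return 0;
      uint64_t Result = 0;
      for (unsigned i = 0; i != Size; ++i) {
        uint64_t Byte = (uint8_t)Data[*Offset + i];
        Result |= Byte << (IsLittleEndian ? 8 * i : 8 * (Size - i - 1));
      }
      *Offset += Size;
      return Result;
    }
  };

  int main() {
    std::string Bytes("\x01\x02", 2);
    size_t OffLE = 0, OffBE = 0;
    assert(ByteReader(Bytes, true).getUnsigned(&OffLE, 2) == 0x0201);
    assert(ByteReader(Bytes, false).getUnsigned(&OffBE, 2) == 0x0102);
  }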
llvm-svn: 139626
more strict about the alignment checking. This was found by inspection
and I don't have any testcases so far, although the llvm testsuite runs
without any problem.
llvm-svn: 139625
This function is used to flag values where the complement interval may
overlap other intervals. Call it from overlapIntv, and use the flag to
fully recompute those live ranges in transferValues().
llvm-svn: 139612
The complement interval may overlap the other intervals created, so use
a separate LiveRangeCalc instance to compute its live range.
A LiveRangeCalc instance can only be shared among non-overlapping
intervals.
llvm-svn: 139603
SplitKit will soon need two copies of these data structures, and the
algorithms will also be useful when LiveIntervalAnalysis becomes
independent of LiveVariables.
llvm-svn: 139572
Splitting a landing pad takes considerable care because of PHIs and other
nasties. The problem is that the jump table needs to jump to the landing pad
block. However, the landing pad block can be jumped to only by an invoke
instruction. So we clone the landingpad instruction into its own basic block
and have the invoke jump to it. The landingpad instruction's basic block's
successor is now the target for the jump table.
But because of PHI nodes, we need to create another basic block for the jump
table to jump to. This is definitely a hack, because the values for the PHI
nodes may not be defined on the edge from the jump table. But that's okay,
because the jump table is simply a construct to mimic what is happening in the
CFG. So the values are mysteriously there, even though there is no value for the
PHI from the jump table's edge (hence calling this a hack).
llvm-svn: 139545
No tests; these changes aren't really interesting in the sense that the logic is the same for volatile and atomic.
I believe this completes all of the changes necessary for the optimizer to handle loads and stores correctly. I'm going to try and come up with some additional testing, though.
llvm-svn: 139533
However with this fix it does now.
Basically, the operand order for the x86 target-specific node
is not the same as the instruction's, but since the intrinsic needs that
specific order at the instruction definition, just change the order
during legalization. Also, there were some wrong inversions of condition
codes, such as GE => LE and GT => LT; fix that too. Fixes PR10907.
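To be clear about the predicate mapping: swapping the operands of a comparison mirrors the condition rather than inverting it. A standalone sketch, not the X86 lowering code:

  #include <cassert>

  enum CondCode { LT, LE, GT, GE };

  // When the two comparison operands trade places, the predicate must
  // be mirrored: (a >= b) is (b <= a), not (b < a).
  CondCode swapOperands(CondCode CC) {
    switch (CC) {
    case LT: return GT;
    case LE: return GE;
    case GT: return LT;
    case GE: return LE;
    }
    return CC;
  }

  int main() {
    for (int a = -2; a <= 2; ++a)
      for (int b = -2; b <= 2; ++b) {
        assert((a >= b) == (b <= a)); // GE with swapped operands is LE
        assert((a > b) == (b < a));   // GT with swapped operands is LT
      }
    assert(swapOperands(GE) == LE && swapOperands(GT) == LT);
  }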
llvm-svn: 139528
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.
Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions. This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important. The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting. Stack slots are cheap.
This patch adds the interface to enable spill modes in SplitKit. In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler. In some cases, it can become much simpler because no
extra PHI-defs are required. This will speed up both splitting and
spilling.
This is only the interface to enable spill modes, no implementation yet.
llvm-svn: 139500