This change adds code to explicitly mark a function which requires runtime stack realignment as not having a fixed frame size in the StackMap section. As it happens, this is not actually a functional change. The size that would be reported without the check is also "-1", but as far as I can tell, that's an accident. The code change makes this explicit.
Note: There's a separate bug in handling of stackmaps and patchpoints in functions which need dynamic frame realignment. The current code assumes that offsets can be calculated from RBP, but realigned frames must use RSP. (There's a variable gap between RBP and the spill slots.) This change set does not address that issue.
Reviewers: atrick, ributzka
Differential Revision: http://reviews.llvm.org/D4572
llvm-svn: 214534
This is a follow-up patch for r214366, which added the same behavior to the
AArch64 and X86 FastISel code. This fix reproduces the already existing
behavior of SelectionDAG in FastISel.
llvm-svn: 214531
The tbz/tbnz instructions check the sign bit, allowing us to convert
op w1, w1, w10
cmp w1, #0
b.lt .LBB0_0
to
op w1, w1, w10
tbnz w1, #31, .LBB0_0
Differential Revision: http://reviews.llvm.org/D4440
llvm-svn: 214518
Note: The current code in DecodeMSRMask() rejects the unpredictable A/R MSR mask '0000' with Fail. The code in the patch follows this style and rejects unpredictable M-class MSR masks also with Fail (instead of SoftFail). If SoftFail is preferred in this case then additional changes to ARMInstPrinter (to print non-symbolic masks) and ARMAsmParser (to parse non-symbolic masks) will be needed.
Patch by Petr Pavlu!
llvm-svn: 214505
The ARM ARM prohibits LDRB/LDRSB instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDRB/LDRSB instructions with unpredictable behavior.
llvm-svn: 214500
The ARM ARM prohibits LDRH/LDRSH instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDRH/LDRSH instructions with unpredictable behavior.
llvm-svn: 214499
The ARM ARM prohibits LDR instructions with writeback into the destination register. With this commit, this constraint is now enforced and we stop assembling LDR instructions with unpredictable behavior.
llvm-svn: 214498
Summary:
Big-endian mode was not correctly adjusting the offset for types smaller
than an ABI slot.
Fixes PR19612
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: sstankovic, llvm-commits
Differential Revision: http://reviews.llvm.org/D4556
llvm-svn: 214493
ADDS and SUBS cannot encode negative immediates or immediates larger than 12 bits.
This fix checks whether the immediate version can be used under these constraints
and whether we can convert ADDS to SUBS, or vice versa, to support negative immediates.
Also update the test cases to test the immediate versions.
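A hedged sketch of the kind of input affected (a flag-setting add via an
overflow intrinsic; names and values made up): the negative immediate can now
be selected as the SUBS form with the negated 12-bit immediate.

declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

define i1 @demo(i32 %x) {
  ; the overflow flag forces a flag-setting add; with -16 this can be
  ; selected as SUBS with the (positive) immediate 16
  %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %x, i32 -16)
  %o = extractvalue { i32, i1 } %s, 1
  ret i1 %o
}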
llvm-svn: 214470
When generating unaligned vector loads, we need to search for other loads or
stores nearby offset by one vector width. If we find one, then we know that we
can safely generate another aligned load at that address. Otherwise, we must
generate the next load using an offset of the vector width minus one byte (so
we don't read off the end of the allocation if the base unaligned address
happened to be aligned at runtime). We had previously done this using only
other vector loads and stores, but did not consider the PowerPC-specific vector
load/store intrinsics. Now we'll also consider vector intrinsics. By itself,
this change is a feature enhancement, but is a necessary step toward fixing the
underlying problem behind PR19991.
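As a rough sketch (intrinsic signature per the PPC backend of this era, names
made up), an access like the following, one vector width past the base, now
also satisfies the search:

declare <4 x i32> @llvm.ppc.altivec.lvx(i8*)

  %v = call <4 x i32> @llvm.ppc.altivec.lvx(i8* %ptr.plus.16)  ; base + 16 bytes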
llvm-svn: 214469
Abs/neg folding has moved out of foldOperands and into the instruction
selection phase using complex patterns. As a consequence of this
change, we now prefer to select the 64-bit encoding for most
instructions and the modifier operands have been dropped from
integer VOP3 instructions.
llvm-svn: 214467
This allows assembling the two new instructions, encls and enclu, for the
SKX processor model.
Note the diffs are bigger than one might expect: to fit the new
MRM_CF and MRM_D7 forms in the right places, things had to be
renumbered and shuffled down, causing a bit more churn in the diffs.
rdar://16228228
llvm-svn: 214460
The current remark is ambiguous and makes it sound like explicitly specifying vectorization will allow the loop to be vectorized. This is not the case. The improved remark directs the user to -Rpass-analysis=loop-vectorize to determine the cause of the missed vectorization.
Reviewed by Arnold Schwaighofer
llvm-svn: 214445
This is a follow-up to the activity in the bug at
http://llvm.org/bugs/show_bug.cgi?id=18663 . The underlying issue has
to do with how the KILL pseudo-instruction is handled. I defer to
Hal/Jakob/Uli for additional details and background.
This will disable the (bad?) assert, add an associated fixme comment,
and add a pair of tests.
The code change and the pr18663-2.ll test are copied from the referenced
bug. That test does not immediately fail in my environment, but I have
added the pr18663.ll test which does.
(Comment from Hal)
To provide everyone else with some context, this assert was not bad when
it was written. At that time, we only generated KILL pseudo instructions
around subregister copies. This logic, unfortunately, had its own problems.
In r199797, the relevant logic in MachineCopyPropagation was replaced to
generate KILLs for other kinds of copies too. This change in semantics broke
this now-problematic assumption in AggressiveAntiDepBreaker. The
AggressiveAntiDepBreaker really needs a proper cleanup to deal with the
change, but removing the assert (which just allows the function to return
false) is a safe conservative behavior, and should do for the time being.
llvm-svn: 214429
It seems that when I fixed this, almost exactly a year ago, I did not quite do
it correctly. When we have duplicate block predecessors, we can indeed not have
different incoming values for the same block, but we *must* have duplicate
entries. So, instead of skipping the duplicates, we explicitly add the
duplicate incoming values.
Fixes PR20442.
llvm-svn: 214423
Correctly sort self-users (such as PHI nodes). I added a targeted test
in `test/Bitcode/use-list-order.ll` and the final missing RUN line to
tests in `test/Assembly`.
This is part of PR5680.
llvm-svn: 214417
Switch array type shadow from a single integer to
an array of integers (i.e. make it per-element).
This simplifies instrumentation of extractvalue and fixes PR20493.
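A hedged illustration: with a per-element shadow, instrumenting this
extractvalue just extracts element 1 of the [2 x i32] shadow, instead of
slicing bits out of one wide shadow integer.

define i32 @demo([2 x i32] %agg) {
  %v = extractvalue [2 x i32] %agg, 1
  ret i32 %v
}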
llvm-svn: 214398
We can only propagate the nsw bits if both subtraction instructions are
marked with the appropriate bit.
N.B. We only propagate the nsw bit in InstCombine because the nuw case
is already handled in InstSimplify.
This fixes PR20189.
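A minimal sketch of the guarded fold:

define i32 @demo(i32 %x, i32 %y) {
  %neg = sub nsw i32 0, %y
  %res = sub nsw i32 %x, %neg   ; both subs carry nsw, so this may
  ret i32 %res                  ; become: add nsw i32 %x, %y
}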
llvm-svn: 214385
If the NUW bit is set for 0 - Y, we know that all values for Y other
than 0 would produce a poison value. This allows us to replace (0 - Y)
with 0 in the expression (X - (0 - Y)) which will ultimately leave us
with X.
This partially fixes PR20189.
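A minimal sketch:

define i32 @demo(i32 %x, i32 %y) {
  %neg = sub nuw i32 0, %y   ; poison for every %y except 0
  %res = sub i32 %x, %neg    ; so this simplifies to %x
  ret i32 %res
}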
llvm-svn: 214384
This commit updates the existing SelectionDAG tests for the stackmap and patchpoint
intrinsics and enables FastISel testing. It also splits up the tests into separate
files, due to different codegen between SelectionDAG and FastISel.
llvm-svn: 214382
Currently the large code model for MachO uses the GOT to make function calls.
Emit the required adrp and ldr instructions to load the address from the GOT.
Related to <rdar://problem/17733076>.
llvm-svn: 214381
Since initializers of GlobalValues are being assigned IDs before
GlobalValues themselves, explicitly exclude GlobalValues from the
constant pool. Added targeted test in `test/Bitcode/use-list-order.ll`
and added two more RUN lines in `test/Assembly`.
This is part of PR5680.
llvm-svn: 214368
This fixes a mistake where I accidentally dropped the upper 32 bits of a
64-bit pointer during FastISel lowering of the patchpoint intrinsic.
llvm-svn: 214367
UNDEF arguments are not meant to be touched - especially for the webkit_js
calling convention. This fix reproduces the already existing behavior of
SelectionDAG in FastISel.
llvm-svn: 214366
Add RUN line for `verify-uselistorder` to every test in `test/Assembly`,
unless it's a negative check (assembler rejects it) or verification
fails.
There are three files that verification fails on (so I've left out the
RUN lines):
- 2002-08-22-DominanceProblem.ll
- ConstantExprFold.ll
- ConstantExprFoldCast.ll
This is part of PR5680.
llvm-svn: 214365
In some places we've been using different suffixes for the different
file formats involved in instrprof, but in others we've just
ambiguously used .profdata. Update the test files to indicate the
types of file more obviously.
No functional change.
llvm-svn: 214357
Before this patch we had
@a = weak global ...
but
@b = alias weak ...
The patch changes aliases to look more like global variables.
Looking at some really old code suggests that the reason was that the old
bison based parser had a reduction for alias linkages and another one for
global variable linkages. Putting the alias first avoided the reduce/reduce
conflict.
The days of the old .ll parser are long gone. The new one parses just "linkage"
and a later check is responsible for deciding if a linkage is valid in a
given context.
llvm-svn: 214355
This improves the code generation for the XALU intrinsics when the
condition is feeding a select instruction.
This also updates and enables the XALU unit tests for FastISel.
This fixes <rdar://problem/17831117>.
llvm-svn: 214350
Currently the shift-immediate versions are not supported by tblgen;
hopefully this can be removed later, once the required support has been
added to tblgen.
llvm-svn: 214345
While we can already transform A | (A ^ B) into A | B, things get bad
once we have (A ^ B) | (A ^ B ^ Cst) because reassociation will morph
this into (A ^ B) | ((A ^ Cst) ^ B). Our existing patterns fail once
this happens.
To fix this, we add a new pattern which looks through the tree of xor
binary operators to see that, in fact, there exists a redundant xor
operation.
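As a hedged IR sketch of the reassociated form (42 standing in for Cst):

define i32 @demo(i32 %a, i32 %b) {
  %ab  = xor i32 %a, %b
  %bc  = xor i32 %b, 42
  %abc = xor i32 %bc, %a     ; (A ^ B) ^ Cst, hidden in a xor tree
  %res = or i32 %ab, %abc    ; folds to: or i32 %ab, 42
  ret i32 %res
}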
What follows below is a correctness proof of the transform using CVC3.
$ cat t.cvc
A, B, C : BITVECTOR(64);
QUERY BVXOR(A, B) | BVXOR(BVXOR(B, C), A) = BVXOR(A, B) | C;
QUERY BVXOR(BVXOR(A, C), B) | BVXOR(A, B) = BVXOR(A, B) | C;
QUERY BVXOR(A, B) & BVXOR(BVXOR(B, C), A) = BVXOR(A, B) & ~C;
QUERY BVXOR(BVXOR(A, C), B) & BVXOR(A, B) = BVXOR(A, B) & ~C;
$ cvc3 < t.cvc
Valid.
Valid.
Valid.
Valid.
llvm-svn: 214342
The lifetime intrinsics need some work in order to make it clear which
optimizations are or are not valid.
For now dropping this optimization avoids a miscompilation.
Patch by Björn Steinbrink.
llvm-svn: 214336
This patch adds an explicit triple to the test case introduced by r214322. This
should fix build failures that are occurring on bots that are cross-building.
llvm-svn: 214330
DAGCombine may choose to rewrite graphs where two loads feed a select into
graphs where a select of two addresses feeds a load. While it sanity-checks the
loads to make sure they are broadly equivalent, it currently just uses the
alignment restriction of the left node. In cases where the right node has a
stronger alignment requirement, this may lead to bad codegen, such as generating
an aligned load where an unaligned load is required. This patch makes the
combine generate a load with an alignment that is the same as whichever is more
restrictive of the two alignments.
Tests included.
rdar://17762530
llvm-svn: 214322
When predicting use-list order, we visit functions in reverse order
followed by `GlobalValue`s and write out use-lists at the first
opportunity. In the reader, this will translate to *after* the last use
has been added.
For this to work, we actually need to descend into `GlobalValue`s.
Added a targeted test in `use-list-order.ll` and `RUN` lines to the
newly passing tests in `test/Bitcode`.
There are two remaining failures in `test/Bitcode`:
- blockaddress.ll: I haven't thought through how to model the way
block addresses change the order of use-lists (or how to work around
it).
- metadata-2.ll: There's an old-style `@llvm.used` global array here
that I suspect the .ll parser isn't upgrading properly. When it
round-trips through bitcode, the .bc reader *does* upgrade it, so
the extra variable (`i8* null`) has an extra use, and the shuffle
vector doesn't match.
I think the fix is to upgrade old-style global arrays (or reject
them?) in the .ll parser.
This is part of PR5680.
llvm-svn: 214321
the jump instruction table pass. First, the verifier is already built
into all the tools. The test case is adapted to just run llvm-as
demonstrating that we still catch the broken module. Second, the
verifier is *extremely* slow. This was responsible for very significant
compile time regressions.
If you have deployed a Clang binary anywhere from r210280 to this
commit, you really want to re-deploy.
llvm-svn: 214287
We now (1) correctly decode the branch immediate, (2) modify the immediate to
correctly treat it as PC-rel, and (3) properly populate the stub entry.
Previously we had been doing each of these wrong.
<rdar://problem/17750739>
llvm-svn: 214285
Fix the missing case in ScalarizeVectorResult() that was exposed with
libclcore.bc in Android.
Differential Revision: http://reviews.llvm.org/D4645
llvm-svn: 214266
r214242 was subtle enough it really deserves a targeted test with
comments. This adds some global variables that trigger the relevant
code path. Sorry this wasn't committed with the fix.
llvm-svn: 214243
To avoid unnecessary forward references, the reader doesn't process
initializers of `GlobalValue`s until after the constant pool has been
processed, and then in reverse order. Model this when predicting
use-list order. This gets two more Bitcode tests passing with
`llvm-uselistorder`.
Part of PR5680.
llvm-svn: 214242
This moves some tests around to make it clearer what's being tested,
and adds very rudimentary comment syntax to the text input format to
make specifying this kind of test a little bit simpler.
llvm-svn: 214235
full paths for its first argument.
This allows us to remove the annoying sed lines in the test cases, and write
direct references to file names in stub_addr calls (rather than <filename>
placeholders).
llvm-svn: 214211
Per feedback on r214111, we are going to use null to represent an unspecified
parameter. If the type array is {null}, it means a function that returns void;
if the type array is {null, null}, it means a variadic function that returns
void. In summary, if we have more than one element in the type array and the
last element is null, it is a variadic function.
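A hedged summary of the convention (type-array contents only, metadata syntax
elided; signatures made up):

  void f()          -> { null }
  void g(i32)       -> { null, ref(i32) }
  void h(...)       -> { null, null }
  i32  k(i8*, ...)  -> { ref(i32), ref(i8*), null }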
rdar://17628609
llvm-svn: 214189
The test being performed is just an approximation anyway, so it really
shouldn't crash when things don't go entirely as expected.
Should fix PR20474.
llvm-svn: 214177
We need to make sure we use the softened version of all appropriate operands in
the libcall, or things go horribly wrong. This may entail actually executing a
1-stage softening.
llvm-svn: 214175
Fix the sort of expected order in the reader to correctly return `false`
when comparing a `Use` against itself.
This was caught by test/Bitcode/binaryIntInstructions.3.2.ll, so I'm
adding a `RUN` line using `llvm-uselistorder` for every test in
`test/Bitcode` that passes.
A few tests still fail, so I'll investigate those next.
This is part of PR5680.
llvm-svn: 214157
The enum types array by design contains pointers to MDNodes rather than DIRefs.
Unique them when handling the enum types in DwarfDebug.
rdar://17628609
llvm-svn: 214139
DITypeArray is an array of DITypeRef; at its creation, we will create a
DITypeRef (i.e., use the identifier if the type node has an identifier).
This is the last patch to unique the type array of a subroutine type.
rdar://17628609
llvm-svn: 214132
Predict and serialize use-list order in bitcode. This makes the option
`-preserve-bc-use-list-order` work *most* of the time, but this is still
experimental.
- Builds a full value-table up front in the writer, sets up a list of
use-list orders to write out, and discards the table. This is a
simpler first step than determining the order from the various
overlapping IDs of values on-the-fly.
- The shuffles stored in the use-list order list have an unnecessarily
large memory footprint.
- `blockaddress` expressions cause functions to be materialized
out-of-order. For now I've ignored this problem, so use-list orders
will be wrong for constants used by functions that have block
addresses taken. There are a couple of ways to fix this, but I
don't have a concrete plan yet.
- When materializing functions lazily, the use-lists for constants
will not be correct. This use case is out of scope: what should the
use-list order be, if it's incomplete?
This is part of PR5680.
llvm-svn: 214125
These are only used when the 'ld' in the path is gold and the plugin has
been built, but it is already a start to make sure we don't regress features
that cannot be tested with llvm-lto.
llvm-svn: 214058
The subtarget information is the ultimate source of truth for the feature set
that is enabled at this point. We would previously not propagate the feature
information to the subtarget. While this worked for the most part (features
would be enabled/disabled as requested), if another operation that changed the
feature bits was encountered (such as a mode switch via a .arm or .thumb
directive), we would end up resetting the behaviour of the architectural
extensions.
Handling this properly is slightly more complicated: we need to check if the
feature is now being toggled, and only then do we toggle the features. In
return, we no longer have to calculate the feature bits ourselves.
The test changes are mostly to the diagnostics, which are now more uniform (a nice
side effect!). Add an additional test to ensure that we handle this case
properly.
Thanks to Nico Weber for alerting me to this issue!
llvm-svn: 214057
Rename to allowsMisalignedMemoryAccesses.
On R600, 8 and 16 byte accesses are mostly OK with 4-byte alignment,
and don't need to be split into multiple accesses. Vector loads with
an alignment of the element type are not uncommon in OpenCL code.
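For instance (a hedged sketch, IR syntax of the era), an 8-byte vector load
with only element-type alignment can stay a single access on R600:

  %v = load <2 x i32>* %p, align 4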
llvm-svn: 214055
'J' represents a negative number suitable for an add/sub alias
instruction, but while preparing it to become an int64_t we were
mangling the sign extension. So "i32 -1" became 0xffffffffLL, for
example.
Should fix one half of PR20456.
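A hedged sketch of the failing shape (asm string and names made up): the -1
fed through the 'J' constraint must reach the backend as -1, not 0xffffffffLL.

  %res = call i32 asm "add $0, $1, $2", "=r,r,J"(i32 %x, i32 -1)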
llvm-svn: 214052
instructions in the legalized DAG, and leverage it to combine long
sequences of instructions to PSHUFB.
Eventually, the other x86-instruction-specific shuffle combines will
probably all be driven out of this routine. But the real motivation is
to detect after we have fully legalized and optimized a shuffle to the
minimal number of x86 instructions whether it is profitable to replace
the chain with a fully generic PSHUFB instruction even though doing so
requires either a load from a constant pool or tying up a register with
the mask.
While the Intel manuals claim it should be used when it replaces 5 or
more instructions (!!!!), my experience is that it is actually very fast
on modern chips, so I've gone with a much more aggressive model of
replacing any sequence of 3 or more instructions.
I've also taught it to do some basic canonicalization to special-purpose
instructions which have smaller encodings than their generic
counterparts.
There are still quite a few FIXMEs here, and I've not yet implemented
support for lowering blends with PSHUFB (where its power really shines
due to being able to zero out lanes), but this starts implementing real
PSHUFB support even when using the new, fancy shuffle lowering. =]
llvm-svn: 214042
over each node in the worklist prior to combining.
This allows the combiner to produce new nodes which need to go back
through legalization. This is particularly useful when generating
operands to target specific nodes in a post-legalize DAG combine where
the operands are significantly easier to express as pre-legalized
operations. My immediate use case will be PSHUFB formation where we need
to build a constant shuffle mask with a build_vector node.
This also refactors the relevant functionality in the legalizer to
support this, and updates relevant tests. I've spoken to the R600 folks
and these changes look like improvements to them. The avx512 change
needs to be investigated, I suspect there is a disagreement between the
legalizer and the DAG combiner there, but it seems a minor issue so
leaving it to be re-evaluated after this patch.
Differential Revision: http://reviews.llvm.org/D4564
llvm-svn: 214020
The tale starts with r212808 which attempted to fix inversion of the low
and high bits when lowering MUL_LOHI. Sadly, that commit did not include
any positive test cases, and just removed some operations from a test
case where the actual logic being changed isn't fully visible from the
test.
What this commit did was two things. First, it reversed the low and high
results in the formation of the MERGE_VALUES node for the multiple
results. This is entirely correct.
Second it changed the shuffles for extracting the low and high
components from the i64 results of the multiplies to extract them
assuming a big-endian-style encoding of the multiply results. This
second change is wrong. There is no big-endian encoding in x86, the
results of the multiplies are normal v2i64s: when cast to v4i32, the low
i32s are at offsets 0 and 2, and the high i32s are at offsets 1 and 3.
However, the first change wasn't enough to actually fix the bug, which
is (I assume) why the second change was also made. There was another bug
in the MERGE_VALUES formation: we weren't using a VTList, and so were
getting a single result node! When grabbing the *second* result from the
node, we got... well, it could be anything. I think this *appeared* to
invert things, but had to be causing other problems as well.
Fortunately, I fixed the MERGE_VALUES issue in r213931, so we should
have been fine, right? NOOOPE! Because the core bug was never addressed,
the test in vector-idiv failed when I fixed the MERGE_VALUES node.
Because there are essentially no docs for this node, I had to guess at
how to fix it and tried swapping the operands, restoring the order of
the original code before r212808. While this "fixed" the test case (in
that we produced the right instructions), we were still extracting the
wrong elements of the i64s, and thus PR20355 was still broken.
This commit essentially reverts the big-endian-style extraction part of
r212808 and goes back to the original masks which were correct. Now that
the MERGE_VALUES node formation is also correct, everything works. I've
also included a more detailed test from PR20355 to make sure this stays
fixed.
llvm-svn: 214011
The clever way to implement signed multiplication with unsigned *is
already implemented* and tested and working correctly. The bug is
somewhere else. Re-investigating.
This will teach me to not scroll far enough to read the code that did
what I thought needed to be done.
llvm-svn: 214009
signed multiplication is requested. While there is no difference in
the *low* half of the result, the *high* half (used specifically to
implement the signed division by these constants) certainly differs. The
test case I've nuked was actively asserting wrong code.
There is a delightful solution to doing signed multiplication even when
we don't have it that Richard Smith has crafted, but I'll add the
machinery back and implement that in a follow-up patch. This at least
restores correctness.
llvm-svn: 214007
This is the first commit in a series that adds an @llvm.assume intrinsic which
can be used to provide the optimizer with a condition it may assume to be true
(when the control flow would hit the intrinsic call). Some basic properties are added here:
- llvm.assume(true) is dead.
- llvm.assume(false) is unreachable (this directly corresponds to the
documented behavior of MSVC's __assume(0)), as is llvm.assume(undef).
The intrinsic is tagged as writing arbitrarily, in order to maintain control
dependencies. BasicAA has been updated, however, to return NoModRef for any
particular location-based query so that we don't unnecessarily block code
motion.
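A minimal usage sketch:

declare void @llvm.assume(i1)

define void @demo(i32 %x) {
  %c = icmp sgt i32 %x, 0
  call void @llvm.assume(i1 %c)   ; from here on, the optimizer may assume %x > 0
  ret void
}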
llvm-svn: 213973
address of the stack guard was being spilled to the stack.
Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. The register
allocator can now just rematerialize the value when it can't keep it in a
register.
<rdar://problem/12475629>
llvm-svn: 213967
This commit implements the frameaddress intrinsic for the AArch64 architecture
in FastISel.
There were two test cases that pretty much tested the same thing, so I
combined them into a single test case.
Fixes <rdar://problem/17811834>
llvm-svn: 213959
Ugh. Turns out not even transformation passes link in how to read IR.
I sincerely believe the buildbots will finally agree with my system
after this though. (I don't really understand why all of this has been
working on my system, but not on all the buildbots.)
Create a new tool called llvm-uselistorder to use for verifying use-list
order. For now, just dump everything from the (now defunct)
-verify-use-list-order pass into the tool.
This might be a better way to test use-list order anyway.
Part of PR5680.
llvm-svn: 213957
This recommits r208930, r208933, and r208975 (by reverting r209338) and
reverts r209529 (the FIXME to readd this functionality once the tools
were fixed) now that DWP has been fixed to cope with a single section
for all fission type units.
Original commit message:
"Since type units in the dwo file are handled by a debug aware tool,
they don't need to leverage the ELF comdat grouping to implement
deduplication. Avoid creating all the .group sections for these as a
space optimization."
llvm-svn: 213956
Reverted by Eric Christopher (Thanks!) in r212203 after Bob Wilson
reported LTO issues. Duncan Exon Smith and Aditya Nandakumar helped
provide a reduced reproduction, though the failure wasn't too hard to
guess, and even easier with the example to confirm.
The assertion that the subprogram metadata associated with an
llvm::Function matches the scope data referenced by the DbgLocs on the
instructions in that function is not valid under LTO. In LTO, a C++
inline function might exist in multiple CUs and the subprogram metadata
nodes will refer to the same llvm::Function. In this case, depending on
the order of the CUs, the first instance of the subprogram metadata may
not be the one referenced by the instructions in that function and the
assertion will fail.
A test case (test/DebugInfo/cross-cu-linkonce-distinct.ll) is added, the
assertion removed and a comment added to explain this situation.
This was then reverted again in r213581 as it caused PR20367. The root
cause of this was the early exit in LiveDebugVariables meant that
spurious DBG_VALUE intrinsics that referenced dead variables were not
removed, causing an assertion/crash later on. The fix is to have
LiveDebugVariables strip all DBG_VALUE intrinsics in functions without
debug info as they're not needed anyway. Test case added to cover this
situation (that occurs when a debug-having function is inlined into a
nodebug function) in test/DebugInfo/X86/nodebug_with_debug_loc.ll
Original commit message:
If a function isn't actually in a CU's subprogram list in the debug info
metadata, ignore all the DebugLocs and don't try to build scopes, track
variables, etc.
While this is possibly a minor optimization, it's also a correctness fix
for an incoming patch that will add assertions to LexicalScopes and the
debug info verifier to ensure that all scope chains lead to debug info
for the current function.
Fix up a few test cases that had broken/incomplete debug info that could
violate this constraint.
Add a test case where this occurs by design (inlining a
debug-info-having function in an attribute nodebug function - we want
this to work because /if/ the nodebug function is then inlined into a
debug-info-having function, it should be fine (and will work fine - we
just stitch the scopes up as usual), but should the inlining not happen
we must not assert-fail either).
llvm-svn: 213952
* Add CUs to the named CU node
* Add missing DW_TAG_subprogram nodes
* Add llvm::Functions to the DW_TAG_subprogram nodes
This cleans up the tests so that they don't break under a
soon-to-be-made change that is more strict about such things.
llvm-svn: 213951
This functionality is currently turned off by default.
Part of the motivation for introducing scoped-noalias metadata is to enable the
preservation of noalias parameter attribute information after inlining.
Sometimes this can be inferred from the code in the caller after inlining, but
often we simply lose valuable information.
The overall process is fairly simple:
1. Create a new unique scope domain.
2. For each (used) noalias parameter, create a new alias scope.
3. For each pointer, collect the underlying objects. Add a noalias scope for
each noalias parameter from which we're not derived (and which has not been
captured prior to that point).
4. Add an alias.scope for each noalias parameter from which we might be
derived (or which has been captured before that point).
Note that the capture checks apply only if one of the underlying objects is not
an identified function-local object.
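A minimal metadata sketch of the result (node numbering hypothetical, syntax
abbreviated) for noalias parameters %a and %b:

  !0 = !{!0}        ; the new scope domain
  !1 = !{!1, !0}    ; scope for %a
  !2 = !{!2, !0}    ; scope for %b
  ; an access derived only from %a then carries
  ; !alias.scope !{!1} and !noalias !{!2}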
llvm-svn: 213949
While fixing the noalias parameter -> metadata conversion process
that will take place during inlining (which will be committed soon, but not
turned on by default), I have come to realize that the semantics provided by
yesterday's commit are not really what we want. Here's why:
void foo(int * noalias a, int * noalias b, int * noalias c, bool x) {
  int *q = x ? a : b;
  *c = *q;
}
Generically, we know that *c does not alias with *a and with *b (so there is an
'and' in what we know we're not), and we know that *q might be derived from *a
or from *b (so there is an 'or' in what we know that we are). So we do not want
the current semantics, where any noalias scope matching any alias.scope
causes a NoAlias return. What we want to know is that the noalias scopes form a
superset of the alias.scope list (meaning that all the things we know we're not
is a superset of all of things the other instruction might be).
Making that change, however, introduces a composability problem: if we inline
once, adding the noalias metadata, and then inline again, adding more, we
append new scopes onto the noalias and alias.scope lists each time. But this
means that we could change what was a NoAlias result previously into a MayAlias
result because we appended an additional scope onto one of the alias.scope
lists. So, instead of giving scopes the ability to have parents (which I had
borrowed from the TBAA implementation, but seems increasingly unlikely to be
useful in practice), I've given them domains. The subset/superset condition now
applies within each domain independently, and we only need it to hold in one
domain. Each time we inline, we add the new scopes in a new scope domain, and
everything now composes nicely. In addition, this simplifies the
implementation.
llvm-svn: 213948
Add a -verify-use-list-order pass, which shuffles use-list order, writes
to bitcode, reads back, and verifies that the (shuffled) order matches.
- The utility functions live in lib/IR/UseListOrder.cpp.
- Moved (and renamed) the command-line option to enable writing
use-lists, so that this pass can return early if the use-list orders
aren't being serialized.
It's not clear that this pass is the right direction long-term (perhaps
a separate tool instead?), but short-term it's a great way to test the
use-list order prototype. I've added an XFAIL-ed testcase that I'm
hoping to get working pretty quickly.
This is part of PR5680.
llvm-svn: 213945
assembly instructions.
This is necessary to ensure the ARM assembler switches to Thumb mode before it
starts assembling the file level inline assembly instructions at the beginning
of a .s file.
<rdar://problem/17757232>
llvm-svn: 213924
Because the PowerPC vmrgh* and vmrgl* instructions have a built-in
big-endian bias, it is necessary to swap their inputs in little-endian
mode when using them to implement a vector shuffle. This was
previously missed in the vector LE implementation.
There was already logic to distinguish between unary and "normal"
vmrg* vector shuffles, so this patch extends that logic to use a third
option: "swapped" vmrg* vector shuffles that are used for little
endian in place of the "normal" ones.
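For reference, a hedged sketch of a merge-high shuffle of this kind (a
vmrghw-style mask, in big-endian element order):

define <4 x i32> @demo(<4 x i32> %a, <4 x i32> %b) {
  %m = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
  ret <4 x i32> %m
}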
I've updated the vec-shuffle-le.ll test to check for the expected
register ordering on the generated instructions.
This bug was discovered when testing the LE and ELFv2 patches for
safety if they were backported to 3.4. A different vectorization
decision was made in 3.4 than on mainline trunk, and that exposed the
problem. I've verified this fix takes care of that issue.
llvm-svn: 213915
The -print-file-name option in llvm-nm is supposed to precede each symbol
with the object file it came from. While code for parsing this option and
its aliases existed, there was no code to implement it.
llvm-svn: 213906
This tool's job is to dump the vtables inside object files. It is
currently limited to MS ABI vf- and vb-tables but it will eventually
support Itanium-style v-tables as well.
Differential Revision: http://reviews.llvm.org/D4584
llvm-svn: 213903
hint) the loop unroller replaces the llvm.loop.unroll.count metadata with
llvm.loop.unroll.disable metadata to prevent any subsequent unrolling
passes from unrolling more than the hint indicates. This patch fixes
an issue where loop unrolling could also be disabled for other loops that
share the same llvm.loop metadata.
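A hedged sketch of the hazard (metadata syntax abbreviated): if the latches of
two loops both reference !0 via !llvm.loop, rewriting !0 in place after
unrolling one loop would wrongly disable unrolling of the other.

  !0 = !{!0, !1}
  !1 = !{!"llvm.loop.unroll.count", i32 4}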
llvm-svn: 213900