Commit Graph

25524 Commits

Philip Reames 87c2b605f5 Explicitly report runtime stack realignment in StackMap section
This change adds code to explicitly mark a function which requires runtime stack realignment as not having a fixed frame size in the StackMap section. As it happens, this is not actually a functional change. The size that would be reported without the check is also "-1", but as far as I can tell, that's an accident. The code change makes this explicit.

Note: There's a separate bug in handling of stackmaps and patchpoints in functions which need dynamic frame realignment. The current code assumes that offsets can be calculated from RBP, but realigned frames must use RSP. (There's a variable gap between RBP and the spill slots.) This change set does not address that issue.

Reviewers: atrick, ributzka

Differential Revision: http://reviews.llvm.org/D4572

llvm-svn: 214534
2014-08-01 18:26:27 +00:00
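
An illustrative sketch of the check described above (hypothetical names, not the actual StackMaps emitter code): when the frame needs runtime realignment there is no fixed size to report, so the -1 sentinel is returned deliberately rather than by accident.

  #include <cstdint>

  // Hypothetical helper mirroring the idea above; not the real LLVM interface.
  uint64_t recordedFrameSize(bool NeedsStackRealignment, uint64_t StaticFrameSize) {
    // A runtime-realigned frame has no fixed size relative to the entry SP,
    // so explicitly report the "unknown" sentinel (-1, i.e. UINT64_MAX).
    if (NeedsStackRealignment)
      return UINT64_MAX;
    return StaticFrameSize;
  }
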
Juergen Ributzka 4c018a12a3 [FastISel][ARM] Do not emit stores for undef arguments.
This is a followup patch for r214366, which added the same behavior to the
AArch64 and X86 FastISel code. This fix reproduces the already existing
behavior of SelectionDAG in FastISel.

llvm-svn: 214531
2014-08-01 18:04:14 +00:00
Matt Arsenault 06bd3933ba R600: Cleanup test
Remove -CHECKs, use multiple prefixes, name values,
also test the @llvm.fabs version

llvm-svn: 214525
2014-08-01 17:00:29 +00:00
Chad Rosier 4d71a4e2c6 [AArch64] Fix test from r214518 in an attempt to appease buildbots.
llvm-svn: 214521
2014-08-01 15:30:41 +00:00
Chad Rosier 579c02c9a5 [AArch64] Generate tbz/tbnz when comparing against zero.
The tbz/tbnz instructions check the sign bit, allowing us to convert

op w1, w1, w10
cmp w1, #0
b.lt .LBB0_0

to

op w1, w1, w10
tbnz w1, #31, .LBB0_0

Differential Revision: http://reviews.llvm.org/D4440

llvm-svn: 214518
2014-08-01 14:48:56 +00:00
Tim Northover 4bd286ab53 llvm-objdump: implement printing for MachO __compact_unwind info.
llvm-svn: 214509
2014-08-01 13:07:19 +00:00
James Molloy 137ce60ecf Allow only disassembling of M-class MSR masks that the assembler knows how to assemble back.
Note: The current code in DecodeMSRMask() rejects the unpredictable A/R MSR mask '0000' with Fail. The code in the patch follows this style and rejects unpredictable M-class MSR masks also with Fail (instead of SoftFail). If SoftFail is preferred in this case then additional changes to ARMInstPrinter (to print non-symbolic masks) and ARMAsmParser (to parse non-symbolic masks) will be needed.

Patch by Petr Pavlu!

llvm-svn: 214505
2014-08-01 12:42:11 +00:00
Tilmann Scheller 7cc0ed48f0 [ARM] Make the assembler reject unpredictable pre/post-indexed ARM LDRB/LDRSB instructions.
The ARM ARM prohibits LDRB/LDRSB instructions with writeback into the destination register. With this commit this constraint is now enforced and we stop assembling LDRB/LDRSB instructions with unpredictable behavior.

llvm-svn: 214500
2014-08-01 12:08:04 +00:00
Tilmann Scheller 8ff079c16b [ARM] Make the assembler reject unpredictable pre/post-indexed ARM LDRH/LDRSH instructions.
The ARM ARM prohibits LDRH/LDRSH instructions with writeback into the destination register. With this commit this constraint is now enforced and we stop assembling LDRH/LDRSH instructions with unpredictable behavior.

llvm-svn: 214499
2014-08-01 11:33:47 +00:00
Tilmann Scheller 8ba74305da [ARM] Make the assembler reject unpredictable pre/post-indexed ARM LDR instructions.
The ARM ARM prohibits LDR instructions with writeback into the destination register. With this commit this constraint is now enforced and we stop assembling LDR instructions with unpredictable behavior.

llvm-svn: 214498
2014-08-01 11:08:51 +00:00
Erik Eckstein c80e1dc081 SLPVectorizer: improved scheduling algorithm.
llvm-svn: 214494
2014-08-01 09:20:42 +00:00
Daniel Sanders 2b553d488f [mips][PR19612] Fix va_arg for big-endian mode.
Summary:
Big-endian mode was not correctly adjusting the offset for types smaller
than an ABI slot.

Fixes PR19612

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: sstankovic, llvm-commits

Differential Revision: http://reviews.llvm.org/D4556

llvm-svn: 214493
2014-08-01 09:17:39 +00:00
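
A sketch of the kind of adjustment involved (illustrative names only, assuming an 8-byte ABI slot as on the N64 ABI; not the actual Mips lowering code):

  #include <cstdint>

  // On a big-endian target a value narrower than an ABI slot occupies the
  // low-order bytes of the slot, which sit at the highest addresses, so the
  // load offset must skip the unused leading bytes; on little-endian the
  // value starts at the beginning of the slot.
  uint64_t vaArgLoadOffset(uint64_t SlotOffset, uint64_t SlotSize,
                           uint64_t ArgSize, bool IsBigEndian) {
    if (IsBigEndian && ArgSize < SlotSize)
      return SlotOffset + (SlotSize - ArgSize);
    return SlotOffset;
  }
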
Suyog Sarda 56c9a87035 This patch implements transform for pattern "(A & ~B) ^ (~A) -> ~(A & B)".
Differential Revision: http://reviews.llvm.org/D4653

llvm-svn: 214479
2014-08-01 05:07:20 +00:00
Suyog Sarda 1c6c2f69f7 This patch implements transform for pattern "(A | B) & ((~A) ^ B) -> (A & B)".
Differential Revision: http://reviews.llvm.org/D4628

llvm-svn: 214478
2014-08-01 04:59:26 +00:00
Suyog Sarda 52324c82cc This patch implements transform for pattern "( A & (~B)) | (A ^ B) -> (A ^ B)"
Differential Revision: http://reviews.llvm.org/D4652

llvm-svn: 214477
2014-08-01 04:50:31 +00:00
Suyog Sarda 16d646594e This patch implements transform for pattern "(A & B) | ((~A) ^ B) -> (~A ^ B)".
Patch Credit to Ankit Jain !

Differential Revision: http://reviews.llvm.org/D4655

llvm-svn: 214476
2014-08-01 04:41:43 +00:00
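
The four identities above (r214476-r214479) can be confirmed exhaustively for 8-bit values with a small standalone program; this is an editorial illustration, not part of the LLVM tree.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned A = 0; A < 256; ++A)
      for (unsigned B = 0; B < 256; ++B) {
        uint8_t a = (uint8_t)A, b = (uint8_t)B;
        assert(uint8_t((a & ~b) ^ ~a) == uint8_t(~(a & b)));    // r214479
        assert(uint8_t((a | b) & (~a ^ b)) == uint8_t(a & b));  // r214478
        assert(uint8_t((a & ~b) | (a ^ b)) == uint8_t(a ^ b));  // r214477
        assert(uint8_t((a & b) | (~a ^ b)) == uint8_t(~a ^ b)); // r214476
      }
    return 0;
  }
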
Juergen Ributzka 82ecc7ff2a [FastISel][AArch64] Fix the immediate versions of the {s|u}{add|sub}.with.overflow intrinsics.
ADDS and SUBS cannot encode negative immediates or immediates larger than 12 bits.
This fix checks if the immediate version can be used under these constraints and
if we can convert ADDS to SUBS or vice versa to support negative immediates.

Also update the test cases to test the immediate versions.

llvm-svn: 214470
2014-08-01 01:25:55 +00:00
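
An illustrative sketch of the legality check (hypothetical helper and names, not the actual AArch64 FastISel code):

  #include <cstdint>

  struct ImmLowering {
    bool UseSubs;   // flip ADDS <-> SUBS to handle a negative immediate
    uint64_t Imm;   // magnitude actually encoded
    bool Legal;     // fits the unsigned 12-bit immediate field (ignoring LSL #12)
  };

  ImmLowering classifyAddImmediate(int64_t Imm) {
    bool UseSubs = Imm < 0;                       // ADDS x, -imm  ==>  SUBS x, imm
    uint64_t Mag = UseSubs ? 0 - (uint64_t)Imm : (uint64_t)Imm;
    return {UseSubs, Mag, Mag <= 0xFFF};
  }
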
Hal Finkel 3604bf7fe7 [PowerPC] Recognize consecutive memory accesses from intrinsics
When generating unaligned vector loads, we need to search for other loads or
stores nearby offset by one vector width. If we find one, then we know that we
can safely generate another aligned load at that address. Otherwise, we must
generate the next load using an offset of the vector width minus one byte (so
we don't read off the end of the allocation if the base unaligned address
happened to be aligned at runtime). We had previously done this using only
other vector loads and stores, but did not consider the PowerPC-specific vector
load/store intrinsics. Now we'll also consider vector intrinsics. By itself,
this change is a feature enhancement, but is a necessary step toward fixing the
underlying problem behind PR19991.

llvm-svn: 214469
2014-08-01 01:02:01 +00:00
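
A sketch of the offset choice described above (illustrative helper only; VecWidth is the vector size in bytes):

  #include <cstdint>

  // If a nearby access proves the address one vector width past the base is
  // dereferenceable, load exactly there; otherwise back off by one byte so an
  // "unaligned" base that happens to be aligned at run time cannot read past
  // the end of the allocation.
  uint64_t secondLoadOffset(bool FoundNearbyAccess, uint64_t VecWidth) {
    return FoundNearbyAccess ? VecWidth : VecWidth - 1;
  }
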
Tom Stellard b4a313a76f R600/SI: Do abs/neg folding with ComplexPatterns
Abs/neg folding has moved out of foldOperands and into the instruction
selection phase using complex patterns.  As a consequence of this
change, we now prefer to select the 64-bit encoding for most
instructions and the modifier operands have been dropped from
integer VOP3 instructions.

llvm-svn: 214467
2014-08-01 00:32:39 +00:00
Tom Stellard 6407e1e632 R600/SI: Fold immediates when shrinking instructions
This will prevent us from using extra MOV instructions once we prefer
selecting 64-bit instructions.

llvm-svn: 214464
2014-08-01 00:32:33 +00:00
Tom Stellard 86d12ebdbd R600/SI: Fix incorrect commute operation in shrink instructions pass
We were commuting the instruction by still shrinking it using the
original opcode.

NOTE: This is a candidate for the 3.5 branch.
llvm-svn: 214463
2014-08-01 00:32:28 +00:00
Kevin Enderby 0d928a142b Add support for the X86 Software Guard Extensions (SGX) instructions in the assembler.
This allows assembling the two new instructions, encls and enclu, for the
SKX processor model.

Note the diffs are bigger than one might think, but to fit the new
MRM_CF and MRM_D7 forms in the right places, things had to be renumbered
and shuffled down, causing a bit more churn in the diffs.

rdar://16228228

llvm-svn: 214460
2014-07-31 23:57:38 +00:00
Reid Kleckner b7e2f6015a X86 MC: Don't crash on empty memory operand parens
Instead, create an absolute memory operand.

Fixes PR20504.

llvm-svn: 214457
2014-07-31 23:26:35 +00:00
Reid Kleckner 0c5da97dd0 X86 MC: Reject invalid segment registers before a memory operand colon
Previously we would execute unreachable during object emission.

llvm-svn: 214456
2014-07-31 23:03:22 +00:00
Jan Vesely 3047950964 R600: Modernize work item intrinsics test
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Matt Arsenault <Matthew.Arsenault@amd.com>
llvm-svn: 214451
2014-07-31 22:11:03 +00:00
Tyler Nowicki b5a65395cc Improve the remark generated for -Rpass-missed.
The current remark is ambiguous and makes it sound like explicitly specifying vectorization will allow the loop to be vectorized. This is not the case. The improved remark directs the user to -Rpass-analysis=loop-vectorize to determine the cause of the pass-miss.

Reviewed by Arnold Schwaighofer

llvm-svn: 214445
2014-07-31 21:22:22 +00:00
Tyler Nowicki 9fe497fcac Improve the remark generated when a variable that is used outside the loop is not a reduction or induction variable.
Reviewed by Arnold Schwaighofer

llvm-svn: 214440
2014-07-31 21:02:40 +00:00
Will Schmidt 44ff8f06ec Disable IsSub subregister assert. pr18663.
This is a follow-up to the activity in the bug at
http://llvm.org/bugs/show_bug.cgi?id=18663 .  The underlying issue has
to do with how the KILL pseudo-instruction is handled.  I defer to
Hal/Jakob/Uli for additional details and background.

This will disable the (bad?) assert, add an associated fixme comment,
and add a pair of tests.

The code change and the pr18663-2.ll test are copied from the referenced
bug.  That test does not immediately fail in my environment, but I have
added the pr18663.ll test which does.

(Comment from Hal)
to provide everyone else with some context, this assert was not bad when
it was written. At that time, we only generated KILL pseudo instructions
around subregister copies. This logic, unfortunately, had its own problems.
In r199797, the relevant logic in MachineCopyPropagation was replaced to
generate KILLs for other kinds of copies too. This change in semantics broke
this now-problematic assumption in AggressiveAntiDepBreaker. The
AggressiveAntiDepBreaker really needs a proper cleanup to deal with the
change, but removing the assert (which just allows the function to return
false) is a safe conservative behavior, and should do for the time being.

llvm-svn: 214429
2014-07-31 19:50:53 +00:00
Hal Finkel 36eff0f854 Fix ScalarEvolutionExpander when creating a PHI in a block with duplicate predecessors
It seems that when I fixed this, almost exactly a year ago, I did not quite do
it correctly. When we have duplicate block predecessors, we can indeed not have
different incoming values for the same block, but we *must* have duplicate
entries. So, instead of skipping the duplicates, we explicitly add the
duplicate incoming values.

Fixes PR20442.

llvm-svn: 214423
2014-07-31 19:13:38 +00:00
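
A minimal sketch of the rule, using plain strings for blocks (illustrative only, not the real ScalarEvolutionExpander code): the PHI gets one entry per predecessor edge, so a repeated predecessor contributes repeated, identical entries instead of being skipped.

  #include <string>
  #include <utility>
  #include <vector>

  using PhiEntry = std::pair<std::string, int>;  // (incoming block, incoming value)

  std::vector<PhiEntry> buildPhiEntries(const std::vector<std::string> &Preds,
                                        int IncomingValue) {
    std::vector<PhiEntry> Entries;
    for (const std::string &Pred : Preds)
      Entries.emplace_back(Pred, IncomingValue);  // keep duplicate blocks
    return Entries;
  }
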
Duncan P. N. Exon Smith 852e00e3d1 verify-uselistorder: Change the default -num-shuffles=5
Change the default for `-num-shuffles` to 5 and better document the
algorithm in the header docs of `verify-uselistorder`.

llvm-svn: 214419
2014-07-31 18:46:24 +00:00
Duncan P. N. Exon Smith ab6adeb8a1 UseListOrder: Handle self-users
Correctly sort self-users (such as PHI nodes).  I added a targeted test
in `test/Bitcode/use-list-order.ll` and the final missing RUN line to
tests in `test/Assembly`.

This is part of PR5680.

llvm-svn: 214417
2014-07-31 18:33:12 +00:00
Evgeniy Stepanov 5997feb7dc [msan] Fix handling of array types.
Switch array type shadow from a single integer to
an array of integers (i.e. make it per-element).
This simplifies instrumentation of extractvalue and fixes PR20493.

llvm-svn: 214398
2014-07-31 11:02:27 +00:00
Evgeniy Stepanov 77ad86681f [asan] Support x86 REP MOVS asm instrumentation.
Patch by Yuri Gorshenin.

llvm-svn: 214395
2014-07-31 09:11:04 +00:00
Juergen Ributzka c537bd2da4 [FastISel][AArch64] Add basic bitcast support for conversion between float and int.
Fixes <rdar://problem/17867078>.

llvm-svn: 214389
2014-07-31 06:25:37 +00:00
Juergen Ributzka 130e77e431 [FastISel][AArch64] Add sqrt intrinsic support.
Fixes <rdar://problem/17867067>.

llvm-svn: 214388
2014-07-31 06:25:33 +00:00
David Majnemer a92687d636 InstCombine: Correctly propagate NSW/NUW for x-(-A) -> x+A
We can only propagate the nsw bits if both subtraction instructions are
marked with the appropriate bit.

N.B.  We only propagate the nsw bit in InstCombine because the nuw case
is already handled in InstSimplify.

This fixes PR20189.

llvm-svn: 214385
2014-07-31 04:49:29 +00:00
David Majnemer cd4fbcd1bb InstSimplify: Simplify (X - (0 - Y)) if the second sub is NUW
If the NUW bit is set for 0 - Y, we know that all values for Y other
than 0 would produce a poison value.  This allows us to replace (0 - Y)
with 0 in the expression (X - (0 - Y)) which will ultimately leave us
with X.

This partially fixes PR20189.

llvm-svn: 214384
2014-07-31 04:49:18 +00:00
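
A standalone illustration of the reasoning (8-bit values; editorial, not LLVM code): an unsigned subtraction a - b wraps exactly when b > a, so "0 - Y" can only carry nuw when Y == 0, and then X - (0 - Y) is simply X.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned Y = 0; Y < 256; ++Y) {
      bool Wraps = (Y > 0);                    // 0 - Y underflows for every nonzero Y
      assert(Wraps == (uint8_t(0 - Y) != 0));  // so nuw on "0 - Y" forces Y == 0 ...
    }
    for (unsigned X = 0; X < 256; ++X)
      assert(uint8_t(X - uint8_t(0 - 0u)) == uint8_t(X));  // ... and then X - (0 - Y) == X
    return 0;
  }
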
Juergen Ributzka a80dd08b56 [FastISel][AArch64] Update and enable patchpoint and stackmap intrinsic tests for FastISel.
This commit updates the existing SelectionDAG tests for the stackmap and patchpoint
intrinsics and enables FastISel testing. It also splits up the tests into separate
files, due to different codegen between SelectionDAG and FastISel.

llvm-svn: 214382
2014-07-31 04:10:43 +00:00
Juergen Ributzka 052e6c289b [FastISel][AArch64] Add MachO large code model support for function calls.
Currently the large code model for MachO uses the GOT to make function calls.
Emit the required adrp and ldr instructions to load the address from the GOT.

Related to <rdar://problem/17733076>.

llvm-svn: 214381
2014-07-31 04:10:40 +00:00
Duncan P. N. Exon Smith 9177867b24 UseListOrder: Don't give constant IDs to GlobalValues
Since initializers of GlobalValues are being assigned IDs before
GlobalValues themselves, explicitly exclude GlobalValues from the
constant pool.  Added targeted test in `test/Bitcode/use-list-order.ll`
and added two more RUN lines in `test/Assembly`.

This is part of PR5680.

llvm-svn: 214368
2014-07-31 00:13:28 +00:00
Juergen Ributzka e8514fc1f7 [FastISel] Fix the patchpoint intrinsic lowering in FastISel for large target addresses.
This fixes a mistake where I accidentally dropped the upper 32 bits of a
64bit pointer during FastISel lowering of the patchpoint intrinsic.

llvm-svn: 214367
2014-07-31 00:11:16 +00:00
Juergen Ributzka 39032673da [FastISel][AArch64 and X86] Don't emit stores for UNDEF arguments during function call lowering.
UNDEF arguments are not meant to be touched, especially for the webkit_js
calling convention. This fix reproduces the already existing behavior of
SelectionDAG in FastISel.

llvm-svn: 214366
2014-07-31 00:11:11 +00:00
Duncan P. N. Exon Smith 5faa6225ad verify-uselistorder: Add RUN lines to cases in test/Assembly
Add RUN line for `verify-uselistorder` to every test in `test/Assembly`,
unless it's a negative check (assembler rejects it) or verification
fails.

There are three files that verification fails on (so I've left out the
RUN lines):

  - 2002-08-22-DominanceProblem.ll
  - ConstantExprFold.ll
  - ConstantExprFoldCast.ll

This is part of PR5680.

llvm-svn: 214365
2014-07-31 00:10:27 +00:00
Joerg Sonnenberger 9e281bf0fe Add mtpid/mfpid for BookE.
llvm-svn: 214363
2014-07-30 23:59:11 +00:00
Justin Bogner 4ce7f4462a llvm-profdata: Add a test for mismatched numbers of counters
llvm-svn: 214360
2014-07-30 23:36:06 +00:00
Justin Bogner af3001e859 llvm-profdata: Use consistent file suffixes in tests
In some places we've been using different suffixes for the different
file formats involved in instrprof, but in others we've just
ambiguously used .profdata. Update the test files to indicate the
types of file more obviously.

No functional change.

llvm-svn: 214357
2014-07-30 23:02:01 +00:00
Rafael Espindola 464fe024c5 Use "weak alias" instead of "alias weak"
Before this patch we had

@a = weak global ...
but
@b = alias weak ...

The patch changes aliases to look more like global variables.

Looking at some really old code suggests that the reason was that the old
bison based parser had a reduction for alias linkages and another one for
global variable linkages. Putting the alias first avoided the reduce/reduce
conflict.

The days of the old .ll parser are long gone. The new one parses just "linkage"
and a later check is responsible for deciding if a linkage is valid in a
given context.

llvm-svn: 214355
2014-07-30 22:51:54 +00:00
Joerg Sonnenberger c5fe19d062 Refactor TLBIVAX and add tlbsx.
llvm-svn: 214354
2014-07-30 22:51:15 +00:00
Juergen Ributzka 3771fbb2f5 [FastISel][AArch64] Add select folding support for the XALU intrinsics.
This improves the code generation for the XALU intrinsics when the
condition is feeding a select instruction.

This also updates and enables the XALU unit tests for FastISel.

This fixes <rdar://problem/17831117>.

llvm-svn: 214350
2014-07-30 22:04:37 +00:00
Juergen Ributzka a75cb11f14 [FastISel][AArch64] Add support for shift-immediate.
Currently the shift-immediate versions are not supported by tblgen and
hopefully this can be later removed, once the required support has been
added to tblgen.

llvm-svn: 214345
2014-07-30 22:04:22 +00:00
David Majnemer 42af3601c2 InstCombine: Simplify (A ^ B) or/and (A ^ B ^ C)
While we can already transform A | (A ^ B) into A | B, things get bad
once we have (A ^ B) | (A ^ B ^ Cst) because reassociation will morph
this into (A ^ B) | ((A ^ Cst) ^ B).  Our existing patterns fail once
this happens.

To fix this, we add a new pattern which looks through the tree of xor
binary operators to see that, in fact, there exists a redundant xor
operation.

What follows below is a correctness proof of the transform using CVC3.

$ cat t.cvc
A, B, C : BITVECTOR(64);

QUERY BVXOR(A, B) | BVXOR(BVXOR(B, C), A) = BVXOR(A, B) | C;
QUERY BVXOR(BVXOR(A, C), B) | BVXOR(A, B) = BVXOR(A, B) | C;

QUERY BVXOR(A, B) & BVXOR(BVXOR(B, C), A) = BVXOR(A, B) & ~C;
QUERY BVXOR(BVXOR(A, C), B) & BVXOR(A, B) = BVXOR(A, B) & ~C;

$ cvc3 < t.cvc
Valid.
Valid.
Valid.
Valid.

llvm-svn: 214342
2014-07-30 21:26:37 +00:00
Joerg Sonnenberger 680928748b Add rfdi and rfmci from the e500/e500mc ISA.
llvm-svn: 214339
2014-07-30 21:09:03 +00:00
Chad Rosier 78f41b3ca7 SLP Vectorizer: Canonicalize tree operands of commutitive binary operands.
llvm-svn: 214338
2014-07-30 21:07:56 +00:00
Rafael Espindola d07cf400ab SimplifyCFG: Avoid miscompilations due to removed lifetime intrinsics.
The lifetime intrinsics need some work in order to make it clear which
optimizations are or are not valid.

For now dropping this optimization avoids a miscompilation.

Patch by Björn Steinbrink.

llvm-svn: 214336
2014-07-30 21:04:00 +00:00
Joerg Sonnenberger fee94b47ed Add BookE's tlbre, tlbwe and tlbivax instructions.
llvm-svn: 214332
2014-07-30 20:44:04 +00:00
Louis Gerbarg 7d7ab5d1f6 Fix test case introduced in r214322
This patch adds an explicit triple to the test case introduced by r214322. This
should fix build failures that are occurring on bots that are cross-building.

llvm-svn: 214330
2014-07-30 20:26:09 +00:00
Louis Gerbarg 4fc09b36de Retain alignment requirements for load->selects modified by DAGCombine
DAGCombine may choose to rewrite graphs where two loads feed a select into
graphs where a select of two addresses feed a load. While it sanity checks the
loads to make sure they are broadly equivalent it currently just uses the
alignment restriction of the left node. In cases where the right node has
stronger alignment requirements, this may lead to bad codegen, such as generating
an aligned load where an unaligned load is required. This patch makes the
combine generate a load with an alignment that is the same as whichever is more
restrictive of the two alignments.

Tests included.

rdar://17762530

llvm-svn: 214322
2014-07-30 18:24:41 +00:00
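
An illustrative one-liner of the rule (hypothetical helper, not the DAGCombiner code): since the combined load may end up at either address, it can only assume the weaker of the two alignment guarantees.

  #include <algorithm>
  #include <cstdint>

  uint64_t combinedLoadAlignment(uint64_t LeftAlign, uint64_t RightAlign) {
    return std::min(LeftAlign, RightAlign);  // assume only what both loads guarantee
  }
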
Duncan P. N. Exon Smith c69b516056 UseListOrder: Visit global values
When predicting use-list order, we visit functions in reverse order
followed by `GlobalValue`s and write out use-lists at the first
opportunity.  In the reader, this will translate to *after* the last use
has been added.

For this to work, we actually need to descend into `GlobalValue`s.
Added a targeted test in `use-list-order.ll` and `RUN` lines to the
newly passing tests in `test/Bitcode`.

There are two remaining failures in `test/Bitcode`:

  - blockaddress.ll: I haven't thought through how to model the way
    block addresses change the order of use-lists (or how to work around
    it).

  - metadata-2.ll: There's an old-style `@llvm.used` global array here
    that I suspect the .ll parser isn't upgrading properly.  When it
    round-trips through bitcode, the .bc reader *does* upgrade it, so
    the extra variable (`i8* null`) has an extra use, and the shuffle
    vector doesn't match.

    I think the fix is to upgrade old-style global arrays (or reject
    them?) in the .ll parser.

This is part of PR5680.

llvm-svn: 214321
2014-07-30 17:51:09 +00:00
Duncan P. N. Exon Smith 9c29666edc Fix line endings before adding RUN lines, NFC
llvm-svn: 214320
2014-07-30 17:49:00 +00:00
Duncan P. N. Exon Smith a12e023c8a Rename llvm-uselistorder => verify-uselistorder
llvm-svn: 214318
2014-07-30 17:11:27 +00:00
Rafael Espindola 21ab24edfb Add a small test showing when a linkonce_odr symbol can be hidden.
llvm-svn: 214311
2014-07-30 15:31:21 +00:00
Joerg Sonnenberger b97f319922 Add BookE's wrtee and wrteei instructions.
llvm-svn: 214297
2014-07-30 10:32:51 +00:00
Joerg Sonnenberger dda8e784f6 SPRG 0 to 3 are valid outside BookE, so move them to the normal test
file. Add support for accessing SPRG 4 to 7 on BookE.

llvm-svn: 214295
2014-07-30 09:24:37 +00:00
Chandler Carruth 681069d675 Don't manually (and forcibly) run the verifier on the entire module from
the jump instruction table pass. First, the verifier is already built
into all the tools. The test case is adapted to just run llvm-as
demonstrating that we still catch the broken module. Second, the
verifier is *extremely* slow. This was responsible for very significant
compile time regressions.

If you have deployed a Clang binary anywhere from r210280 to this
commit, you really want to re-deploy.

llvm-svn: 214287
2014-07-30 05:44:04 +00:00
Lang Hames 1316365e2c [MCJIT] Fix the ARM BR24 relocation in RuntimeDyldMachO.
We now (1) correctly decode the branch immediate, (2) modify the immediate to
correctly treat it as PC-relative, and (3) properly populate the stub entry.
Previously we had been doing each of these wrong.

<rdar://problem/17750739>

llvm-svn: 214285
2014-07-30 03:35:05 +00:00
Adam Nemet f1a80c1e17 [AVX512] Test that _mm512_set1_* intrinsics generate broadcasts
llvm-svn: 214275
2014-07-30 01:30:51 +00:00
Adam Nemet 9dcc254a47 [AVX512] Add missing CHECK-LABEL
llvm-svn: 214273
2014-07-30 01:30:45 +00:00
Duncan P. N. Exon Smith 3cbca2055a Reapply "UseListOrder: Order GlobalValue uses after initializers"
This reverts commit r214249, reapplying r214242 and r214243, now that
r214270 has fixed the UB.

llvm-svn: 214271
2014-07-30 01:22:16 +00:00
Petar Jovanovic b7c305f091 Add support for scalarizing ctlz_zero_undef
Fix the missing case in ScalarizeVectorResult() that was exposed with
libclcore.bc in Android.

Differential Revision: http://reviews.llvm.org/D4645

llvm-svn: 214266
2014-07-30 00:44:03 +00:00
Joerg Sonnenberger 130765558b Add rfci instruction.
llvm-svn: 214256
2014-07-29 23:45:20 +00:00
Joerg Sonnenberger 2450768251 mbar without argument is equivalent to mbar 0.
llvm-svn: 214250
2014-07-29 23:31:27 +00:00
Duncan P. N. Exon Smith b57aef0030 Revert "UseListOrder: Order GlobalValue uses after initializers"
This reverts commits r214242 and r214243 while I investigate buildbot
failures [1][2][3].  I can't reproduce these failures locally, so if
anyone can see what I've done wrong, I'd appreciate a note.

[1]: http://lab.llvm.org:8011/builders/llvm-hexagon-elf/builds/9840
[2]: http://lab.llvm.org:8011/builders/clang-hexagon-elf/builds/14981
[3]: http://bb.pgr.jp/builders/cmake-llvm-x86_64-linux/builds/15191

llvm-svn: 214249
2014-07-29 23:31:11 +00:00
Joerg Sonnenberger 99ef10f915 Recognize BookE's mbar instruction.
llvm-svn: 214244
2014-07-29 23:16:31 +00:00
Duncan P. N. Exon Smith 508f5b0316 UseListOrder: Additional test coverage for r214242
r214242 was subtle enough it really deserves a targeted test with
comments.  This adds some global variables that trigger the relevant
code path.  Sorry this wasn't committed with the fix.

llvm-svn: 214243
2014-07-29 23:15:49 +00:00
Duncan P. N. Exon Smith 1d501e8f46 UseListOrder: Order GlobalValue uses after initializers
To avoid unnecessary forward references, the reader doesn't process
initializers of `GlobalValue`s until after the constant pool has been
processed, and then in reverse order.  Model this when predicting
use-list order.  This gets two more Bitcode tests passing with
`llvm-uselistorder`.

Part of PR5680.

llvm-svn: 214242
2014-07-29 23:06:14 +00:00
Eli Bendersky 447755be51 Add missing test for r214210.
Thanks dblaikie for reminding me.

llvm-svn: 214239
2014-07-29 22:57:59 +00:00
Joerg Sonnenberger 053566aeab Fix typo in alias: DSIR -> DSISR
llvm-svn: 214238
2014-07-29 22:42:44 +00:00
Justin Bogner cf36a366a8 llvm-profdata: Clean up and reorganize some tests
This moves some tests around to make it clearer what's being tested,
and adds very rudimentary comment syntax to the text input format to
make specifying this kind of test a little bit simpler.

llvm-svn: 214235
2014-07-29 22:29:23 +00:00
Joerg Sonnenberger 9e9623ca64 Support move to/from segment register.
llvm-svn: 214234
2014-07-29 22:21:57 +00:00
Lang Hames 3ce5baefce [MCJIT] XFAIL some RuntimeDyld tests on MIPS - RuntimeDyldChecker isn't properly
endian-aware yet, and this is causing failures when cross-linking on MIPS.

llvm-svn: 214231
2014-07-29 21:48:22 +00:00
Rafael Espindola 6c472e5e14 gold plugin: Fix handling of corrupted bitcode files.
We should still claim them and tell gold about the error.

llvm-svn: 214214
2014-07-29 20:46:19 +00:00
Lang Hames 480763f814 [MCJIT] Make the RuntimeDyldChecker stub_addr builtin use file names rather than
full paths for its first argument.

This allows us to remove the annoying sed lines in the test cases, and write
direct references to file names in stub_addr calls (rather than <filename>
placeholders).

llvm-svn: 214211
2014-07-29 20:40:37 +00:00
Juergen Ributzka 0e913b17d6 [RuntimeDyld][AArch64] Make encode/decodeAddend also work on big-endian hosts.
llvm-svn: 214205
2014-07-29 19:57:15 +00:00
Joerg Sonnenberger b1ccf5623b Add a number of aliases for SPR access.
llvm-svn: 214196
2014-07-29 18:55:43 +00:00
Manman Ren f93ac4bfad [Debug Info] remove DITrivialType and use null to represent unspecified param.
Per feedback on r214111, we are going to use null to represent unspecified
parameter. If the type array is {null}, it means a function that returns void;
If the type array is {null, null}, it means a variadic function that returns
void. In summary if we have more than one element in the type array and the last
element is null, it is a variadic function.

rdar://17628609

llvm-svn: 214189
2014-07-29 18:20:39 +00:00
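
A sketch of the resulting convention (plain pointers stand in for the type-array elements; illustrative only, not the DebugInfo API):

  #include <vector>

  struct DITypeStub;  // opaque stand-in for an element of the type array

  // The first element is the return type; null means void.
  bool returnsVoid(const std::vector<const DITypeStub *> &Types) {
    return !Types.empty() && Types.front() == nullptr;
  }

  // More than one element with a trailing null marks a variadic function.
  bool isVariadic(const std::vector<const DITypeStub *> &Types) {
    return Types.size() > 1 && Types.back() == nullptr;
  }
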
Rafael Espindola 340aae797d Add a test for the mtriple plugin option.
llvm-svn: 214186
2014-07-29 17:27:07 +00:00
Joerg Sonnenberger accbc9426d Add rfi instruction. Based on feedback by Ulrich Weigand.
llvm-svn: 214181
2014-07-29 15:49:09 +00:00
Sasa Stankovic f4a9e3bc28 [mips] Don't use odd-numbered single precision registers for fastcc calling
convention if -mno-odd-spreg is used.

Differential Revision: http://reviews.llvm.org/D4682

llvm-svn: 214180
2014-07-29 14:39:24 +00:00
Ulrich Weigand e09f73716a [PowerPC] Fix ppc64-elf-abi.ll test case on Darwin
Use full -mtriple instead of just -march to ensure Linux ABI
(ELFv1 or ELFv2) is selected.

llvm-svn: 214179
2014-07-29 12:48:14 +00:00
Tim Northover e2239ff3eb CodeGenPrep: fall back to MVT::Other if instruction's type isn't an EVT.
The test being performed is just an approximation anyway, so it really
shouldn't crash when things don't go entirely as expected.

Should fix PR20474.

llvm-svn: 214177
2014-07-29 10:20:22 +00:00
Tim Northover 4e13a61413 ARM: add __aeabi_d2h for truncation on AEABI systems
ARM does actually define the name for this conversion, so we should use it on
"-eabi" platforms.

llvm-svn: 214176
2014-07-29 09:56:45 +00:00
Tim Northover f67bb2079d ARM: fix @llvm.convert.from.fp16 on softfloat targets.
We need to make sure we use the softened version of all appropriate operands in
the libcall, or things go horribly wrong. This may entail actually executing a
1-stage softening.

llvm-svn: 214175
2014-07-29 09:56:38 +00:00
Jiangning Liu cd296378a7 Implement AArch64 TTI interface isAsCheapAsAMove.
llvm-svn: 214159
2014-07-29 02:09:26 +00:00
Duncan P. N. Exon Smith 3f0fc7bca9 Bitcode: Correctly compare a Use against itself
Fix the sort of expected order in the reader to correctly return `false`
when comparing a `Use` against itself.

This was caught by test/Bitcode/binaryIntInstructions.3.2.ll, so I'm
adding a `RUN` line using `llvm-uselistorder` for every test in
`test/Bitcode` that passes.

A few tests still fail, so I'll investigate those next.

This is part of PR5680.

llvm-svn: 214157
2014-07-29 01:13:56 +00:00
Duncan P. N. Exon Smith fee1f5013c Fix line-endings, NFC
A follow-up commit is adding a RUN line to each of these tests, so fix
the line endings first.  This is a whitespace-only change.

llvm-svn: 214156
2014-07-29 01:10:57 +00:00
Duncan P. N. Exon Smith ff3caa5a3e llvm-uselistorder: Add -num-shuffles option
llvm-svn: 214144
2014-07-28 23:25:21 +00:00
Kevin Enderby 49b4f53cad Tweak llvm-nm’s -undefined-only (aka -u) printing for Mach-O files
to just print the symbol name, so it matches Darwin's nm(1) -u option.

llvm-svn: 214143
2014-07-28 23:17:38 +00:00
Manman Ren 4dbffad804 Clean up testing cases.
llvm-svn: 214142
2014-07-28 23:16:05 +00:00
Manman Ren bd1628a595 [Debug Info] unique MDNodes in the enum types of each compile unit.
The enum types array by design contains pointers to MDNodes rather than DIRefs.
Unique them when handling the enum types in DwarfDebug.

rdar://17628609

llvm-svn: 214139
2014-07-28 23:04:20 +00:00
Manman Ren bd3c17cc86 Remove extra ; in testing case.
llvm-svn: 214137
2014-07-28 22:46:46 +00:00
Manman Ren f8a1967c8c [Debug Info] add DISubroutineType and its creation takes DITypeArray.
DITypeArray is an array of DITypeRef, at its creation, we will create
DITypeRef (i.e use the identifier if the type node has an identifier).

This is the last patch to unique the type array of a subroutine type.

rdar://17628609

llvm-svn: 214132
2014-07-28 22:24:06 +00:00
Duncan P. N. Exon Smith 1f66c856b5 Bitcode: Serialize (and recover) use-list order
Predict and serialize use-list order in bitcode.  This makes the option
`-preserve-bc-use-list-order` work *most* of the time, but this is still
experimental.

  - Builds a full value-table up front in the writer, sets up a list of
    use-list orders to write out, and discards the table.  This is a
    simpler first step than determining the order from the various
    overlapping IDs of values on-the-fly.

  - The shuffles stored in the use-list order list have an unnecessarily
    large memory footprint.

  - `blockaddress` expressions cause functions to be materialized
    out-of-order.  For now I've ignored this problem, so use-list orders
    will be wrong for constants used by functions that have block
    addresses taken.  There are a couple of ways to fix this, but I
    don't have a concrete plan yet.

  - When materializing functions lazily, the use-lists for constants
    will not be correct.  This use case is out of scope: what should the
    use-list order be, if it's incomplete?

This is part of PR5680.

llvm-svn: 214125
2014-07-28 21:19:41 +00:00
Lang Hames 1f638440c6 [MCJIT] Remove extraneous parentheses in test case.
llvm-svn: 214117
2014-07-28 21:00:48 +00:00
Rafael Espindola 99298fc36a Test the linker plugin handling of llvm.used.
llvm-svn: 214116
2014-07-28 20:42:29 +00:00
Matt Arsenault 2b252ecf6b R600: Modernize test
llvm-svn: 214108
2014-07-28 18:06:08 +00:00
Matt Arsenault 46645fa102 R600/SI: Implement getOptimalMemOpType
The default guess uses i32. This needs an address space argument
to really do the right thing in all cases.

llvm-svn: 214104
2014-07-28 17:49:26 +00:00
Rafael Espindola 6dca51502a Add tests for the various emit-llvm plugin options.
llvm-svn: 214102
2014-07-28 17:37:25 +00:00
Rafael Espindola 2172f51e4e Test the mcpu option.
llvm-svn: 214087
2014-07-28 14:44:33 +00:00
Robert Khasanov 595683da00 [SKX] Enabling mask logic instructions: encoding, lowering
Instructions: KAND{BWDQ}, KANDN{BWDQ}, KOR{BWDQ}, KXOR{BWDQ}, KXNOR{BWDQ}

Reviewed by Elena Demikhovsky <elena.demikhovsky@intel.com>

llvm-svn: 214081
2014-07-28 13:46:45 +00:00
Ulrich Weigand 085a10c49e [PowerPC] Add testcase forgotten in the 214072 commit.
llvm-svn: 214073
2014-07-28 13:10:25 +00:00
Rafael Espindola 852bad0a95 Start adding some tests for the gold plugin.
These are only used when the 'ld' in the path is gold and the plugin has
been built, but it is already a start to make sure we don't regress features
that cannot be tested with llvm-lto.

llvm-svn: 214058
2014-07-27 23:11:06 +00:00
Saleem Abdulrasool 8988c2a524 ARM: correct handling of features in arch_extension
The subtarget information is the ultimate source of truth for the feature set
that is enabled at this point.  We would previously not propagate the feature
information to the subtarget.  While this worked for the most part (features
would be enabled/disabled as requested), if another operation that changed the
feature bits was encountered (such as a mode switch via a .arm or .thumb
directive), we would end up resetting the behaviour of the architectural
extensions.

Handling this properly requires a slightly more involved approach.  We need
to check if the feature is now being toggled.  If so, only then do we toggle the
features.  In return, we no longer have to calculate the feature bits ourselves.

The test changes are mostly to the diagnosis, which is now more uniform (a nice
side effect!).  Add an additional test to ensure that we handle this case
properly.

Thanks to Nico Weber for alerting me to this issue!

llvm-svn: 214057
2014-07-27 19:07:09 +00:00
Matt Arsenault 6f2a526101 Add alignment value to allowsUnalignedMemoryAccess
Rename to allowsMisalignedMemoryAccess.

On R600, 8 and 16 byte accesses are mostly OK with 4-byte alignment,
and don't need to be split into multiple accesses. Vector loads with
an alignment of the element type are not uncommon in OpenCL code.

llvm-svn: 214055
2014-07-27 17:46:40 +00:00
Tim Northover 2c46beb0d1 AArch64: fix conversion of 'J' inline asm constraints.
'J' represents a negative number suitable for an add/sub alias
instruction, but while preparing it to become an int64_t we were
mangling the sign extension. So "i32 -1" became 0xffffffffLL, for
example.

Should fix one half of PR20456.

llvm-svn: 214052
2014-07-27 07:10:29 +00:00
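
A standalone illustration of the sign-extension pitfall described above (editorial, not the AArch64 backend code):

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t Bits = 0xFFFFFFFFu;                 // bit pattern of the i32 constant -1
    int64_t Mangled = (int64_t)(uint64_t)Bits;   // zero-extended: 0xFFFFFFFFLL
    int64_t Correct = (int64_t)(int32_t)Bits;    // sign-extended: -1
    assert(Mangled == 0xFFFFFFFFLL && Correct == -1);
    return 0;
  }
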
Chandler Carruth 80c5bfd843 [x86] Add a much more powerful framework for combining x86 shuffle
instructions in the legalized DAG, and leverage it to combine long
sequences of instructions to PSHUFB.

Eventually, the other x86-instruction-specific shuffle combines will
probably all be driven out of this routine. But the real motivation is
to detect after we have fully legalized and optimized a shuffle to the
minimal number of x86 instructions whether it is profitable to replace
the chain with a fully generic PSHUFB instruction even though doing so
requires either a load from a constant pool or tying up a register with
the mask.

While the Intel manuals claim it should be used when it replaces 5 or
more instructions (!!!!) my experience is that it is actually very fast
on modern chips, and so I've gone with a much more aggressive model of
replacing any sequence of 3 or more instructions.

I've also taught it to do some basic canonicalization to special-purpose
instructions which have smaller encodings than their generic
counterparts.

There are still quite a few FIXMEs here, and I've not yet implemented
support for lowering blends with PSHUFB (where its power really shines
due to being able to zero out lanes), but this starts implementing real
PSHUFB support even when using the new, fancy shuffle lowering. =]

llvm-svn: 214042
2014-07-27 01:15:58 +00:00
Matt Arsenault 24aa028cfa R600/SI: Fix broken test.
There was no check prefix for the instruction lines.
Match what is emitted though, although I'm pretty sure it is
incorrect.

llvm-svn: 214035
2014-07-26 21:21:42 +00:00
Joey Gouly ec981058aa Fix the failing test 'vector-idiv.ll'.
On Darwin the comment character is ##.

llvm-svn: 214028
2014-07-26 10:58:14 +00:00
Chandler Carruth 411fb407f8 [SDAG] When performing post-legalize DAG combining, run the legalizer
over each node in the worklist prior to combining.

This allows the combiner to produce new nodes which need to go back
through legalization. This is particularly useful when generating
operands to target specific nodes in a post-legalize DAG combine where
the operands are significantly easier to express as pre-legalized
operations. My immediate use case will be PSHUFB formation where we need
to build a constant shuffle mask with a build_vector node.

This also refactors the relevant functionality in the legalizer to
support this, and updates relevant tests. I've spoken to the R600 folks
and these changes look like improvements to them. The avx512 change
needs to be investigated, I suspect there is a disagreement between the
legalizer and the DAG combiner there, but it seems a minor issue so
leaving it to be re-evaluated after this patch.

Differential Revision: http://reviews.llvm.org/D4564

llvm-svn: 214020
2014-07-26 05:49:40 +00:00
NAKAMURA Takumi f2df3f59fb llvm/test/CodeGen/X86/vector-idiv.ll: Fix for -Asserts.
llvm-svn: 214015
2014-07-26 04:47:01 +00:00
Chandler Carruth 5896698e2e [x86] Fix PR20355 (for real). There are many layers to this bug.
The tale starts with r212808 which attempted to fix inversion of the low
and high bits when lowering MUL_LOHI. Sadly, that commit did not include
any positive test cases, and just removed some operations from a test
case where the actual logic being changed isn't fully visible from the
test.

What this commit did was two things. First, it reversed the low and high
results in the formation of the MERGE_VALUES node for the multiple
results. This is entirely correct.

Second it changed the shuffles for extracting the low and high
components from the i64 results of the multiplies to extract them
assuming a big-endian-style encoding of the multiply results. This
second change is wrong. There is no big-endian encoding in x86, the
results of the multiplies are normal v2i64s: when cast to v4i32, the low
i32s are at offsets 0 and 2, and the high i32s are at offsets 1 and 3.

However, the first change wasn't enough to actually fix the bug, which
is (I assume) why the second change was also made. There was another bug
in the MERGE_VALUES formation: we weren't using a VTList, and so were
getting a single result node! When grabbing the *second* result from the
node, we got... well... it could be anything. I think this *appeared* to
invert things, but had to be causing other problems as well.

Fortunately, I fixed the MERGE_VALUES issue in r213931, so we should
have been fine, right? NOOOPE! Because the core bug was never addressed,
the test in vector-idiv failed when I fixed the MERGE_VALUES node.
Because there are essentially no docs for this node, I had to guess at
how to fix it and tried swapping the operands, restoring the order of
the original code before r212808. While this "fixed" the test case (in
that we produced the right instructions), we were still extracting the
wrong elements of the i64s, and thus PR20355 was still broken.

This commit essentially reverts the big-endian-style extraction part of
r212808 and goes back to the original masks which were correct. Now that
the MERGE_VALUES node formation is also correct, everything works. I've
also included a more detailed test from PR20355 to make sure this stays
fixed.

llvm-svn: 214011
2014-07-26 03:46:57 +00:00
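
A standalone illustration of the layout point above (editorial; assumes a little-endian host, as x86 is): viewing two 64-bit products as four 32-bit lanes puts the low halves at lane indices 0 and 2 and the high halves at 1 and 3.

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  int main() {
    uint64_t Wide[2] = {0x1111111100000000ull, 0x3333333322222222ull};
    uint32_t Lanes[4];
    std::memcpy(Lanes, Wide, sizeof(Lanes));
    assert(Lanes[0] == 0x00000000u && Lanes[1] == 0x11111111u);  // element 0: low, high
    assert(Lanes[2] == 0x22222222u && Lanes[3] == 0x33333333u);  // element 1: low, high
    return 0;
  }
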
Chandler Carruth 591c16a967 [x86] Finish switching from CHECK to ALL. This was mistakenly included
in r214007 and then reverted when I backed that (very misguided) patch
out. This recovers the test case cleanup which was good.

llvm-svn: 214010
2014-07-26 03:46:54 +00:00
Chandler Carruth f6406ac5d6 [x86] Revert r214007: Fix PR20355 ...
The clever way to implement signed multiplication with unsigned *is
already implemented* and tested and working correctly. The bug is
somewhere else. Re-investigating.

This will teach me to not scroll far enough to read the code that did
what I thought needed to be done.

llvm-svn: 214009
2014-07-26 02:14:54 +00:00
Chandler Carruth 1bf4d19172 [x86] Fix PR20355 (and dups) by not using unsigned multiplication when
signed multiplication is requested. While there is not a difference in
the *low* half of the result, the *high* half (used specifically to
implement the signed division by these constants) certainly is used. The
test case I've nuked was actively asserting wrong code.

There is a delightful solution to doing signed multiplication even when
we don't have it that Richard Smith has crafted, but I'll add the
machinery back and implement that in a follow-up patch. This at least
restores correctness.

llvm-svn: 214007
2014-07-26 01:52:13 +00:00
Chandler Carruth 80adc64066 [x86] Add coverage for PMUL* instruction testing on SSE2 as well as
SSE4.1.

llvm-svn: 214001
2014-07-26 01:11:10 +00:00
Chandler Carruth 8709cb4a6b [x86] More cleanup for this test -- simplify the command line.
llvm-svn: 213991
2014-07-26 00:21:52 +00:00
Chandler Carruth 6da2d97a32 [x86] FileCheck-ize this test.
llvm-svn: 213988
2014-07-25 23:59:20 +00:00
Hal Finkel f5867a79c5 Canonicalization for @llvm.assume
Adds simple logical canonicalization of assumption intrinsics to instcombine,
currently:
 - invariant(a && b) -> invariant(a); invariant(b)
 - invariant(!(a || b)) -> invariant(!a); invariant(!b)

llvm-svn: 213977
2014-07-25 21:45:17 +00:00
Hal Finkel 930469107d Add @llvm.assume, lowering, and some basic properties
This is the first commit in a series that add an @llvm.assume intrinsic which
can be used to provide the optimizer with a condition it may assume to be true
(when the control flow would hit the intrinsic call). Some basic properties are added here:

 - llvm.invariant(true) is dead.
 - llvm.invariant(false) is unreachable (this directly corresponds to the
   documented behavior of MSVC's __assume(0)), so is llvm.invariant(undef).

The intrinsic is tagged as writing arbitrarily, in order to maintain control
dependencies. BasicAA has been updated, however, to return NoModRef for any
particular location-based query so that we don't unnecessarily block code
motion.

llvm-svn: 213973
2014-07-25 21:13:35 +00:00
Akira Hatanaka ba3af24c25 [stack protector] Add test cases for thumb and thumb2.
<rdar://problem/12475629>

llvm-svn: 213970
2014-07-25 19:47:46 +00:00
Akira Hatanaka e5b6e0d231 [stack protector] Fix a potential security bug in stack protector where the
address of the stack guard was being spilled to the stack.

Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. Register allocator
can now just remat the value when it can't keep it in a register. 

<rdar://problem/12475629>

llvm-svn: 213967
2014-07-25 19:31:34 +00:00
Hal Finkel 7c8ae53506 [PowerPC] Support TLS on PPC32/ELF
Patch by Justin Hibbits!

llvm-svn: 213960
2014-07-25 17:47:22 +00:00
Juergen Ributzka 5d6c43e294 [FastISel][AArch64] Add support for frameaddress intrinsic.
This commit implements the frameaddress intrinsic for the AArch64 architecture
in FastISel.

There were two test cases that pretty much tested the same thing, so I combined
them into a single test case.

Fixes <rdar://problem/17811834>

llvm-svn: 213959
2014-07-25 17:47:14 +00:00
Duncan P. N. Exon Smith 4b4d8ecde1 Move -verify-use-list-order into llvm-uselistorder
Ugh.  Turns out not even transformation passes link in how to read IR.
I sincerely believe the buildbots will finally agree with my system
after this though.  (I don't really understand why all of this has been
working on my system, but not on all the buildbots.)

Create a new tool called llvm-uselistorder to use for verifying use-list
order.  For now, just dump everything from the (now defunct)
-verify-use-list-order pass into the tool.

This might be a better way to test use-list order anyway.

Part of PR5680.

llvm-svn: 213957
2014-07-25 17:13:03 +00:00
David Blaikie 29459ae83c Reapply "DebugInfo: Don't put fission type units in comdat sections."
This recommits r208930, r208933, and r208975 (by reverting r209338) and
reverts r209529 (the FIXME to readd this functionality once the tools
were fixed) now that DWP has been fixed to cope with a single section
for all fission type units.

Original commit message:

"Since type units in the dwo file are handled by a debug aware tool,
they don't need to leverage the ELF comdat grouping to implement
deduplication. Avoid creating all the .group sections for these as a
space optimization."

llvm-svn: 213956
2014-07-25 17:11:58 +00:00
David Blaikie 2f04011435 Recommit r212203: Don't try to construct debug LexicalScopes hierarchy for functions that do not have top level debug information.
Reverted by Eric Christopher (Thanks!) in r212203 after Bob Wilson
reported LTO issues. Duncan Exon Smith and Aditya Nandakumar helped
provide a reduced reproduction, though the failure wasn't too hard to
guess, and even easier with the example to confirm.

The assertion that the subprogram metadata associated with an
llvm::Function matches the scope data referenced by the DbgLocs on the
instructions in that function is not valid under LTO. In LTO, a C++
inline function might exist in multiple CUs and the subprogram metadata
nodes will refer to the same llvm::Function. In this case, depending on
the order of the CUs, the first instance of the subprogram metadata may
not be the one referenced by the instructions in that function and the
assertion will fail.

A test case (test/DebugInfo/cross-cu-linkonce-distinct.ll) is added, the
assertion removed and a comment added to explain this situation.

This was then reverted again in r213581 as it caused PR20367. The root
cause of this was the early exit in LiveDebugVariables meant that
spurious DBG_VALUE intrinsics that referenced dead variables were not
removed, causing an assertion/crash later on. The fix is to have
LiveDebugVariables strip all DBG_VALUE intrinsics in functions without
debug info as they're not needed anyway. Test case added to cover this
situation (that occurs when a debug-having function is inlined into a
nodebug function) in test/DebugInfo/X86/nodebug_with_debug_loc.ll

Original commit message:

If a function isn't actually in a CU's subprogram list in the debug info
metadata, ignore all the DebugLocs and don't try to build scopes, track
variables, etc.

While this is possibly a minor optimization, it's also a correctness fix
for an incoming patch that will add assertions to LexicalScopes and the
debug info verifier to ensure that all scope chains lead to debug info
for the current function.

Fix up a few test cases that had broken/incomplete debug info that could
violate this constraint.

Add a test case where this occurs by design (inlining a
debug-info-having function in an attribute nodebug function - we want
this to work because /if/ the nodebug function is then inlined into a
debug-info-having function, it should be fine (and will work fine - we
just stitch the scopes up as usual), but should the inlining not happen
we need to not assert fail either).

llvm-svn: 213952
2014-07-25 16:10:16 +00:00
David Blaikie 48af9c3527 DebugInfo: Fix up some test cases to have more correct debug info metadata.
* Add CUs to the named CU node
* Add missing DW_TAG_subprogram nodes
* Add llvm::Functions to the DW_TAG_subprogram nodes

This cleans up the tests so that they don't break under a
soon-to-be-made change that is more strict about such things.

llvm-svn: 213951
2014-07-25 16:05:18 +00:00
Hal Finkel ff0bcb60c9 Convert noalias parameter attributes into noalias metadata during inlining
This functionality is currently turned off by default.

Part of the motivation for introducing scoped-noalias metadata is to enable the
preservation of noalias parameter attribute information after inlining.
Sometimes this can be inferred from the code in the caller after inlining, but
often we simply lose valuable information.

The overall process is fairly simple:
 1. Create a new unique scope domain.
 2. For each (used) noalias parameter, create a new alias scope.
 3. For each pointer, collect the underlying objects. Add a noalias scope for
    each noalias parameter from which we're not derived (and has not been
    captured prior to that point).
 4. Add an alias.scope for each noalias parameter from which we might be
    derived (or has been captured before that point).

Note that the capture checks apply only if one of the underlying objects is not
an identified function-local object.

llvm-svn: 213949
2014-07-25 15:50:08 +00:00
Hal Finkel 029cde639c Simplify and improve scoped-noalias metadata semantics
In the process of fixing the noalias parameter -> metadata conversion process
that will take place during inlining (which will be committed soon, but not
turned on by default), I have come to realize that the semantics provided by
yesterday's commit are not really what we want. Here's why:

void foo(noalias a, noalias b, noalias c, bool x) {
  *q = x ? a : b;
  *c = *q;
}

Generically, we know that *c does not alias with *a and with *b (so there is an
'and' in what we know we're not), and we know that *q might be derived from *a
or from *b (so there is an 'or' in what we know that we are). So we do not want
the semantics currently, where any noalias scope matching any alias.scope
causes a NoAlias return. What we want to know is that the noalias scopes form a
superset of the alias.scope list (meaning that all the things we know we're not
is a superset of all of the things the other instruction might be).

Making that change, however, introduces a composability problem. If we inline
once, adding the noalias metadata, and then inline again adding more, we
append new scopes onto the noalias and alias.scope lists each time. But this
means that we could change what was a NoAlias result previously into a MayAlias
result because we appended an additional scope onto one of the alias.scope
lists. So, instead of giving scopes the ability to have parents (which I had
borrowed from the TBAA implementation, but seems increasingly unlikely to be
useful in practice), I've given them domains. The subset/superset condition now
applies within each domain independently, and we only need it to hold in one
domain. Each time we inline, we add the new scopes in a new scope domain, and
everything now composes nicely. In addition, this simplifies the
implementation.

llvm-svn: 213948
2014-07-25 15:50:02 +00:00
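
A minimal sketch of the per-domain query described above, with strings standing in for the scopes of a single domain (illustrative only, not the ScopedNoAliasAA code):

  #include <algorithm>
  #include <set>
  #include <string>

  // NoAlias holds (within one domain) when the first access's noalias scopes
  // form a superset of the second access's alias.scope list: everything the
  // second access might belong to is something the first is known not to be.
  bool noAliasInDomain(const std::set<std::string> &NoAliasScopes,
                       const std::set<std::string> &AliasScopes) {
    return !AliasScopes.empty() &&
           std::includes(NoAliasScopes.begin(), NoAliasScopes.end(),
                         AliasScopes.begin(), AliasScopes.end());
  }
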
Duncan P. N. Exon Smith 6b6fdc992a IPO: Add use-list-order verifier
Add a -verify-use-list-order pass, which shuffles use-list order, writes
to bitcode, reads back, and verifies that the (shuffled) order matches.

  - The utility functions live in lib/IR/UseListOrder.cpp.

  - Moved (and renamed) the command-line option to enable writing
    use-lists, so that this pass can return early if the use-list orders
    aren't being serialized.

It's not clear that this pass is the right direction long-term (perhaps
a separate tool instead?), but short-term it's a great way to test the
use-list order prototype.  I've added an XFAIL-ed testcase that I'm
hoping to get working pretty quickly.

This is part of PR5680.

llvm-svn: 213945
2014-07-25 14:49:26 +00:00
Amara Emerson 115d2df8a4 [ARM] Emit ABI_PCS_R9_use build attribute.
Patch by Ben Foster!

Differential Revision: http://reviews.llvm.org/D4657

llvm-svn: 213944
2014-07-25 14:03:14 +00:00
NAKAMURA Takumi b4553da481 llvm/test/CodeGen/ARM/inlineasm-global.ll: Add explicit triple to appease targeting *-win32.
llvm-svn: 213933
2014-07-25 09:55:01 +00:00
NAKAMURA Takumi dd620a242a llvm/test/CodeGen/ARM/inlineasm-global.ll: Avoid specifing source file on llc.
It sometimes confuses FileCheck. Consider the case that path contains 'stmib'. :)

llvm-svn: 213932
2014-07-25 09:54:49 +00:00
Akira Hatanaka 16e47ff42e [ARM] In thumb mode, emit directive ".code 16" before file level inline
assembly instructions.

This is necessary to ensure ARM assembler switches to Thumb mode before it
starts assembling the file level inline assembly instructions at the beginning
of a .s file.

<rdar://problem/17757232>

llvm-svn: 213924
2014-07-25 05:12:49 +00:00
Lang Hames 98c3c0f38a [X86] Add comments to clarify some non-obvious lines in the stackmap-nops.ll
testcases.

Based on code review from Philip Reames. Thanks Philip!

llvm-svn: 213923
2014-07-25 04:50:08 +00:00
Bill Schmidt c9fa5dd618 [PATCH][PPC64LE] Correct little-endian usage of vmrgh* and vmrgl*.
Because the PowerPC vmrgh* and vmrgl* instructions have a built-in
big-endian bias, it is necessary to swap their inputs in little-endian
mode when using them to implement a vector shuffle.  This was
previously missed in the vector LE implementation.

There was already logic to distinguish between unary and "normal"
vmrg* vector shuffles, so this patch extends that logic to use a third
option:  "swapped" vmrg* vector shuffles that are used for little
endian in place of the "normal" ones.

I've updated the vec-shuffle-le.ll test to check for the expected
register ordering on the generated instructions.

This bug was discovered when testing the LE and ELFv2 patches for
safety if they were backported to 3.4.  A different vectorization
decision was made in 3.4 than on mainline trunk, and that exposed the
problem.  I've verified this fix takes care of that issue.

llvm-svn: 213915
2014-07-25 01:55:55 +00:00
Kevin Enderby 08e1bbd645 Add an implementation for llvm-nm’s -print-file-name option (aka -o and -A).
The -print-file-name option in llvm-nm is to precede each symbol
with the object file it came from.  While code for the parsing of this
option and its aliases existed, there was no code to implement it.

llvm-svn: 213906
2014-07-24 23:31:52 +00:00
David Majnemer ab131e86ce Opportunistically fix the builders
A builder complained that it couldn't find llvm-vtabledump; this is
probably because it wasn't a dependency of the 'test' target.

llvm-svn: 213905
2014-07-24 23:26:54 +00:00
David Majnemer 72ab1a5aee llvm-vtabledump: A vtable dumper
This tool's job is to dump the vtables inside object files.  It is
currently limited to MS ABI vf- and vb-tables but it will eventually
support Itanium-style v-tables as well.

Differential Revision: http://reviews.llvm.org/D4584

llvm-svn: 213903
2014-07-24 23:14:40 +00:00
Mark Heffernan 8ec1474f7f After unrolling a loop with llvm.loop.unroll.count metadata (unroll factor
hint) the loop unroller replaces the llvm.loop.unroll.count metadata with
llvm.loop.unroll.disable metadata to prevent any subsequent unrolling
passes from unrolling more than the hint indicates.  This patch fixes
an issue where loop unrolling could also be disabled for other loops that
share the same llvm.loop metadata.

llvm-svn: 213900
2014-07-24 22:36:40 +00:00
Joerg Sonnenberger b5459e6e22 Don't use 128bit functions on PPC32.
llvm-svn: 213899
2014-07-24 22:20:10 +00:00