Commit Graph

55460 Commits

Nuno Lopes 705141d4df baby steps toward fixing some problems with inbounds GEPs that overflow, as discussed 2 months ago or so.
Make sure we do not emit index computations with NSW flags so that we don't get an undef value if the GEP overflows

llvm-svn: 160589
2012-07-20 23:07:40 +00:00
Nuno Lopes 20ea62527a move the bounds checking pass to the instrumentation folder, where it belongs. I dunno why in the world I dropped it in the Scalar folder in the first place.
No functionality change.

llvm-svn: 160587
2012-07-20 22:39:33 +00:00
Benjamin Kramer 5be8f60126 Remove unused private member variables uncovered by the recent changes to clang's -Wunused-private-field.
llvm-svn: 160583
2012-07-20 22:05:57 +00:00
Jakob Stoklund Olesen e2cfd0d45a Avoid folding loads that are unsafe to move.
LiveRangeEdit::foldAsLoad() can eliminate a register by folding a load
into its only use. Only do that when the load is safe to move, and it
won't extend any live ranges.

This fixes PR13414.

llvm-svn: 160575
2012-07-20 21:29:31 +00:00
Chandler Carruth 1f41bf0c3f Fix a dangling StringRef bug in the auto upgrader. In one case, we reset
CI's name, and then used the StringRef pointing at its old name. I'm
fixing it by storing the name in a std::string, and hoisting the
renaming logic to happen always. This is nicer anyways as it will allow
the upgraded IR to have the same names as the input IR in more cases.

Another bug found by AddressSanitizer. Woot.

llvm-svn: 160572
2012-07-20 21:09:18 +00:00
Jakob Stoklund Olesen f62c07f147 Split loop exiting edges more aggressively.
PHIElimination splits critical edges when it predicts it can resolve
interference and eliminate copies. It doesn't split the edge if the
interference wouldn't be resolved anyway because the phi-use register is
live in the critical edge.

Teach PHIElimination to split loop exiting edges with interference, even
if it wouldn't resolve the interference. This removes the necessary
copies from the loop, which is still an improvement over injecting the
copies into the loop.

The test case demonstrates the improvement. Before:

LBB0_1:
  cmpb  $0, (%rdx)
  leaq  1(%rdx), %rdx
  movl  %esi, %eax
  je  LBB0_1

After:

LBB0_1:
  cmpb  $0, (%rdx)
  leaq  1(%rdx), %rdx
  je  LBB0_1

  movl  %esi, %eax

llvm-svn: 160571
2012-07-20 20:49:53 +00:00
Benjamin Kramer dfaa0f3a81 Try to unbreak the windows build.
llvm-svn: 160567
2012-07-20 19:49:33 +00:00
Daniel Dunbar c8b8c49d6f SourceMgr: Use has_colors() instead of just is_displayed() before trying to use
color.
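
For illustration, a minimal C++ sketch of the kind of guard this change enables; it assumes the usual raw_ostream color API (changeColor/resetColor) and is not part of the commit:

#include "llvm/Support/raw_ostream.h"

// Only emit color escapes when the stream reports that it actually
// supports them, not merely that it is attached to a display.
static void emitColoredError(llvm::raw_ostream &OS, const char *Msg) {
  if (OS.has_colors())
    OS.changeColor(llvm::raw_ostream::RED, /*Bold=*/true);
  OS << "error: ";
  if (OS.has_colors())
    OS.resetColor();
  OS << Msg << "\n";
}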

llvm-svn: 160559
2012-07-20 18:29:44 +00:00
Daniel Dunbar 04b4583c9b raw_ostream: Add a has_colors() method.
llvm-svn: 160558
2012-07-20 18:29:41 +00:00
Daniel Dunbar 712de82154 Process: Add sys::Process::FileDescriptorHasColors().
llvm-svn: 160557
2012-07-20 18:29:38 +00:00
Richard Osborne 0ab2b0df82 Fix assertion in jump threading (PR13405).
GetBestDestForJumpOnUndef() assumes there is at least 1 successor, which isn't
true if the block ends in an indirect branch with no successors. Fix this by
bailing out earlier in this case.
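
Purely as an illustration of the guard described (not the actual patch; the helper name is made up and the header path is modern):

#include "llvm/IR/BasicBlock.h"

// A block whose terminator has no successors (e.g. an indirectbr with an
// empty destination list) offers nothing to thread through, so callers
// should bail out before asking for a "best" destination.
static bool hasAnySuccessor(const llvm::BasicBlock *BB) {
  return BB->getTerminator()->getNumSuccessors() != 0;
}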

llvm-svn: 160546
2012-07-20 10:36:17 +00:00
Kostya Serebryany f02c6069ac [asan] make sure that the crash callbacks do not get merged (Chandler's idea: insert an empty InlineAsm). Change the order in which the new BBs are inserted: the slow path BB is inserted between the old BBs, the crash BB is inserted at the end. Don't create an empty BB (introduced by recent commits). Update the test. The experimental code that does manual crash callback merging will most likely be deleted later.
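
A hedged sketch of the anti-merging trick mentioned above (an empty, side-effecting InlineAsm call emitted next to each crash callback); the helper and its arguments are hypothetical and use current-style LLVM APIs:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"

// The empty asm has side effects, so the backend cannot tail-merge two
// otherwise identical crash-callback call sites into one.
static void emitCrashCall(llvm::IRBuilder<> &IRB, llvm::Function *ReportFn,
                          llvm::Value *Addr) {
  llvm::InlineAsm *EmptyAsm = llvm::InlineAsm::get(
      llvm::FunctionType::get(IRB.getVoidTy(), /*isVarArg=*/false),
      /*AsmString=*/"", /*Constraints=*/"", /*hasSideEffects=*/true);
  IRB.CreateCall(ReportFn, Addr);
  IRB.CreateCall(EmptyAsm, {});
}
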
llvm-svn: 160544
2012-07-20 09:54:50 +00:00
Craig Topper 0b94e46ce3 Don't use implicit register operands to calculate L-bit for AVX instructions. Needed because super reg defs and kills are added as implicit operands on 128-bit instructions. Fixes PR13349. Patch by Jose Fonseca.
llvm-svn: 160543
2012-07-20 07:03:46 +00:00
Nick Lewycky 7707e23429 Revert r160529 due to crashes.
llvm-svn: 160532
2012-07-19 23:59:21 +00:00
Pete Cooper dcf94db677 Fix crash in machine verifier when trying to print the def of a register which has no def
llvm-svn: 160531
2012-07-19 23:40:38 +00:00
Nick Lewycky 0fa6a28141 Don't wipe out global variables that are probably storing pointers to heap
memory. This makes clang play nice with leak checkers.

llvm-svn: 160529
2012-07-19 22:35:28 +00:00
Galina Kistanova 27540f8d8c Reverting r160419.
llvm-svn: 160525
2012-07-19 21:43:55 +00:00
Preston Gurd 8e082688a1 Adds the family codes for the Midview Atom processors so that the
Atom buildbot will auto-detect Atom.

llvm-svn: 160521
2012-07-19 19:05:37 +00:00
Sebastian Pop 221e07e140 default to -mv4 when no version of Hexagon has been specified
This fixes a bunch of make check failures of the form:

Unknown Architecture Version.
UNREACHABLE executed at ../lib/Target/Hexagon/HexagonSubtarget.cpp:60!

llvm-svn: 160518
2012-07-19 18:24:50 +00:00
Nuno Lopes c14776d406 reimplement truncate() to make it optimal.
It is optimal at least up to 7 bits (I've tested all such cases).
This change to truncate() allows a little simplification of the multiplication code,
and it also makes multiplication optimal :)

llvm-svn: 160512
2012-07-19 16:27:45 +00:00
Benjamin Kramer 347d559323 Pull the simple parts of DenseMapInfo<DebugLoc> inline and prune includes.
llvm-svn: 160507
2012-07-19 15:00:34 +00:00
Benjamin Kramer f364a63c3e Replace some explicit compare loops with std::equal.
No functionality change.
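
For illustration only, a tiny C++ sketch of the shape of that rewrite (names are made up):

#include <algorithm>
#include <vector>

static bool sameContents(const std::vector<int> &A, const std::vector<int> &B) {
  if (A.size() != B.size())
    return false;
  // Previously: an explicit element-by-element compare loop.
  return std::equal(A.begin(), A.end(), B.begin());
}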

llvm-svn: 160501
2012-07-19 10:46:05 +00:00
Jush Lu e67e07b901 [arm-fast-isel] Add support for vararg function calls.
llvm-svn: 160500
2012-07-19 09:49:00 +00:00
Alexey Samsonov e16e16add6 DebugInfo library: add support for fetching absolute paths to source files
(instead of basenames) from DWARF. Use this behavior in llvm-dwarfdump tool.

Reviewed by Benjamin Kramer.

llvm-svn: 160496
2012-07-19 07:03:58 +00:00
Galina Kistanova aaf9735951 Fixed a few warnings.
llvm-svn: 160493
2012-07-19 04:50:12 +00:00
Bill Wendling 723444e767 Remove tabs.
llvm-svn: 160483
2012-07-19 00:25:04 +00:00
Bill Wendling 318f03f56f Remove tabs.
llvm-svn: 160479
2012-07-19 00:15:11 +00:00
Bill Wendling ea6397f67b Remove tabs.
llvm-svn: 160477
2012-07-19 00:11:40 +00:00
Bill Wendling 2b07965042 Remove tabs.
llvm-svn: 160476
2012-07-19 00:06:06 +00:00
Bill Wendling d163405df8 Remove tabs.
llvm-svn: 160475
2012-07-19 00:04:14 +00:00
Manman Ren d0a4ee8427 X86: remove redundant cmp against zero.
Updated OptimizeCompare in peephole to remove redundant cmp against zero.
We only remove Compare if CF and OF are not used.

rdar://11855129

llvm-svn: 160454
2012-07-18 21:40:01 +00:00
Preston Gurd f0a48ec8f1 This patch fixes 8 out of 20 unexpected failures in "make check"
when run on an Intel Atom processor. The failures have arisen due
to changes elsewhere in the trunk over the past 8 weeks or so.

These failures were not detected by the Atom buildbot because the
CPU on the Atom buildbot was not being detected as an Atom CPU.
The fix for this problem is in Host.cpp and X86Subtarget.cpp, but
shall remain commented out until the current set of Atom test failures
are fixed.

Patch by Andy Zhang and Tyler Nowicki!

llvm-svn: 160451
2012-07-18 20:49:17 +00:00
Victor Oliveira aa9ccee921 Adding some debug information to PassManager
llvm-svn: 160446
2012-07-18 19:59:29 +00:00
Chad Rosier 848094e3ce Whitespace.
llvm-svn: 160445
2012-07-18 19:35:16 +00:00
Chandler Carruth 985454e0ac Fix a somewhat nasty crasher in PR13378. This crashes inside of
LiveIntervals due to the two-addr pass generating bogus MI code.

The crux of the issue was a loop nesting problem. The intent of the code
which attempts to transform instructions before converting them to
two-addr form is to defer and reprocess any transformed instructions as
the second processing is likely to have more opportunities to coalesce
copies, etc. Unfortunately, there was one section of processing that was
not deferred -- the INSERT_SUBREG rewriting. Due to quirks of how this
rewriting proceeded, not only did it occur early, it removed the bits of
information needed for the deferred processing to correctly generate the
necessary two address form (specifically inserting a copy), but didn't
trigger any immediate assertions and produced what appeared to be
already valid two-address form code. Thus, the assertion only fired much
later in the pipeline.

The fix is to hoist the transformation logic up a layer to where it can
more firmly defer all further processing, and to teach the normal
processing to handle an edge case previously handled as part of the
transformation logic. This edge case (already matched tied register
operands) needs to *not* defer any steps.

As has been brought up repeatedly in the process: wow does this code
need refactoring. I *may* squeeze in some time to at least bring sanity
to this loop... but wow... =]

Thanks to Jakob for helpful hints on the way here, and the review.

llvm-svn: 160443
2012-07-18 18:58:22 +00:00
Andrew Trick a22cdb713b Fix ARMTargetLowering::isLegalAddImmediate to consider thumb encodings.
Based on Evan's suggestion without a commitable test.

llvm-svn: 160441
2012-07-18 18:34:27 +00:00
Andrew Trick bc325168c3 whitespace
llvm-svn: 160440
2012-07-18 18:34:24 +00:00
Nadav Rotem 4c12245b3a The vbroadcast family of instructions has 'fallback patterns' in cases where the
load source operand is used by multiple nodes. The v2i64 broadcast was emulated
by shuffling the two lower i32 elements to the upper two.
We had a bug in the immediate used for the broadcast.
Replacing 0 with 0x44.
0x44 means [01|00|01|00] which corresponds to the correct lanes.

Patch by Michael Kuperstein.
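
As a side note, a small C++ sketch decoding such a pshufd-style immediate (illustrative only, not from the patch): each 2-bit field selects the 32-bit source lane for one result element, so 0x44 yields lanes 0,1,0,1, i.e. the low 64-bit half replicated, whereas 0 would replicate lane 0 four times.

#include <cstdio>

int main() {
  unsigned Imm = 0x44;
  for (unsigned i = 0; i != 4; ++i)
    std::printf("result[%u] <- src[%u]\n", i, (Imm >> (2 * i)) & 0x3);
  return 0;
}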

llvm-svn: 160430
2012-07-18 08:14:48 +00:00
Jack Carter a62ba82825 Mips specific inline asm operand modifier 'M':
Print the high order register of a double word register operand.

In 32 bit mode, a 64 bit double word integer will be represented
by 2 32 bit registers. This modifier causes the high order register
to be used in the asm expression. It is useful if you are using
doubles in assembler and want to keep control of register-to-variable
relationships.

This patch also fixes a related bug in a previous patch:

    case 'D': // Second part of a double word register operand
    case 'L': // Low order register of a double word register operand
    case 'M': // High order register of a double word register operand

I got 'D' and 'M' confused. The second part of a double word operand
will only match 'M' for one of the endiannesses. I had made 'L' and 'D'
the opposite twins when 'L' and 'M' are.
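
By analogy with the 'L' example further down this log, a hedged usage sketch of 'M' (variable names are made up, targeting 32 bit MIPS):

// %M1 asks for the high-order register of the 64-bit operand 1.
static int orWithHighHalf(long long ll_input, int v) {
  int i_result;
  __asm__("or %0, %M1, %2"
          : "=r"(i_result)
          : "r"(ll_input), "r"(v));
  return i_result;
}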

llvm-svn: 160429
2012-07-18 06:41:36 +00:00
Craig Topper 6bf3ed454a Remove tab characters.
llvm-svn: 160425
2012-07-18 04:59:16 +00:00
Craig Topper 8532423268 Fix typo in error message and remove some tab characters.
llvm-svn: 160423
2012-07-18 04:36:35 +00:00
Andrew Trick 0d07dfcd6f indvars: drive-by heuristics fix.
Minor oversight noticed by inspection. Sorry, no unit test.

llvm-svn: 160422
2012-07-18 04:35:13 +00:00
Andrew Trick c08726627c indvars: Linear function test replace should avoid reusing undef.
Fixes PR13371: indvars pass incorrectly substitutes 'undef' values.

I do not like this fix. It's needed until/unless the meaning of undef
changes. It attempts to be complete according to the IR spec, but I
don't have much confidence in the implementation given the difficulty
testing undefined behavior. Worse, this invalidates some of my
hard-fought work on indvars and LSR to optimize pointer induction
variables. It results in benchmark regressions, which I'll track
internally. On x86_64 no LTO I see:

-3% huffbench
-3% 400.perlbench
-8% fhourstones

My only suggestion for recovering is to change the meaning of
undef. If we could trust an arbitrary instruction to produce some
real value that can be manipulated (e.g. incremented) according to
non-undef rules, then this case could be easily handled with SCEV.

llvm-svn: 160421
2012-07-18 04:35:10 +00:00
Craig Topper 01deb5f2df Make the x86 asm parser check for xmm vs ymm for the index register in gather instructions. Also fix Intel syntax for gather instructions to use 'DWORD PTR' or 'QWORD PTR' to match gas.
llvm-svn: 160420
2012-07-18 04:11:12 +00:00
Galina Kistanova 5ac251b81a Fixed a few warnings.
llvm-svn: 160419
2012-07-18 04:06:49 +00:00
Nuno Lopes 2151497dca ignore 'invoke @llvm.donothing', but still keep the edge to the continuation BB
llvm-svn: 160411
2012-07-18 00:07:17 +00:00
Joel Jones b84f7bea09 More replacing of target-dependent intrinsics with target-independent
intrinsics.  The second instruction(s) to be handled are the vector versions 
of count set bits (ctpop).

The changes here are to clang so that it generates a target independent 
vector ctpop when it sees an ARM dependent vector bits set count.  The changes 
in llvm are to match the target independent vector ctpop and in 
VMCore/AutoUpgrade.cpp to update any existing bc files containing ARM 
dependent vector pop counts with target-independent ctpops.  There are also 
changes to an existing test case in llvm for ARM vector count instructions and 
to a test for the bitcode upgrade.

<rdar://problem/11892519>

There is deliberately no test for the change to clang, as so far as I know, no
consensus has been reached regarding how to test neon instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160410
2012-07-18 00:02:16 +00:00
Akira Hatanaka f640f040d1 Clean up Mips16InstrFormats.td and Mips16InstrInfo.td.
Patch by Reed Kotler.

llvm-svn: 160403
2012-07-17 22:55:34 +00:00
Evan Cheng e6a3b03ee0 Back out r160101 and instead implement a dag combine to recover from instcombine transformation.
llvm-svn: 160387
2012-07-17 18:54:11 +00:00
Jakob Stoklund Olesen 0ef031186c Add some trace output to TwoAddressInstructionPass.
llvm-svn: 160380
2012-07-17 17:57:23 +00:00
Benjamin Kramer 7c1598caaa Remove unused variable.
llvm-svn: 160372
2012-07-17 17:00:11 +00:00
Nuno Lopes 216d571af7 simplify getSetSize() per Duncan's comments
llvm-svn: 160368
2012-07-17 15:43:59 +00:00
Alexey Samsonov b604ff2a07 Improve behavior of DebugInfoEntryMinimal::getSubprogramName() introduced in r159512.
To fetch a subprogram name we should not only inspect the DIE for this subprogram, but optionally inspect
its specification, or its abstract origin (even if there is no inlining), or even the specification of an abstract origin.

Reviewed by Benjamin Kramer.

llvm-svn: 160365
2012-07-17 15:28:35 +00:00
Kostya Serebryany 986b8da500 [asan] more code to merge crash callbacks. Doesn't fully work yet, but allows running performance experiments
llvm-svn: 160361
2012-07-17 11:04:12 +00:00
Nadav Rotem 277a40bc0a Fix a crash in the legalization of large vectors.
When truncating a result of a vector that is split we need
to use the result of the split vector, and not re-split the dead node.

llvm-svn: 160357
2012-07-17 09:07:37 +00:00
Evan Cheng 780f9b5f92 Implement r160312 as a target-independent dag combine.
llvm-svn: 160354
2012-07-17 08:31:11 +00:00
Evan Cheng 47d7be9578 Make sure constant bitwidth is <= 64 bits before calling getSExtValue().
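
A minimal sketch of the guard being described, under the assumption that the value is held in an APInt (the helper is hypothetical):

#include "llvm/ADT/APInt.h"
#include <cstdint>

// getSExtValue() asserts when the value needs more than 64 bits, so
// check the width before calling it.
static bool getAsInt64(const llvm::APInt &V, int64_t &Out) {
  if (V.getBitWidth() > 64)
    return false;
  Out = V.getSExtValue();
  return true;
}
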
llvm-svn: 160350
2012-07-17 07:47:50 +00:00
Evan Cheng f579beca6d This is another case where instcombine's demanded bits optimization created
large immediates. Add dag combine logic to recover in case the large
immediate doesn't fit in the cmp immediate operand field.

int foo(unsigned long l) {
  return (l>> 47) == 1;
}

we produce

  %shr.mask = and i64 %l, -140737488355328
  %cmp = icmp eq i64 %shr.mask, 140737488355328
  %conv = zext i1 %cmp to i32
  ret i32 %conv

which codegens to

movq    $0xffff800000000000,%rax
andq    %rdi,%rax
movq    $0x0000800000000000,%rcx
cmpq    %rcx,%rax
sete    %al
movzbl    %al,%eax
ret

TargetLowering::SimplifySetCC would transform
(X & -256) == 256 -> (X >> 8) == 1
if the immediate fails the isLegalICmpImmediate() test. For x86,
that's immediates which are not a signed 32-bit immediate.
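
As a quick sanity check of that rewrite (illustrative only, exhaustive over 16-bit values):

#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t X = 0; X <= 0xFFFF; ++X) {
    bool masked = (X & ~255u) == 256u;   // (X & -256) == 256
    bool shifted = (X >> 8) == 1u;       // (X >> 8) == 1
    assert(masked == shifted);
  }
  return 0;
}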

Based on a patch by Eli Friedman.

PR10328
rdar://9758774

llvm-svn: 160346
2012-07-17 06:53:39 +00:00
Andrew Trick c803706c18 Reapply r160340. LSR: Limit CollectSubexprs.
Speculatively fix crashes by code inspection. Can't reproduce them yet.

llvm-svn: 160344
2012-07-17 05:30:37 +00:00
Andrew Trick e834cb465a Revert "LSR: try not to blow up solving combinatorial problems brute force."
Some unit tests crashed on a different platform.

llvm-svn: 160341
2012-07-17 05:05:21 +00:00
Andrew Trick 7cd6d426b3 LSR: try not to blow up solving combinatorial problems brute force.
This places limits on CollectSubexprs to constrain the number of
reassociation possibilities. It limits the recursion depth and skips
over chains of nested recurrences outside the current loop.

Fixes PR13361. Although underlying SCEV behavior is still potentially bad.

llvm-svn: 160340
2012-07-17 05:00:56 +00:00
Nuno Lopes 482fb19fd5 fix PR13339 (remove the predecessor from the unwind BB when removing an invoke)
llvm-svn: 160325
2012-07-16 22:49:40 +00:00
Nuno Lopes 986cc181b0 teach ConstantRange that zero times X is always zero
llvm-svn: 160317
2012-07-16 20:47:16 +00:00
Evan Cheng 75315b877c For something like
uint32_t hi(uint64_t res)
{
        uint32_t hi = res >> 32;
        return !hi;
}

llvm IR looks like this:
define i32 @hi(i64 %res) nounwind uwtable ssp {
entry:
  %lnot = icmp ult i64 %res, 4294967296
  %lnot.ext = zext i1 %lnot to i32
  ret i32 %lnot.ext
}

The optimizer has optimized away the right shift and truncate but the resulting
constant is too large to fit in the 32-bit immediate field. The resulting x86
code is worse as a result:
        movabsq $4294967296, %rax       ## imm = 0x100000000
        cmpq    %rax, %rdi
        sbbl    %eax, %eax
        andl    $1, %eax

This patch teaches the x86 lowering code to handle ult against a large immediate
with trailing zeros. It will issue a right shift and a truncate followed by
a comparison against a shifted immediate.
        shrq    $32, %rdi
        testl   %edi, %edi
        sete    %al
        movzbl  %al, %eax

It also handles a ugt comparison against a large immediate with trailing bits
set. i.e. X >  0x0ffffffff -> (X >> 32) >= 1

rdar://11866926

llvm-svn: 160312
2012-07-16 19:35:43 +00:00
Nadav Rotem 60f7904db7 Minor cleanup and docs.
llvm-svn: 160311
2012-07-16 18:56:39 +00:00
Nadav Rotem 839a06e9d7 Make ComputeDemandedBits return a deterministic result when computing an AssertZext value.
In the added testcase the constant 55 was behind an AssertZext of type i1, and ComputeDemandedBits
reported that some of the bits were both known to be one and known to be zero.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160305
2012-07-16 18:34:53 +00:00
Tom Stellard 1be1aa84ec Revert "AMDGPU: Add core backend files for R600/SI codegen v6"
This reverts commit 4ea70107c5e51230e9e60f0bf58a0f74aa4885ea.

llvm-svn: 160303
2012-07-16 18:19:53 +00:00
Tom Stellard 95bd0be903 Revert "Build script changes for R600/SI Codegen v6"
This reverts commit e3013202259ed1e006c21817c63cf25d75982721.

llvm-svn: 160301
2012-07-16 18:19:46 +00:00
Tom Stellard 151dc338e4 Revert "Target/AMDGPU/R600KernelParameters.cpp: Fix two includes, <llvm/IRBuilder.h> and <llvm/TypeBuilder.h>"
This reverts commit 0258a6bdd30802f5cc0e8e57c8e768fde2aef590.

llvm-svn: 160299
2012-07-16 18:19:41 +00:00
Tom Stellard 1bd3012505 Revert "Target/AMDGPU: [CMake] Fix dependencies. 1) Add intrinsics_gen. Add AMDGPUCommonTableGen."
This reverts commit ebc934ba32ee71abbb8f0f2eb6a0fbaa613ba0d2.

llvm-svn: 160298
2012-07-16 18:19:40 +00:00
Tom Stellard 781853e11f Revert "Target/AMDGPU/R600KernelParameters.cpp: Don't use "and", "or" as conditional operator..."
This reverts commit 29f28bc14ad5a907f5dc849f004fafeec0aab33a.

llvm-svn: 160297
2012-07-16 18:19:38 +00:00
Tom Stellard 2e007de42d Revert "Target/AMDGPU/AMDILIntrinsicInfo.cpp: Use llvm_unreachable() in nonreturn function, instead of assert(0)."
This reverts commit 4ba4acc1bc2561b944a571edbb6a2dc78e357dfe.

llvm-svn: 160296
2012-07-16 18:19:37 +00:00
Tom Stellard f65e78b2fa Revert "Target/AMDGPU: Fix includes, or msvc build failed."
This reverts commit fef4aa1b16fcf7a472559abbbcf4c1adc9eb5ca6.

llvm-svn: 160295
2012-07-16 18:19:32 +00:00
Nuno Lopes 99504c577c make ConstantRange::getSetSize() properly compute the size of wrapped and full sets.
Make it always return APInts with the same bitwidth for the same ConstantRange bitwidth to simplify clients
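
A hedged C++ sketch of that intent (not the committed code; header path and details may differ):

#include "llvm/IR/ConstantRange.h"

// Report the number of elements of a width-W range as a (W+1)-bit APInt,
// so a full set's size (2^W) is representable and wrapped sets are
// counted modulo 2^W.
static llvm::APInt rangeSize(const llvm::ConstantRange &CR) {
  unsigned W = CR.getBitWidth();
  if (CR.isFullSet())
    return llvm::APInt::getOneBitSet(W + 1, W);       // 2^W
  return (CR.getUpper() - CR.getLower()).zext(W + 1); // handles wrapped sets
}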

llvm-svn: 160294
2012-07-16 18:08:12 +00:00
Chad Rosier 10e8207c9e With r160248 in place this code is no longer needed.
llvm-svn: 160293
2012-07-16 17:42:13 +00:00
Kostya Serebryany c4ce5dfe2d [asan] a bit more refactoring, addressed some of the style comments from chandlerc, partially implemented crash callback merging (under flag)
llvm-svn: 160290
2012-07-16 17:12:07 +00:00
Aaron Ballman ed9b0a9114 MSVC's implementation of isalnum will assert on characters > 255, so we need to use an unsigned char to ensure the integer promotion happens properly. This fixes an assert in debug builds with CodeGen\X86\utf8.ll
llvm-svn: 160286
2012-07-16 16:18:18 +00:00
Kostya Serebryany 874dae6119 [asan] refactor instrumentation to allow merging the crash callbacks (not fully implemented yet, no functionality change except the BB order)
llvm-svn: 160284
2012-07-16 16:15:40 +00:00
NAKAMURA Takumi 96cc5e5bf9 Target/AMDGPU: Fix includes, or msvc build failed.
llvm-svn: 160280
2012-07-16 15:43:50 +00:00
NAKAMURA Takumi dc4261794f Target/AMDGPU/AMDILIntrinsicInfo.cpp: Use llvm_unreachable() in nonreturn function, instead of assert(0).
llvm-svn: 160279
2012-07-16 15:43:09 +00:00
NAKAMURA Takumi 5f5fd8e545 Target/AMDGPU/R600KernelParameters.cpp: Don't use "and", "or" as conditional operator...
llvm-svn: 160278
2012-07-16 15:42:35 +00:00
Jack Carter f649043aa5 Doubleword Shift Left Logical Plus 32
Mips shift instructions DSLL, DSRL and DSRA are transformed into
DSLL32, DSRL32 and DSRA32 respectively if the shift amount is between
32 and 63.

Here is a description of DSLL:

Purpose: Doubleword Shift Left Logical Plus 32
To execute a left-shift of a doubleword by a fixed amount--32 to 63 bits

Description: GPR[rd] <- GPR[rt] << (sa+32)

The 64-bit doubleword contents of GPR rt are shifted left, inserting
 zeros into the emptied bits; the result is placed in
GPR rd. The bit-shift amount in the range 0 to 31 is specified by sa.

This patch implements the direct object output of these instructions.
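
A reference model of the semantics quoted above (illustrative only; sa is the 5-bit field encoded in the instruction, so the effective shift is sa + 32):

#include <cstdint>

static uint64_t dsll32(uint64_t rt, unsigned sa) {
  return rt << ((sa & 31) + 32);
}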

llvm-svn: 160277
2012-07-16 15:14:51 +00:00
NAKAMURA Takumi bb42a5e2cf Target/AMDGPU: [CMake] Fix dependencies. 1) Add intrinsics_gen. Add AMDGPUCommonTableGen.
llvm-svn: 160276
2012-07-16 15:09:11 +00:00
NAKAMURA Takumi 3128d26124 Target/AMDGPU/R600KernelParameters.cpp: Fix two includes, <llvm/IRBuilder.h> and <llvm/TypeBuilder.h>
llvm-svn: 160275
2012-07-16 15:08:47 +00:00
Tom Stellard 812e652b43 Build script changes for R600/SI Codegen v6
llvm-svn: 160272
2012-07-16 14:17:16 +00:00
Tom Stellard bcce80fa95 AMDGPU: Add core backend files for R600/SI codegen v6
llvm-svn: 160270
2012-07-16 14:17:08 +00:00
Kostya Serebryany 4273bb05d1 [asan] initialize asan error callbacks in runOnModule instead of doing that on-demand
llvm-svn: 160269
2012-07-16 14:09:42 +00:00
Nadav Rotem 4968e45b9f Fix a bug in the 3-address conversion of LEA when one of the operands is an
undef virtual register. The problem is that ProcessImplicitDefs removes the
definition of the register and marks all uses as undef. If we lose the undef
marker then we get a register which has no def and is not marked as undef. The
live interval analysis does not collect information for these virtual
registers and we crash in later passes.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160260
2012-07-16 10:52:25 +00:00
Chandler Carruth 8b540ab337 Revert r160254 temporarily.
It turns out that ASan relied on the at-the-end block insertion order to
(purely by happenstance) disable some LLVM optimizations, which in turn
start firing when the ordering is made more "normal". These
optimizations in turn merge many of the instrumentation reporting calls
which breaks the return address based error reporting in ASan.

We're looking at several different options for fixing this.

llvm-svn: 160256
2012-07-16 10:01:02 +00:00
Chandler Carruth 3dd6c81492 Teach AddressSanitizer to create basic blocks in a more natural order.
This is particularly useful to the backend code generators which try to
process things in the incoming function order.

Also, cleanup some uses of IRBuilder to be a bit simpler and more clear.

llvm-svn: 160254
2012-07-16 08:58:53 +00:00
Alexey Samsonov dcc1291d17 This CL changes the function prologue and epilogue emitted on X86 when the stack needs realignment.
It is intended to fix PR11468.

Old prologue and epilogue looked like this:
push %rbp
mov %rsp, %rbp
and $alignment, %rsp
push %r14
push %r15
...
pop %r15
pop %r14
mov %rbp, %rsp
pop %rbp

The problem was to reference the locations of callee-saved registers in exception handling:
locations of callee-saved registers had to be re-calculated to account for the stack alignment operation. It would
take some effort to implement this in LLVM, as currently MachineLocation can only have the form
"Register + Offset". Function prologue and epilogue are now changed to:

push %rbp
mov %rsp, %rbp
push %r14
push %r15
and $alignment, %rsp
...
lea -$size_of_saved_registers(%rbp), %rsp
pop %r15
pop %r14
pop %rbp

Reviewed by Chad Rosier.

llvm-svn: 160248
2012-07-16 06:54:09 +00:00
Chandler Carruth 36e2ecf528 Move llvm/Support/TypeBuilder.h -> llvm/TypeBuilder.h. This completes
the move of *Builder classes into the Core library.

No uses of this builder in Clang or DragonEgg I could find.

If there is a desire to have an IR-building-support library that
contains all of these builders, that can be easily added, but currently
it seems likely that these add no real overhead to VMCore.

llvm-svn: 160243
2012-07-15 23:45:24 +00:00
Chandler Carruth ec7ad6561f Move llvm/Support/MDBuilder.h to llvm/MDBuilder.h, to live with
IRBuilder, DIBuilder, etc.

This is the proper layering as MDBuilder can't be used (or implemented)
without the Core Metadata representation.

Patches to Clang and Dragonegg coming up.

llvm-svn: 160237
2012-07-15 23:26:50 +00:00
Nadav Rotem 3050e07108 Fix a bug in the scalarization of BUILD_VECTOR. BUILD_VECTOR elements may be wider than the output element type. Make sure to trunc them if needed.
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160235
2012-07-15 20:39:08 +00:00
Nadav Rotem eec74c7279 Teach getTargetVShiftNode about TargetConstant nodes.
llvm-svn: 160234
2012-07-15 20:27:43 +00:00
Nadav Rotem ee3552f88d Rename VBROADCASTSDrm into VBROADCASTSDYrm to match the naming convention.
Allow the folding of vbroadcastRR to vbroadcastRM, where the memory operand is a spill slot.

PR12782.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160230
2012-07-15 12:26:30 +00:00
Nadav Rotem a62368c965 Refactor the code that checks that all operands of a node are UNDEFs.
Add a micro-optimization to getNode of CONCAT_VECTORS when both operands are undefs.
Can't find a testcase for this because VECTOR_SHUFFLE already handles undef operands, but Duncan suggested that we add this.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160229
2012-07-15 08:38:23 +00:00
Chandler Carruth db5536f09d Reapply r160194, switching to use LV information for finding local kills.
The notable fix is to look at any dependencies attached to the kill
instruction (or other instructions between MI and the kill) where the
dependencies are specific to the register in question.

The old code implicitly handled this by rejecting the transform if *any*
other uses were found within the block, but after the start point. The
new code directly finds the kill, and has to re-use the existing
dependency scan to check for non-kill uses.

This was caught by self-host, but I found the bug via inspection and use
of absurd assert scaffolding to compute the kills in two ways and
compare them. So I have no useful testcase for this other than
"bootstrap". I'd work harder to reduce a test case if this particular
code were likely to live for a long time.

Thanks to Benjamin Kramer for reviewing the fix itself.

llvm-svn: 160228
2012-07-15 03:29:46 +00:00
Nadav Rotem 9466e81df6 AVX: Fix a bug in getTargetVShiftNode. The shift amount has to be a 128bit vector with the same element type as the input vector.
This is needed because of the patterns we have for the VP[SLL/SRA/SRL][W/D/Q] instructions.

llvm-svn: 160222
2012-07-14 22:26:05 +00:00
Nadav Rotem 018921002e Add a dagcombine optimization to convert concat_vectors of undefs into a single undef.
The unoptimized concat_vectors ISD node prevented the canonicalization of the vector_shuffle node.

llvm-svn: 160221
2012-07-14 21:30:27 +00:00
Jakob Stoklund Olesen 8f324a2cc8 Account for early-clobber reload instructions.
No test case, there are no in-tree targets that require this.

llvm-svn: 160219
2012-07-14 18:45:35 +00:00
Jakob Stoklund Olesen 3d604ab933 Be more verbose when detecting dominance problems.
Catch uses of undefined physregs that haven't been added to basic block
live-in lists. Run the verifier to pinpoint the problem.

Also run the verifier when a virtual register use is not jointly
dominated by defs.

llvm-svn: 160207
2012-07-13 23:39:05 +00:00
Andrew Trick 653513b8dd LSR Fix: check SCEV expression safety before expansion.
All SCEV expressions used by LSR formulae must be safe to
expand, i.e. they may not contain a UDiv unless we can prove a nonzero
denominator.

Fixes PR11356: LSR hoists UDiv.

llvm-svn: 160205
2012-07-13 23:33:10 +00:00
Andrew Trick ee76065b7a IVUsers should only generate SCEV's for values that are safe to speculate.
This allows SCEVExpander to run on the IV expressions.

This codifies an assumption made by LSR to complete the fix for
PR11356, but I haven't been able to generate a separate unit test for
this part. I'm adding it as an extra safety check.

llvm-svn: 160204
2012-07-13 23:33:05 +00:00
Andrew Trick 365e31c36c Factor SCEV traversal code so I can use it elsewhere. No functionality change.
llvm-svn: 160203
2012-07-13 23:33:03 +00:00
Joel Jones 43cb87839c This is one of the first steps toward replacing target-dependent
intrinsics with target-independent intrinsics.  The first instruction(s) to be
handled are the vector versions of count leading zeros (ctlz).

The changes here are to clang so that it generates a target independent 
vector ctlz when it sees an ARM dependent vector ctlz.  The changes in llvm 
are to match the target independent vector ctlz and in VMCore/AutoUpgrade.cpp 
to update any existing bc files containing ARM dependent vector ctlzs with 
target-independent ctlzs.  There are also changes to an existing test case in 
llvm for ARM vector count instructions and a new test for the bitcode upgrade.

<rdar://problem/11831778>

There is deliberately no test for the change to clang, as so far as I know, no
consensus has been reached regarding how to test neon instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160200
2012-07-13 23:25:25 +00:00
Chandler Carruth 9c97cd5672 Revert r160194, which switched to use LV information for finding local
kills.

This is causing miscompiles that I'm working on tracking down.

llvm-svn: 160196
2012-07-13 22:23:32 +00:00
Chandler Carruth 58c470dc68 Use the LiveVariables information to efficiently get local kills. This
removes the largest scaling problem in the test cases from PR13225 when
ASan is switched to insert basic blocks in the natural CFG order.

It may also solve some scaling problems for more normal code with large
numbers of basic blocks and variables.

llvm-svn: 160194
2012-07-13 21:18:38 +00:00
Jakob Stoklund Olesen ed6c0408fa Remove variable_ops from call instructions in most targets.
Call instructions are no longer required to be variadic, and
variable_ops should only be used for instructions that encode a variable
number of arguments, like the ARM stm/ldm instructions.

llvm-svn: 160189
2012-07-13 20:44:29 +00:00
Jakob Stoklund Olesen 6a81d30269 Remove variable_ops from ARM call instructions.
Function argument registers are added to the call SDNode, but
InstrEmitter now knows how to make those operands implicit, and the call
instruction doesn't have to be variadic.

Explicit register operands should only be those that are encoded in the
instruction, implicit register operands are for extra dependencies like
call argument and return values.

llvm-svn: 160188
2012-07-13 20:27:00 +00:00
Jack Carter 5ddcfda8ef The Mips specific relocation R_MIPS_GOT_DISP
is used in cases where global symbols are 
directly represented in the GOT and we use an 
offset into the global offset table.

This patch adds direct object support for R_MIPS_GOT_DISP.

llvm-svn: 160183
2012-07-13 19:15:47 +00:00
Benjamin Kramer abbfe69356 Make helper functions static.
llvm-svn: 160173
2012-07-13 13:25:15 +00:00
Craig Topper b3bac4908e Mark VINSERTI128rm as MayLoad=1. Fixes PR13348.
llvm-svn: 160162
2012-07-13 05:46:28 +00:00
Galina Kistanova fc25990582 Fixed a few warnings; trimmed empty lines.
llvm-svn: 160159
2012-07-13 01:25:27 +00:00
Jim Grosbach 1af8c8060c Provide function name in 'Cannot select' fatal error.
When dumping the DAG for a fatal 'Cannot select' back-end error, also
provide the name of the function the construct is in. Useful when dealing
with large testcases, as the next step is to llvm-extract the function
in question to get a small(er) testcase.

llvm-svn: 160152
2012-07-13 00:29:09 +00:00
Eric Christopher bf57091f8b The end of the prologue should be marked with is_stmt.
Fixes PR13303.

Patch by Paul Robinson!

llvm-svn: 160148
2012-07-12 23:30:25 +00:00
Galina Kistanova 7da6578291 Fixed a few warnings.
llvm-svn: 160142
2012-07-12 20:45:36 +00:00
Benjamin Kramer 4d0916788d Give the rdrand instructions a SideEffect flag and a chain so MachineCSE and MachineLICM don't touch them.
I already had the necessary things in place for IR-level passes but missed the machine passes.

llvm-svn: 160137
2012-07-12 18:14:57 +00:00
Benjamin Kramer 0ab2794eda Add intrinsics for Ivy Bridge's rdrand instruction.
The rdrand/cmov sequence is the same that is emitted by both
GCC and ICC.

Fixes PR13284.

llvm-svn: 160117
2012-07-12 09:31:43 +00:00
Duncan Sands 671cc2575d The result type of EXTRACT_VECTOR_ELT doesn't have to match the element type of
the input vector, it can be bigger (this is helpful for powerpc where <2 x i16>
is a legal vector type but i16 isn't a legal type, IIRC).  However this wasn't
being taken into account by ExpandRes_EXTRACT_VECTOR_ELT, causing PR13220.
Lightly tweaked version of a patch by Michael Liao.

llvm-svn: 160116
2012-07-12 09:01:35 +00:00
Craig Topper f7755df776 Update GATHER instructions to support 2 read-write operands. Patch from myself and Manman Ren.
llvm-svn: 160110
2012-07-12 06:52:41 +00:00
Evan Cheng 493eb32ff4 Instcombine was transforming:
%shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, -1
  %and = and i64 %sub, %shr
  ret i64 %and

to:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, 2305843009213693951
  %and = and i64 %sub, %shr
  ret i64 %and

The demanded bit optimization is actually a pessimization because add -1 would
be codegen'ed as a sub 1. Teach the demanded constant shrinking optimization
to check for negated constant to make sure it is actually reducing the width
of the constant.

rdar://11793464

llvm-svn: 160101
2012-07-12 01:45:35 +00:00
Jim Grosbach d2aabd3bb2 TableGen: Location information for diagnostic.
def Pat<...>;

Results in 'record name is not a string!' diagnostic. Not the best,
but the lack of location information moves it from not very helpful
into completely useless. We're in the Record class when throwing the
error, so just add the location info directly.

llvm-svn: 160098
2012-07-12 00:53:31 +00:00
Manman Ren 88a0d3313b ARM: fix typo in comments
llvm-svn: 160093
2012-07-11 23:47:00 +00:00
Manman Ren 34cb93e192 ARM: Fix optimizeCompare to correctly check safe condition.
It is safe if CPSR is killed or re-defined.
When we are done with the basic block, check whether CPSR is live-out.
Do not optimize away cmp if CPSR is live-out.

llvm-svn: 160090
2012-07-11 22:51:44 +00:00
Jack Carter 570ae0b1f7 Patch for Mips direct object generation.
When the WriteFragmentData() FT_align case calls
Asm.getBackend().writeNopData(), nothing
is done, since the Mips implementation of writeNopData just
returned "true".

For some reason this has not caused problems in 32 bit
mode, but in 64 bit mode it caused an assert when processing
multiple function units.

The test case included will assert without this patch. It
runs twice with different flags to prevent false positives
due to changes in code generation over time.

llvm-svn: 160084
2012-07-11 22:17:39 +00:00
Jack Carter 42ebf98b04 This change removes an "initialization" warning.
Even though the variable in question could not
be used before being initialized, the code was such that
the compiler had no way of knowing that.

llvm-svn: 160081
2012-07-11 21:41:49 +00:00
Argyrios Kyrtzidis f141156e6c In MemoryBuffer::getOpenFile() don't verify that the mmap'ed
file buffer is null-terminated.

If the file is smaller than we thought, mmap will not allow dereferencing
past the pages that are enough to cover the actual file size,
even though we asked for a larger address range.

rdar://11612916

llvm-svn: 160075
2012-07-11 20:59:20 +00:00
Akira Hatanaka bb5519154c In register classes in MipsRegisterInfo.td, list the registers in ascending
order of binary encoding.

Patch by Vladimir Medic.

llvm-svn: 160073
2012-07-11 20:51:50 +00:00
Chad Rosier 8446ede023 [x86 fast-isel] Per discussion with Eric, add all cases to switch with verbose
comments.

llvm-svn: 160069
2012-07-11 19:58:38 +00:00
Manman Ren 1553ce0e81 X86: Update peephole optimization to move Movr0 before a (Sub, Cmp) pair.
When Movr0 is between sub and cmp, we move Movr0 before sub if it enables
removal of Cmp.

llvm-svn: 160066
2012-07-11 19:35:12 +00:00
Akira Hatanaka 24cf4e36e5 Implement MipsTargetLowering::LowerSELECT_CC to custom lower SELECT_CC.
llvm-svn: 160064
2012-07-11 19:32:27 +00:00
Evan Cheng b17122859b InstrEmitter::EmitSubregNode() optimize extract_subreg in this case:
r1025 = s/zext r1024, 4
r1026 = extract_subreg r1025, 4

to a copy:
r1026 = copy r1024

This is correct. However it uses TII->isCoalescableExtInstr() which can return
true for instructions which essentially do a sext_in_reg, so this can end up
with an illegal copy where the source and destination register classes do not
match. Add a check to avoid it. Sorry, no test case possible at this time.

rdar://11849816

llvm-svn: 160059
2012-07-11 18:55:07 +00:00
Benjamin Kramer 3aab6a86a2 PR13326: Fix a subtle edge case in the udiv -> magic multiply generator.
This caused 6 of 65k possible 8 bit udivs to be wrong.

llvm-svn: 160058
2012-07-11 18:31:59 +00:00
Chad Rosier 43218c59c3 [x86 fast-isel] Rather then call llvm_unreachable() have fast-isel fall back
to Selection DAG isel.  Patch by Andrew Kaylor <andrew.kaylor@intel.com>.

llvm-svn: 160055
2012-07-11 17:23:17 +00:00
Nadav Rotem d2bdcebb14 When ext-loading and trunc-storing vectors to memory, on x86 32bit systems, allow loads/stores of 64bit values from xmm registers.
llvm-svn: 160044
2012-07-11 13:27:05 +00:00
Nadav Rotem 2a148668b6 Rename many of the Tmp1, Tmp2, Tmp3 variables to names such as Chain, Value, Ptr, etc.
No functionality change.

llvm-svn: 160042
2012-07-11 11:02:16 +00:00
Benjamin Kramer 9488100d46 Remove unused variable.
llvm-svn: 160040
2012-07-11 09:39:04 +00:00
Nadav Rotem de6fd282ef Refactor the DAG Legalizer by extracting the legalization of
Load and Store nodes into their own functions.
No functional change.

llvm-svn: 160037
2012-07-11 08:52:09 +00:00
Owen Anderson b8844d6744 Only apply the SETCC+SITOFP -> SELECTCC optimization when the SETCC returns an MVT::i1, i.e. before type legalization.
This is a speculative fix for a problem on Mips reported by Akira Hatanaka.

llvm-svn: 160036
2012-07-11 06:38:55 +00:00
Akira Hatanaka 878ad8b28d Lower RETURNADDR node in Mips backend.
Patch by Sasa Stankovic.

llvm-svn: 160031
2012-07-11 00:53:32 +00:00
Jack Carter e8cb2fc616 Mips specific inline asm operand modifier 'L'.
Low order register of a double word register operand. Operands 
   are defined by the name of the variable they are marked with in
   the inline assembler code. This is a way to specify that the 
   operand just refers to the low order register for that variable.
   
   It is the opposite of modifier 'D' which specifies the high order
   register.
   
   Example:
   
 main()
{

    long long ll_input = 0x1111222233334444LL;
    long long ll_val = 3;
    int i_result = 0;

    __asm__ __volatile__( 
		   "or	%0, %L1, %2"
	     : "=r" (i_result) 
	     : "r" (ll_input), "r" (ll_val)); 
}

   Which results in:
   
   	lui	$2, %hi(_gp_disp)
	addiu	$2, $2, %lo(_gp_disp)
	addiu	$sp, $sp, -8
	addu	$2, $2, $25
	sw	$2, 0($sp)
	lui	$2, 13107
	ori	$3, $2, 17476     <-- Low 32 bits of ll_input
	lui	$2, 4369
	ori	$4, $2, 8738      <-- High 32 bits of ll_input
	addiu	$5, $zero, 3  <-- Low 32 bits of ll_val
	addiu	$2, $zero, 0  <-- High 32 bits of ll_val
	#APP
	or	$3, $4, $5        <-- or i_result, high 32 ll_input, low 32 of ll_val
	#NO_APP
	addiu	$sp, $sp, 8
	jr	$ra

If no modifier is given for a long long on a 32 bit target, the low
32 bits are used, as ll_val shows.

There is an existing bug if 'L' or 'D' is used for the destination register
for 32 bit long longs in that the target value will be updated incorrectly
for the non-specified part unless explicitly set within the inline asm code.

llvm-svn: 160028
2012-07-10 22:41:20 +00:00
Jakob Stoklund Olesen bc90a4ea82 Require and preserve LoopInfo for early if-conversion.
It will surely be needed by heuristics.

llvm-svn: 160027
2012-07-10 22:39:56 +00:00
Chandler Carruth 2207f76cd4 Teach the LiveInterval::join function to use the fast merge algorithm,
generalizing its implementation sufficiently to support this value
number scenario as well.

This cuts out another significant performance hit in large functions
(over 10k basic blocks, etc), especially those with "natural" CFG
structures.

llvm-svn: 160026
2012-07-10 22:25:21 +00:00
Jakob Stoklund Olesen 02638392c1 Run early if-conversion in domtree post-order.
This ordering allows nested if-conversion without using a work list, and
it makes it possible to update the dominator tree on the fly as well.

Any erased basic blocks will always be dominated by the current
post-order position, so the domtree can be pruned without invalidating
the iterator.

llvm-svn: 160025
2012-07-10 22:18:23 +00:00
Chad Rosier 97c2214277 Move [get|set]BasePtrStackAdjustment() from MachineFrameInfo to
X86MachineFunctionInfo as this is currently only used by X86. If this ever
becomes an issue on another arch (e.g., ARM) then we can hoist it back out.

llvm-svn: 160009
2012-07-10 18:27:15 +00:00
Chad Rosier bdb08ac50a Add support for dynamic stack realignment in the presence of dynamic allocas on
X86.  Basically, this is a reapplication of r158087 with a few fixes.

Specifically, (1) the stack pointer is restored from the base pointer before
popping callee-saved registers and (2) in obscure cases (see comments in patch)
we must cache the value of the original stack adjustment in the prologue and
apply it in the epilogue.

rdar://11496434

llvm-svn: 160002
2012-07-10 17:45:53 +00:00
Chandler Carruth 77d940011d Fix a bug where I didn't test for an empty range before inspecting the
back of it.

I don't have anything even remotely close to a test case for this. It
only broke two build bots, both of them doing bootstrap builds, one of
them a dragonegg bootstrap. It doesn't break for me when I bootstrap
either. It doesn't reproduce every time or on many machines during the
bootstrap. Many thanks to Duncan Sands who got the exact command (and
stage of the bootstrap) which failed on the dragonegg bootstrap and
managed to get it to trigger under valgrind with debug symbols. The fix
was then found by inspection.

llvm-svn: 159993
2012-07-10 15:41:33 +00:00
Nadav Rotem d908ddc186 Improve the loading of load-anyext vectors by allowing the codegen to load
multiple scalars and insert them into a vector. Next, we shuffle the elements
into the correct places, as before.
Also fix a small dagcombine bug in SimplifyBinOpWithSameOpcodeHands, when the
migration of bitcasts happened too late in the SelectionDAG process.

llvm-svn: 159991
2012-07-10 13:25:08 +00:00
Richard Barton 1dc44dcedd Fix instruction description of VMOV (between two ARM core registers and two single-precision registers) (and do it properly this time!)
llvm-svn: 159989
2012-07-10 12:51:09 +00:00