Commit Graph

5685 Commits

Author SHA1 Message Date
Evan Cheng 7bf83096df DAG combine incorrectly optimizes (i32 vextract (v4i16 load $addr), c) to
(i16 load $addr+c*sizeof(i16)) and replaces uses of (i32 vextract) with the
i16 load. It should issue an extload instead: (i32 extload $addr+c*sizeof(i16)).
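
Not part of the commit, just a standalone C++ sketch of the point: the narrower 16-bit
load alone cannot replace the i32 extract, because the extracted element still has to be
widened to 32 bits (the DAG models this as an extending load; zero-extension is used
below for concreteness).

    #include <cstdint>
    #include <cstring>

    // (i32 vextract (v4i16 load $addr), c) reads lane c and presents it as an i32.
    // Replacing it with a bare i16 load drops the widening step; an extending load
    // keeps it.
    uint32_t extract_lane_as_i32(const uint16_t *addr, unsigned c) {
        uint16_t element;                                   // (i16 load $addr + c*sizeof(i16))
        std::memcpy(&element, addr + c, sizeof(element));
        return static_cast<uint32_t>(element);              // the extension an extload provides
    }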

rdar://11035895

llvm-svn: 152675
2012-03-13 22:00:52 +00:00
Chad Rosier a281afc676 Fix a regression from r147481.
Original commit message from r147481:
DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.

Fix:
Unaligned loads need to generate a vmovups.
rdar://10974078

llvm-svn: 152366
2012-03-09 02:00:48 +00:00
Benjamin Kramer 0ef86b0ea3 Remove the no-longer-existent psp triple from a test.
The test fell back to the C backend, making it useless, and it started to fail
on configurations that don't build the C backend.

llvm-svn: 152342
2012-03-08 21:22:27 +00:00
Akira Hatanaka d60cb3822f Test case for r152280, r152285 and r152290.
llvm-svn: 152292
2012-03-08 03:32:42 +00:00
Evan Cheng 80893ce5f5 Extend r148086 to check for the [r +/- reg] address mode. This fixes the queens performance regression (due to increased register pressure from overly aggressive pre-inc formation).
llvm-svn: 152162
2012-03-06 23:33:32 +00:00
Jakob Stoklund Olesen d9b427ee65 Add <imp-def> operands when reloading into physregs.
When an instruction only writes sub-registers, it is still necessary to
add an <imp-def> operand for the super-register.  When reloading into a
virtual register, rewriting will add the operand, but when loading
directly into a virtual register, the <imp-def> operand is still
necessary.

llvm-svn: 152095
2012-03-06 02:48:17 +00:00
Lang Hames 718cfbe05a Split fpscr into two registers: FPSCR and FPSCR_NZCV.
The fpscr register contains both flags (set by FP operations/comparisons) and
control bits. The control bits (FPSCR) should be reserved, since they're always
available and needn't be defined before use. The flag bits (FPSCR_NZCV) should
be left unreserved so they can be hoisted by MachineCSE. This fixes PR12165.

llvm-svn: 152076
2012-03-06 00:19:55 +00:00
Jakob Stoklund Olesen fcd435ee73 Remove a test case that no longer makes sense.
This was testing the handling of sub-register coalescing followed by
remat.  The original problem was caused by the extra <imp-def> operands
added by sub-register coalescing.  Those <imp-def> operands are not
added any longer, and the test case passes even when the original patch
is reverted.

llvm-svn: 152040
2012-03-05 19:10:13 +00:00
Sebastian Pop 957a6583f1 Updated patch for the ARM fused multiply add/sub instructions.
In this update:
- I assumed neon2 does not imply vfpv4, but neon and vfpv4 imply neon2.
- I kept setting the .fpu=neon-vfpv4 code attribute because that is what the
assembler understands.

Patch by Ana Pazos <apazos@codeaurora.org>

llvm-svn: 152036
2012-03-05 17:39:52 +00:00
Jakob Stoklund Olesen f729ceae04 Use <def,undef> operands when spilling NEON bundles.
MachineOperands that define part of a virtual register must have an
<undef> flag if they are not intended as read-modify-write operands.

The old trick of adding an <imp-def> operand doesn't work any longer.

Fixes PR12177.

llvm-svn: 152008
2012-03-04 18:40:30 +00:00
Bill Wendling 97b9359623 Do trivial CSE of dead BBs during codegen preparation.
Some BBs can become dead after codegen preparation. If we delete them here, it
could help enable tail-call optimizations later on.
<rdar://problem/10256573>

llvm-svn: 152002
2012-03-04 10:46:01 +00:00
Jakob Stoklund Olesen a0bd36e3bc Fix RA-dependent test.
llvm-svn: 151958
2012-03-03 00:26:30 +00:00
Chad Rosier f5e086f18e Prevent obscure and incorrect tail-call optimization.
In this instance we are generating the tail-call during legalizeDAG.  The 2nd
floor call can't be a tail call because it clobbers %xmm1, which is defined by
the first floor call.  The first floor call can't be a tail-call because it's
not in the tail position.  The only reasonable way I could think of to fix this
in a target-independent manner was to check for glue logic on the copy reg.

rdar://10930395

llvm-svn: 151877
2012-03-02 02:50:46 +00:00
Evan Cheng d12af5dc69 Neuter the optimization I implemented with r107852 and r108258, which turns some
floating point equality comparisons into integer ones with -ffast-math. The
issue is that the optimization causes +0.0 and -0.0 to compare as unequal.

Now the optimization is only done when one side is known to be 0.0. The other
side's sign bit is masked off for the comparison.
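
Not from the commit itself: a small standalone C++ illustration of the hazard and of the
sign-bit mask used by the narrowed optimization.

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    // Comparing floats by their raw bit patterns is not the same as an FP
    // compare: +0.0 and -0.0 are equal as floats but differ in their sign bit.
    static uint32_t bits(float f) {
        uint32_t u;
        std::memcpy(&u, &f, sizeof(u));
        return u;
    }

    int main() {
        float pz = 0.0f, nz = -0.0f;
        assert(pz == nz);              // FP semantics: equal
        assert(bits(pz) != bits(nz));  // bit patterns: 0x00000000 vs 0x80000000

        // The narrowed optimization: when one side is known to be 0.0, mask off
        // the other side's sign bit and compare the remaining bits with zero.
        const float xs[] = {0.0f, -0.0f, 1.5f, -1.5f};
        for (float x : xs)
            assert(((bits(x) & 0x7fffffffu) == 0) == (x == 0.0f));
        return 0;
    }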

rdar://10964603

llvm-svn: 151861
2012-03-01 23:27:13 +00:00
Akira Hatanaka 6bbe1f0d10 Fix bugs which were introduced when support for base+index floating point loads
and stores was added.

- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
  floating point instructions.
- MIPS does not have support for f64 unaligned load or store instructions.

llvm-svn: 151843
2012-03-01 22:12:30 +00:00
Preston Gurd be1c875a1c Trivial change to make the test use -mcpu=generic,
so that the test will not fail when run on an Intel Atom
processor, due to the Atom scheduler producing an instruction sequence that is
different from the one normally expected.

llvm-svn: 151832
2012-03-01 19:57:20 +00:00
Chad Rosier 2913f500fa Revert r151816 as Jim has the appropriate fix.
llvm-svn: 151818
2012-03-01 17:41:19 +00:00
Chad Rosier f0208ed76a Fix testcases from r151807.
llvm-svn: 151816
2012-03-01 17:31:30 +00:00
Jim Grosbach 394ad59d90 Add missing triple for tests.
Make darwin bots happier.

llvm-svn: 151813
2012-03-01 17:30:32 +00:00
James Molloy f6298e9281 Fix a codegen fault in which log2 or exp2 could be dead-code eliminated even though they could have side effects.
Only allow log2/exp2 to be converted to an intrinsic if they are declared "readnone".

llvm-svn: 151807
2012-03-01 14:32:18 +00:00
Lang Hames 76e66c31a0 Don't redundantly copy implicit operands when rematerializing.
While we're at it, don't copy vreg implicit operands while rematerializing.
This fixes PR12138.

llvm-svn: 151779
2012-03-01 00:41:17 +00:00
Benjamin Kramer d05a0c6c42 LegalizeIntegerTypes: Reorder operations in the "big shift by small amount" optimization, making the lives of later passes easier.
llvm-svn: 151722
2012-02-29 13:27:00 +00:00
Benjamin Kramer 0c281a7deb LegalizeIntegerTypes: Reenable the large shift with small amount optimization.
To avoid problems with zero shifts when getting the bits that move between words,
we use a trick: first shift by amount-1, then do another shift by one. When the
amount is 0 (and the size is 32), we first shift by 31, then by one, instead of by 32.
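
Not part of the commit: a standalone C++ sketch of the expansion with the
shift-by-amount-1-then-by-one trick, assuming a 64-bit left shift split into 32-bit halves.

    #include <cassert>
    #include <cstdint>

    // The bits that cross from Lo into Hi would need "lo >> (32 - amt)", which is
    // an out-of-range shift when amt == 0.  Shifting by (31 - amt) and then by one
    // more gives the same result, and both shifts are always in range.
    static uint64_t shl64_expanded(uint32_t lo, uint32_t hi, unsigned amt) {
        uint32_t carried = (lo >> (31 - amt)) >> 1;  // == lo >> (32 - amt); 0 when amt == 0
        uint32_t new_hi = (hi << amt) | carried;
        uint32_t new_lo = lo << amt;
        return (uint64_t(new_hi) << 32) | new_lo;
    }

    int main() {
        const uint64_t x = 0x0123456789abcdefULL;
        for (unsigned amt = 0; amt < 32; ++amt)      // amount known to be < 32
            assert(shl64_expanded(uint32_t(x), uint32_t(x >> 32), amt) == (x << amt));
        return 0;
    }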

Also fix a latent bug that emitted the low and high words in the wrong order
when shifting right.

Fixes PR12113.

llvm-svn: 151637
2012-02-28 17:58:00 +00:00
Daniel Dunbar ee7b899343 Revert r151623 "Some ARM implementations, e.g. A-series, do return stack prediction. ...", as it is breaking the Clang build during the Compiler-RT part.
llvm-svn: 151630
2012-02-28 15:36:07 +00:00
Nadav Rotem 875e463b19 Fix a bug in the code that builds SDNodes from vector GEPs.
When the GEP index is a vector of pointers, the code that calculated the size
of the element started from the vector type, and not the contained pointer type.
As a result, instead of looking at the data element pointed to by the vector, this
code used the size of the vector. This works for 32-bit members (on 32-bit
systems), but not for other types. Added code to peel the vector type and
added a test.

llvm-svn: 151626
2012-02-28 11:54:05 +00:00
Evan Cheng 87c7b09d8d Some ARM implementations, e.g. the A-series, do return stack prediction. That is,
the processor keeps a return address stack (RAS) which stores the address
and the instruction execution state of the instruction after a function-call
type branch instruction.

Calling a "noreturn" function with normal call instructions (e.g. bl) can
corrupt the RAS and cause 100% return misprediction, so LLVM should use an
unconditional branch instead, i.e.
mov lr, pc
b _foo
The "mov lr, pc" is issued in order to get a proper backtrace.

rdar://8979299

llvm-svn: 151623
2012-02-28 06:42:03 +00:00
Akira Hatanaka 330d901ce3 Add support for floating point base register + offset register addressing mode
load and store instructions.

llvm-svn: 151611
2012-02-28 02:55:02 +00:00
Jakob Stoklund Olesen 4c5ad2b812 Handle regmasks in MachineCSE.
Don't attempt to extend physreg live ranges across calls.

<rdar://problem/10942095>

llvm-svn: 151610
2012-02-28 02:08:50 +00:00
Jakob Stoklund Olesen 92c15b2b2c Enable ARM base pointer when calling functions with large arguments.
When an outgoing call takes more than 2k of arguments on the stack, we
don't allocate that call frame in the prolog, but adjust the stack
pointer immediately before the call instead.

This causes problems with the emergency spill slot because PEI can't
track stack pointer adjustments on the second pass, and if the outgoing
arguments are too big, SP can't be used to reach the emergency spill
slot at all.

Work around these problems by ensuring there is a base or frame pointer
that can be used to access the emergency spill slot.

<rdar://problem/10917166>

llvm-svn: 151604
2012-02-28 01:15:01 +00:00
Preston Gurd 43b2506e32 test commit.
llvm-svn: 151588
2012-02-27 23:31:51 +00:00
Roman Divacky ded7f01062 Test the section specification.
llvm-svn: 151552
2012-02-27 20:42:19 +00:00
Roman Divacky 8fe40cd659 Reapply r151278 with fixes.
MCize function entry label emission on PowerPC64 properly.

llvm-svn: 151547
2012-02-27 20:20:47 +00:00
Hal Finkel 6fd2b434bd Revert r151278, breaks static linking.
Reverting this because it breaks static linking on ppc64. Specifically, it may be linkonce_odr functions that are the problem.
With this patch, if you link statically, calls to some functions end up calling their descriptor addresses instead
of their entry points. This causes the execution to fail with SIGILL (because the descriptor address just
contains some pointers, not code).

llvm-svn: 151433
2012-02-25 03:40:11 +00:00
NAKAMURA Takumi bdf94879df Target/X86: Fix assertion failures and warnings caused by r151382 _ftol2 lowering for i386-*-win32 targets. Patch by Joe Groff.
[Joe Groff] Hi everyone. My previous patch applied as r151382 had a few problems:
Clang raised a warning, and X86 LowerOperation would assert out for
fptoui f64 to i32 because it improperly lowered to an illegal
BUILD_PAIR. Here's a patch that addresses these issues. Let me know if
any other changes are necessary. Thanks.

llvm-svn: 151432
2012-02-25 03:37:25 +00:00
Akira Hatanaka 60f7a8e710 Add definitions of floating point multiply add/sub and negative multiply
add/sub instructions.

llvm-svn: 151415
2012-02-25 00:21:52 +00:00
Akira Hatanaka b049aef2d1 Add an option to use a virtual register as the global base register instead of
reserving a physical register ($gp or $28) for that purpose.

This will completely eliminate loads that restore the value of $gp after every
function call, if the register allocator assigns a callee-saved register, or
eliminate unnecessary loads if it assigns a temporary register. 

example:

.cpload $25       // set $gp.
...
.cprestore 16     // store $gp to stack slot 16($sp).
...
jalr $25          // function call. clobbers $gp.
lw $gp, 16($sp)   // not emitted if callee-saved reg is chosen.
...
lw $2, 4($gp)
...
jalr $25          // function call.
lw $gp, 16($sp)   // not emitted if $gp is not live after this instruction.
...

llvm-svn: 151402
2012-02-24 22:34:47 +00:00
Michael J. Spencer 248d65e78b Add WIN_FTOL_* pseudo-instructions to model the unique calling convention
used by the Win32 _ftol2 runtime function. Patch by Joe Groff!

llvm-svn: 151382
2012-02-24 19:01:22 +00:00
Hal Finkel a3e6ed2161 X11/X2 loads around indirect calls on ppc64 should not be deleted.
llvm-svn: 151374
2012-02-24 17:54:01 +00:00
Hal Finkel b9a3d61894 Don't crash when a glue node contains an internal CopyToReg
This is necessary to support the existing ppc lowering code for indirect calls.
Fixes PR12071.

llvm-svn: 151373
2012-02-24 17:53:59 +00:00
Kristof Beyls b59291a8e6 test commit. removing unnecessary whitespace.
llvm-svn: 151363
2012-02-24 13:52:45 +00:00
NAKAMURA Takumi b2ef38b999 test/CodeGen/X86/2012-02-23-mmx-inlineasm.ll: Fixup to add -march=x86.
-mcpu does not choose the arch automatically on non-x86 hosts.

llvm-svn: 151362
2012-02-24 13:29:50 +00:00
Pete Cooper 682c76b7d4 Turn avx insert intrinsic calls into INSERT_SUBVECTOR DAG nodes and remove duplicate patterns for selecting the intrinsics
llvm-svn: 151342
2012-02-24 03:51:49 +00:00
Jim Grosbach c01104dfbf Thumb2 size reduction fix for tied operands of tMUL.
The tied source operand of tMUL is the second source operand, not the
first as it is for every other two-address Thumb instruction. Special case it
in the size reduction pass to make sure we create the tMUL instruction
properly.

llvm-svn: 151315
2012-02-24 00:33:36 +00:00
Dan Gohman d4a77c4682 When emitting a cmp with 0 for a lowered select, mask out the high
bits of the value carrying the boolean condition, as their contents
are undefined. This fixes rdar://10887484.
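
Not from the commit: a standalone C++ analogue of why the mask is needed before the
compare with zero.

    #include <cassert>
    #include <cstdint>

    // The boolean lives in a wider register whose high bits are undefined; only
    // the low bit may feed the comparison with zero.
    static int select_lowered(uint32_t cond_reg, int tval, int fval) {
        // Comparing cond_reg != 0 directly would also fire on garbage high bits.
        return ((cond_reg & 1u) != 0) ? tval : fval;
    }

    int main() {
        assert(select_lowered(0xdeadbe01u, 1, 2) == 1);  // low bit 1, high bits garbage
        assert(select_lowered(0xdeadbe00u, 1, 2) == 2);  // low bit 0, high bits garbage
        return 0;
    }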

llvm-svn: 151310
2012-02-24 00:09:36 +00:00
Bill Wendling 38b31619f6 Allow an integer to be converted into an MMX type when it's used in an inline
asm.
<rdar://problem/10106006>

llvm-svn: 151303
2012-02-23 23:25:25 +00:00
Roman Divacky a2d3608f78 MCize function entry label emission on PowerPC64 properly.
llvm-svn: 151278
2012-02-23 20:28:39 +00:00
Jakob Stoklund Olesen aae960d978 Make tests less sensitive to scheduling changes.
llvm-svn: 151260
2012-02-23 17:19:34 +00:00
Anton Korobeynikov a22828e085 Fix to make sure that a comdat group gets generated correctly for a static member
of instantiated C++ templates.

Patch by Kristof Beyls!

llvm-svn: 151250
2012-02-23 10:36:04 +00:00
Evan Cheng f258a15bdf Canonicalize (srl (bswap x), 16) to (rotr (bswap x), 16) if the high 16 bits
of x are zero. This optimizes rev + lsr 16 to rev16.
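
Not part of the commit: a quick standalone C++ check of the identity (using the GCC/Clang
__builtin_bswap32 builtin). When the high 16 bits of x are zero, bswap(x) has zero low 16
bits, so the logical shift and the rotate agree, and the rotate form matches the rev16
pattern.

    #include <cassert>
    #include <cstdint>

    static uint32_t rotr32(uint32_t v, unsigned n) {
        return (v >> n) | (v << (32 - n));           // n in [1,31]
    }

    int main() {
        for (uint32_t x = 0; x < 0x10000u; x += 257) {   // x with high 16 bits zero
            uint32_t b = __builtin_bswap32(x);           // (bswap x)
            assert((b >> 16) == rotr32(b, 16));          // (srl b, 16) == (rotr b, 16)
        }
        return 0;
    }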

rdar://10750814

llvm-svn: 151230
2012-02-23 02:58:19 +00:00
Evan Cheng e87681cf34 Optimize a couple of common patterns involving conditional moves where the false
value is zero. Instead of a cmov + op, issue a conditional op instead, e.g.
    cmp   r9, r4
    mov   r4, #0
    moveq r4, #1 
    orr   lr, lr, r4

should be:
    cmp   r9, r4
    orreq lr, lr, #1

That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).
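
Not from the commit: the scalar identities being exploited, checked in standalone C++
(or/xor with a false value of 0, and with a false value of all ones).

    #include <cassert>
    #include <cstdint>

    int main() {
        const uint32_t vals[] = {0u, 0x5au, 0xffffffffu};
        for (int cond = 0; cond <= 1; ++cond) {
            for (uint32_t x : vals) {
                for (uint32_t y : vals) {
                    // (or x, (cmov 0, y, cond))   -> (or.cond x, y)
                    assert((x | (cond ? y : 0u)) == (cond ? (x | y) : x));
                    // (xor x, (cmov 0, y, cond))  -> (eor.cond x, y)
                    assert((x ^ (cond ? y : 0u)) == (cond ? (x ^ y) : x));
                    // (and x, (cmov -1, y, cond)) -> (and.cond x, y)
                    assert((x & (cond ? y : ~0u)) == (cond ? (x & y) : x));
                }
            }
        }
        return 0;
    }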

It's possible to extend this to ADD and SUB but I don't think they are common.

rdar://8659097

llvm-svn: 151224
2012-02-23 01:19:06 +00:00