Commit Graph

12133 Commits

Rafael Espindola 8bac423ddb Add support for lowercase variants.
llvm-svn: 124071
2011-01-23 16:11:25 +00:00
Chris Lattner 9491dee24e Enhance SRoA to be more aggressive about scalarization of aggregate allocas
that have PHI or select uses of their element pointers.  This can often happen
when instcombine sinks two loads into a successor, inserting a phi or select.

With this patch, we can scalarize the alloca, but the pinned elements are not
yet promoted.  This is still a win for large aggregates where only one element
is used.  This fixes rdar://8904039 and part of rdar://7339113 (poor codegen
on stringswitch).
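
An illustrative sketch (not from the original commit; 2011-era typed-pointer IR)
of the kind of pattern instcombine leaves behind:

%pair = type { i32, i32 }

define i32 @f(i1 %which, i32 %x, i32 %y) {
entry:
  %agg = alloca %pair                           ; aggregate alloca
  %a = getelementptr %pair* %agg, i32 0, i32 0  ; element pointers
  %b = getelementptr %pair* %agg, i32 0, i32 1
  store i32 %x, i32* %a
  store i32 %y, i32* %b
  %p = select i1 %which, i32* %a, i32* %b       ; select of element pointers
  %v = load i32* %p                             ; the "pinned" element use
  ret i32 %v
}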

llvm-svn: 124070
2011-01-23 08:27:54 +00:00
Chris Lattner a587ab7b94 remove an old hack that avoided creating MMX datatypes. The
X86 backend has been fixed.

llvm-svn: 124064
2011-01-23 06:40:33 +00:00
Nick Lewycky bc98f5b78e Use value ranges to fold ext(trunc) in SCEV when possible.
llvm-svn: 124062
2011-01-23 06:20:19 +00:00
Rafael Espindola 4b7b7fba38 Delay the creation of eh_frame so that the user can change the defaults.
Add support for SHT_X86_64_UNWIND.

llvm-svn: 124059
2011-01-23 05:43:40 +00:00
Venkatraman Govindaraju cc91b7a3f6 Pass sret arguments through the stack instead of through registers in the Sparc backend. This makes the generated code more compliant with the sparc32 ABI.
llvm-svn: 124030
2011-01-22 13:05:16 +00:00
Venkatraman Govindaraju 7a0c350079 Added ICC and FCC as uses of the movcc instruction to generate correct code when -mattr=v9 is used.
llvm-svn: 124027
2011-01-22 11:36:24 +00:00
Dan Gohman 19e30d5a7d Actually check memcpy lengths, instead of just commenting about
how they should be checked.

llvm-svn: 123999
2011-01-21 22:07:57 +00:00
Venkatraman Govindaraju ef8cf45eb1 Sparc backend:
Rename FLUSH to FLUSHW.
Output "ta 3" instead of a "flushw" instruction if the v8 instruction set is used.

llvm-svn: 123997
2011-01-21 22:00:00 +00:00
Owen Anderson a834200dbe Just because we have determined that an (fcmp | fcmp) is true for A < B,
A == B, and A > B, does not mean we can fold it to true.  We still need to
check for A ? B (A unordered B).
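
Concretely, with %A = NaN every ordered predicate is false, so the
disjunction is false and folding it to true would miscompile
(illustrative sketch):

%lt = fcmp olt double %A, %B   ; false when %A is NaN
%eq = fcmp oeq double %A, %B   ; false when %A is NaN
%gt = fcmp ogt double %A, %B   ; false when %A is NaN
%t0 = or i1 %lt, %eq
%t1 = or i1 %t0, %gt           ; false for NaN inputs, not foldable to true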

llvm-svn: 123993
2011-01-21 19:39:42 +00:00
Evan Cheng 2f2435d026 Last round of fixes for movw + movt global address codegen.
1. Fixed ARM pc adjustment.
2. Fixed dynamic-no-pic codegen
3. CSE of pc-relative load of global addresses.

It's now enabled by default for Darwin.

llvm-svn: 123991
2011-01-21 18:55:51 +00:00
Bruno Cardoso Lopes 4bd612384a Fix the encoding of QADD/SUB, QDADD/SUB. While qadd16 and qadd8 use "rd, rn, rm",
qadd and qdadd use "rd, rm, rn"; the same applies to the 'sub' variants. This
is described in the ARM manuals and matches the encoding used by the GNU assembler.

llvm-svn: 123975
2011-01-21 14:07:40 +00:00
Venkatraman Govindaraju 0594789f07 Implement support for byval arguments in Sparc backend.
llvm-svn: 123974
2011-01-21 14:00:01 +00:00
Michael J. Spencer e12c8a802f Revert "Object: Re-enable the tests now that none of the build bots complain about aliasing."
This reverts commit 281f3901b7b0869929caf8946c1ad1228bc38922.

llvm-svn: 123972
2011-01-21 06:27:04 +00:00
Andrew Trick bd428ec50f Enable support for precise scheduling of the instruction selection
DAG. Disable it with "-disable-sched-cycles".

For ARM, this enables a framework for modeling the cpu pipeline and
counting stalls. It also activates several heuristics to drive
scheduling based on the model. Scheduling is inherently imprecise at
this stage, and until spilling is improved it may defeat attempts to
schedule. However, this framework provides greater control over
tuning codegen.

Although the flag is not target-specific, it should have very little
effect on the default scheduler used by x86. The only two changes that
affect x86 are:
- scheduling a high-latency operation bumps the current cycle so independent
  operations can have their latency covered. i.e. two independent 4
  cycle operations can produce results in 4 cycles, not 8 cycles.
- Two operations with equal register pressure impact and no
  latency-based stalls on their uses will be prioritized by depth before height
  (height is irrelevant if no stalls occur in the schedule below this point).

llvm-svn: 123971
2011-01-21 06:19:05 +00:00
Andrew Trick 47ff14b091 Convert -enable-sched-cycles and -enable-sched-hazard to -disable
flags. They are still not enabled in this revision.

Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.

Generalized unit tests to work with sched-cycles.

llvm-svn: 123969
2011-01-21 05:51:33 +00:00
Chris Lattner b5e15d1907 fix PR9013, an infinite loop in instcombine.
llvm-svn: 123968
2011-01-21 05:29:50 +00:00
Michael J. Spencer 2d9860cbec Object: Re-enable the tests now that none of the build bots complain about aliasing.
llvm-svn: 123964
2011-01-21 05:07:13 +00:00
Nick Lewycky 6a083cf820 Don't try to pull vector bitcasts that change the number of elements through
a select. A vector select is pairwise on each element so we'd need a new
condition with the right number of elements to select on. Fixes PR8994.
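
An illustrative sketch of the rejected transform (not from the original
commit):

%s = select <4 x i1> %c, <4 x i32> %a, <4 x i32> %b
%r = bitcast <4 x i32> %s to <2 x i64>
; pulling the bitcast above the select would require choosing
; <2 x i64> elements with a <2 x i1> condition, which cannot be
; recovered from the <4 x i1> %c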

llvm-svn: 123963
2011-01-21 02:30:43 +00:00
Nick Lewycky 39b12c059d Add a constant folding of casts from zero to zero. Fixes PR9011!
While here, I'd like to complain about how vectors are not aggregate types
according to llvm::Type::isAggregateType(), even though they're listed under
aggregate types in the LangRef and zero vectors are stored as ConstantAggregateZero.

llvm-svn: 123956
2011-01-21 01:12:09 +00:00
Evan Cheng 028ccbfcbf Don't be overly aggressive with CSE of "ldr constantpool". If it's a pc-relative
value, the "add pc" must be CSE'ed at the same time. We could follow the same
approach as T2 by adding pseudo instructions that combine the ldr + "add pc".
But the better approach is to use movw + movt (which I will enable soon), so
I'll leave this as a TODO.

llvm-svn: 123949
2011-01-20 23:55:07 +00:00
Tobias Grosser f07426b40d Implement requiredTransitive
The PassManager did not implement the transitivity of requiredTransitive. This
had gone unnoticed since 2006.

llvm-svn: 123942
2011-01-20 21:03:22 +00:00
Bruno Cardoso Lopes 1f69de3983 Add testcases for clz encoding
llvm-svn: 123937
2011-01-20 19:27:16 +00:00
Bruno Cardoso Lopes e965f06f7f Fix the encoding and parsing of clrex instruction
llvm-svn: 123936
2011-01-20 19:18:32 +00:00
Bruno Cardoso Lopes d8f9b37f31 Add cdp/cdp2 instructions for thumb/thumb2
llvm-svn: 123929
2011-01-20 18:32:09 +00:00
Devang Patel a573d5c16d Disable objdump-trivial-object.test. It is broken on powerpc-darwin9.
llvm-svn: 123928
2011-01-20 18:08:44 +00:00
Bruno Cardoso Lopes 33461ecc82 - Use a more appropriate name for Owen's ARM Parser isMCR hack, since the same operands can be present
in cdp/cdp2 instructions. Also extend the hack to cover cdp/cdp2 instructions.
- Fix the encoding of cdp/cdp2 instructions for ARM (no Thumb/Thumb2 yet) and add testcases for them.

llvm-svn: 123927
2011-01-20 18:06:58 +00:00
Bruno Cardoso Lopes 4d4b490fb7 Add mcr*2 and mr*c2 support to thumb2 targets
llvm-svn: 123919
2011-01-20 16:58:48 +00:00
Bruno Cardoso Lopes cf99dc7eb9 Add mcr* and mr*c support to thumb targets
llvm-svn: 123917
2011-01-20 16:35:57 +00:00
Michael J. Spencer 11b293b304 Disable this test until I can figure out why it's broken. Not xfailed because it
uses 100% CPU and times out, so it's annoying to run.

llvm-svn: 123915
2011-01-20 16:24:07 +00:00
Kalle Raiskila 6e5a54b36c Allow sign-extending of i8 and i16 to i128 on SPU.
llvm-svn: 123912
2011-01-20 15:49:06 +00:00
Duncan Sands 8fb2c3827c At -O1, -O2 and -O3 the early-cse pass is run before instcombine has run. According to my
auto-simplifier the transform most missed by early-cse is (zext X) != 0 -> X != 0.
This patch adds this transform and some related logic to InstructionSimplify
and removes some of the logic from instcombine (unfortunately not all because
there are several situations in which instcombine can improve things by making
new instructions, whereas instsimplify is not allowed to do this).  At -O2 this
often results in more than 15% more simplifications by early-cse, and results in
hundreds of lines of bitcode being eliminated from the testsuite.  I did see some
small negative effects in the testsuite, for example a few additional instructions
in three programs.  One program, 483.xalancbmk, got an additional 35 instructions,
which seems to be due to a function getting an additional instruction and then
being inlined all over the place.
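
The transform itself, as a minimal sketch (not from the original commit):

define i1 @before(i8 %X) {
  %e = zext i8 %X to i32
  %c = icmp ne i32 %e, 0    ; (zext X) != 0
  ret i1 %c
}
; zero-extension cannot introduce set bits, so this simplifies to:
define i1 @after(i8 %X) {
  %c = icmp ne i8 %X, 0     ; X != 0
  ret i1 %c
}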

llvm-svn: 123911
2011-01-20 13:21:55 +00:00
Eric Christopher 785db078b4 Expand invalid return values for umulo and smulo. Handle these similarly
to add/sub by doing the normal operation and then checking for overflow
afterwards. This generally relies on the DAG handling the later invalid
operations as well.

Fixes the 64-bit part of rdar://8622122 and rdar://8774702.

llvm-svn: 123908
2011-01-20 08:54:28 +00:00
Evan Cheng f2e914be15 Add test.
llvm-svn: 123906
2011-01-20 08:38:21 +00:00
Evan Cheng b8b0ad80a8 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   a "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allowing machine LICM to hoist the set of
   instructions out of the loop and making it possible to CSE them. It's a bit
   hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constant-pool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Michael J. Spencer 2d67ed8f3b Object: Add some tests!
llvm-svn: 123899
2011-01-20 06:39:15 +00:00
Venkatraman Govindaraju 058e12476c Sparc backend: Implements a delay slot filler that attempts to fill delay slots
with useful instructions.

llvm-svn: 123884
2011-01-20 05:08:26 +00:00
Eric Christopher bb14f65672 If we can, lower the multiply part of a umulo/smulo call with an invalid type
to a libcall, then split the result and perform the overflow check
normally.

Fixes the 32-bit parts of rdar://8622122 and rdar://8774702.

llvm-svn: 123864
2011-01-20 00:29:24 +00:00
Devang Patel 2d9e532a3a Fix debug info for merged global.
llvm-svn: 123862
2011-01-20 00:02:16 +00:00
Nick Lewycky 5c901f3489 Similarly, analyze truncate through multiply.
llvm-svn: 123842
2011-01-19 18:56:00 +00:00
Nick Lewycky 5143f0f09b Add a missed SCEV fold that is required to continue analyzing the IR produced
by indvars through the scev expander.

trunc(add x, y) --> add(trunc x, trunc y). Currently SCEV largely folds the other
way, which is probably wrong, but preserved to minimize churn. Instcombine doesn't
do this fold either, demonstrating a missed optimization opportunity on code doing
add+trunc+add.
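
Spelled out in SCEV-style notation (illustrative sketch): truncation
distributes over modular addition, so

(trunc i64 (add i64 %x, %y) to i32)
  --> (add i32 (trunc i64 %x to i32), (trunc i64 %y to i32))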

llvm-svn: 123838
2011-01-19 16:59:46 +00:00
Bruno Cardoso Lopes d6335ce508 Fix the encoding of mrrc and mcrr family of instructions. Also add testcases for mcr and mrc
llvm-svn: 123837
2011-01-19 16:56:52 +00:00
Rafael Espindola fc355bc070 Add unnamed_addr when we can show that the address of a global is not used.
llvm-svn: 123834
2011-01-19 16:32:21 +00:00
Nick Lewycky e9ea75e3fc Add a missing SCEV simplification sext(zext x) --> zext x.
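Spelled out (illustrative sketch): the inner zext guarantees a non-negative
value, so the outer sign-extension only ever copies zeros:

(sext i16 (zext i8 %x to i16) to i32)  -->  (zext i8 %x to i32)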
llvm-svn: 123832
2011-01-19 15:56:12 +00:00
Owen Anderson dac7a0174e When matching asm operands, always try to match the most restricted type first.
Unfortunately, while this is the "right" thing to do, it breaks some ARM
asm parsing tests because MemMode5 and ThumbMemModeReg are ambiguous.  This
is tricky to resolve since neither is a subset of the other.

XFAIL the test for now.  The old way was broken in other ways, just ways
we didn't happen to be testing, and our ARM asm parsing is going to require
significant revisiting at a later point anyway.

llvm-svn: 123786
2011-01-18 23:01:21 +00:00
Bruno Cardoso Lopes 2082057b18 Create two new generic classes to represent the following VMRS/VMSR variations:
vmrs  reg, fpexc
vmrs  reg, fpsid
vmsr  fpexc, reg
vmsr  fpsid, reg

llvm-svn: 123783
2011-01-18 21:58:20 +00:00
Bruno Cardoso Lopes cba727f291 Fix MRS encoding for arm and thumb.
llvm-svn: 123778
2011-01-18 21:31:35 +00:00
Bruno Cardoso Lopes e86a7ad01a Fix the encoding of t2ISB by using the right class and also parse it correctly
llvm-svn: 123776
2011-01-18 21:17:09 +00:00
Dan Gohman 44da55b7be Teach BasicAA to return PartialAlias in cases where both pointers
are pointing to the same object, one pointer is accessing the entire
object, and the other access has a non-zero size. This prevents
TBAA from kicking in and saying NoAlias in such cases.
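
A minimal sketch of such a case (not from the original commit; 2011-era
typed-pointer IR):

%obj = alloca [8 x i8]                              ; the object
%whole = bitcast [8 x i8]* %obj to i64*             ; covers all 8 bytes
%part = getelementptr [8 x i8]* %obj, i32 0, i32 4  ; 1 byte inside it
store i64 0, i64* %whole
%v = load i8* %part    ; overlaps the store: PartialAlias, not NoAlias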

llvm-svn: 123775
2011-01-18 21:16:06 +00:00
Bruno Cardoso Lopes e6290ccf9b Follow the current set of hacks and enable correct parsing of bkpt while in thumb mode.
llvm-svn: 123772
2011-01-18 20:55:11 +00:00
Chris Lattner 86d56c651d fix rdar://8878965, a regression I introduced with the recent
llvm.objectsize changes.

llvm-svn: 123771
2011-01-18 20:53:04 +00:00
Bruno Cardoso Lopes 7f639c11d7 Add support for parsing and encoding ARM's official syntax for the BFI instruction
llvm-svn: 123770
2011-01-18 20:45:56 +00:00
Bruno Cardoso Lopes 4dc73fa075 Add support for mips32 madd and msub instructions. Patch by Akira Hatanaka
llvm-svn: 123760
2011-01-18 19:29:17 +00:00
Duncan Sands 99589d07e9 For completeness, generalize the (X + Y) - Y -> X transform and add X - (X + 1) -> -1.
These were not recommended by my auto-simplifier since they don't fire often enough.
However they do fire from time to time, for example they remove one subtraction from
the final bitcode for 483.xalancbmk.
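
As IR (illustrative sketch): both identities hold in modular arithmetic,
so no overflow flags are needed:

%t = add i32 %X, %Y
%a = sub i32 %t, %Y     ; (X + Y) - Y  -->  %X
%u = add i32 %X, 1
%b = sub i32 %X, %u     ; X - (X + 1)  -->  -1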

llvm-svn: 123755
2011-01-18 11:50:19 +00:00
Duncan Sands 9b8e2bd8ef Simplify (X<<1)-X into X. According to my auto-simplifier this is the most common missed
simplification in fully optimized code.  It occurs sporadically in the testsuite, and
many times in 403.gcc: the final bitcode has 131 fewer subtractions after this change.
The reason that the multiplies are not eliminated is the same reason that instcombine
did not catch this: they are used by other instructions (instcombine catches this with
a more general transform which in general is only profitable if the operands have only
one use).
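
As IR (illustrative sketch):

%d = shl i32 %X, 1      ; 2*X
%r = sub i32 %d, %X     ; (X << 1) - X  -->  %X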

llvm-svn: 123754
2011-01-18 09:24:58 +00:00
Daniel Dunbar 66e91d4a58 McARM: Start marking T2 address operands as such, for the benefit of the parser.
llvm-svn: 123722
2011-01-18 03:06:03 +00:00
Benjamin Kramer 45d183ccf0 Fix an off-by-one error in ctpop combining.
llvm-svn: 123664
2011-01-17 18:00:28 +00:00
Devang Patel 3ec1f198e5 Update tests to accommodate unnamed_addr introduction.
llvm-svn: 123663
2011-01-17 17:54:17 +00:00
Benjamin Kramer 24c5184dca Add a DAGCombine to turn (ctpop x) u< 2 into (x & x-1) == 0.
This shaves off 4 popcounts from the hacked 186.crafty source.

This is enabled even when a native popcount instruction is available. The
combined code is one operation longer but it should be faster nevertheless.
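
Sketched at the IR level (illustrative; the actual change is a DAGCombine
on the SelectionDAG):

declare i32 @llvm.ctpop.i32(i32)

define i1 @before(i32 %x) {
  %p = call i32 @llvm.ctpop.i32(i32 %x)
  %c = icmp ult i32 %p, 2     ; "at most one bit set"
  ret i1 %c
}
; becomes the classic power-of-two-or-zero test:
define i1 @after(i32 %x) {
  %m = add i32 %x, -1         ; x - 1
  %a = and i32 %x, %m         ; clears the lowest set bit
  %c = icmp eq i32 %a, 0      ; zero iff at most one bit was set
  ret i1 %c
}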

llvm-svn: 123621
2011-01-17 12:04:57 +00:00
Kalle Raiskila 7e7b4ac751 Don't crash the SPU backend on memory accesses with big alignment.
llvm-svn: 123620
2011-01-17 11:59:20 +00:00
Evan Cheng dfce83c8f5 Materialize GA addresses with movw + movt pairs for Darwin in PIC mode. e.g.
movw    r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
        movt    r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
        add     r0, pc, r0

It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.

llvm-svn: 123619
2011-01-17 08:03:18 +00:00
Nick Lewycky 872a453ada Test for lazy value info's ability to prove the absence of NULLs in pointers.
llvm-svn: 123601
2011-01-16 21:57:20 +00:00
Michael J. Spencer 4e51541319 Make everyone happy this time.
llvm-svn: 123599
2011-01-16 21:34:34 +00:00
Anders Carlsson d3db83349e Teach DAE to look for functions whose arguments are unused, and change all callers to pass in an undef value instead.
llvm-svn: 123596
2011-01-16 21:25:33 +00:00
Michael J. Spencer 12a620fd58 Try to fix this test. For some reason llvm-ar thinks that the file exists when
it shouldn't, but I have no way to verify that it doesn't actually exist on the
buildbot.

llvm-svn: 123594
2011-01-16 20:52:58 +00:00
Rafael Espindola ec517cdf24 Update tests.
llvm-svn: 123591
2011-01-16 18:02:57 +00:00
Rafael Espindola 751677a040 Don't merge two constants if we care about the address of both.
This fixes the original testcase in PR8927. It also causes a clang
binary built with a patched clang to increase in size by 0.21%.

We can probably get some of the size back by writing a pass that
detects that a global never has its pointer compared and adds
unnamed_addr to it (maybe extend global opt). It is also possible that
there are some other cases clang could add unnamed_addr to.

I will investigate extending globalopt next.

llvm-svn: 123584
2011-01-16 17:05:09 +00:00
Owen Anderson ec3b10fc56 Reduce and merge testcases.
llvm-svn: 123579
2011-01-16 09:13:31 +00:00
Chris Lattner 35a2e65bcb fix PR8514, a bug where the "heroic" transformation of shift/and
into and/shift would cause nodes to move around, leaving a dangling
pointer.  The code tried to avoid this with a HandleSDNode, but
got the details wrong.

llvm-svn: 123578
2011-01-16 08:48:11 +00:00
Chris Lattner e5f8de8639 fix PR8932, a case where arg promotion could infinitely promote.
llvm-svn: 123574
2011-01-16 08:09:24 +00:00
Chris Lattner 6fab2e9418 if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase is now optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner d55581ded8 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.
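
A minimal sketch of the case now handled (illustrative; the real testcase
involves a loop phi):

%p = phi i32 [ 1, %a ], [ 2, %b ]
%u = add i32 %p, 16     ; %p has multiple uses, but...
%v = add i32 %p, 16     ; ...every use is the same operation
; so the add can be folded into the incoming values:
%p2 = phi i32 [ 17, %a ], [ 18, %b ]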

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Evan Cheng 572756ac11 Spill R4 if it's going to be used to restore SP from FP.
llvm-svn: 123567
2011-01-16 05:14:33 +00:00
Owen Anderson 4e54efd625 Improve the safety of my globalopt enhancement by ensuring that the bitcast
of the stored value to the new store type is always valid.  Also, add a testcase.

llvm-svn: 123563
2011-01-16 04:33:33 +00:00
Chris Lattner 08f43456c9 fix PR8983, a broken assertion.
llvm-svn: 123562
2011-01-16 03:43:53 +00:00
Venkatraman Govindaraju 1b0e2cbf3f Implement AnalyzeBranch in Sparc Backend.
llvm-svn: 123561
2011-01-16 03:15:11 +00:00
Chris Lattner 218092e68e fix PR8981, a crash trying to form a conditional inc with a floating point compare.
llvm-svn: 123560
2011-01-16 02:56:53 +00:00
Chris Lattner 2d186574a6 reapply my fix for PR8961 with a tweak to properly handle
multi-instruction sequences like calls.  Many thanks to Jakob for
finding a testcase.

llvm-svn: 123559
2011-01-16 02:27:38 +00:00
Michael J. Spencer 2ff30b84f8 Revert "Archive: Replace all internal uses of PathV1 with PathV2. The external API still uses PathV1."
llvm-svn: 123557
2011-01-16 01:43:22 +00:00
Chris Lattner c703334ff1 one of Michael's recent patches broke this; temporarily disable
it so the bots go green

llvm-svn: 123555
2011-01-16 01:04:49 +00:00
Chris Lattner 1e209b87ad remove the partial specialization pass. It is unmaintained and has bugs.
llvm-svn: 123554
2011-01-16 00:27:10 +00:00
Nick Lewycky 0296a481f9 Make constmerge a two-pass algorithm so that it won't miss merging
opportunities. Fixes PR8978.

llvm-svn: 123541
2011-01-15 18:14:21 +00:00
Rafael Espindola 489e505adf Allow unnamed_addr on declarations.
llvm-svn: 123529
2011-01-15 08:15:00 +00:00
Chris Lattner af26390790 temporarily revert r123526. While working on a follow-on patch I
realized that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner 8df83c4a24 fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.
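
The fold itself is simple (illustrative sketch):

br i1 true, label %live, label %dead   ; left behind by earlier folding
; becomes an unconditional branch, so isel sees folded control flow:
br label %live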

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
Chris Lattner 1b93be501d Now that instruction optimizations can update the iterator as they go, we can
have objectsize folding recursively simplify away their result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner 9c10d587f6 implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)

llvm-svn: 123520
2011-01-15 06:32:33 +00:00
Rafael Espindola b1ebba9ec3 Update llvm-gcc's tests.
llvm-svn: 123447
2011-01-14 17:01:20 +00:00
Duncan Sands d6f1a9584d Turn X-(X-Y) into Y. According to my auto-simplifier this is the most common
simplification present in fully optimized code (I think instcombine fails to
transform some of these when "X-Y" has more than one use).  Fires here and
there all over the test-suite, for example it eliminates 8 subtractions in
the final IR for 445.gobmk, 2 subs in 447.dealII, 2 in paq8p etc.
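
As IR (illustrative sketch; the identity holds in modular arithmetic):

%d = sub i32 %X, %Y
%r = sub i32 %X, %d     ; X - (X - Y)  -->  %Y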

llvm-svn: 123442
2011-01-14 15:26:10 +00:00
Duncan Sands 571fd9a606 Factorize common code out of the InstructionSimplify shift logic. Add in
threading of shifts over selects and phis while there.  This fires here and
there in the testsuite, to not much effect.  For example when compiling spirit
it fires 5 times, during early-cse, resulting in 6 more cse simplifications,
and 3 more terminators being folded by jump threading, but the final bitcode
doesn't change in any interesting way: other optimizations would have caught
the opportunity anyway, only later.

llvm-svn: 123441
2011-01-14 14:44:12 +00:00
Duncan Sands c3eb0f4b2e Rename this test.
llvm-svn: 123440
2011-01-14 14:16:33 +00:00
Chris Lattner 5e0fef8531 relax testcase a bit.
llvm-svn: 123433
2011-01-14 07:46:33 +00:00
Chris Lattner e93e4f118c revert my fastisel patch again which apparently still gives the
llvm-gcc-i386-linux-selfhost buildbot heartburn...

llvm-svn: 123431
2011-01-14 06:14:33 +00:00
Chris Lattner 5ca1391003 reapply r123414 now that the bots are calmed down and the fix is already in.
llvm-svn: 123427
2011-01-14 04:24:28 +00:00
Evan Cheng d4a5c05c97 Completed :lower16: / :upper16: support for movw / movt pairs on Darwin.
- Fixed :upper16: fix up routine. It should be shifting down the top 16 bits first.
- Added support for Thumb2 :lower16: and :upper16: fix up.
- Added :upper16: and :lower16: relocation support to mach-o object writer.

llvm-svn: 123424
2011-01-14 02:38:49 +00:00
Chris Lattner 21a64979f1 r123414 broke llvm-gcc bootstrap apparently, revert
llvm-svn: 123422
2011-01-14 02:07:32 +00:00
Duncan Sands 7f60dc1eb0 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result
cannot be arbitrary.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
Chris Lattner 0c34cb429e fix PR8961 - a fast isel miscompilation where we'd insert a new instruction
after sexts generated for addressing that got folded.  Previously we compiled
test5 into:

_test5:                                 ## @test5
## BB#0:
        movq    -8(%rsp), %rax          ## 8-byte Reload
        movq    (%rdi,%rax), %rdi
        addq    %rdx, %rdi
        movslq  %esi, %rax
        movq    %rax, -8(%rsp)          ## 8-byte Spill
        movq    %rdi, %rax
        ret

which is insane and wrong.  Now we produce:

_test5:                                 ## @test5
## BB#0:
	movslq	%esi, %rax
	movq	(%rdi,%rax), %rax
	addq	%rdx, %rax
	ret

llvm-svn: 123414
2011-01-14 00:01:01 +00:00
Owen Anderson ec47597ecd As far as I can tell, unified syntax uses c0-c15 instead of cr0-cr15 for mcr and friends.
llvm-svn: 123407
2011-01-13 22:38:16 +00:00
Bob Wilson 08713d3c5f Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.
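
An illustrative sketch of the two views (type names hypothetical, not from
the commit):

; a multi-result NEON intrinsic returns a homogeneous struct:
%ret = type { <4 x i32>, <4 x i32> }
; while the corresponding arm_neon.h type wraps an equivalent array:
%arr = type { [2 x <4 x i32>] }
; SROA can now split an alloca accessed through both views into two
; independent <4 x i32> values.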

llvm-svn: 123381
2011-01-13 17:45:11 +00:00