Commit Graph

53972 Commits

Author SHA1 Message Date
Jim Grosbach 0c509fa6bf Tidy up. 80 columns.
llvm-svn: 154226
2012-04-06 23:43:50 +00:00
Jakob Stoklund Olesen baa3566091 ARMPat is equivalent to Requires<[IsARM]>.
llvm-svn: 154210
2012-04-06 21:21:59 +00:00
Jakob Stoklund Olesen b4bd3880ba Eliminate iOS-specific tail call instructions.
After register masks were introduced to represent the call clobbers, it
is no longer necessary to have duplicate instructions for iOS.

llvm-svn: 154209
2012-04-06 21:17:42 +00:00
Chandler Carruth 8a102c21e3 There is no portable std::abs overload for int64_t; use llvm::abs64,
which exists for this purpose.

llvm-svn: 154199
2012-04-06 20:10:52 +00:00
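A minimal sketch of the intended usage, assuming the 2012-era llvm/Support/MathExtras.h that declares llvm::abs64; the function and call site below are illustrative, not the code touched by the commit above:

  #include "llvm/Support/MathExtras.h"
  #include <stdint.h>

  int64_t byteDistance(int64_t A, int64_t B) {
    // std::abs has no portable overload for int64_t; abs64 always operates
    // on a 64-bit value.
    return llvm::abs64(A - B);
  }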
Sean Callanan e804b5b762 Fixed two leaks in the MC disassembler. The MC
disassembler requires an MCSubtargetInfo and an
MCInstrInfo to exist in order to initialize the
instruction printer and disassembler; however,
although the printer and disassembler keep
references to these objects, they do not own them.
Previously, the MCSubtargetInfo and MCInstrInfo
objects were just leaked.

I have extended LLVMDisasmContext to own these
objects and delete them when it is destroyed.

llvm-svn: 154192
2012-04-06 18:21:09 +00:00
Jakob Stoklund Olesen 967b86a0a2 Allow negative immediates in ARM and Thumb2 compares.
ARM and Thumb2 mode can use cmn instructions to compare against negative
immediates. Thumb1 mode can't.

llvm-svn: 154183
2012-04-06 17:45:04 +00:00
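As a hedged illustration of the commit above (the function is made up): a comparison against a negative immediate can be encoded as a compare-negative on ARM and Thumb2, while Thumb1 has to materialize the constant first.

  // x == -42 can be selected as "cmn r0, #42" (flags of r0 + 42) in ARM and
  // Thumb2 mode; Thumb1 has no cmn-with-immediate form.
  bool isSentinel(int x) {
    return x == -42;
  }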
David Chisnall c1c9cdab23 Reintroduce InlineCostAnalyzer::getInlineCost() variant with explicit callee
parameter until we have a more sensible API for doing the same thing.

Reviewed by Chandler.

llvm-svn: 154180
2012-04-06 17:27:41 +00:00
Chandler Carruth 49da93396e Sink the collection of return instructions until after *all*
simplification has been performed. This is a bit less efficient
(requires another ilist walk of the basic blocks) but shouldn't matter
in practice. More importantly, it's just too much work to keep track of
all the various ways the return instructions can be mutated while
simplifying them. This fixes yet another crasher, reported by Daniel
Dunbar.

llvm-svn: 154179
2012-04-06 17:21:31 +00:00
Duncan Sands d12b18f820 Make GVN's propagateEquality non-recursive. No intended functionality change.
The modifications are a lot more trivial than they appear to be in the diff!

llvm-svn: 154174
2012-04-06 15:31:09 +00:00
Benjamin Kramer 3cacabfb04 Fix narrowing conversion.
llvm-svn: 154171
2012-04-06 13:33:52 +00:00
Craig Topper 447417c932 Allow 256-bit shuffles to be split if a 128-bit lane contains elements from a single source. This is a rewrite of the 256-bit shuffle splitting code based on similar code from legalize types. Fixes PR12413.
llvm-svn: 154166
2012-04-06 07:45:23 +00:00
Chandler Carruth e41f6f4189 Sink the return instruction collection until after we're done deleting
dead code, including dead return instructions in some cases. Otherwise,
we end up having a bogus pointer to a return instruction that blows up
much further down the road.

It turns out that this pattern is both simpler to code, easier to update
in the face of enhancements to the inliner cleanup, and likely cheaper
given that it won't add dead instructions to the list.

Thanks to John Regehr's numerous test cases for teasing this out.

llvm-svn: 154157
2012-04-06 01:11:52 +00:00
Jakob Stoklund Olesen 6a2e99a46a Deduplicate ARM call-related instructions.
We had special instructions for iOS because r9 is call-clobbered, but
that is represented dynamically by the register mask operands now, so
there is no need for the pseudo-instructions.

llvm-svn: 154144
2012-04-06 00:04:58 +00:00
Jim Grosbach d6a1a1dc2f ARM: Don't form a t2LDRi8 or t2STRi8 with an offset of zero.
The load/store optimizer splits LDRD/STRD into two instructions when the
register pairing doesn't work out. For negative offsets in Thumb2, it uses
t2STRi8 to do that. That's fine, except for the case when the offset is in
the range [-4,-1]. In that case, we'll also form a second t2STRi8 with
the original offset plus 4, resulting in a t2STRi8 with a non-negative
offset, which ends up as if it were an STRT, which is completely bogus.
Similarly for loads.

No testcase, unfortunately, as any I've been able to construct is both large
and extremely fragile.

rdar://11193937

llvm-svn: 154141
2012-04-05 23:51:24 +00:00
Jim Grosbach 930f2f66e7 ARM assembly aliases for adding negative immediates using sub.
'add r2, #-1024' should just use 'sub r2, #1024' rather than erroring out.
Also add Thumb1 aliases for adding a negative immediate to the stack
pointer.

rdar://11192734

llvm-svn: 154123
2012-04-05 20:57:13 +00:00
Eric Christopher aec8a82694 Patch to set is_stmt a little better for prologue lines in a function.
This enables debuggers to see which lines are interesting for a
breakpoint rather than just any line that starts a function.

rdar://9852092

llvm-svn: 154120
2012-04-05 20:39:05 +00:00
Jakob Stoklund Olesen 37492eac8c Don't break the IV update in TLI::SimplifySetCC().
LSR always tries to make the ICmp in the loop latch use the incremented
induction variable. This allows the induction variable to be kept in a
single register.

When the induction variable limit is equal to the stride,
SimplifySetCC() would break LSR's hard work by transforming:

   (icmp (add iv, stride), stride) --> (cmp iv, 0)

This forced us to use lea for the IV update, preventing the simpler
incl+cmp.

<rdar://problem/7643606>
<rdar://problem/11184260>

llvm-svn: 154119
2012-04-05 20:30:20 +00:00
Dan Gohman cc64bbca81 Fix accidentally inverted logic from r152803, and make the
testcase slightly less trivial. This fixes rdar://11171718.

llvm-svn: 154118
2012-04-05 20:27:21 +00:00
Owen Anderson a6eebf6013 Treat f16 the same as f80/f128 for the purposes of generating constants during instruction selection.
llvm-svn: 154113
2012-04-05 18:50:32 +00:00
Silviu Baranga af3c79f0ac Added support for unpredictable ADC/SBC instructions on ARM, and also fixed some corner cases involving the PC register as an operand for these instructions.
llvm-svn: 154101
2012-04-05 16:19:29 +00:00
Silviu Baranga d365397daa Added support for handling unpredictable arithmetic instructions on ARM.
llvm-svn: 154100
2012-04-05 16:13:15 +00:00
Hongbin Zheng 31d33b8318 BBVectorize: Add the const modifier to the VectorizeConfig because we won't
modify it.

llvm-svn: 154098
2012-04-05 16:07:49 +00:00
Hongbin Zheng d6825173d3 Introduce the VectorizeConfig class, with which we can control the behavior
of the BBVectorizePass without using command line options. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorization with architecture-specific
information.

llvm-svn: 154096
2012-04-05 15:46:55 +00:00
Hongbin Zheng 6edbc39bd7 Add the function "vectorizeBasicBlock", which allows users to vectorize a
BasicBlock in other passes; e.g. we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.

llvm-svn: 154089
2012-04-05 08:05:16 +00:00
Jim Grosbach 15c6884a4b ARM assembly aliases for two-operand V[R]SHR instructions.
rdar://11189467

llvm-svn: 154087
2012-04-05 07:23:53 +00:00
Argyrios Kyrtzidis ef909265e8 In MemoryBuffer::getOpenFile() make sure that the buffer is null-terminated if
the caller requested a null-terminated one.

When mapping the file there could be a race that results in the file being larger
than the FileSize passed by the caller. We already have an assertion
for this in MemoryBuffer::init(), but we also want a runtime guarantee that
the buffer will be null-terminated, so do a copy that adds a null-terminator.

Protects against crash of rdar://11161822.

llvm-svn: 154082
2012-04-05 04:23:56 +00:00
Jim Grosbach 3d00eecc53 ARM assembly parsing for 'msr' plain 'cpsr' operand.
Plain 'cpsr' is an alias for 'cpsr_fc'.

rdar://11153753

llvm-svn: 154080
2012-04-05 03:17:53 +00:00
Jakob Stoklund Olesen f2390e8303 Pass the right sign to TLI->isLegalICmpImmediate.
LSR can fold three addressing modes into its ICmpZero node:

  ICmpZero BaseReg + Offset      => ICmp BaseReg, -Offset
  ICmpZero -1*ScaleReg + Offset  => ICmp ScaleReg, Offset
  ICmpZero BaseReg + -1*ScaleReg => ICmp BaseReg, ScaleReg

The first two cases are only used if TLI->isLegalICmpImmediate() likes
the offset.

Make sure the right Offset sign is passed to this method in the second
case. The ARM version is not symmetric.

<rdar://problem/11184260>

llvm-svn: 154079
2012-04-05 03:10:56 +00:00
Akira Hatanaka 121342fcc2 Reapply 154038 without the failing test.
llvm-svn: 154062
2012-04-04 22:16:36 +00:00
Owen Anderson 4743c6e159 Revert r154038. It was causing make check failures.
llvm-svn: 154054
2012-04-04 21:18:58 +00:00
Pete Cooper d7290700e6 REG_SEQUENCE expansion to COPY instructions wasn't taking account of sub register indices on the source registers. No simple test case
llvm-svn: 154051
2012-04-04 21:03:25 +00:00
Benjamin Kramer 379018b2da Fix a C++11 UDL conflict.
Still not fixed in the standard ;)

llvm-svn: 154044
2012-04-04 20:33:56 +00:00
Pete Cooper 8a3dc0ed8c f16 FREM can now be legalized by promoting to f32
llvm-svn: 154039
2012-04-04 19:36:31 +00:00
Akira Hatanaka 9705c865d9 Fix LowerGlobalAddress to produce instructions with the correct relocation
types for N32 ABI. Add new test case and update existing ones.

llvm-svn: 154038
2012-04-04 19:02:38 +00:00
Akira Hatanaka 591ecdd7c1 Fix LowerJumpTable to produce instructions with the correct relocation
types for N32 ABI. Test case will be updated after the patch that fixes
TargetLowering::getPICJumpTableRelocBase is checked in.

llvm-svn: 154036
2012-04-04 18:31:32 +00:00
Akira Hatanaka b3a2b8c199 Fix LowerConstantPool to produce instructions with the correct relocation
types for N32 ABI and update test case.

llvm-svn: 154034
2012-04-04 18:26:12 +00:00
Jakob Stoklund Olesen 0a5b72f0e4 Implement ARMBaseInstrInfo::commuteInstruction() for MOVCCr.
A MOVCCr instruction can be commuted by inverting the condition. This
can help reduce register pressure and remove unnecessary copies in some
cases.

<rdar://problem/11182914>

llvm-svn: 154033
2012-04-04 18:23:42 +00:00
Jakob Stoklund Olesen 92fd79a639 Remove spurious debug output.
llvm-svn: 154032
2012-04-04 18:23:38 +00:00
Akira Hatanaka aeff24e424 Fix LowerBlockAddress to produce instructions with the correct relocation
types for N32 ABI and update test case.

llvm-svn: 154031
2012-04-04 18:22:53 +00:00
Rafael Espindola ba0a6cabb8 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
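A hypothetical source-level example of the kind of fold this enables (not the testcase from the commit above): once all the known bits of the intermediate value are computed, the second mask is provably a no-op even when SimplifyDemandedBits alone gives up.

  unsigned maskTwice(unsigned X) {
    unsigned Lo = X & 0xFF;   // bits 8 and above are known zero
    return Lo & 0x1FF;        // instcombine can fold this to (X & 0xFF)
  }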
Hongbin Zheng b21b865fe8 LoopUnrollPass: Use the variable "Threshold" instead of "CurrentThreshold" when
reducing the unroll count; otherwise the reduced unroll count does not take
the "OptimizeForSize" attribute into account.

llvm-svn: 154007
2012-04-04 11:44:08 +00:00
Benjamin Kramer a1355d17ca Move yaml::Stream's dtor out of line so it can see Scanner's dtor.
llvm-svn: 154004
2012-04-04 08:53:34 +00:00
Craig Topper 4c7d995029 Remove default case from switch that was already covering all cases.
llvm-svn: 153996
2012-04-04 04:42:42 +00:00
Pete Cooper e7bff68a5e Removed useless default case from a switch that was covering all the enum values
llvm-svn: 153984
2012-04-04 00:53:04 +00:00
Michael J. Spencer afc0d6a36f Sorry about that. MSVC seems to accept just about any random string you give it ;/
llvm-svn: 153979
2012-04-03 23:36:44 +00:00
Michael J. Spencer 22120c47a7 Add YAML parser to Support.
llvm-svn: 153977
2012-04-03 23:09:22 +00:00
Pete Cooper 9511ec86f9 Add VSELECT to LegalizeVectorTypes::ScalarizeVectorResult. Previously it would crash if it encountered a 1-element VSELECT. The solution is slightly more complicated than just creating a SELECT, as we have to mask or sign-extend the vector condition if it had different boolean contents from the scalar condition. Fixes <rdar://problem/11178095>
llvm-svn: 153976
2012-04-03 22:57:55 +00:00
Pete Cooper b98934cf72 Removed one last bad continue statement meant to be removed in r153914.
llvm-svn: 153975
2012-04-03 22:18:49 +00:00
Chad Rosier 2a02fe1bb2 Fix an issue in SimplifySetCC() specific to vector comparisons.
When folding X == X we need to check getBooleanContents() to determine if the
result is a vector of ones or a vector of negative ones. 

I tried creating a test case, but the problem seems to only be exposed on a
much older version of clang (around r144500).
rdar://10923049

llvm-svn: 153966
2012-04-03 20:11:24 +00:00
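A hedged illustration using Clang/GCC vector extensions (not the original reproducer from the commit above): for vectors, getBooleanContents() typically says "true" is all-ones per lane, so folding X == X must produce a vector of -1s rather than a vector of 1s.

  typedef int v4i __attribute__((vector_size(16)));

  v4i compareWithSelf(v4i X) {
    return X == X;   // each lane must be -1 (all ones), not 1
  }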
Eric Christopher b81e2b403c Fix thinko: check the number of operands on the one that actually
might have more than 19 operands. Add a testcase to make sure I
never screw that up again.

Part of rdar://11026482

llvm-svn: 153961
2012-04-03 17:55:42 +00:00
Dylan Noblesmith 7a3973d3e0 ARMDisassembler: drop bogus dependency on ARMCodeGen
And indirectly, a dependency on most of the core LLVM optimization
libraries.

llvm-svn: 153957
2012-04-03 15:48:14 +00:00
Dylan Noblesmith 6338485d59 Object: drop bogus VMCore dependency
llvm-svn: 153956
2012-04-03 15:48:10 +00:00
Bill Wendling e2cf674310 The speedup doesn't appear to have been from this, but was an anomaly of my testing machine.
llvm-svn: 153951
2012-04-03 11:19:21 +00:00
Bill Wendling dd91e73409 Reserve space for the eventual filling of the vector. This gives a small speedup.
llvm-svn: 153949
2012-04-03 10:50:09 +00:00
Anton Korobeynikov 325e92668b Make the PPCCompilationCallbackC function static, so there is no need to issue the call via the
PLT when LLVM is built as a shared library. This mimics the approach taken by the X86 backend.

llvm-svn: 153938
2012-04-03 06:59:28 +00:00
Craig Topper 7629d63bc4 Add support for AVX enhanced comparison predicates. Patch from Kay Tiong Khoo.
llvm-svn: 153935
2012-04-03 05:20:24 +00:00
Akira Hatanaka d19f025374 Revert r153924. Delete test/MC/Disassembler/Mips and lib/Target/Mips/Disassembler.
llvm-svn: 153926
2012-04-03 03:01:13 +00:00
Akira Hatanaka 55059262aa Revert r153924. There were buildbot failures.
llvm-svn: 153925
2012-04-03 02:51:09 +00:00
Akira Hatanaka e2498d014b MIPS disassembler support.
Patch by Vladimir Medic.

llvm-svn: 153924
2012-04-03 02:20:58 +00:00
Eric Christopher 34164196af Add a line number for the scope of the function (starting at the first
brace) so that we get more accurate line number information about the
declaration of a given function and the line where the function
first starts.

Part of rdar://11026482

llvm-svn: 153916
2012-04-03 00:43:49 +00:00
Pete Cooper 4f0dbb27d9 Fixes to r153903. Added missing explanation of behaviour when the VirtRegMap is NULL. Also changed it in this case to just avoid updating the map, but live ranges or intervals will still get updated and created
llvm-svn: 153914
2012-04-03 00:28:46 +00:00
Pete Cooper 3ca96f9950 Moved LiveRangeEdit.h so that it can be used from other parts of the backend, not just libCodeGen
llvm-svn: 153906
2012-04-02 22:44:18 +00:00
Jakob Stoklund Olesen 291007b055 Allocate virtual registers in ascending order.
This is just the fallback tie-breaker ordering, the main allocation
order is still descending size.

Patch by Shamil Kurmangaleev!

llvm-svn: 153904
2012-04-02 22:30:39 +00:00
Pete Cooper 2bde2f42b1 Refactored the LiveRangeEdit interface so that MachineFunction, TargetInstrInfo, MachineRegisterInfo, LiveIntervals, and VirtRegMap are all passed into the constructor and stored as members instead of passed in to each method.
llvm-svn: 153903
2012-04-02 22:22:53 +00:00
Bill Wendling 932b992888 Add an option to turn off the expensive GVN load PRE part of GVN.
llvm-svn: 153902
2012-04-02 22:16:50 +00:00
Owen Anderson 98f2c0c384 Add predicates for checking whether targets have free FNEG and FABS operations, and prevent the DAGCombiner from turning them into bitwise operations if they do.
llvm-svn: 153901
2012-04-02 22:10:29 +00:00
Lang Hames aaafacd07e During two-address lowering, rescheduling an instruction does not untie
operands. Make TryInstructionTransform return false to reflect this.
Fixes PR11861.

llvm-svn: 153892
2012-04-02 19:58:43 +00:00
Akira Hatanaka b1f68f9696 Initial 64 bit direct object support.
This patch allows llvm to recognize that a 64-bit object file is being produced
and ensures that the subsequently generated ELF header has the correct information.

The test case checks for both big and little endian flavors.

Patch by Jack Carter.

llvm-svn: 153889
2012-04-02 19:25:22 +00:00
Hal Finkel 7591afa235 The binutils for the IBM BG/P are too old to support CFI.
llvm-svn: 153886
2012-04-02 19:09:04 +00:00
Hal Finkel f208af02a4 Add triple support for the IBM BG/P and BG/Q supercomputers.
llvm-svn: 153882
2012-04-02 18:31:33 +00:00
Eric Christopher ad9fe8955a Turn on the accelerator tables for Darwin.
llvm-svn: 153880
2012-04-02 17:58:52 +00:00
Stepan Dyatkovskiy f62ffeca88 Fast fix for PR12343:
http://llvm.org/bugs/show_bug.cgi?id=12343

We have no trivial way to split edges that come from an indirect branch. We can do it with some tricks, but that should be discussed further, and it is still dangerous due to the difficulty of controlling indirect branches.

This fix forbids this case for unswitching.

llvm-svn: 153879
2012-04-02 17:16:45 +00:00
Roman Divacky b9663ccd6b Implement the SVR4 byval alignment for aggregates. Fixing a FIXME.
llvm-svn: 153876
2012-04-02 15:49:30 +00:00
Benjamin Kramer 1c0541b031 Move getOpcodeName from the various target InstPrinters into the superclass MCInstPrinter.
All implementations used the same code.

llvm-svn: 153866
2012-04-02 08:32:38 +00:00
Nadav Rotem 702f080767 Optimizing swizzles of complex shuffles may generate additional complex shuffles.
Do not try to optimize swizzles of shuffles if the source shuffle has more than
a single user, except when the source shuffle is also a swizzle.

llvm-svn: 153864
2012-04-02 07:11:12 +00:00
Craig Topper dab9e35ad0 Remove getInstructionName from MCInstPrinter implementations in favor of using the instruction name table from MCInstrInfo. Reduces static data in the InstPrinter implementations.
llvm-svn: 153863
2012-04-02 07:01:04 +00:00
Craig Topper 54bfde79db Make MCInstrInfo available to the MCInstPrinter. This will be used to remove getInstructionName and the static data it contains since the same tables are already in MCInstrInfo.
llvm-svn: 153860
2012-04-02 06:09:36 +00:00
Hal Finkel 3ecfa7b277 Fix some 80-col. violations I introduced with the A2 PPC64 core.
llvm-svn: 153852
2012-04-01 21:20:14 +00:00
Hal Finkel 322e41a914 Enable prefetch generation on PPC64.
llvm-svn: 153851
2012-04-01 20:08:17 +00:00
Hal Finkel 9032344c15 Add LdStSTD* itin. for the PPC64 A2 core.
llvm-svn: 153850
2012-04-01 20:08:08 +00:00
Nadav Rotem b078350872 This commit contains a few changes that had to go in together.
1. Simplify xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B))
   (and also scalar_to_vector).

2. Xor/and/or are indifferent to the swizzle operation (shuffle of one src).
   Simplify xor/and/or (shuff(A), shuff(B)) -> shuff(op (A, B))

3. Optimize swizzles of shuffles:  shuff(shuff(x, y), undef) -> shuff(x, y).

4. Fix an X86ISelLowering optimization which was very bitcast-sensitive.

Code which was previously compiled to this:

movd    (%rsi), %xmm0
movdqa  .LCPI0_0(%rip), %xmm2
pshufb  %xmm2, %xmm0
movd    (%rdi), %xmm1
pshufb  %xmm2, %xmm1
pxor    %xmm0, %xmm1
pshufb  .LCPI0_1(%rip), %xmm1
movd    %xmm1, (%rdi)
ret

Now compiles to this:

movl    (%rsi), %eax
xorl    %eax, (%rdi)
ret

llvm-svn: 153848
2012-04-01 19:31:22 +00:00
Lang Hames 652f21274f Fix typo.
llvm-svn: 153846
2012-04-01 19:27:25 +00:00
Hal Finkel 88ed4e3b15 Set the default PPC node scheduling preference to ILP (for the embedded cores).
The 440 and A2 cores have detailed itineraries, and this allows them to be
fully used to maximize throughput.

llvm-svn: 153845
2012-04-01 19:23:08 +00:00
Hal Finkel b9845f5758 Add ppc440 itin. entries for LdStSTD*
llvm-svn: 153844
2012-04-01 19:23:04 +00:00
Hal Finkel ec5a1e3669 Use full anti-dep. breaking with post-ra sched. on the embedded ppc cores.
Post-RA scheduling gives a significant performance improvement on
the embedded cores, so turn it on. Using full anti-dep. breaking is
important for FP-intensive blocks, so turn it on (just on the
embedded cores for now; this should also be good on the 970s because
post-ra scheduling is all that we have for now, but that should have
more testing first).

llvm-svn: 153843
2012-04-01 19:22:57 +00:00
Hal Finkel 9f9f8929ee Add instruction itinerary for the PPC64 A2 core.
This adds a full itinerary for IBM's PPC64 A2 embedded core. These
cores form the basis for the CPUs in the new IBM BG/Q supercomputer.

llvm-svn: 153842
2012-04-01 19:22:40 +00:00
Chandler Carruth 45ae88f5fc Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

llvm-svn: 153834
2012-04-01 10:41:24 +00:00
Chandler Carruth c5bfb3c0f5 Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

llvm-svn: 153832
2012-04-01 10:21:05 +00:00
Andrew Trick 779b32a44e misched: Add finalizeScheduler to complete the target interface.
llvm-svn: 153827
2012-04-01 07:24:23 +00:00
Eli Bendersky f5becf617f Removing a file that's no longer being used after the recent refactorings
llvm-svn: 153825
2012-04-01 06:50:01 +00:00
Hal Finkel 59607e63cb Split the LdStGeneral PPC itin. class into LdStLoad and LdStStore.
Loads and stores can have different pipeline behavior, especially on
embedded chips. This change allows those differences to be expressed.
Except for the 440 scheduler, there are no functionality changes.
On the 440, the latency adjustment is only by one cycle, and so this
probably does not affect much. Nevertheless, it will make a larger
difference in the future and this removes a FIXME from the 440 itin.

llvm-svn: 153821
2012-04-01 04:44:16 +00:00
Rafael Espindola 80c540e656 Teach CodeGen's version of computeMaskedBits to understand the range metadata.
This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.

llvm-svn: 153817
2012-03-31 18:14:00 +00:00
Hal Finkel 51861b4855 Fix dynamic linking on PPC64.
Dynamic linking on PPC64 has had problems since we had to move the top-down
hazard-detection logic post-ra. For dynamic linking to work there needs to be
a nop placed after every call. It turns out that it is really hard to guarantee
that nothing will be placed in between the call (bl) and the nop during post-ra
scheduling. Previous attempts at fixing this by placing logic inside the
hazard detector only partially worked.

This is now fixed in a different way: call+nop codegen-only instructions. As far
as CodeGen is concerned the pair is now a single instruction and cannot be split.
This solution works much better than previous attempts.

The scoreboard hazard detector is also renamed to be more generic, there is currently
no cpu-specific logic in it.

llvm-svn: 153816
2012-03-31 14:45:15 +00:00
Chandler Carruth 1a4cc6cc9f Fix a typo reported in IRC by someone reviewing this code.
llvm-svn: 153815
2012-03-31 13:18:09 +00:00
Chandler Carruth a88a0faaa3 Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

llvm-svn: 153814
2012-03-31 13:17:18 +00:00
Chandler Carruth edd2826f3e Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth 0539c071ea Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
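A condensed, hypothetical sketch of the worklist walk outlined in the commit above; every name here is invented for illustration and this is not the actual CallAnalyzer code.

  #include <deque>
  #include <set>
  #include <vector>

  template <typename Block>
  bool worthInlining(Block *Entry, int Threshold,
                     int (*visitAndCost)(Block *),               // simplify + cost insts
                     std::vector<Block *> (*liveSuccessors)(Block *)) {
    std::deque<Block *> Worklist;   // FIFO => breadth-first order
    std::set<Block *> Visited;
    int Cost = 0;
    Worklist.push_back(Entry);
    while (!Worklist.empty()) {
      Block *BB = Worklist.front();
      Worklist.pop_front();
      if (!Visited.insert(BB).second)
        continue;
      Cost += visitAndCost(BB);     // operands re-mapped, instructions folded
      if (Cost > Threshold)         // stop analyzing as soon as the cost exceeds it
        return false;
      // Only successors not proven dead by the simplifications are enqueued.
      std::vector<Block *> Succs = liveSuccessors(BB);
      for (size_t i = 0, e = Succs.size(); i != e; ++i)
        Worklist.push_back(Succs[i]);
    }
    return true;
  }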
Benjamin Kramer 53dc873342 Internalize: Remove references to @llvm.noinline; it was replaced with the noinline attribute a long time ago.
llvm-svn: 153806
2012-03-31 11:03:47 +00:00
Duncan Sands 26a80f3ddb I noticed in passing that the Metadata getIfExists method was creating a new
node and returning it if one didn't exist.

llvm-svn: 153798
2012-03-31 08:20:11 +00:00
Hal Finkel 5cad8742cc Correctly vectorize powi.
The powi intrinsic requires special handling because it always takes a single
integer power regardless of the result type. As a result, we can vectorize
only if the powers are equal. Fixes PR12364.

llvm-svn: 153797
2012-03-31 03:38:40 +00:00
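A hedged example of the constraint described above, written with the GCC/Clang builtin rather than the IR intrinsic: llvm.powi takes a single integer power whatever the result type, so the two calls below are only a legal vectorization target because their powers match.

  void cube(float &A, float &B) {
    A = __builtin_powif(A, 3);   // same integer power as the next call,
    B = __builtin_powif(B, 3);   // so the pair may be merged into a vector powi
  }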
Akira Hatanaka 8f4e3a0088 Select the static relocation model when JITing.
llvm-svn: 153795
2012-03-31 02:38:36 +00:00
Jakob Stoklund Olesen d915503486 Add a 2 byte safety margin in offset computations.
ARMConstantIslandPass still has bugs where jump table compression can
cause constant pool entries to go out of range.

Add a safety margin of 2 bytes when placing constant islands, but use
the real max displacement for verification.

<rdar://problem/11156595>

llvm-svn: 153789
2012-03-31 00:06:44 +00:00
Jakob Stoklund Olesen 24bb3d59d7 Add more debugging output to ARMConstantIslandPass.
llvm-svn: 153788
2012-03-31 00:06:42 +00:00
Benjamin Kramer 682de39f2d Rip out emission of the regIsInRegClass function for the asm printer.
It's slow, bloated and completely redundant with MCRegisterClass::contains.

llvm-svn: 153782
2012-03-30 23:13:40 +00:00
Jim Grosbach 913cc3072d ARM: fix encoding fixup resolution for ldrd and friends.
The 8-bit payload is not contiguous in the opcode. Move the upper nibble
over 4 bits into the correct place.

rdar://11158641

llvm-svn: 153780
2012-03-30 21:54:22 +00:00
Jim Grosbach fdaab531b7 ARM assembler should prefer the non-aliased encoding of cmp.
When an immediate is both a value [t2_]so_imm and a [t2_]so_imm_neg,
we want to use the non-negated form to make sure we prefer the normal
encoding, not the aliased encoding via the negation of, e.g., 'cmp.w'.

llvm-svn: 153770
2012-03-30 19:59:02 +00:00
Jim Grosbach daa04130ed ARM encoding for VSWP got the second operand incorrect.
Make the non-tied register operand names line up with what the base
class encoding handler expects.

rdar://11157236

llvm-svn: 153766
2012-03-30 18:53:01 +00:00
Jim Grosbach 74005ae691 ARM can only use narrow encoding for low regs.
llvm-svn: 153765
2012-03-30 18:39:43 +00:00
Jim Grosbach def5e34812 ARM integrated assembler: fix encoding choice for add/sub imm.
For 'adds r2, r2, #56' outside of an IT block, the 16-bit encoding T2
can be used for this syntax. Prefer the narrow encoding when possible.

rdar://11156277

llvm-svn: 153759
2012-03-30 17:20:40 +00:00
Rafael Espindola a53c46aaa3 Handle unreachable code in the dominates functions. This changes users when
needed for correctness, but still doesn't clean up code that now unnecessarily
checks for reachability.

llvm-svn: 153755
2012-03-30 16:46:21 +00:00
Danil Malyshev 70d22ccb22 Re-factored RuntimeDyld:
1. The main work is now done in RuntimeDyldImpl, which uses the ObjectFile class. RuntimeDyldMachO and RuntimeDyldELF now only parse relocations and resolve them. This makes it easier to improve RuntimeDyld, and support for COFF can be added easily.

2. Added ARM relocations to RuntimeDyldELF.

3. Added support for stub functions on ARM, allowing long branches.

4. Added support for external functions that are not loaded from the object files but can be loaded from external libraries. MCJIT can now correctly execute code containing printf, putc, etc.

5. Sections are now emitted instead of functions, thanks to Jim Grosbach. MemoryManager.startFunctionBody() and MemoryManager.endFunctionBody() have been removed.
6. MCJITMemoryManager.allocateDataSection() and MCJITMemoryManager.allocateCodeSection() use JMM->allocateSpace() instead of JMM->allocateCodeSection() and JMM->allocateDataSection(), because "Cannot allocate an allocated block!" errors occurred with object files containing more than one code or data section.

llvm-svn: 153754
2012-03-30 16:45:19 +00:00
Jim Grosbach 199ab90946 ARM assembly parsing needs to be paranoid about negative immediates.
Make sure to treat immediates as unsigned when doing relative comparisons.

rdar://11153621

llvm-svn: 153753
2012-03-30 16:31:31 +00:00
Rafael Espindola 53190539db Add computeMaskedBitsLoad back, as it was the change to instsimplify that
caused the slowdown last time.

llvm-svn: 153747
2012-03-30 15:52:11 +00:00
Benjamin Kramer 88d31b3f0c Add a note about a missed cmov -> sbb opportunity.
llvm-svn: 153741
2012-03-30 13:02:58 +00:00
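A hedged guess at the shape of pattern such notes usually refer to (the exact case in the note above is not spelled out here): selecting 0 or -1 from an unsigned compare can use sbb instead of a cmov.

  int borrowMask(unsigned A, unsigned B) {
    // x86: cmp A, B ; sbb eax, eax  yields -1 when A < B, otherwise 0.
    return A < B ? -1 : 0;
  }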
James Molloy fb5cd6085f Ensure conditional BL instructions for ARM are given the fixup fixup_arm_condbranch.
Patch by Tim Northover!

llvm-svn: 153737
2012-03-30 09:15:32 +00:00
Evan Cheng a40d40602c ARM target should allow codegenprep to duplicate ret instructions to enable tailcall opt. rdar://11140249
llvm-svn: 153717
2012-03-30 01:24:39 +00:00
Bill Wendling 9f829f1cc4 If we have a VLA that has a "use" in a metadata node that's then used
here but it has no other uses, then we have a problem. E.g.,

  int foo (const int *x) {
    char a[*x];
    return 0;
  }

If we assign 'a' a vreg and fast isel later on has to use the selection
DAG isel, it will want to copy the value to the vreg. However, there are
no uses, which goes counter to what selection DAG isel expects.
<rdar://problem/11134152>

llvm-svn: 153705
2012-03-30 00:02:55 +00:00
Bill Wendling 76fdc4b885 Revert r153694. It was causing failures in the buildbots.
llvm-svn: 153701
2012-03-29 23:23:59 +00:00
Jakob Stoklund Olesen d8af9a5ee1 Invalidate liveness in ARMConstantIslandPass.
This pass splits basic blocks to insert constant islands, and it
doesn't recompute the live-in lists. No later passes depend on accurate
liveness information.

This fixes PR12410 where the machine code verifier was complaining.

llvm-svn: 153700
2012-03-29 23:14:26 +00:00
Jakob Stoklund Olesen 2f2897372a Prefer even-odd D-register pairs.
We are sometimes allocating from the DPair register class which
contains odd-even pairs in addition to the Q registers.

Place the Q registers first in the DPair allocation order as they can be
copied with a single instruction. The odd-even pairs should only be
allocated as a last resort.

llvm-svn: 153699
2012-03-29 22:54:32 +00:00
Lang Hames 591cdaf2ee Try using vmov.i32 to materialize FP32 constants that can't be materialized by
vmov.f32.

llvm-svn: 153696
2012-03-29 21:56:11 +00:00
Danil Malyshev 3548eaf896 Re-factored RuntimeDyld.
Added ExecutionEngine/MCJIT tests.

llvm-svn: 153694
2012-03-29 21:46:18 +00:00
Eric Christopher c13fd6d1e1 Lowercase the tag name to match the rest of dwarf.
llvm-svn: 153691
2012-03-29 21:35:05 +00:00
Jim Grosbach 0b0298302c ARM assembly 'cmp lr, #0' should not encode using 'cmn'.
The CMP->CMN alias was matching for an immediate of zero when it
should only match for negative values.

rdar://11129224

llvm-svn: 153689
2012-03-29 21:19:52 +00:00
Jakob Stoklund Olesen caa6bd273f Handle register copies for the new ARM register classes.
ARM recently gained DPair, DTriple, and DQuad register classes.
Update copyPhysReg() to handle copies in these register classes.

No test case, it is difficult to make the register allocator emit the
odd copies reliably. The missing DPair copy caused a failure on
partialsums in the nightly test suite.

<rdar://problem/11147997>

llvm-svn: 153686
2012-03-29 21:10:40 +00:00
Lang Hames 5569ce7d56 Make x86 REP_MOV* and REP_STO instructions use the correct operand sizes in 64-bit mode.
llvm-svn: 153680
2012-03-29 19:54:28 +00:00
Akira Hatanaka 0603ad8c65 Expand FREM.
llvm-svn: 153671
2012-03-29 18:43:11 +00:00
Jakob Stoklund Olesen 4e55044ff5 Don't PRE compares.
CodeGenPrepare sinks compare instructions down to their uses to avoid
having flags and predicate registers live across basic blocks.

PRE of a compare instruction prevents that, forcing the i1 compare
result into a general purpose register.  That is usually more expensive
than the redundant compare PRE was trying to eliminate in the first
place.

llvm-svn: 153657
2012-03-29 17:22:39 +00:00
Benjamin Kramer 8619c37b5b Replace assert(0) with llvm_unreachable to avoid warnings about dropping off the end of a non-void function in Release builds.
llvm-svn: 153643
2012-03-29 12:37:26 +00:00
Eric Christopher 70e1bd8872 Add support for objc property decls according to the page at:
http://llvm.org/docs/SourceLevelDebugging.html#objcproperty

including type and DECL. Expand the metadata needed accordingly.

rdar://11144023

llvm-svn: 153639
2012-03-29 08:42:56 +00:00
Craig Topper a0a603e582 Only allow symbolic names for (v)cmpss/sd/ps/pd encodings 8-31 to be used with 'v' version of instructions.
llvm-svn: 153636
2012-03-29 07:11:23 +00:00
Joel Jones 68d59e8a90 For X86, change load/dec-or-inc/store into dec-or-inc, respectively.
This is a code change to add support for changing instruction sequences of the form:

  load
  inc/dec of 8/16/32/64 bits
  store

into the appropriate X86 inc/dec through memory instruction:

  inc[qlwb] / dec[qlwb]

The checks that were in X86DAGToDAGISel::Select(SDNode *Node)>>ISD::STORE have been extracted to isLoadIncOrDecStore and reworked to use the better-named
wrappers for getOperand(unsigned) (e.g. getOffset()), and Chain.getNode() has been replaced with LoadNode. The comments have also been expanded.

llvm-svn: 153635
2012-03-29 05:45:48 +00:00
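A source-level illustration, under the usual caveats, of the sequence being matched in the commit above: the load / increment / store collapses to a single x86 memory-operand increment.

  void bump(int *Counter) {
    int V = *Counter;   // load
    V = V + 1;          // inc of 32 bits
    *Counter = V;       // store  =>  incl (%rdi)
  }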
Joel Jones b474099e63 Reverted to revision 153616 to unblock build
llvm-svn: 153623
2012-03-29 01:20:56 +00:00
Joel Jones b88c81fe0f For X86, change load/dec-or-inc/store into dec-or-inc, respectively.
This is a code change to add support for changing instruction sequences of the form:

  load
  inc/dec of 8/16/32/64 bits
  store

into the appropriate X86 inc/dec through memory instruction:

  inc[qlwb] / dec[qlwb]

The checks that were in X86DAGToDAGISel::Select(SDNode *Node)>>ISD::STORE have been extracted to isLoadIncOrDecStore and reworked to use the better-named
wrappers for getOperand(unsigned) (e.g. getOffset()), and Chain.getNode() has been replaced with LoadNode. The comments have also been expanded.

llvm-svn: 153617
2012-03-29 00:37:47 +00:00
Jakob Stoklund Olesen c3e80cc885 Enable machine code verification in the entire code generator.
Some targets still mess up the liveness information, but that isn't
verified after MRI->invalidateLiveness().

The verifier can still check other useful things like register classes
and CFG, so it should be enabled after all passes.

llvm-svn: 153615
2012-03-28 23:54:28 +00:00
Jakob Stoklund Olesen d1bd8fba13 Enable machine code verification after PreSched2 passes.
The late scheduler depends on accurate liveness information if it is
breaking anti-dependencies, so we should be able to verify it.

Relax the terminator checking in the machine code verifier so it can
handle the basic blocks created by if conversion.

llvm-svn: 153614
2012-03-28 23:31:15 +00:00
Jakob Stoklund Olesen b6a7a89289 Don't kill the base register when expanding strd.
When an strd instruction doesn't get the registers it wants, it can be
expanded into two str instructions. Make sure the first str doesn't kill
the base register in the case where the base and data registers are
identical:

  t2STRi12 %R0<kill>, %R0, 4, pred:14, pred:%noreg
  t2STRi12 %R2<kill>, %R0, 8, pred:14, pred:%noreg

<rdar://problem/11101911>

llvm-svn: 153611
2012-03-28 23:07:03 +00:00
Jakob Stoklund Olesen cdee326ab6 Preserve implicit defs in ARMLoadStoreOptimizer.
When a number of sub-register VLDRS instructions are combined into a
VLDM, preserve any super-register implicit defs. This is required to
keep the register scavenger and machine code verifier happy.

Enable machine code verification after ARMLoadStoreOptimizer.
ARM/2012-01-26-CopyPropKills.ll was failing because of this.

llvm-svn: 153610
2012-03-28 22:50:56 +00:00
Danil Malyshev bfee542cce Move getPointerToNamedFunction() from JIT/MCJIT to JITMemoryManager.
llvm-svn: 153607
2012-03-28 21:46:36 +00:00
Rafael Espindola 5054ee82cc Handle intrinsics in GlobalsModRef. Fixes pr12351.
llvm-svn: 153604
2012-03-28 21:31:24 +00:00
Jakob Stoklund Olesen 9e512120b7 Spill DPair registers, not just QPR.
The arm_neon intrinsics can create virtual registers from the DPair
register class which allows both even-odd and odd-even D-register pairs.

This fixes PR12389.

llvm-svn: 153603
2012-03-28 21:20:32 +00:00
Jakob Stoklund Olesen e433c68d7c Also verify after ExpandPostRAPseudos.
llvm-svn: 153599
2012-03-28 20:49:30 +00:00
Jakob Stoklund Olesen 341e06f8d5 Enable machine code verification after the late machine optimization passes.
Branch folding invalidates liveness and disables liveness verification
on some targets.

llvm-svn: 153597
2012-03-28 20:47:37 +00:00
Jakob Stoklund Olesen b21df32cf5 Skip liveness verification when MRI->tracksLiveness() is false.
Extract the liveness verification into its own method.

This makes it possible to run the machine code verifier after liveness
information is no longer required to be valid.

llvm-svn: 153596
2012-03-28 20:47:35 +00:00
Jakob Stoklund Olesen 8cb97523c6 Revert r153516: "Invalidate liveness in Thumb2ITBlockPass."
Revert r153519: "ARMLoadStoreOptimizer invalidates register liveness."

These patches caused miscompilations in povray by turning off branch
folding's updating of live-in lists.

It turns out that the late scheduler depends on the live-in lists, even
if it doesn't need correct kill flags.

<rdar://problem/11139228>

llvm-svn: 153593
2012-03-28 20:11:44 +00:00
Jakob Stoklund Olesen 8e58c90f51 Allow removeLiveIn to be called with a register that isn't live-in.
This avoids the silly double search:

  if (isLiveIn(Reg))
    removeLiveIn(Reg);

llvm-svn: 153592
2012-03-28 20:11:42 +00:00
Chad Rosier e27081d348 Revert r153521 as it's causing large regressions on the nightly testers.
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153587
2012-03-28 18:42:50 +00:00
Pete Cooper 148ebb8802 Fixed commuteInstructions bug where if it's called pre-regalloc the subreg indices weren't commuted
llvm-svn: 153579
2012-03-28 17:02:22 +00:00
Benjamin Kramer aa9e4a5e59 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
llvm-svn: 153576
2012-03-28 14:50:09 +00:00
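A hypothetical C++ shape of the case described above: once the global is proven to stay all-zero (a ConstantAggregateZero), loads through an inbounds GEP into it can be replaced with zero.

  static int Table[16];       // never stored to => stays a zero initializer

  int lookup(unsigned I) {
    return Table[I & 15];     // GlobalOpt can replace this load with 0
  }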
Benjamin Kramer 20b32d2da6 Add another note about a missed compare with nsw arithmetic instcombine.
llvm-svn: 153574
2012-03-28 10:50:18 +00:00
Richard Barton 7ce39497b4 Fixup VST1.32 with writeback instruction. Also re-factor non-writeback version.
llvm-svn: 153573
2012-03-28 10:18:11 +00:00
Chandler Carruth 772c88b887 Switch to WeakVHs in the value mapper, and aggressively prune dead basic
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.

llvm-svn: 153572
2012-03-28 08:38:27 +00:00
Eric Christopher 24a6298512 More debug output.
llvm-svn: 153571
2012-03-28 07:34:36 +00:00
Eric Christopher 7285c7d51d Fix the output of the DW_TAG_friend tag to include DW_AT_friend
and not the rest of the member tag.

Fixes PR11695

llvm-svn: 153570
2012-03-28 07:34:31 +00:00
Akira Hatanaka 2c67006cdd Turn off post-RA scheduler by default.
llvm-svn: 153557
2012-03-28 00:52:23 +00:00
Chad Rosier bb2a6da440 Fix 80-column violation.
llvm-svn: 153556
2012-03-28 00:35:33 +00:00
Akira Hatanaka 047473e293 Turn on post register allocation scheduler.
llvm-svn: 153554
2012-03-28 00:24:17 +00:00
Akira Hatanaka 5ba593f509 Sort relocation entries before they are written out to a file. The MIPS ABI
imposes a constraint that a GOT16 relocation referring to a local symbol, or a HI16 relocation, has to be
followed immediately by a matching LO16 relocation.

llvm-svn: 153553
2012-03-28 00:23:33 +00:00
Akira Hatanaka 34ee3ff83d Emit all directives except for ".cprestore" during asm printing rather than
emitting them as machine instructions. Directives ".set noat" and ".set at" are now
emitted only at the beginning and end of a function, except in the case where
they are emitted to enclose a .cpload with an immediate operand that doesn't fit
in a 16-bit field, or unaligned loads/stores.

Also, make the following changes:
- Remove function isUnalignedLoadStore and use a switch-case statement to
  determine whether an instruction is an unaligned load or store.

- Define helper function CreateMCInst which generates an instance of an MCInst
  from an opcode and a list of operands.

llvm-svn: 153552
2012-03-28 00:22:50 +00:00
Akira Hatanaka 1518a5fa9c Set the neverHasSideEffects flag on pattern-less instructions that do not have
any side effects.

llvm-svn: 153551
2012-03-28 00:21:37 +00:00
Benjamin Kramer 2735c01906 Add a note about a cute little fabs optimization.
llvm-svn: 153543
2012-03-27 22:42:42 +00:00
Benjamin Kramer f0901459b9 Add two missed instcombines related to compares with nsw arithmetic.
llvm-svn: 153542
2012-03-27 22:03:19 +00:00
Akira Hatanaka 52656d1047 Remove trailing white space.
llvm-svn: 153536
2012-03-27 20:35:51 +00:00
Lang Hames 5544bf1b8a Use a SmallVector and linear lookup instead of a DenseSet - SourceMap values
will always be tiny sets, so DenseSet is overkill (SmallSet won't work as we
need iteration support). 

llvm-svn: 153529
2012-03-27 19:10:45 +00:00
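A generic illustration of the tradeoff named above (not the SrcMap code itself): for sets that stay tiny, a SmallVector with a linear find is cheaper than a DenseSet and still iterates naturally.

  #include "llvm/ADT/SmallVector.h"
  #include <algorithm>

  static void addUnique(llvm::SmallVector<unsigned, 4> &Regs, unsigned Reg) {
    if (std::find(Regs.begin(), Regs.end(), Reg) == Regs.end())
      Regs.push_back(Reg);    // linear scan is fine for a handful of entries
  }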
Akira Hatanaka a25fe22198 Add member EmitNOAT and its setter and getter functions to class MipsFunctionInfo.
If EmitNOAT is true, directives ".set noat" and ".set at" are emitted at the
beginning and end of a function. 

llvm-svn: 153528
2012-03-27 19:08:42 +00:00
Eric Christopher 7ed2efca6a Use DW_AT_low_pc for a single entry point into a routine.
Fixes PR10105

llvm-svn: 153524
2012-03-27 18:35:54 +00:00
Chad Rosier 8e6dbccd03 Reapply r153423; the original commit was fine. The failing test, distray, had
undefined behavior, which Rafael was kind enough to fix.

Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153521
2012-03-27 17:44:52 +00:00
Jakob Stoklund Olesen 4acbcb3171 ARMLoadStoreOptimizer invalidates register liveness.
This pass tries to update kill flags, but there are still many bugs.
Passes after the load/store optimizer don't need accurate liveness, so
don't even try.

<rdar://problem/11101911>

llvm-svn: 153519
2012-03-27 17:33:52 +00:00
Jakob Stoklund Olesen 6c08534aff Print SSA and liveness tracking flags in MF::print().
llvm-svn: 153518
2012-03-27 17:17:16 +00:00
Jakob Stoklund Olesen d1664a1571 Branch folding may invalidate liveness.
Branch folding can use a register scavenger to update liveness
information when required. Don't do that if liveness information is
already invalid.

llvm-svn: 153517
2012-03-27 17:06:09 +00:00
Jakob Stoklund Olesen 14459cdc49 Invalidate liveness in Thumb2ITBlockPass.
llvm-svn: 153516
2012-03-27 17:06:06 +00:00
Chris Lattner 1cc25e8a40 fix what looks like a real logic bug, found by PVS-Studio (part of PR12357)
llvm-svn: 153513
2012-03-27 16:27:21 +00:00
Jakob Stoklund Olesen 9c1ad5cb7d Add an MRI::tracksLiveness() flag.
Late optimization passes like branch folding and tail duplication can
transform the machine code in a way that makes it expensive to keep the
register liveness information up to date. There is a fuzzy line between
register allocation and late scheduling where the liveness information
degrades.

The MRI::tracksLiveness() flag makes the line clear: While true,
liveness information is accurate, and can be used for register
scavenging. Once the flag is false, liveness information is not
accurate, and can only be used as a hint.

Late passes generally don't need the liveness information, but they will
sometimes use the register scavenger to help update it. The scavenger
enforces strict correctness, and we have to spend a lot of code to
update register liveness that may never be used.

llvm-svn: 153511
2012-03-27 15:13:58 +00:00
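A hedged sketch (not code from the tree) of how a late pass can consult the flag introduced above before trusting live-in lists or running the scavenger:

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"

  static bool mayUseScavenger(const llvm::MachineFunction &MF) {
    // After MRI->invalidateLiveness() this returns false and liveness may
    // only be treated as a hint.
    return MF.getRegInfo().tracksLiveness();
  }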
Chandler Carruth b9e35fbc1e Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch an r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

llvm-svn: 153506
2012-03-27 10:48:28 +00:00
Craig Topper 1fcf5bcae1 Prune some includes
llvm-svn: 153502
2012-03-27 07:54:11 +00:00
Craig Topper f6e7e12f75 Remove unnecessary llvm:: qualifications
llvm-svn: 153500
2012-03-27 07:21:54 +00:00
Akira Hatanaka 8a7633c74e Pass the llvm IR pointer value and offset to the constructor of
MachinePointerInfo when getStore is called to create a node that stores an
argument passed in register to the stack. Without this change, the post RA 
scheduler will fail to discover the dependencies between the store
instructions and the instructions that load from a structure passed by value. 

The link to the related discussion is here:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-March/048055.html

llvm-svn: 153499
2012-03-27 03:13:56 +00:00
Akira Hatanaka 769f69f9b6 Fix bug in LowerConstantPool.
llvm-svn: 153498
2012-03-27 02:55:31 +00:00
Akira Hatanaka 2a36c9f4a8 Add T9 to the list of live-in registers of the entry basic block.
llvm-svn: 153497
2012-03-27 02:46:25 +00:00
Akira Hatanaka fe384a2c84 Retrieve and add the offset of a symbol in applyFixup rather than retrieving and
setting it in MipsMCCodeEmitter::getMachineOpValue. Assert in getMachineOpValue if
MachineOperand MO is of an unexpected type. 

llvm-svn: 153494
2012-03-27 02:33:05 +00:00
Akira Hatanaka a06bc1c6e3 Define function MipsGetSymAndOffset which returns a fixup's symbol and the
offset applied to it.

llvm-svn: 153493
2012-03-27 02:04:18 +00:00
Evan Cheng 7fede87349 Post-ra LICM should take care not to hoist an instruction that would clobber a
register that's read by the preheader terminator.

rdar://11095580

llvm-svn: 153492
2012-03-27 01:50:58 +00:00
Akira Hatanaka da72819725 Rewrite computation of Value in adjustFixupValue so that the upper 48-bits are
cleared. No functionality change.

llvm-svn: 153491
2012-03-27 01:50:08 +00:00
Lang Hames 551662bf5d During MachineCopyPropagation a register may be the source operand of multiple
copies being considered for removal. Make sure to track all of the copies,
rather than just the most recently encountered one, by holding a DenseSet instead of
an unsigned in SrcMap.

No test case - couldn't reduce something with a sane size.

llvm-svn: 153487
2012-03-27 00:44:47 +00:00
Akira Hatanaka ba5100c117 Reserve hardware registers.
llvm-svn: 153486
2012-03-27 00:40:56 +00:00
Evan Cheng a2b48d985b ARM has a peephole optimization which looks for a def / use pair. The def
produces a 32-bit immediate which is consumed by the use. It tries to 
fold the immediate by breaking it into two parts and folding them into the
immediate fields of two uses, e.g.
       movw    r2, #40885
       movt    r3, #46540
       add     r0, r0, r3
=>
       add.w   r0, r0, #3019898880
       add.w   r0, r0, #30146560
;
However, this transformation is incorrect if the user instruction sets flags, e.g.
       movw    r2, #40885
       movt    r3, #46540
       adds    r0, r0, r3
=>
       add.w   r0, r0, #3019898880
       adds.w  r0, r0, #30146560
Note the adds.w may not set the carry flag even if the original sequence
would.

rdar://11116189

llvm-svn: 153484
2012-03-26 23:31:00 +00:00
Lang Hames 95e021faf5 Add a debug option to dump PBQP graphs during register allocation.
llvm-svn: 153483
2012-03-26 23:07:23 +00:00
Andrew Trick 7004e4b95e SCEV fix: Handle loop invariant loads.
Fixes PR11882: NULL dereference in ComputeLoadConstantCompareExitLimit.

llvm-svn: 153480
2012-03-26 22:33:59 +00:00
Eric Christopher 0925c62c74 Use the file in the inlined DIE rather than the compile unit for
backtrace locations.

Testcase forthcoming, but I wanted to get some testing here.

Should fix:

PR12323
PR12314
rdar://11091100

llvm-svn: 153471
2012-03-26 21:38:38 +00:00
Nadav Rotem a8f3562e8f 153465 was incorrect. In this code we wanted to check that the pointer operand is of pointer type (and not vector type).
llvm-svn: 153468
2012-03-26 21:00:53 +00:00
Sean Callanan a375943d82 Made RuntimeDyldMachO support vanilla i386
relocations.  The algorithm is the same as
that for x86_64.  Scattered relocations, a
feature present in i386 but not on x86_64,
are not yet supported.

llvm-svn: 153466
2012-03-26 20:45:52 +00:00
Nadav Rotem e63e59cc44 PR12357: The pointer was used before it was checked.
llvm-svn: 153465
2012-03-26 20:39:18 +00:00
Andrew Trick 14779cc49e LSR ivchain bug fix: corner case with ConstantExpr.
Fixes PR11950.

llvm-svn: 153463
2012-03-26 20:28:37 +00:00
Andrew Trick 356a896394 comment typo
llvm-svn: 153462
2012-03-26 20:28:35 +00:00
Chris Lattner b1e2e1e091 eliminate an unneeded branch, part of PR12357
llvm-svn: 153458
2012-03-26 19:13:57 +00:00
Eric Christopher 2b40fdf3ae Tidy.
llvm-svn: 153456
2012-03-26 19:09:40 +00:00
Eric Christopher f16bee8682 Tidy.
llvm-svn: 153455
2012-03-26 19:09:38 +00:00
Chad Rosier 08e57e5ccf Revert r153423 as this is causing failures on our internal nightly testers.
Original commit message:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153452
2012-03-26 18:07:14 +00:00
Andrew Trick e51feea79c LSR cleanup: potential bug caught by PVS-Studio.
Thanks Andrey.

llvm-svn: 153451
2012-03-26 18:03:16 +00:00
Kostya Serebryany 6f8a776041 [tsan] treat vtable pointer updates in a special way (requires tbaa); fix a bug (forgot to return true after instrumenting); make sure the tsan tests are run
llvm-svn: 153448
2012-03-26 17:35:03 +00:00
Benjamin Kramer 3e6719c133 No need to do an expensive stable sort for a bunch of integers.
llvm-svn: 153438
2012-03-26 14:17:26 +00:00
Douglas Gregor c0f6380464 Add missing include of <new>
llvm-svn: 153436
2012-03-26 14:04:17 +00:00
Anton Korobeynikov 4547077a2b Fix GetMainExecutable on kFreeBSD.
Patch by Sylvestre Ledru!

llvm-svn: 153435
2012-03-26 12:05:51 +00:00
Craig Topper 6e80c28017 Prune some includes and forward declarations.
llvm-svn: 153429
2012-03-26 06:58:25 +00:00
Eric Christopher c1e2dcdb8a Add a debug statement.
llvm-svn: 153428
2012-03-26 06:10:32 +00:00
Rafael Espindola df9b4adb82 Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153423
2012-03-26 01:44:11 +00:00
Craig Topper 5fa0caafc0 Prune includes and replace uses of ARMRegisterInfo.h with ARMBaseRegisterInfo.h
llvm-svn: 153422
2012-03-26 00:45:15 +00:00
Craig Topper 07720d8dcd Replace uses of ARMBaseInstrInfo and ARMTargetMachine with the Base versions.
llvm-svn: 153421
2012-03-25 23:49:58 +00:00
Chandler Carruth 8059c84af1 Teach instsimplify how to simplify comparisons of pointers which are
constant-offsets of a common base using the generic GEP-walking logic
I added for computing pointer differences in the same situation.
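
An illustrative C++ snippet (names invented, not part of the commit) of the
kind of comparison this lets instsimplify fold: both pointers are constant
offsets from the same base, so the result is a compile-time constant.

  int buffer[16];

  // The two operands are constant-offset GEPs off the same base, so the
  // comparison folds directly to true.
  bool inOrder() { return &buffer[3] < &buffer[7]; }

  int main() { return inOrder() ? 0 : 1; }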

llvm-svn: 153419
2012-03-25 21:28:14 +00:00
Chandler Carruth 2741aae80b Switch the pointer-difference simplification logic to only work with
inbounds GEPs. This isn't really necessary for simplifying pointer
differences, but I'm planning to re-use the same code to simplify
pointer comparisons where it is necessary. Since real code almost
exclusively uses inbounds GEPs, it doesn't seem worth it to support the
extra complexity of turning it on and off. If anyone would like that
back, feel free to shout. Note that instcombine will still catch any of
these patterns.

llvm-svn: 153418
2012-03-25 20:43:07 +00:00
Craig Topper d4a964cd70 Prune some includes and forward declarations.
llvm-svn: 153415
2012-03-25 18:10:17 +00:00
Chandler Carruth ef82cf5b1e Teach the function cloner (and thus the inliner) to simplify PHINodes
aggressively. There are lots of dire warnings about this being expensive
that seem to predate switching to the TrackingVH-based value remapper
that is automatically updated on RAUW. This makes it easy to not just
prune single-entry PHIs, but to fully simplify PHIs, and to recursively
simplify the newly inlined code to propagate PHINode simplifications.

This introduces a bit of a thorny problem though. We may end up
simplifying a branch condition to a constant when we fold PHINodes, and
we would like to nuke any dead blocks resulting from this so that time
isn't wasted continually analyzing them, but this isn't easy. Deleting
basic blocks *after* they are fully cloned and mapped into the new
function currently requires manually updating the value map. The last
piece of the simplification-during-inlining puzzle will require either
switching to WeakVH mappings or some other piece of refactoring. I've
left a FIXME in the testcase about this.

llvm-svn: 153410
2012-03-25 10:34:54 +00:00
Chandler Carruth 2121199241 Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.
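
A hand-written C++ illustration of the 'or'/'and' point above (not taken from
the commit or its tests): once an argument is a known constant at the call
site, these operations fold to constants regardless of the other operand, so
the cloned body collapses during inlining.

  // After inlining masked(x, 0), instsimplify knows 'x & 0' is always 0, so
  // the branch disappears and the call folds to a constant.
  static int masked(int x, int mask) {
    if ((x & mask) != 0)
      return 1;
    return 0;
  }

  int caller(int x) {
    return masked(x, 0);   // simplifies to 0 without keeping the cloned branch
  }

  int main() { return caller(42); }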

llvm-svn: 153403
2012-03-25 04:03:40 +00:00
Chandler Carruth 0c72e3f469 Add an asserting ValueHandle to the block simplification code which will
fire if anything ever invalidates the assumption of a terminator
instruction being unchanged throughout the routine.

I've convinced myself that the current definition of simplification
precludes such a transformation, so I think getting some asserts
coverage that we don't violate this agreement is sufficient to make this
code safe for the foreseeable future.

Comments to the contrary or other suggestions are of course welcome. =]
The bots are now happy with this code though, so it appears the bug here
has indeed been fixed.

llvm-svn: 153401
2012-03-25 03:29:25 +00:00
Chandler Carruth 17fc6ef234 Don't form a WeakVH around the sentinel node in the instructions BB
list. This is a bad idea. ;] I'm hopeful this is the bug that's showing
up with the MSVC bots, but we'll see.

It is definitely unnecessary. InstSimplify won't do anything to
a terminator instruction, we don't need to even include it in the
iteration range. We can also skip the now dead terminator check,
although I've made it an assert to help document that this is an
important invariant.

I'm still a bit queasy about this because there is an implicit
assumption that the terminator instruction cannot be RAUW'ed by the
simplification code. While that appears to be true at the moment, I see
no guarantee that would ensure it remains true in the future. I'm
looking at the cleanest way to solve that...

llvm-svn: 153399
2012-03-24 23:03:27 +00:00
Chandler Carruth 77e8bfbb5e Try to harden the recursive simplification still further. This is again
spotted by inspection, and I've crafted no test case that triggers it on
my machine, but some of the windows builders are hitting what looks like
memory corruption, so *something* is amiss here.

This patch takes a more generalized approach to eliminating
double-visits. Imagine code such as:

  %x = ...
  %y = add %x, 1
  %z = add %x, %y

You can imagine that if we simplify %x, we would add %y and %z to the
list. If the use-chain order happens to cause us to add them in reverse
order, we could pull %y off first, and simplify it, adding %z to the
list. We now have %z on the list twice, and will reference it after it
is deleted.

Currently, all my test cases happen to not trigger this, likely due to
the use-chain ordering, but there seems no guarantee that such
a situation could not occur, so we should handle it correctly.

Again, if anyone knows how to craft a testcase that actually triggers
this, please let me know.
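
A generic standalone sketch of the de-duplication idea, using standard
containers and invented instruction IDs rather than the actual worklist code:
a companion set mirrors the worklist, so %z cannot be queued twice and later
visited after it has been erased.

  #include <cstdio>
  #include <unordered_set>
  #include <vector>

  int main() {
    std::vector<int> Worklist;
    std::unordered_set<int> InList;   // mirrors the worklist's contents

    auto enqueue = [&](int I) {
      if (InList.insert(I).second)    // push only if not already queued
        Worklist.push_back(I);
    };

    // Simplifying %x queues its users %y (1) and %z (2); simplifying %y then
    // tries to queue %z a second time.
    enqueue(1);
    enqueue(2);
    enqueue(2);                       // duplicate, ignored

    std::printf("queued %zu items\n", Worklist.size());  // prints 2
    return 0;
  }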

llvm-svn: 153397
2012-03-24 22:34:26 +00:00
Chandler Carruth e41fc73f08 Don't add the instruction about to be RAUW'ed and erased to the
worklist. This can happen in theory when an instruction uses itself,
such as a PHI node. This was spotted by inspection, and unfortunately
I've not been able to come up with a test case that would trigger it. If
anyone has ideas, let me know...

llvm-svn: 153396
2012-03-24 22:34:23 +00:00
Jean-Daniel Dupas a573b22015 Fix null to integer conversion warnings.
llvm-svn: 153395
2012-03-24 22:17:50 +00:00
Chandler Carruth cf1b585f60 Refactor the interface for recursively simplifying instructions to be a tad
bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

llvm-svn: 153393
2012-03-24 21:11:24 +00:00
Rafael Espindola 8e5b40eb08 Remove always true variable.
llvm-svn: 153392
2012-03-24 20:02:25 +00:00
Hal Finkel e44eb28807 Fix small-integer VAARG on SVR4 ABI PPC64.
The PPC64 SVR4 ABI requires that integer stack arguments (and thus the var args)
smaller than 64 bits be zero extended to 64 bits.
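
For context only, a small C++ example of the kind of variadic call affected
(the function and values are invented): under the PPC64 SVR4 ABI each of the
32-bit unsigned arguments occupies a 64-bit slot and must be zero extended
when it is stored there, or va_arg reads back the wrong value.

  #include <cstdarg>
  #include <cstdio>

  // Sums 'count' unsigned 32-bit arguments read back with va_arg.
  static unsigned long long sum(int count, ...) {
    va_list ap;
    va_start(ap, count);
    unsigned long long total = 0;
    for (int i = 0; i < count; ++i)
      total += va_arg(ap, unsigned int);  // relies on zero-extended slots
    va_end(ap);
    return total;
  }

  int main() {
    std::printf("%llu\n", sum(3, 0x80000000u, 1u, 2u));  // prints 2147483651
    return 0;
  }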

llvm-svn: 153373
2012-03-24 03:53:55 +00:00
Hal Finkel 71c2ba3d2e Add the ability to promote legal integer VAARGs. This is required for the PPC64 SVR4 ABI.
llvm-svn: 153372
2012-03-24 03:53:52 +00:00
Francois Pichet 4b9ab74690 Fix the MSVC build.
llvm-svn: 153366
2012-03-24 01:36:37 +00:00
Justin Holewinski a84577dcff PTX: Fix predicate logic bug
Code such as:

%vreg100 = setcc %vreg10, -1, SETNE
brcond %vreg100, %tgt

was being incorrectly morphed into

%vreg100 = and %vreg10, 1
brcond %vreg100, %tgt

where the 'and' instruction could be eliminated since
such logic is on 1-bit types in the PTX back-end, leaving
us with just:

brcond %vreg10, %tgt

which essentially gives us inverted branch conditions.
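
A quick standalone check of the inversion claim, with ordinary C++ ints
standing in for the 1-bit PTX predicates: on a 1-bit type, -1 and 1 are the
same bit pattern, so 'x != -1' is simply '!x', and branching on x itself
instead of on the setcc result flips the branch.

  #include <cstdio>

  int main() {
    for (int x = 0; x <= 1; ++x) {
      int correct  = (x != 1);  // setcc %vreg10, -1, SETNE on a 1-bit type
      int afterBug = x;         // what branching on %vreg10 itself tests
      std::printf("x=%d correct=%d after-bug=%d\n", x, correct, afterBug);
    }
    return 0;                   // the two columns are always opposite
  }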

llvm-svn: 153364
2012-03-24 01:23:20 +00:00
Andrew Trick 25553ab5fe More IndVarSimplify cleanup.
llvm-svn: 153362
2012-03-24 00:51:17 +00:00
Rafael Espindola ef9f5504ea First part of PR12251. Add documentation and verifier support for the range
metadata.

llvm-svn: 153359
2012-03-24 00:14:51 +00:00
Kostya Serebryany e505a5abe9 add EP_OptimizerLast extension point
llvm-svn: 153353
2012-03-23 23:22:59 +00:00
Bill Wendling 8737480dfa It's possible for two types, which are isomorphic, to be added to the
destination module, but one of them isn't used in the destination module. If
another module comes along and then uses the unused type, there could be type
conflicts when the modules are finally linked together. (This happened when
building LLVM.)

The test that was reduced is:

Module A:

%Z = type { %A }
%A = type { %B.1, [7 x x86_fp80] }
%B.1 = type { %C }
%C = type { i8* }

declare void @func_x(%C*, i64, i64)
declare void @func_z(%Z* nocapture)

Module B:

%B = type { %C.1 }
%C.1 = type { i8* }
%A.2 = type { %B.3, [5 x x86_fp80] }
%B.3 = type { %C.1 }

define void @func_z() {
  %x = alloca %A.2, align 16
  %y = getelementptr inbounds %A.2* %x, i64 0, i32 0, i32 0
  call void @func_x(%C.1* %y, i64 37, i64 927) nounwind
  ret void
}

declare void @func_x(%C.1*, i64, i64)
declare void @func_y(%B* nocapture)

(Unfortunately, this test doesn't fail under llvm-link, only during an LTO
linking.) The '%C' and '%C.1' clash. The destination module gets the '%C'
declaration. When merging Module B, it looks at the '%C.1' subtype of the '%B'
structure. It adds that in, because that's cool. And when '%B.3' is processed,
it uses the '%C.1'. But the '%B' has used '%C' and we prefer to use '%C'. So the
'@func_x' type is changed to 'void (%C*, i64, i64)', but the type of '%x' in
'@func_z' remains '%A.2'. The GEP resolves to a '%C.1', which conflicts with the
'@func_x' signature.

We can resolve this situation by making sure that the type is used in the
destination before saying that it should be used in the module being merged in.

With this fix, LLVM and Clang both compile under LTO.
<rdar://problem/10913281>

llvm-svn: 153351
2012-03-23 23:17:38 +00:00
Jim Grosbach 190e7b6e18 ARM tidy up ARMConstantIsland.cpp.
No functional change, just tidy up the code and nomenclature a bit.

llvm-svn: 153347
2012-03-23 23:07:03 +00:00
Jim Grosbach 4a2909ab0f Pretty-printing comments for literal floating point in .s files.
Dump the hex representation to the comment stream as well as the float
value.

llvm-svn: 153346
2012-03-23 23:06:47 +00:00
Akira Hatanaka 64ad2cf1e4 Add a hook in MCELFObjectTargetWriter to allow targets to sort relocation
entries in the relocation table before they are written out to the file. 

llvm-svn: 153345
2012-03-23 23:06:45 +00:00
Dan Gohman e3ed2b0699 Don't convert objc_retainAutoreleasedReturnValue to objc_retain if it
is retaining the return value of an invoke that it immediately follows.

llvm-svn: 153344
2012-03-23 18:09:00 +00:00
Dan Gohman 5c70fadc17 It's not possible to insert code immediately after an invoke in the
same basic block, and it's not safe to insert code in the successor
blocks if the edges are critical edges. Splitting those edges is
possible, but undesirable, especially on the unwind side. Instead,
make the bottom-up code motion to consider invokes to be part of
their successor blocks, rather than part of their parent blocks, so
that it doesn't push code past them and onto the edges. This fixes
PR12307.

llvm-svn: 153343
2012-03-23 17:47:54 +00:00
Owen Anderson add6f1d2e9 Make it feasible for clients using EngineBuilder to capture the TargetMachine that is created as part of selecting the appropriate target.
This is necessary if the client wants to be able to mutate TargetOptions (for example, fast FP math mode) after the initial creation of the ExecutionEngine.

llvm-svn: 153342
2012-03-23 17:40:56 +00:00
Lang Hames 45c6d21ae1 Add support for register masks to PBQP.
llvm-svn: 153341
2012-03-23 17:33:42 +00:00
Benjamin Kramer b0640db80e Include cstdio in a few places that depended on getting it transitively through StringExtras.h
llvm-svn: 153328
2012-03-23 11:35:30 +00:00
Benjamin Kramer cbf108eda6 Move ftostr into its last user (cppbackend) and simplify it a bit.
New code should use raw_ostream.

llvm-svn: 153326
2012-03-23 11:26:29 +00:00
Duncan Sands a11ef6e4ea When propagating equalities, e.g. replacing A with B in every basic block
dominated by Root, check that B is available throughout the scope.  This
is obviously true (famous last words?) given the current logic, but the
check may be helpful if more complicated reasoning is added one day.

llvm-svn: 153323
2012-03-23 08:45:52 +00:00
Duncan Sands 8f897dc88b Indentation.
llvm-svn: 153322
2012-03-23 08:29:04 +00:00
Bill Wendling 00623787c0 Ignore the last message.
llvm-svn: 153315
2012-03-23 07:22:49 +00:00
Bill Wendling 947f92869a Revert patch. It broke the build.
llvm-svn: 153314
2012-03-23 07:21:18 +00:00
Bill Wendling 7389acda89 Dematerialize the source functions after we're done with them. This saves a bit
of memory during LTO.

llvm-svn: 153313
2012-03-23 07:18:22 +00:00
Eric Christopher 64a232343a Remove the C backend.
llvm-svn: 153307
2012-03-23 05:50:46 +00:00
Eric Christopher bdb64495c4 Fix up cmake build.
llvm-svn: 153306
2012-03-23 03:55:14 +00:00
Eric Christopher 3c0d51661f Take out the debug info probe stuff. It's making some changes to
the PassManager annoying and should be reimplemented as a decorator
on top of existing passes (as should the timing data).

llvm-svn: 153305
2012-03-23 03:54:05 +00:00
Andrew Trick e3502cb204 Remove -enable-lsr-retry in time for 3.1.
llvm-svn: 153287
2012-03-22 22:42:51 +00:00
Andrew Trick d97b83e320 Remove -enable-lsr-nested in time for 3.1.
Tests cases have been removed but attached to open PR12330.

llvm-svn: 153286
2012-03-22 22:42:45 +00:00
Bill Wendling 44022b2ae8 Some whitespace and comment cleanup.
llvm-svn: 153278
2012-03-22 20:47:54 +00:00
Bill Wendling 8c2cc41691 Remove unneeded #ifdefs.
llvm-svn: 153277
2012-03-22 20:30:41 +00:00
Bill Wendling b6af2f36cd Add a 'dump' method to the type map. Doxygenify some of the comments and add a
few comments where none existed before. Also change a function's name to match
the current coding standard.
No functionality change.

llvm-svn: 153276
2012-03-22 20:28:27 +00:00