Commit Graph

41224 Commits

Author SHA1 Message Date
Dan Gohman 002ff89cbd Optionally rerun dedicated-register filtering after applying
other filtering techniques, as those may allow it to filter
out more obviously unprofitable candidates.

llvm-svn: 112441
2010-08-29 16:39:22 +00:00
Dan Gohman f031792cc6 Fix several areas in LSR to do a better job keeping the main
LSRInstance data structures up to date. This fixes some
pessimizations caused by stale data which will be exposed
in an upcoming change.

llvm-svn: 112440
2010-08-29 16:32:54 +00:00
Dan Gohman e9e0873b08 Refactor the three main groups of code out of
NarrowSearchSpaceUsingHeuristics into separate functions.

llvm-svn: 112439
2010-08-29 16:09:42 +00:00
Dan Gohman 37a0f68036 Delete a bogus check.
llvm-svn: 112438
2010-08-29 15:30:29 +00:00
Dan Gohman b6a520d63c Add some comments.
llvm-svn: 112437
2010-08-29 15:27:08 +00:00
Dan Gohman bf673e0652 Move this debug output into GenerateAllReuseFormula, to declutter
the high-level logic.

llvm-svn: 112436
2010-08-29 15:21:38 +00:00
Dan Gohman d366b6d5c8 Delete an unused declaration.
llvm-svn: 112435
2010-08-29 15:19:11 +00:00
Dan Gohman 4f13bbfefc Do one lookup instead of two.
llvm-svn: 112434
2010-08-29 15:18:49 +00:00
Dan Gohman d1da5cdfee Restructure the {A,+,B}<L> * {C,+,D}<L> folding so that it folds
all applicable addrecs before recursing on getMulExpr, instead of
recursing on getMulExpr for each one.

llvm-svn: 112433
2010-08-29 15:16:58 +00:00
Dan Gohman 3e6fc18943 Batch up subtracts along with adds, when analyzing long chains of
operations.

llvm-svn: 112432
2010-08-29 15:10:06 +00:00
Dan Gohman 7712d2900d Micro-optimize GroupByComplexity.
llvm-svn: 112431
2010-08-29 15:07:13 +00:00
Dan Gohman 0f2de01355 Hold AddRec->getLoop() in a variable, to make the Mul code more consistent
with the Add code.

llvm-svn: 112430
2010-08-29 14:55:19 +00:00
Dan Gohman 028c18158a Rename a variable, for consistency.
llvm-svn: 112429
2010-08-29 14:53:34 +00:00
Dan Gohman 28a84d4ba1 Use iterators instead of indices.
llvm-svn: 112428
2010-08-29 14:52:02 +00:00
Kalle Raiskila 1e616572d9 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as a byte index, not an element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
Bill Wendling 8fc2b590b9 Fix whitespaces. No functionality changes.
llvm-svn: 112421
2010-08-29 11:31:07 +00:00
Chris Lattner f94f6bb0ba licm preserves the cfg, so it doesn't have to explicitly say it
preserves domfrontier.  It does preserve AA, though.

llvm-svn: 112419
2010-08-29 07:02:56 +00:00
Chris Lattner abe61ef3b4 now that it doesn't use the PromoteMemToReg function, LICM doesn't
require DomFrontier.  Dropping this doesn't actually save any runs
of the pass though.

llvm-svn: 112418
2010-08-29 06:49:44 +00:00
Chris Lattner 1dc98b47b5 completely rewrite the memory promotion algorithm in LICM.
Among other things, this uses SSAUpdater instead of 
PromoteMemToReg.

llvm-svn: 112417
2010-08-29 06:43:52 +00:00
Bob Wilson d0c054886c Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use llvm
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.

llvm-svn: 112416
2010-08-29 05:57:34 +00:00
Chris Lattner 9c3931a544 use getUniqueExitBlocks instead of a manual set.
llvm-svn: 112412
2010-08-29 05:12:21 +00:00
Eli Friedman f75de6eae7 A couple of small missed optimizations.
llvm-svn: 112411
2010-08-29 05:07:40 +00:00
Chris Lattner 85bf5421e1 reimplement LICM::sink to use SSAUpdater instead of PromoteMemToReg.
This leads to much simpler code.

llvm-svn: 112410
2010-08-29 04:55:06 +00:00
Chris Lattner c3fb03e289 implement SSAUpdater::RewriteUseAfterInsertions, a helpful form of RewriteUse.
llvm-svn: 112409
2010-08-29 04:54:06 +00:00
Chris Lattner b50407f104 remove dead proto
llvm-svn: 112408
2010-08-29 04:53:24 +00:00
Chris Lattner cd96b4df56 reduce indentation in LICM::sink by using early exits, use
getUniqueExitBlocks instead of getExitBlocks and a manual
set to eliminate dupes.

llvm-svn: 112405
2010-08-29 04:28:20 +00:00
Chris Lattner 188cc5a0fc modernize this pass a bit: use efficient set/map and reduce indentation.
llvm-svn: 112404
2010-08-29 04:23:04 +00:00
Chris Lattner dc8070ed6d when merging two alias sets, the result set is volatile if either
of the sets is volatile.  We were dropping the volatile bit of the
merged-in set, leading (luckily) to assertions in cases like 
PR7535.  I cannot produce a testcase that repros with opt, but this
is obviously correct.

llvm-svn: 112402
2010-08-29 04:14:47 +00:00
Chris Lattner eef6b19dcb more cleanup
llvm-svn: 112401
2010-08-29 04:13:43 +00:00
Chris Lattner afb7074f18 clean this up
llvm-svn: 112400
2010-08-29 04:06:55 +00:00
Bill Wendling df9ec17d53 - Add a parameter to T2I_bin_irs for those patterns which set the S bit.
- Create T2I_bin_sw_irs to be like T2I_bin_w_irs, except that it sets the S bit.

llvm-svn: 112399
2010-08-29 03:55:31 +00:00
Chris Lattner 38ccc8b884 add a bunch more common shuffles to the instprinter.
llvm-svn: 112397
2010-08-29 03:08:08 +00:00
Bill Wendling b0dc465c04 Rename ANDflag to ANDS, which is less stupid.
llvm-svn: 112395
2010-08-29 03:06:09 +00:00
Bill Wendling ac64ed0923 File missing from last commit.
llvm-svn: 112394
2010-08-29 03:02:28 +00:00
Bill Wendling 0a65116cce Create an ARMISD::AND node. This node is exactly like the "ARM::AND" node, but
it sets the CPSR register.

llvm-svn: 112393
2010-08-29 03:02:11 +00:00
Chris Lattner 7a05e6dca2 I have manually decoded the imm field of an insertps one too many
times.  This patch causes llc and llvm-mc (which both default to
verbose-asm) to print out comments after a few common shuffle 
instructions which indicates the shuffle mask, e.g.:

	insertps	$113, %xmm3, %xmm0     ## xmm0 = zero,xmm0[1,2],xmm3[1]
	unpcklps	%xmm1, %xmm0    ## xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
	pshufd	$1, %xmm1, %xmm1        ## xmm1 = xmm1[1,0,0,0]

This is carefully factored to keep the information extraction (of the
shuffle mask) separate from the printing logic.  I plan to move the
extraction part out somewhere else at some point for other parts of
the x86 backend that want to introspect on the behavior of shuffles.

llvm-svn: 112387
2010-08-28 20:42:31 +00:00
Chris Lattner 94656b1c8c fix the buildvector->insertp[sd] logic to not always create a redundant
insertp[sd] $0, which is a noop.  Before:

_f32:                                   ## @f32
	pshufd	$1, %xmm1, %xmm2
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm2, %xmm3
	addss	%xmm1, %xmm0
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
	insertps	$0, %xmm0, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

after:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movdqa	%xmm2, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
2010-08-28 17:59:08 +00:00
Chris Lattner bcb6090ad0 fix the BuildVector -> unpcklps logic to not do pointless shuffles
when the top elements of a vector are undefined.  This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4 element vector are defined.  For example, on:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

We used to produce (with SSE2, SSE4.1+ uses insertps):

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

We now produce:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movaps	%xmm2, %xmm0
	unpcklps	%xmm3, %xmm0
	ret

This implements rdar://8368414

llvm-svn: 112378
2010-08-28 17:28:30 +00:00
Chris Lattner 96db6e66f4 improve comments in the unpcklps generating logic, introduce
a new EltStride variable instead of reusing NumElems variable
for a non-obvious purpose.  No functionality change.

llvm-svn: 112377
2010-08-28 17:15:43 +00:00
Michael J. Spencer d75cf22f72 Don't cast Win32 FILETIME structs to int64. Patch by Dimitry Andric!
According to the Microsoft documentation here:
http://msdn.microsoft.com/en-us/library/ms724284%28VS.85%29.aspx

this cast used in lib/System/Win32/Path.inc:

__int64 ft = *reinterpret_cast<__int64*>(&fi.ftLastWriteTime);

should not be done.  The documentation says: "Do not cast a pointer to a
FILETIME structure to either a ULARGE_INTEGER* or __int64* value because
it can cause alignment faults on 64-bit Windows."

llvm-svn: 112376
2010-08-28 16:39:32 +00:00
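
For reference, a minimal sketch of the alignment-safe conversion the cited documentation recommends, copying the two 32-bit halves into a ULARGE_INTEGER instead of reinterpreting the FILETIME's address (the helper name below is illustrative, not from the patch):

    #include <windows.h>

    unsigned long long fileTimeToInt64(const FILETIME &ft) {
      ULARGE_INTEGER ull;
      ull.LowPart  = ft.dwLowDateTime;   // low 32 bits of the 100 ns tick count
      ull.HighPart = ft.dwHighDateTime;  // high 32 bits
      return ull.QuadPart;               // no pointer cast, so no misaligned access
    }
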
Chris Lattner bd24404718 remove the MSIL backend. It isn't maintained, is buggy, has no testcases
and hasn't kept up with ToT.  Approved by Anton.

llvm-svn: 112375
2010-08-28 16:33:36 +00:00
Bob Wilson 950882be07 Use pseudo instructions for VST1 and VST2.
llvm-svn: 112357
2010-08-28 05:12:57 +00:00
Chris Lattner 13ee795c42 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner a5217a19a4 remove dead proto
llvm-svn: 112354
2010-08-28 03:45:03 +00:00
Chris Lattner 50df36ac0a for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner 95bb297c26 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner ca936ac966 zap dead code
llvm-svn: 112349
2010-08-28 03:18:45 +00:00
Bruno Cardoso Lopes a982aa24ef Clean up the logic of vector shuffles -> vector shifts.
Also teach this logic how to handle target specific shuffles if
needed; this is necessary while searching recursively for zeroed
scalar elements in vector shuffle operands.

llvm-svn: 112348
2010-08-28 02:46:39 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Dan Gohman e06905d1f0 Completely disable tail calls when fast-isel is enabled, as fast-isel
doesn't currently support dealing with this.

llvm-svn: 112341
2010-08-28 00:51:03 +00:00
Dan Gohman 1e06dbf881 Trim a #include.
llvm-svn: 112340
2010-08-28 00:49:13 +00:00
Dan Gohman fe22f1d3cc Fix an index calculation thinko.
llvm-svn: 112337
2010-08-28 00:39:27 +00:00
Bob Wilson 8ee9394750 We don't need to custom-select VLDMQ and VSTMQ anymore.
llvm-svn: 112336
2010-08-28 00:20:11 +00:00
Benjamin Kramer 83f9ff0452 Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Bob Wilson ca5af12920 When merging Thumb2 loads/stores, do not give up when the offset is one of
the special values that for ARM would be used with IB or DA modes.  Fall
through and consider materializing a new base address if it would be
profitable.

llvm-svn: 112329
2010-08-27 23:57:52 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and selects.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Owen Anderson 38f6b7fe3b Improve the precision of getConstant().
llvm-svn: 112323
2010-08-27 23:29:38 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
Devang Patel f2855b147f Simplify.
llvm-svn: 112305
2010-08-27 22:25:51 +00:00
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing 
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
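
To illustrate the kind of xform being generalized here, the (x << c) >> c -> and identity written out in plain C++ for logical (unsigned) shifts; this is a sketch of the identity itself, not code from the patch, and the function names are made up:

    #include <cassert>
    #include <cstdint>

    // Shifting left then right by the same amount c just clears the top c
    // bits, so the shift pair can be replaced by a single 'and' with a mask.
    uint32_t shl_then_lshr(uint32_t x, unsigned c) { return (x << c) >> c; }
    uint32_t as_and(uint32_t x, unsigned c)        { return x & (~0u >> c); }

    int main() {
      for (unsigned c = 0; c < 32; ++c)
        assert(shl_then_lshr(0xDEADBEEFu, c) == as_and(0xDEADBEEFu, c));
      return 0;
    }
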
Bob Wilson af371b49a8 Unsigned value cannot be < 0.
llvm-svn: 112300
2010-08-27 21:44:35 +00:00
Dan Gohman 15871f23e3 When merging adjacent operands, scan ahead and merge all equal
adjacent operands at once, instead of just two at a time.

llvm-svn: 112299
2010-08-27 21:39:59 +00:00
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Dan Gohman c866bf4fec Make the {A,+,B}<L> + {C,+,D}<L> --> Other + {A+C,+,B+D}<L>
transformation collect all the addrecs with the same loop
and combine them at once rather than starting everything over
at the first chance.

llvm-svn: 112290
2010-08-27 20:45:56 +00:00
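
For readers not fluent in the addrec notation: {A,+,B}<L> denotes the affine recurrence whose value on iteration i of loop L is A + i*B, so the transformation above is just coefficient-wise addition (shown here for the two-operand affine case):

    {A,+,B}<L>(i) + {C,+,D}<L>(i) = (A + i*B) + (C + i*D)
                                  = (A + C) + i*(B + D)
                                  = {A+C,+,B+D}<L>(i)
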
Bill Wendling 6628431a91 Remove now unneeded command line flag that enables 'optimize compares.'
llvm-svn: 112287
2010-08-27 20:39:09 +00:00
Owen Anderson 99d4cb861b Fix typos in comments.
llvm-svn: 112286
2010-08-27 20:32:56 +00:00
Chris Lattner 7398434675 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext whose type doesn't
have to exactly match the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
Dan Gohman 9bad2fb378 Switch ScalarEvolution's main Value*->SCEV* map from std::map
to DenseMap.

llvm-svn: 112281
2010-08-27 18:55:03 +00:00
Chris Lattner 90cd746e63 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening; now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Bob Wilson edf722add3 Add alignment arguments to all the NEON load/store intrinsics.
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.

llvm-svn: 112271
2010-08-27 17:13:24 +00:00
Owen Anderson 6ebbd92380 Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Dan Gohman 2706567c5c Optimize SCEVComplexityCompare. Use a 3-way return instead of a 2-way
return to avoid needing two calls to test for equivalence, and sort
addrecs by their degree before examining their operands.

llvm-svn: 112267
2010-08-27 15:26:01 +00:00
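
A generic sketch of the 3-way-return idea (illustrative names, not the actual SCEV code): a single comparator returning negative/zero/positive answers both the ordering and the equivalence question with one call, and comparing degrees first keeps the cheap test ahead of the operand walk:

    // Two-way style: deciding "equivalent" needs two calls.
    //   bool equal = !isLess(LHS, RHS) && !isLess(RHS, LHS);
    //
    // Three-way style: one call answers less / equal / greater.
    int compareByDegree(int lhsDegree, int rhsDegree) {
      if (lhsDegree != rhsDegree)
        return lhsDegree < rhsDegree ? -1 : 1;  // order addrecs by degree first
      return 0;                                 // tie: examine operands next
    }
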
Anton Korobeynikov c0b36921c2 Properly handle passing of FP stuff to a varargs function on Win64:
the value should be copied to the corresponding shadow reg as well.
Patch by Cameron Esfahani!

llvm-svn: 112262
2010-08-27 14:43:06 +00:00
Benjamin Kramer 1f6012479f MCELF: Port EmitInstruction changes from MachO streamer. Patch by Roman Divacky.
llvm-svn: 112260
2010-08-27 10:40:51 +00:00
Benjamin Kramer 05e22982c8 MCELF: Always overwrite FixedValue.
llvm-svn: 112259
2010-08-27 10:38:39 +00:00
Daniel Dunbar 1844a71e66 X86: Fix an encoding issue with LOCK_ADD64mr, which could lead to very hard-to-find miscompiles with the integrated assembler.
llvm-svn: 112250
2010-08-27 01:30:14 +00:00
Devang Patel b12ff5999e Revert r112213. It is not needed.
llvm-svn: 112242
2010-08-26 23:35:15 +00:00
Jim Grosbach 6a77066913 Simplify eliminateFrameIndex() interface back down now that PEI doesn't need
to try to re-use scavenged frame index reference registers. rdar://8277890

llvm-svn: 112241
2010-08-26 23:32:16 +00:00
Devang Patel ea134f56b1 If the node is not available, then use FuncInfo.ValueMap to emit debug info for the byval parameter.
llvm-svn: 112238
2010-08-26 22:53:27 +00:00
Jim Grosbach 2a1915d04b Remove the now obsolete frame index virtual re-use algorithm from PEI. Pre-RA
virtual base registers handle this function, and more. A bit more cleanup
to do on the interface to eliminateFrameIndex() after this.

llvm-svn: 112237
2010-08-26 22:42:12 +00:00
Chris Lattner bfd2228182 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
Jim Grosbach e82d5b4aaf tidy up a bit. no functional change.
llvm-svn: 112228
2010-08-26 21:56:30 +00:00
Chris Lattner d4ebd6df5a optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Jim Grosbach 17da935964 Turn off the scavenging based frame reg reuse briefly to measure whether it's
still having a significant effect. It shouldn't be now that the pre-RA
virtual base reg stuff is in. Assuming that's validated by the nightly
testers, we can simplify a lot of the PEI frame index code.

llvm-svn: 112220
2010-08-26 21:29:54 +00:00
Bruno Cardoso Lopes e25ba0c7c2 zap the now unused MVT::getIntVectorWithNumElements
llvm-svn: 112218
2010-08-26 20:53:12 +00:00
Devang Patel 42b4ac7ed3 Speculatively revert r112207.
llvm-svn: 112216
2010-08-26 20:33:42 +00:00
Devang Patel 977057f481 80 col.
llvm-svn: 112215
2010-08-26 20:32:32 +00:00
Devang Patel 384fa91deb Update DanglingDebugInfo so that it can be used to track llvm.dbg.declare also.
llvm-svn: 112213
2010-08-26 20:06:46 +00:00
Bob Wilson 97919e9c59 Use pseudo instructions for VST3.
llvm-svn: 112208
2010-08-26 18:51:29 +00:00
Devang Patel ab596a637c Do not forget to resolve dangling debug info in the case where the virtual register used for a value is initialized after a dbg intrinsic is seen.
llvm-svn: 112207
2010-08-26 18:36:14 +00:00
Bill Wendling a9c03f4fae Reapply r112176 without removing the other CMN patterns (that was unintentional).
llvm-svn: 112206
2010-08-26 18:33:51 +00:00
Benjamin Kramer 2c45f431fa MCELF: Fix a thinko of mine.
llvm-svn: 112203
2010-08-26 18:12:04 +00:00
Bob Wilson a967c42a3d Fix comment typos.
llvm-svn: 112202
2010-08-26 18:08:11 +00:00
Owen Anderson bd2ecc7e68 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Benjamin Kramer 929cc7618f MCELF: Compensate for the addend on i386. Patch by Roman Divacky, with some cleanups.
llvm-svn: 112197
2010-08-26 17:23:02 +00:00
Jim Grosbach 074d22e1ac Restrict the register to tGPR to make sure the str instruction will be
encodable as a 16-bit wide instruction.

llvm-svn: 112195
2010-08-26 17:02:47 +00:00
Dan Gohman 10b20b2b81 Revert r112176; it broke test/CodeGen/Thumb2/thumb2-cmn.ll.
llvm-svn: 112191
2010-08-26 15:50:25 +00:00
Dan Gohman ca26f79051 Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Benjamin Kramer 9bf0380a54 StringRef::compare_numeric also differed from StringRef::compare for characters > 127.
llvm-svn: 112189
2010-08-26 15:25:35 +00:00
Benjamin Kramer b04d4af057 Do unsigned char comparisons in StringRef::compare_lower to be more consistent with compare in corner cases.
llvm-svn: 112185
2010-08-26 14:21:08 +00:00
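
Both StringRef fixes above come down to plain char usually being signed, so bytes above 127 compare as negative values; a minimal illustration (the exact output depends on whether char is signed on the platform):

    #include <iostream>

    int main() {
      char a = '\x80';  // byte value 128; as a signed char this is -128
      char b = 'a';     // byte value 97
      std::cout << (a < b) << '\n';                                // 1 with signed char
      std::cout << ((unsigned char)a < (unsigned char)b) << '\n';  // 0: 128 is not < 97
      return 0;
    }
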
Bill Wendling a9a0599b39 There seems to be a (potential) hardware bug with the CMN instruction and
comparison with 0. These two pieces of code should give identical results:

  rsbs r1, r1, 0
  cmp  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

and:

  cmn  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

However, the CMN gives the *opposite* result when r1 is 0. This is because the
carry flag is set in the CMP case but not in the CMN case. In short, the CMP
instruction doesn't perform a truncate of the (logical) NOT of 0 plus the value
of r0 and the carry bit (because the "carry bit" parameter to AddWithCarry is
defined as 1 in this case, the carry flag will always be set when r0 >= 0). The
CMN instruction doesn't perform a NOT of 0 so there is never a "carry" when this
AddWithCarry is performed (because the "carry bit" parameter to AddWithCarry is
defined as 0).

The AddWithCarry in the CMP case seems to be relying upon the identity:

  ~x + 1 = -x

However when x is 0 and unsigned, this doesn't hold:

   x = 0
  ~x = 0xFFFF FFFF
  ~x + 1 = 0x1 0000 0000
  (-x = 0) != (0x1 0000 0000 = ~x + 1)

Therefore, we should disable *all* versions of CMN, especially when comparing
against zero, until we can limit when the CMN instruction is used (when we know
that the RHS is not 0) or when we have a hardware fix for this.

(See the ARM docs for the "AddWithCarry" pseudo-code.)

This is related to <rdar://problem/7569620>.

llvm-svn: 112176
2010-08-26 09:07:33 +00:00
Chris Lattner af23e9a798 Add a hackaround for PR7993 which is causing failures on x86 builders that lack sse2.
llvm-svn: 112175
2010-08-26 06:57:07 +00:00
Chris Lattner eb2cc0ce0e implement SplitVecOp_CONCAT_VECTORS, fixing the included testcase with SSE1.
llvm-svn: 112171
2010-08-26 05:51:22 +00:00
Bob Wilson 4cec44975e Use pseudo instructions for VST1d64Q.
llvm-svn: 112170
2010-08-26 05:33:30 +00:00
Chris Lattner cc60609cb4 fix sse1 only codegen in x86-64 mode, which is something we
apparently try to support.

llvm-svn: 112168
2010-08-26 05:24:29 +00:00
Daniel Dunbar ce45863f0d Revert r111922, "MapValue support for MDNodes. This is similar to r109117,
except ...", it is causing *massive* performance regressions when building Clang
with itself (-O3 -g).

llvm-svn: 112158
2010-08-26 03:48:11 +00:00
Daniel Dunbar 95fe13c720 Revert r112091, "Remap metadata attached to instructions when remapping
individual ...", which depends on r111922, which I am reverting.

llvm-svn: 112157
2010-08-26 03:48:08 +00:00
Chris Lattner f6418b804e zap dead code.
llvm-svn: 112155
2010-08-26 02:57:35 +00:00
Chris Lattner 2d482bb96b remove dead proto
llvm-svn: 112131
2010-08-26 01:14:37 +00:00
Chris Lattner 07afbd5a08 zap dead code.
llvm-svn: 112130
2010-08-26 01:13:54 +00:00
Bruno Cardoso Lopes 184eaea855 Fix PR7748 without using microsoft extensions
llvm-svn: 112128
2010-08-26 01:02:53 +00:00
Jim Grosbach 08da771ec3 Enable pre-RA virtual frame base register allocation. rdar://8277890
llvm-svn: 112127
2010-08-26 00:58:06 +00:00
Dan Gohman 8f292e7a6d Rewrite ExtractGV, removing a bunch of stuff that didn't fully work
and was over-complicated, and replacing it with a simple implementation.

llvm-svn: 112120
2010-08-26 00:22:55 +00:00
Bob Wilson 4629f423f8 Revert svn 107892 (with changes to work with trunk). It caused a crash if
a VLD result was not used (Radar 8355607).  It should also fix pr7988, but
I haven't verified that yet.

llvm-svn: 112118
2010-08-26 00:13:36 +00:00
Chris Lattner aecf47a5cb we should pattern match the SSE complex arithmetic ops.
llvm-svn: 112109
2010-08-25 23:31:42 +00:00
Bob Wilson 9392b0e960 Start converting NEON load/stores to use pseudo instructions, beginning here
with the VST4 instructions.  Until after register allocation, we want to
represent sets of adjacent registers by a single super-register.  These
VST4 pseudo instructions have a single QQ or QQQQ source register operand.
They get expanded to the real VST4 instructions with 4 separate D register
operands.  Once this conversion is complete, we'll be able to remove the
NEONPreAllocPass and avoid some fragile and hacky code elsewhere.

llvm-svn: 112108
2010-08-25 23:27:42 +00:00
Chris Lattner 8df99b523e remove some llvmcontext arguments that are now dead post-refactoring.
llvm-svn: 112104
2010-08-25 23:00:45 +00:00
Chris Lattner 75ff053497 Change handling of illegal vector types to widen when possible instead of
expanding: e.g. <2 x float> -> <4 x float> instead of -> 2 floats.  This
affects two places in the code: handling cross block values and handling
function return and arguments.  Since vectors are already widened by 
legalizetypes, this gives us much better code and unblocks x86-64 abi
and SPU abi work.

For example, this (which is a silly example of a cross-block value):
define <4 x float> @test2(<4 x float> %A) nounwind {
 %B = shufflevector <4 x float> %A, <4 x float> undef, <2 x i32> <i32 0, i32 1>
 %C = fadd <2 x float> %B, %B
  br label %BB
BB:
 %D = fadd <2 x float> %C, %C
 %E = shufflevector <2 x float> %D, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
 ret <4 x float> %E
}

Now compiles into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 addps %xmm0, %xmm0
 ret

previously it compiled into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 pshufd $1, %xmm0, %xmm1
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
 insertps $0, %xmm0, %xmm0
 insertps $16, %xmm1, %xmm0
 addps %xmm0, %xmm0
 ret

This implements rdar://8230384

llvm-svn: 112101
2010-08-25 22:49:25 +00:00
Dan Gohman fd824487a3 Remap metadata attached to instructions when remapping individual
instructions, not when remapping modules.

llvm-svn: 112091
2010-08-25 21:36:50 +00:00
Bruno Cardoso Lopes d4085f6e91 Revert this for now, PUNPCKLDQ doesn't operate on v4f32
llvm-svn: 112090
2010-08-25 21:26:37 +00:00
Daniel Dunbar 3d148ac089 X86: Fix misencode of RI64mi8. This fixes OpenSSL / x86_64-apple-darwin10 / clang -O3.
llvm-svn: 112089
2010-08-25 21:11:02 +00:00
Devang Patel 32a72ab072 Fix comment.
llvm-svn: 112086
2010-08-25 20:41:24 +00:00
Devang Patel 3f53d6e56a Remove dead argument.
llvm-svn: 112085
2010-08-25 20:39:26 +00:00
Jim Grosbach 7c1b421ae6 Add some statistics for PEI register scavenging
llvm-svn: 112084
2010-08-25 20:34:28 +00:00
Dan Gohman 9b9ff467db Add a FIXME comment.
llvm-svn: 112083
2010-08-25 20:23:38 +00:00
Dan Gohman 26d837d086 Fix the bitcode reader to clear out function-specific state
from MDValueList between each function, now that the bitcode
writer is reusing the index space for function-local metadata.

llvm-svn: 112082
2010-08-25 20:22:53 +00:00
Dan Gohman 950ad65841 Fix a bug found by inspection.
llvm-svn: 112081
2010-08-25 20:20:21 +00:00
Dan Gohman 4a68f9b606 Add a comment.
llvm-svn: 112080
2010-08-25 20:17:19 +00:00
Benjamin Kramer 37b384cd66 MCELF: Use precomputed symbol indices, patch by Roman Divacky.
llvm-svn: 112079
2010-08-25 20:09:43 +00:00
Michael J. Spencer 237e4ecafb MC: Fix inconsistent naming in COFF object writer. Patch by Cameron Esfahani.
llvm-svn: 112076
2010-08-25 19:27:27 +00:00
Jim Grosbach 0a84487fa7 Don't override the var from the enclosing scope.
When doing copy/paste/modify, it's apparently rather important to remember
the 'modify' bit...

llvm-svn: 112075
2010-08-25 19:11:34 +00:00
Chris Lattner bf80d28a74 zap dead code
llvm-svn: 112073
2010-08-25 19:00:00 +00:00
Devang Patel 01262e129e DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by FE.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Benjamin Kramer f1f2133ac0 Remove dead recursive function. Yay for clang -Wunused-function.
llvm-svn: 112060
2010-08-25 17:27:58 +00:00
Dan Gohman 22161da9ff Clear FunctionLocalMDs in purgeFunction along with the rest of the
function-specific state.

llvm-svn: 112058
2010-08-25 17:11:16 +00:00
Dan Gohman 1f4b028b75 Fix whitespace.
llvm-svn: 112056
2010-08-25 17:09:50 +00:00
Dan Gohman 9cfe532ae5 Eliminate an unnecessary cast.
llvm-svn: 112055
2010-08-25 17:09:03 +00:00
Daniel Dunbar a54a1b0edf ARM/Thumb2: Fix a misselect in getARMCmp, when attempting to adjust a signed
comparison that would overflow.
 - The other under/overflow cases can't actually happen because the immediates
   which would trigger them are legal (so we don't enter this code), but
   adjusted the style to make it clear the transform is always valid.

llvm-svn: 112053
2010-08-25 16:58:05 +00:00
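
To see why adjusting a comparison constant can overflow (a generic illustration, not the exact getARMCmp logic): nudging the constant by one is only valid if the nudge doesn't wrap.

    #include <climits>
    #include <cstdio>

    int main() {
      int c = INT_MAX;
      // "x > c" is always false when c == INT_MAX, but naively rewriting it as
      // "x >= c + 1" wraps the constant to INT_MIN and makes the comparison
      // true for every x.  Doing the adjustment in a wider type exposes the
      // overflow so the transform can be skipped.
      long long adjusted = (long long)c + 1;
      std::printf("adjusted constant no longer fits in int: %d\n",
                  adjusted > INT_MAX);
      return 0;
    }
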
Eric Christopher 7a0d8c69cb Do type checks before we bother to do everything else.
llvm-svn: 112039
2010-08-25 08:43:57 +00:00
Anton Korobeynikov b3b53ecac0 Fix a nasty mingw32 bug, which e.g. prevented llvm-gcc bootstrap there.
Mark the _alloca call as clobbering EFLAGS, otherwise some DCE might remove
other flags-clobbering stuff (e.g. cmp instructions) occurring after the
_alloca call.

llvm-svn: 112034
2010-08-25 07:50:11 +00:00
Eric Christopher 761e7fb605 Reorganize load mechanisms. Handle types in a little less fixed way.
Fix some todos.  No functional change.

llvm-svn: 112031
2010-08-25 07:23:49 +00:00
Bruno Cardoso Lopes 0770d25758 PUNPCKLDQ should also be used for v4f32
llvm-svn: 112020
2010-08-25 02:55:40 +00:00
Bruno Cardoso Lopes 2e45d522c1 teach lowering to get target specific nodes for pshufd, emulating the same isel behavior for now, so we can pass all vector shuffle tests
llvm-svn: 112017
2010-08-25 02:35:37 +00:00
Owen Anderson 4afea9e3c6 In the default address space, any GEP off of null results in a trap value if you try to load it. Thus,
any load in the default address space that completes implies that the base value that it GEP'd from
was not null.

llvm-svn: 112015
2010-08-25 01:16:47 +00:00
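
In source terms, the inference above lets a later null check be folded away once a load through the pointer has already executed; a hedged C++ illustration of the idea (made-up types, not the LVI implementation):

    struct Node { int value; Node *next; };

    int get(Node *n) {
      int v = n->value;   // load through n in the default address space:
                          // if this completes, n cannot have been null
      if (n == nullptr)   // ...so this check is provably false and the
        return -1;        //    branch can be folded away
      return v;
    }
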
Dan Gohman 01579b20a6 Don't include the is-function-local bit in the FoldingSetNodeID
for MDNodes, since this information is effectively implied by
the operands. This allows the code to avoid doing a
recursive is-it-really-function-local check in some cases.

llvm-svn: 111995
2010-08-24 23:21:12 +00:00
Chris Lattner 05bcb488b5 split the vector case of getCopyFromParts out to its own function,
no functionality change.

llvm-svn: 111994
2010-08-24 23:20:40 +00:00
Dan Gohman 3d9ed28046 Use Bits.data() instead of &Bits[0].
llvm-svn: 111993
2010-08-24 23:16:53 +00:00
Chris Lattner 96a77ebd7c split the vector case out of getCopyToParts into its own function. No
functionality change.

llvm-svn: 111990
2010-08-24 23:10:06 +00:00
Chris Lattner 5b8967f8a2 tidy up, reduce indentation
llvm-svn: 111982
2010-08-24 22:43:11 +00:00
Eric Christopher 15b182f4d4 Fix predicate and add a comment.
llvm-svn: 111981
2010-08-24 22:34:11 +00:00
Eric Christopher 236ec8f3b5 Rework braindead conditionals I put in yesterday.
llvm-svn: 111974
2010-08-24 22:07:27 +00:00
Eric Christopher 6c99ebf5b0 Fix thumb2 mode loads to have the correct operand ordering. Add a todo
to fix this in the port.

llvm-svn: 111973
2010-08-24 22:03:02 +00:00
Owen Anderson a10000006e NULL loads are only invalid in the default address space.
llvm-svn: 111972
2010-08-24 22:00:55 +00:00
Owen Anderson b695c83de9 Add support for inferring values for the default cases of switches.
llvm-svn: 111971
2010-08-24 21:59:42 +00:00
Jim Grosbach 2eedb7949e Add ARM heuristic for when to allocate a virtual base register for stack
access. rdar://8277890&7352504

llvm-svn: 111968
2010-08-24 21:19:33 +00:00
Kevin Enderby a71ab07e14 Change the parsing of .loc back to allow the LineNumber field to be optional as
it is with other assemblers.

llvm-svn: 111967
2010-08-24 21:14:47 +00:00
Michael J. Spencer ccd28d0665 Fix COFF x86-64 relocations. PR7960.
Multiple symbol reloc handling part of the patch by Cameron Esfahani.

llvm-svn: 111963
2010-08-24 21:04:52 +00:00
Owen Anderson da34de1599 Add support for inferring that a load from a pointer implies that it is not null.
llvm-svn: 111959
2010-08-24 20:47:29 +00:00
Kevin Enderby 1264b7cab8 First bit of support for the dwarf .loc directive. This patch updates the
needed parsing for the .loc directive and saves the current info from that
into the context.  The next patch will take the current loc info after an
instruction is assembled and save that info into a vector for each section for
use to build the line number tables.  The patch after that will encode the info
from those vectors into the output file as the dwarf line tables.

llvm-svn: 111956
2010-08-24 20:32:42 +00:00
Bill Wendling 3aeedd1e5a - Add the LinkerPrivateWeakDefAutoLinkage to the Ada bindings.
- Support the LinkerWeak*Linkage types in llvm-nm and in LinkModules.cpp.

llvm-svn: 111952
2010-08-24 20:00:52 +00:00
Daniel Dunbar 1c8d777c93 MC/X86: Tweak imul recognition, previous hack only applies for the imul form
taking immediates.

llvm-svn: 111950
2010-08-24 19:37:56 +00:00
Dan Gohman 8ad536a902 Link NamedMDNodes after linking GlobalValues, so that MDNodes
which reference GlobalValues are properly remapped.

llvm-svn: 111949
2010-08-24 19:37:11 +00:00
Dan Gohman 3535190116 When linking NamedMDNodes, remap their operands.
llvm-svn: 111948
2010-08-24 19:31:04 +00:00
Daniel Dunbar 09392785b4 MC/X86: Add custom hack for recognizing "imul $12, %eax" and friends.
llvm-svn: 111947
2010-08-24 19:24:18 +00:00
Daniel Dunbar 2476432639 MC/AsmParser: Change ParseExpression to use ParseIdentifier(), to support
dollars in identifiers.

llvm-svn: 111946
2010-08-24 19:13:42 +00:00
Daniel Dunbar 94b84a19b9 MC/X86: Warn on scale factors > 1 without index register, instead of erroring,
for 'as' compatibility.

llvm-svn: 111945
2010-08-24 19:13:38 +00:00
Jim Grosbach b77d67f318 Move enabling the local stack allocation pass into the target where it belongs.
For now it's still a command line option, but the interface to the generic
code doesn't need to know that.

llvm-svn: 111942
2010-08-24 19:05:43 +00:00
Dan Gohman a209503467 Use MapValue in the Linker instead of having a private function
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.

llvm-svn: 111941
2010-08-24 18:50:07 +00:00
Daniel Dunbar 3b96ffdac1 MC/Parser: Accept leading dollar signs in identifiers.
- Implemented by manually splicing the tokens. If this turns out to be
   problematically platform specific, a more elegant solution would be to
   implement some context dependent lexing support.

llvm-svn: 111934
2010-08-24 18:12:12 +00:00
Dan Gohman e06c8137e0 Don't cast away qualifiers with C-style casts.
llvm-svn: 111933
2010-08-24 18:09:44 +00:00
Jim Grosbach 35b7c033d4 add ARM cmd line option to force always using virtual base regs when possible.
Intended to help ease reproducing problems by increasing base register usage
after heuristics for only using them when needed are in place.

llvm-svn: 111930
2010-08-24 18:04:52 +00:00
Benjamin Kramer a536f077fe Relocate against parent if the symbol is not in a section or it's a common symbol, from Roman Divacky.
llvm-svn: 111925
2010-08-24 17:34:39 +00:00
Owen Anderson 7c853e877e Turn LVI on, previously detected failures should be fixed now.
llvm-svn: 111923
2010-08-24 17:21:18 +00:00
Dan Gohman 6901283544 MapValue support for MDNodes. This is similar to r109117, except
that it avoids a lot of unnecessary cloning by avoiding remapping
MDNode cycles when none of the nodes in the cycle actually need to
be remapped. Also it uses the new temporary MDNode mechanism.

llvm-svn: 111922
2010-08-24 17:10:10 +00:00
Dan Gohman c88fda477a Fix X86's isLegalAddressingMode to recognize that static addresses
need not be RIP-relative in small mode.

llvm-svn: 111917
2010-08-24 15:55:12 +00:00
Dan Gohman f0715b179a Add a comment explaining why this code doesn't just call
ParseMetadataValue.

llvm-svn: 111914
2010-08-24 14:35:45 +00:00
Dan Gohman 7c7f13a5e6 Add a comment explaining why this code is more complex than it
initially seems it should require.

llvm-svn: 111913
2010-08-24 14:31:06 +00:00
Kalle Raiskila 7e25bc4145 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Kalle Raiskila 8f3e3ba5ff Remove some dead code from SPU BE that remained
from 64bit vector support.

llvm-svn: 111910
2010-08-24 11:05:51 +00:00
Owen Anderson c62f704576 Don't assume that all constants with integer types are ConstantInts.
llvm-svn: 111906
2010-08-24 07:55:44 +00:00
Dan Gohman 10215a12e8 Add braces to fix dangling else.
llvm-svn: 111896
2010-08-24 02:40:27 +00:00
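
For context, the classic dangling-else hazard that explicit braces guard against (a generic C++ illustration):

    void example(bool a, bool b, int &x) {
      // Without braces the 'else' binds to the nearest 'if' -- here 'if (b)',
      // not 'if (a)', regardless of what the indentation suggests.
      if (a)
        if (b)
          x = 1;
        else          // pairs with 'if (b)'
          x = 2;

      // Braces make the intended pairing explicit and avoid the compiler's
      // "ambiguous else" warning.
      if (a) {
        if (b)
          x = 1;
      } else {
        x = 2;
      }
    }
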
Dan Gohman c828c5465d Extend function-local metadata to be usable as attachments.
llvm-svn: 111895
2010-08-24 02:24:03 +00:00
Dan Gohman ab09a12cad When we know we have an MDValue or MDString, call EnumerateMetadata
directly instead of going through EnumerateValue.

llvm-svn: 111894
2010-08-24 02:10:52 +00:00
Dan Gohman 338d9a4935 Give ParseInstructionMetadata access to the PerFunctionState object.
This is in preparation for generalizing its parsing of function-local
values.

llvm-svn: 111893
2010-08-24 02:05:17 +00:00
Dan Gohman d3d2bbe620 Simplify this code. NamedMDNode operands are MDNodes.
llvm-svn: 111892
2010-08-24 02:01:24 +00:00
Bruno Cardoso Lopes 758d7b1f5c Use pshufhw and pshuflw in more cases and fix getTargetShuffleNode number of arguments
llvm-svn: 111890
2010-08-24 01:16:15 +00:00
Bill Wendling 2c64ba63a1 Add comments for what the condition code symbols mean.
llvm-svn: 111889
2010-08-24 01:11:30 +00:00
Eric Christopher 46d3a56e5d Update comment.
llvm-svn: 111887
2010-08-24 01:10:52 +00:00
Eric Christopher c0c00ca33f Fix the opcode and the operands for the load instruction.
llvm-svn: 111885
2010-08-24 01:10:04 +00:00
Eric Christopher eb47692c22 Add register class hack that needs to go away, but makes it more obvious
that it needs to go away.  Use loadRegFromStackSlot where possible.

Also, remember to update the value map.

llvm-svn: 111883
2010-08-24 00:50:47 +00:00
Chris Lattner 02db8f6415 fix rdar://7997827 - Accept and ignore LL and ULL suffixes on integer literals.
Also fix 0b010 syntax to actually work while we're at it :-)

llvm-svn: 111876
2010-08-24 00:43:25 +00:00
Eric Christopher 9d4e471cc2 Add some more debugging code, make it more obvious that RegOffset is
getting an address for an object and select some default values.

llvm-svn: 111871
2010-08-24 00:07:24 +00:00
Devang Patel 4a213870db Revert r107202. It is not adding any value.
llvm-svn: 111870
2010-08-24 00:06:12 +00:00
Eric Christopher e3107d6283 Don't need the extra register here.
llvm-svn: 111864
2010-08-23 23:28:04 +00:00
Devang Patel dd719f701d Let FE use derived types for DW_TAG_friend.
Patch by Alexander Herz!

llvm-svn: 111861
2010-08-23 23:16:25 +00:00
Eric Christopher 414501c511 Add some more "get address into register" code and more TODOs/FIXMEs.
llvm-svn: 111860
2010-08-23 23:14:31 +00:00
Eric Christopher 8d03b8a8ce Add an ARMFunctionInfo member and use it.
llvm-svn: 111854
2010-08-23 22:32:45 +00:00
Dan Gohman 5d29673855 Verify that a non-uniqued non-temporary MDNode is not deleted via
MDNode::deleteTemporary.

llvm-svn: 111853
2010-08-23 22:32:05 +00:00
Eric Christopher 00202ee329 Start getting ARM loads/address computation going.
llvm-svn: 111850
2010-08-23 21:44:12 +00:00
Benjamin Kramer d41b53c037 Fix thinko. Having no tests is great ...
llvm-svn: 111848
2010-08-23 21:32:00 +00:00
Jim Grosbach 616bc356e9 Remove the MFI storage of the local allocation block size. It's not needed.
llvm-svn: 111847
2010-08-23 21:29:29 +00:00
Benjamin Kramer c4809c930a Reduce code duplication.
llvm-svn: 111846
2010-08-23 21:23:52 +00:00
Benjamin Kramer 86511dce18 ELFObjectWriter: Run ComputeSymbolTable before recording relocations. This way we can use the information it has computed and don't have to recompute the same stuff over and over again.
llvm-svn: 111844
2010-08-23 21:19:37 +00:00
Bruno Cardoso Lopes 264d90fff7 Start using target speficic nodes for shuffles: pshufhw and pshuflw
llvm-svn: 111837
2010-08-23 20:41:02 +00:00
Jim Grosbach 754f8e600e Better handling of local offsets for downwards growing stacks. This corrects
relative offsets when there are offsets encoded in the instructions and
simplifies final allocation in PEI. rdar://8277890

llvm-svn: 111836
2010-08-23 20:40:38 +00:00
Gabor Greif 21fed6616c typos
llvm-svn: 111835
2010-08-23 20:30:51 +00:00
Owen Anderson 6ffa3f2aea Turn LVI back off, I have a testcase now.
llvm-svn: 111834
2010-08-23 19:59:27 +00:00
Chris Lattner 58bd73a5a7 Add a new llvm.x86.int intrinsic, allowing access to the
x86 int and int3 instructions.  Patch by Peter Housel!

llvm-svn: 111831
2010-08-23 19:39:25 +00:00
Mikhail Glushenkov c6c79ddcb9 Add a TODO.
llvm-svn: 111828
2010-08-23 19:24:12 +00:00
Mikhail Glushenkov bf38e0749d llvmc: Properly handle (error) in edge properties.
llvm-svn: 111827
2010-08-23 19:24:08 +00:00
Benjamin Kramer 40f83489b4 Add the symbol offset to the relocation value when we relocate against a section. By Roman Divacky.
llvm-svn: 111824
2010-08-23 19:05:46 +00:00
Devang Patel a8652674e0 Handle qualified constants that are directly folded by FE.
PR 7920.

llvm-svn: 111820
2010-08-23 18:25:56 +00:00
Benjamin Kramer 620b68e883 Use the proper relocation section + cleanup, from Roman Divacky.
llvm-svn: 111819
2010-08-23 18:24:20 +00:00
Benjamin Kramer 08fd2cf26a Avoid O(n*m) complexity in StringRef::find_first(_not)_of(StringRef).
- Cache used characters in a bitset to reduce memory overhead to just 32 bytes.
- On my core2 this code is faster except when the checked string is very short
  (smaller than the list of delimiters).

llvm-svn: 111817
2010-08-23 18:16:08 +00:00
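
A minimal sketch of the technique described above (an illustrative stand-alone function, not the actual StringRef implementation): precompute a 256-entry bitset of the needle bytes so each haystack character is tested in O(1) instead of rescanning the delimiter list.

    #include <bitset>
    #include <cstddef>
    #include <string>

    size_t findFirstOf(const std::string &haystack, const std::string &chars) {
      std::bitset<256> isDelim;                  // 256 bits = 32 bytes of state
      for (size_t i = 0; i != chars.size(); ++i)
        isDelim.set((unsigned char)chars[i]);    // mark every delimiter byte
      for (size_t i = 0; i != haystack.size(); ++i)
        if (isDelim.test((unsigned char)haystack[i]))
          return i;                              // first byte present in 'chars'
      return std::string::npos;
    }
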
Owen Anderson 630add39a6 Re-enable LazyValueInfo. Monitoring for failures.
llvm-svn: 111816
2010-08-23 18:12:23 +00:00
Owen Anderson d31d82d75c Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API.
llvm-svn: 111815
2010-08-23 17:52:01 +00:00
Chris Lattner a42202e0e4 random improvement for variable shift codegen.
llvm-svn: 111813
2010-08-23 17:30:29 +00:00
Chandler Carruth 191c4f73b2 Fix some GCC warnings by providing a virtual destructor in the base of a class
hierarchy with virtual methods and using llvm_unreachable to properly indicate
unreachable states which would otherwise leave variables uninitialized.

llvm-svn: 111803
2010-08-23 08:25:07 +00:00
Anton Korobeynikov cbbe4501df Revert invalid r111792. Jump tables are not broken on x86-64 / coff,
it's the COFF emitter which does not support differences of two symbols
(and needs to be fixed). GAS is perfectly fine with the code produced.

llvm-svn: 111801
2010-08-23 07:38:51 +00:00
Michael J. Spencer db06215b7f Revert part of my last commit. The mingw32 build bot doesn't seem to like it.
llvm-svn: 111793
2010-08-23 05:25:23 +00:00
Michael J. Spencer e87231232a Workaround broken jump tables on x86-64 COFF.
llvm-svn: 111792
2010-08-23 04:45:37 +00:00
Chris Lattner c3847a8134 remove some dead code.
llvm-svn: 111791
2010-08-23 03:12:06 +00:00
Nick Lewycky c72d2853e1 Verify the predicates on icmp/fcmp. Suggested by Jeff Yasskin!
llvm-svn: 111787
2010-08-22 23:45:14 +00:00
Eli Friedman ac305d2024 Delete dead comment.
llvm-svn: 111744
2010-08-21 20:19:51 +00:00
Anton Korobeynikov db9820ecaa Use rip-rel addressing on win64 by default. For this we just
default to the small pic code model.

llvm-svn: 111741
2010-08-21 17:21:11 +00:00
Benjamin Kramer 1f3b0c03e5 Use MDNode::destroy(). Fixes a delete/free mismatch.
llvm-svn: 111739
2010-08-21 15:07:23 +00:00
Michael J. Spencer 377aa20e6e MC: Add partial x86-64 support to COFF.
llvm-svn: 111728
2010-08-21 05:58:13 +00:00
Dan Gohman 573869fe5b Add an assert to MDNode::deleteTemporary to check that the node being deleted
is not non-temporary.

llvm-svn: 111713
2010-08-21 02:52:29 +00:00
Dan Gohman 42ef669d81 Fix x86 fast-isel's cmp+branch folding to avoid folding when the
comparison is in a different basic block from the branch. In such
cases, the comparison's operands may not have initialized virtual
registers available.

llvm-svn: 111709
2010-08-21 02:32:36 +00:00
Bruno Cardoso Lopes 9f20e7a1bf Prepare LowerVECTOR_SHUFFLEv8i16 to use x86 target specific nodes directly
llvm-svn: 111704
2010-08-21 01:32:18 +00:00
Bruno Cardoso Lopes 6f3b38a851 This is the first step towards refactoring the x86 vector shuffle code. The
general idea here is to have a group of x86 target specific nodes which are
going to be selected during lowering and then directly matched in isel.

The commit includes the addition of those specific nodes and a *bunch* of
patterns, and incrementally we're going to switch between them and what we
have right now. Both the patterns and target specific nodes can change as
we move forward with this work.

llvm-svn: 111691
2010-08-20 22:55:05 +00:00
Dan Gohman 5fc55dc3cf CreateTemporaryType doesn't need its Context argument.
llvm-svn: 111687
2010-08-20 22:39:47 +00:00
Bill Wendling 578ee4070c Create the new linker type "linker_private_weak_def_auto".
It's similar to "linker_private_weak", but it's known that the address of the
object is not taken. For instance, functions that had an inline definition, but
the compiler decided not to inline it. Note, unlike linker_private and
linker_private_weak, linker_private_weak_def_auto may have only default
visibility.  The symbols are removed by the linker from the final linked image
(executable or dynamic library).

llvm-svn: 111684
2010-08-20 22:05:50 +00:00
Dan Gohman 16a5d98c3a Introduce a new temporary MDNode concept. Temporary MDNodes are
not part of the IR, are not uniqued, and may be safely RAUW'd.
This replaces a variety of alternate mechanisms for achieving
the same effect.

llvm-svn: 111681
2010-08-20 22:02:26 +00:00
Daniel Dunbar 2b2b79edde Fix --disable-threads build, PR7949.
llvm-svn: 111676
2010-08-20 20:54:37 +00:00
Jim Grosbach 7648a21152 Downwards growing stack allocation order reverses relative offsets
llvm-svn: 111673
2010-08-20 20:25:31 +00:00
Jim Grosbach 7110941d68 Add more dbg output
llvm-svn: 111670
2010-08-20 19:04:43 +00:00
Benjamin Kramer d3eb989f37 Update CMake build.
llvm-svn: 111669
2010-08-20 18:56:46 +00:00
Owen Anderson 84c29a096b Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Dan Gohman 12cbe696e4 Delete SlowOperationInformer, which is no longer used.
llvm-svn: 111661
2010-08-20 18:07:37 +00:00
Dan Gohman a931605647 Convert DbgInfoPrinter to use errs() instead of outs().
llvm-svn: 111659
2010-08-20 18:03:05 +00:00
Jim Grosbach 0600691fe6 properly check for whether base regs were inserted
llvm-svn: 111646
2010-08-20 16:48:30 +00:00
Dan Gohman e9a469115c Make outs() close its file when its stream is destructed, so that
pending output errors are detected.

llvm-svn: 111643
2010-08-20 16:44:56 +00:00
Dan Gohman 443f2d6426 Delete raw_stdout_ostream and raw_stderr_ostream, which are unused
outside of outs() and errs() themselves, and they don't really
need custom classes.

llvm-svn: 111642
2010-08-20 16:39:41 +00:00
Dan Gohman 38adfdd100 Move raw_ostream's Error flag into raw_fd_ostream, as that's the only
class which is using it.

llvm-svn: 111639
2010-08-20 16:34:20 +00:00
Erick Tryzelaar b4d48706ca Expose LLVMSetOperand and LLVMGetNumOperands to llvm-c and ocaml.
llvm-svn: 111625
2010-08-20 14:51:22 +00:00
Mikhail Glushenkov 024ec17332 llvmc: Cut global namespace pollution.
llvm-svn: 111619
2010-08-20 11:24:44 +00:00