Commit Graph

13034 Commits

Author SHA1 Message Date
Chris Lattner d124535de1 Split memcpy/memset/memmove intrinsics into i32/i64 versions, resolving
PR709, and paving the way for future progress.

Significantly refactor the autoupgrading code to handle the more complex case
(where we upgrade one argument in a function), and fix some bugs in it.

Testcase here: llvm/test/Regression/Bytecode/memcpy.ll

llvm-svn: 26474
2006-03-02 23:58:40 +00:00
Chris Lattner 9067500e2e add a note
llvm-svn: 26472
2006-03-02 22:34:38 +00:00
Evan Cheng 4e3904f637 - Fixed some priority calculation bugs that were causing bug 478. Among them:
  a predecessor appearing more than once in the operand list was counted as
  multiple predecessors; priority1 should be updated during scheduling;
  CycleBound was updated after the node was inserted into the priority queue;
  one of the tie-breaking conditions was flipped.
- Take two-address opcodes into consideration. If a predecessor is a def&use
  operand, it should have a higher priority.
- The scheduler should also favor floaters, i.e. nodes that do not have real
  predecessors, such as MOV32ri.
- The scheduling fixes / tweaks fixed bug 478:
        .text
        .align  4
        .globl  _f
_f:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        imull %ecx, %edx
        imull %eax, %eax
        imull %ecx, %ecx
        addl %eax, %ecx
        leal (%ecx,%edx,2), %eax
        ret

  It is also a slight performance win (1% - 3%) for most tests.

llvm-svn: 26470
2006-03-02 21:38:29 +00:00
Chris Lattner 85dda9a2bd Generalize the REM folding code to handle another case Nick Lewycky
pointed out: realize the AND can provide factors and look through Casts.
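
A minimal C illustration (not the committed test case) of the kind of input this
now handles: the AND guarantees the left operand is a multiple of the divisor, so
the remainder folds to zero, even with a cast in between.

unsigned rem_and_factor(unsigned x) {
    /* x & ~3u is always a multiple of 4, hence of 2, so this is 0 */
    return (x & ~3u) % 2;
}

unsigned rem_through_cast(unsigned short x) {
    /* the widening cast does not change the low bits, so the same reasoning applies */
    return (unsigned)(x & 0xFFF0) % 8;
}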

llvm-svn: 26469
2006-03-02 06:50:58 +00:00
Jim Laskey 862001ad75 Support for enumerations.
llvm-svn: 26466
2006-03-01 23:52:37 +00:00
Evan Cheng 38d5e768b2 Don't print LLVM constants in the assembly file. The assembler won't like
comments that span multiple lines.

llvm-svn: 26463
2006-03-01 22:18:09 +00:00
Evan Cheng 5b19a80321 Back out my last check-in. Wrong place to fix it.
llvm-svn: 26462
2006-03-01 22:17:00 +00:00
Evan Cheng 302bdb586f AsmWriter should not print LLVM constants in comments. The assembler won't like
multi-line comments.

llvm-svn: 26461
2006-03-01 22:00:59 +00:00
Chris Lattner 0db2f2c689 Fix CodeGen/Generic/2006-03-01-dagcombineinfloop.ll, an infinite loop
in the dag combiner on 176.gcc on x86.

llvm-svn: 26459
2006-03-01 21:47:21 +00:00
Jim Laskey 4e71db13d6 Switch back to using actual dwarf tags. Simplifies code without loss to other
debug forms.

llvm-svn: 26455
2006-03-01 20:39:36 +00:00
Chris Lattner 232024edb8 Fix a typo Evan noticed
llvm-svn: 26454
2006-03-01 19:55:35 +00:00
Jim Laskey f770cf5b90 Use context and not compile unit.
llvm-svn: 26453
2006-03-01 18:20:30 +00:00
Jim Laskey 1246d5c054 I guess I can handle large type sizes.
llvm-svn: 26452
2006-03-01 18:13:05 +00:00
Jim Laskey b9ac4cba66 Basic array support.
llvm-svn: 26451
2006-03-01 17:53:02 +00:00
Chris Lattner 60a60f4b1e Implement CodeGen/PowerPC/or-addressing-mode.ll, which is also PR668.
llvm-svn: 26450
2006-03-01 07:14:48 +00:00
Chris Lattner 3cb349a068 add a note
llvm-svn: 26448
2006-03-01 06:36:20 +00:00
Chris Lattner 27f5345b1f Compile this:
void foo(float a, int *b) { *b = a; }

to this:

_foo:
        fctiwz f0, f1
        stfiwx f0, 0, r4
        blr

instead of this:

_foo:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lwz r2, -4(r1)
        stw r2, 0(r4)
        blr

This implements CodeGen/PowerPC/stfiwx.ll, and also incidentally does the
right thing for GCC bugzilla 26505.

llvm-svn: 26447
2006-03-01 05:50:56 +00:00
Chris Lattner f418435819 Use a target-specific dag-combine to implement CodeGen/PowerPC/fp-int-fp.ll.
llvm-svn: 26445
2006-03-01 04:57:39 +00:00
Chris Lattner bc1c85beea Add support for target-specific dag combines
llvm-svn: 26443
2006-03-01 04:53:38 +00:00
Chris Lattner 4a2eeea671 Add interfaces for targets to provide target-specific dag combiner optimizations.
llvm-svn: 26442
2006-03-01 04:52:55 +00:00
Chris Lattner fbcd62d3bb Add a new AddToWorkList method, start using it
llvm-svn: 26441
2006-03-01 04:03:14 +00:00
Chris Lattner 324871ef1a Pull shifts by a constant through multiplies (a form of reassociation),
implementing Regression/CodeGen/X86/mul-shift-reassoc.ll
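
A minimal C sketch of the equivalence the reassociation relies on (unsigned
arithmetic, so wraparound is well defined):

unsigned mul_shift_reassoc(unsigned x) {
    /* (x << 3) * 5 equals (x * 5) << 3; pulling the shift out past the
       multiply exposes the smaller multiply to later selection */
    return (x << 3) * 5;
}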

llvm-svn: 26440
2006-03-01 03:44:24 +00:00
Evan Cheng 1926427351 Vector op lowering.
llvm-svn: 26438
2006-03-01 01:11:20 +00:00
Evan Cheng b97aab4371 Vector ops lowering.
llvm-svn: 26436
2006-03-01 01:09:54 +00:00
Evan Cheng 91c574b642 New type v2f32.
llvm-svn: 26435
2006-03-01 01:06:22 +00:00
Evan Cheng be85e89ec4 - Added VConstant as an abstract version of ConstantVec.
- All abstract vector nodes must have # of elements and element type as their
  first two operands.

llvm-svn: 26432
2006-03-01 00:51:13 +00:00
Evan Cheng 0e69f45b07 Another entry.
llvm-svn: 26430
2006-02-28 23:38:49 +00:00
Evan Cheng 990c3602bd Don't match x << 1 to LEAL. It's better to emit x + x.
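
A minimal C sketch of the pattern in question; per the commit, adding the value
to itself is preferred over an LEA with a scaled index (the exact AT&T spellings
in the comment are my assumption, not from the commit):

unsigned times_two(unsigned x) {
    return x << 1;   /* now emitted as x + x (addl %eax, %eax)
                        rather than leal (,%eax,2), %eax */
}
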
llvm-svn: 26429
2006-02-28 21:13:57 +00:00
Jim Laskey 716edb9754 Add const, volatile, restrict support.
Add array of debug descriptor support.

llvm-svn: 26428
2006-02-28 20:15:07 +00:00
Chris Lattner c5b6c9a12a Fix a regression in a patch from a couple of days ago. This fixes
Transforms/InstCombine/2006-02-28-Crash.ll

llvm-svn: 26427
2006-02-28 19:47:20 +00:00
Chris Lattner b9f35f06bc Add a subtarget feature for the stfiwx instruction. I know the G5 has it,
but I don't know what other PPC impls do.  If someone could update the proc
table, I would appreciate it :)

llvm-svn: 26421
2006-02-28 07:08:22 +00:00
Chris Lattner f0032b350c Compile:
unsigned foo4(unsigned short *P) { return *P & 255; }
unsigned foo5(short *P) { return *P & 255; }

to:

_foo4:
        lbz r3,1(r3)
        blr
_foo5:
        lbz r3,1(r3)
        blr

not:

_foo4:
        lhz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr
_foo5:
        lhz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr

llvm-svn: 26419
2006-02-28 06:49:37 +00:00
Chris Lattner 872810da6c remove implemented item
llvm-svn: 26418
2006-02-28 06:36:04 +00:00
Chris Lattner bdbc4476d9 Fold "and (LOAD P), 255" -> zextload. This allows us to compile:
unsigned foo3(unsigned *P) { return *P & 255; }
as:
_foo3:
        lbz r3, 3(r3)
        blr

instead of:

_foo3:
        lwz r2, 0(r3)
        rlwinm r3, r2, 0, 24, 31
        blr

and:

unsigned short foo2(float a) { return a; }

as:
_foo2:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lhz r3, -2(r1)
        blr

instead of:

_foo2:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lwz r2, -4(r1)
        rlwinm r3, r2, 0, 16, 31
        blr

llvm-svn: 26417
2006-02-28 06:35:35 +00:00
Chris Lattner 0f8a727c49 fold (sra (sra x, c1), c2) -> (sra x, c1+c2)
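
A small C illustration of the combined fold, assuming the usual arithmetic
right shift for signed int:

int sra_twice(int x) {
    /* two arithmetic right shifts combine into one: equivalent to x >> 5,
       provided the shift amounts sum to less than the bit width */
    return (x >> 2) >> 3;
}
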
llvm-svn: 26416
2006-02-28 06:23:04 +00:00
Chris Lattner b70f141893 Implement rem.ll:test[7-9] and PR712
llvm-svn: 26415
2006-02-28 05:49:21 +00:00
Chris Lattner 2a7c7b8bab Simplify some code now that the RHS of a rem can't be 0
llvm-svn: 26413
2006-02-28 05:40:55 +00:00
Chris Lattner 0de4a8d7b7 Rearrange some code, fold "rem X, 0", implementing rem.ll:test6
llvm-svn: 26411
2006-02-28 05:30:45 +00:00
Chris Lattner 9fed5b6122 Add support for output memory constraints.
llvm-svn: 26410
2006-02-27 23:45:39 +00:00
Jim Laskey 6d5c2a0156 Qualify dwarf namespace inside llvm namespace.
llvm-svn: 26409
2006-02-27 22:37:23 +00:00
Nate Begeman f918ed2e33 readme updates
llvm-svn: 26405
2006-02-27 22:08:36 +00:00
Jim Laskey bc7a3832e8 Partial enabling of functions.
llvm-svn: 26404
2006-02-27 20:37:42 +00:00
Chris Lattner ec185f7843 Don't print constant initializers, they may span lines now.
llvm-svn: 26403
2006-02-27 20:09:23 +00:00
Jim Laskey 72b66d6d8a Supporting multiple compile units.
llvm-svn: 26402
2006-02-27 17:27:12 +00:00
Jim Laskey 22e47b9f4e Re-orging file.
llvm-svn: 26401
2006-02-27 12:43:29 +00:00
Jim Laskey 6be3d8e0df Pretty print large struct constants.
llvm-svn: 26400
2006-02-27 10:33:53 +00:00
Jim Laskey 8f2c1021b4 Removed dependency on how operands are printed (want multi-line.)
llvm-svn: 26399
2006-02-27 10:29:04 +00:00
Chris Lattner c7bfed0f7b Merge two almost-identical pieces of code.
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand.  This lets us fold this:

int %test23(int %a) {
        %tmp.1 = and int %a, 1
        %tmp.2 = seteq int %tmp.1, 0
        %tmp.3 = cast bool %tmp.2 to int  ;; xor tmp1, 1
        ret int %tmp.3
}

into: xor (and a, 1), 1
llvm-svn: 26396
2006-02-27 02:38:23 +00:00
Chris Lattner f5c8a0b83f Fold (A^B) == A -> B == 0
and  (A-B) == A  ->  B == 0
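
A quick C sketch of both identities (unsigned, so the subtraction wraps the
same way the IR's does):

int xor_eq_self(unsigned a, unsigned b) {
    return (a ^ b) == a;   /* folds to b == 0 */
}

int sub_eq_self(unsigned a, unsigned b) {
    return (a - b) == a;   /* also folds to b == 0 */
}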

llvm-svn: 26394
2006-02-27 01:44:11 +00:00
Chris Lattner ab8164042a Implement bit propagation through sub nodes; this (re)implements
PowerPC/div-2.ll

llvm-svn: 26392
2006-02-27 01:00:42 +00:00
Chris Lattner 47ee42829d remove some completed notes
llvm-svn: 26390
2006-02-27 00:39:31 +00:00
Chris Lattner a60751dd43 Check RHS simplification before LHS simplification to avoid infinitely looping
on PowerPC/small-arguments.ll

llvm-svn: 26389
2006-02-27 00:36:27 +00:00
Chris Lattner 27220f8958 Just like we use the RHS of an AND to simplify the LHS, use the LHS to
simplify the RHS.  This allows for the elimination of many thousands of
ands from multisource, and compiles CodeGen/PowerPC/and-elim.ll:test2
into this:

_test2:
        srwi r2, r3, 1
        xori r3, r2, 40961
        blr

instead of this:

_test2:
        rlwinm r2, r3, 31, 17, 31
        xori r2, r2, 40961
        rlwinm r3, r2, 0, 16, 31
        blr

llvm-svn: 26388
2006-02-27 00:22:28 +00:00
Chris Lattner 118ddba929 Add a bunch of missed cases, perhaps the most significant of which is that
assertzext produces zero bits.

llvm-svn: 26386
2006-02-26 23:36:02 +00:00
Chris Lattner f78df7c14d Fold (X|C1)^C2 -> X^(C1|C2) when possible. This implements
InstCombine/or.ll:test23.

llvm-svn: 26385
2006-02-26 19:57:54 +00:00
Jim Laskey 702c1d11b5 Reverting. Didn't realize some developers were embedding constants in their
target assembler code gen.

llvm-svn: 26383
2006-02-26 10:16:05 +00:00
Evan Cheng 877ab55e06 ConstantPoolIndex is now the displacement portion of the address (rather
than base).

llvm-svn: 26382
2006-02-26 09:12:34 +00:00
Evan Cheng 9f9662b86e Print ConstantPoolSDNode offset field.
llvm-svn: 26381
2006-02-26 08:36:57 +00:00
Evan Cheng 75b8783aaf Fixed ConstantPoolIndex operand asm print bug. This fixed 2005-07-17-INT-To-FP
and 2005-05-12-Int64ToFP.

llvm-svn: 26380
2006-02-26 08:28:12 +00:00
Jim Laskey 3bb7874dd3 Format large struct constants for readability.
llvm-svn: 26379
2006-02-25 12:27:03 +00:00
Evan Cheng 77d86ff8fc * Cleaned up addressing mode matching code.
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Handle ADD $X, c to MOV32ri $X+c. These patterns cannot be autogen'd and
  they need to be matched before LEA.

llvm-svn: 26376
2006-02-25 10:09:08 +00:00
Evan Cheng 1c557bfeb5 Updates.
llvm-svn: 26375
2006-02-25 10:04:07 +00:00
Evan Cheng 1fac3b3360 * Allow mul, shl nodes to be codegen'd as LEA (if appropriate).
* Add patterns to handle GlobalAddress, ConstantPool, etc.:
  MOV32ri to materialize these nodes in registers,
  ADD32ri to handle %reg + GA, etc., and
  MOV32mi to handle stores of GA, etc. to memory.

llvm-svn: 26374
2006-02-25 10:02:21 +00:00
Evan Cheng e4a8b74e4f ConstantPoolIndex is now the displacement field of addressing mode.
llvm-svn: 26373
2006-02-25 09:56:50 +00:00
Evan Cheng 994700101e Added a comment about the need for X86ISD::Wrapper.
llvm-svn: 26372
2006-02-25 09:55:19 +00:00
Evan Cheng ed169db8a5 Added an offset field to ConstantPoolSDNode.
llvm-svn: 26371
2006-02-25 09:54:52 +00:00
Chris Lattner 7d01f95a57 Fix a bug that Evan exposed with some changes he's making; it showed up as
a fastcc problem (breaking pcompress2 on x86 with -enable-x86-fastcc).

When reloading a reused reg, make sure to invalidate the reloaded reg, and
check to see if there are any other pending uses of the same register.

llvm-svn: 26369
2006-02-25 02:17:31 +00:00
Chris Lattner 28a0b8bec7 Remove debugging printout :)
Add a minor compile time win, no codegen change.

llvm-svn: 26368
2006-02-25 02:03:40 +00:00
Chris Lattner 525522e429 Refactor some code from being inline to being out in a new class with methods.
This gets rid of two gotos, which is always nice, and also adds some comments.

No functionality change, this is just a refactor.

llvm-svn: 26367
2006-02-25 01:51:33 +00:00
Evan Cheng 42d5ac557c Fix an obvious bug exposed when we are doing
ADD X, 4
==>
MOV32ri $X+4, ...

llvm-svn: 26366
2006-02-25 01:37:02 +00:00
Chris Lattner 7674d90fa1 Add memory printing support for PPC. Input memory operands now work with
inline asms! :)

llvm-svn: 26365
2006-02-24 20:27:40 +00:00
Chris Lattner 1d08c6534c Use the PrintAsmMemoryOperand to print addressing modes.
llvm-svn: 26364
2006-02-24 20:21:58 +00:00
Chris Lattner 5af3fdec12 Pass all the flags to the asm printer, not just the # operands.
llvm-svn: 26362
2006-02-24 19:50:58 +00:00
Chris Lattner 2f8a794b13 rename NumOps -> NumVals to avoid shadowing a NumOps var in an outer scope.
Add support for addressing modes.

llvm-svn: 26361
2006-02-24 19:18:20 +00:00
Chris Lattner 86c51000db Refactor operand adding out to a new AddOperand method
llvm-svn: 26358
2006-02-24 18:54:03 +00:00
Chris Lattner b580d26e7d Fix a problem that Nate noticed, which boils down to an overly conservative check
in the code that does "select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y)))".
We now compile this loop:

LBB1_1: ; no_exit
        add r6, r2, r3
        subf r3, r2, r3
        cmpwi cr0, r2, 0
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        blt cr0, LBB1_4 ; no_exit
LBB1_3: ; no_exit
        mr r3, r6
LBB1_4: ; no_exit
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit

into this instead:

LBB1_1: ; no_exit
        srawi r6, r2, 31
        add r2, r2, r6
        xor r6, r2, r6
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        add r3, r3, r6
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit
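
The identity behind the transform, as a plain C sketch (the function name is
just for illustration; unsigned arithmetic so the negation is well defined):

unsigned select_add_sub(int c, unsigned x, unsigned y) {
    unsigned a = c ? x + y : x - y;   /* select between the add and the sub */
    unsigned b = x + (c ? y : -y);    /* add a conditionally negated y instead */
    return a == b;                    /* always 1 */
}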

llvm-svn: 26356
2006-02-24 18:05:58 +00:00
Jim Laskey 723d3e0746 Add pointer and reference types. Added short-term code to ignore NULL types
(to allow llvm-gcc4 to build.)

llvm-svn: 26355
2006-02-24 16:46:40 +00:00
Jeff Cohen 83c22e0d75 Get VC++ building again.
llvm-svn: 26351
2006-02-24 02:52:40 +00:00
Chris Lattner dcf785bf46 Implement (most of) selection of inline asm memory operands.
llvm-svn: 26350
2006-02-24 02:13:54 +00:00
Chris Lattner a1ec1ddd59 Implement selection of inline asm memory operands
llvm-svn: 26348
2006-02-24 02:13:12 +00:00
Chris Lattner 7ef7a64ebb Lower C_Memory operands.
llvm-svn: 26346
2006-02-24 01:11:24 +00:00
Chris Lattner 2a9e1e3e74 Recognize memory operand codes
llvm-svn: 26345
2006-02-24 01:10:46 +00:00
Chris Lattner de5a9f4517 Parse the %*# constraint modifiers
llvm-svn: 26341
2006-02-23 23:36:53 +00:00
Jim Laskey e5386d4d98 Added basic support for typedefs.
llvm-svn: 26339
2006-02-23 22:37:30 +00:00
Evan Cheng 0ed48fe601 PPC JIT relocation model should be DynamicNoPIC.
llvm-svn: 26338
2006-02-23 22:18:07 +00:00
Evan Cheng e0ed6ec13f - Clean up the lowering and selection code of ConstantPool, GlobalAddress,
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the above-mentioned
  leaf nodes. Do not mutate the nodes and do not record the selection in
  CodeGenMap. These nodes should be safe to duplicate. This is a performance win.

llvm-svn: 26335
2006-02-23 20:41:18 +00:00
Chris Lattner e7c0ffb3a0 Fix an endianness problem on big-endian targets with expanded operands
to inline asms.  Mark some methods const.

llvm-svn: 26334
2006-02-23 20:06:57 +00:00
Chris Lattner 1bad2546d0 Implement the PPC inline asm "L" modifier. This allows us to compile:
long long test(long long X) {
  __asm__("foo %0 %L0 %1 %L1" : "=r"(X): "r"(X));
  return X;
}

to:
        foo r2 r3 r2 r3

llvm-svn: 26333
2006-02-23 19:31:10 +00:00
Chris Lattner 571d9647c6 Record all of the expanded registers in the DAG and machine instr, fixing
several bugs in inline asm expanded operands.

llvm-svn: 26332
2006-02-23 19:21:04 +00:00
Jim Laskey 69b9e26186 DwarfWriter reading basic type information from llvm-gcc4 code.
llvm-svn: 26331
2006-02-23 16:58:18 +00:00
Chris Lattner 2988921dc4 Code cleanups, no functionality change
llvm-svn: 26328
2006-02-23 06:44:17 +00:00
Chris Lattner 16f08f53b1 "." isn't enough to get a private label on linux, use ".L".
llvm-svn: 26327
2006-02-23 05:25:02 +00:00
Chris Lattner 2bacf981bf add a small and simple case.
llvm-svn: 26326
2006-02-23 05:17:43 +00:00
Evan Cheng f4448cee66 A couple of new entries.
llvm-svn: 26325
2006-02-23 02:50:21 +00:00
Evan Cheng 1f342c2884 PIC-related bug fixes.
1. Various asm printer bugs.
2. Lowering bug. Now TargetGlobalAddress is wrapped in X86ISD::TGAWrapper.

llvm-svn: 26324
2006-02-23 02:43:52 +00:00
Evan Cheng 7eabbfd618 X86 codegen tweak to use lea in another case:
Suppose base == %eax and it has multiple uses, then instead of
  movl %eax, %ecx
  addl $8, %ecx
use
  leal 8(%eax), %ecx.

llvm-svn: 26323
2006-02-23 00:13:58 +00:00
Evan Cheng 7714a59d91 Missing .globl for weak / link-once .text symbols.
llvm-svn: 26321
2006-02-22 23:59:57 +00:00
Chris Lattner e5521db5bc Fix Regression/Transforms/LoopUnswitch/2006-02-22-UnswitchCrash.ll, which
caused SPASS to fail building last night.

We can't trivially unswitch a loop if the exit block has phi nodes in it,
because we don't know which predecessor to use.
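
A rough C-level sketch (hypothetical code, not the SPASS source) of the shape
that makes this hard: the branch on the loop-invariant cond is an unswitching
candidate, but the code after the loop merges a value from several different
exiting edges, i.e. a phi node in the exit block.

int unswitch_sketch(int n, int cond, const int *a) {
    int result = 0;
    for (int i = 0; i < n; ++i) {
        if (cond) {                       /* loop-invariant condition */
            if (a[i] < 0) { result = -1; break; }
        } else {
            if (a[i] > 9) { result = 1; break; }
        }
        result += a[i];
    }
    return result;   /* exit block: 'result' arrives from several predecessors */
}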

llvm-svn: 26320
2006-02-22 23:55:00 +00:00
Chris Lattner b1124f3c76 This fixes a couple of problems with expansion
llvm-svn: 26318
2006-02-22 23:09:03 +00:00
Chris Lattner 2e124af406 Don't return registers from register classes that aren't legal.
llvm-svn: 26317
2006-02-22 23:00:51 +00:00