Commit Graph

11103 Commits

Author SHA1 Message Date
Chris Lattner 02d3ba3db8 consistency with other cases, no functionality change
llvm-svn: 23524
2005-09-29 17:38:52 +00:00
Chris Lattner eca4f56646 Make the JIT default to the DAG isel instead of the pattern isel, like LLC.
The Pattern isel has some strange memory corruption issues going on. :(

This should have been converted over anyway, but it got forgotten somehow
when switching to the dag isel.

llvm-svn: 23523
2005-09-29 17:31:03 +00:00
Chris Lattner 5b2be1f890 Fix two bugs in my patch earlier today that broke int->fp conversion on X86.
llvm-svn: 23522
2005-09-29 06:44:39 +00:00
Chris Lattner 87ef943a4c Fold isascii into a simple comparison. This speeds up 197.parser by 7.4%,
bringing the LLC time down to the CBE time.

llvm-svn: 23521
2005-09-29 06:17:27 +00:00
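
A minimal C++ illustration of the fold described in the entry above (the helper names are mine, and isascii is the POSIX declaration from <ctype.h>):

#include <ctype.h>   // POSIX isascii

// The call tests whether c fits in 7 bits, so it folds to one unsigned
// compare (equivalently a mask test).
int viaLibCall(int c) { return isascii(c); }
int folded(int c)     { return (unsigned)c < 128u; }  // same as (c & ~0x7f) == 0
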
Chris Lattner 5f6035feb0 remove a bunch of unneeded stuff, or self evident comments
llvm-svn: 23519
2005-09-29 06:16:11 +00:00
Chris Lattner c244e7c178 Implement a couple of memcmp folds from the todo list
llvm-svn: 23517
2005-09-29 04:54:20 +00:00
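
For context, these are the kinds of memcmp folds such a pass typically performs; the commit message does not say which two it added, so treat this as an illustrative sketch:

#include <cstddef>
#include <cstring>

int zeroLength(const void *p, const void *q)  { return std::memcmp(p, q, 0); }  // -> 0
int samePointer(const void *p, std::size_t n) { return std::memcmp(p, p, n); }  // -> 0
int singleByte(const unsigned char *p, const unsigned char *q) {
  return std::memcmp(p, q, 1);  // -> *p - *q, compared as unsigned chars
}
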
Jeff Cohen b01a41a06d Silence VC++ redeclaration warnings.
llvm-svn: 23516
2005-09-29 01:59:49 +00:00
Chris Lattner 08c319fbdd Never rely on ReplaceAllUsesWith when selecting, use CodeGenMap instead.
ReplaceAllUsesWith does not replace SDOperands floating around on
the stack, permitting things to be selected multiple times.

llvm-svn: 23515
2005-09-29 00:59:32 +00:00
Chris Lattner d4e9e8b7ec Codegen ADD X, IMM -> addis/addi if needed.
This implements PowerPC/fold-li.ll

llvm-svn: 23514
2005-09-28 23:07:13 +00:00
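
A hedged sketch of the immediate split this kind of codegen needs; the helper below is hypothetical, not code from the commit:

#include <cstdint>

// Split a 32-bit immediate so that X + Imm can be emitted as addis (adds
// Hi << 16) followed by addi (adds a sign-extended 16-bit Lo).  Because addi
// sign-extends, Hi absorbs a carry when Lo's sign bit is set; the identity
// holds modulo 2^32.
void splitAddImmediate(uint32_t Imm, int16_t &Hi, int16_t &Lo) {
  Lo = static_cast<int16_t>(Imm & 0xFFFF);
  Hi = static_cast<int16_t>((Imm >> 16) + ((Imm >> 15) & 1));
  // If Hi == 0 only the addi is needed; if Lo == 0 only the addis is needed.
}
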
Chris Lattner b9b2e77295 Autogen MUL, move FP cases together
llvm-svn: 23512
2005-09-28 22:53:16 +00:00
Chris Lattner 5769311c92 disentangle FP from INT versions of div/mul
llvm-svn: 23511
2005-09-28 22:50:24 +00:00
Chris Lattner 585131baaf Use the autogenerated matcher for ADD/SUB
llvm-svn: 23510
2005-09-28 22:47:28 +00:00
Chris Lattner f023b2cda2 add a pattern for SUBFIC
llvm-svn: 23509
2005-09-28 22:47:06 +00:00
Chris Lattner 21551ea5ab Mark int binops as int-only, add FP binops. Mark FADD/FMUL as commutative but
not associative.  Add [SU]REM.

llvm-svn: 23508
2005-09-28 22:38:27 +00:00
Chris Lattner cd002b2461 wrap a long line
llvm-svn: 23507
2005-09-28 22:30:58 +00:00
Chris Lattner d3ea19b51a Add FP versions of the binary operators, keeping the int and fp worlds separate.
llvm-svn: 23506
2005-09-28 22:29:58 +00:00
Chris Lattner 0815dcae3f Add FP versions of the binary operators, keeping the int and fp worlds separate.
Though I have done extensive testing, it is possible that this will break
things in configs I can't test.  Please let me know if this causes a problem
and I'll fix it ASAP.

llvm-svn: 23505
2005-09-28 22:29:17 +00:00
Chris Lattner 6f3b577ee6 Add FP versions of the binary operators, keeping the int and fp worlds separate.
Though I have done extensive testing, it is possible that this will break
things in configs I can't test.  Please let me know if this causes a problem
and I'll fix it ASAP.

llvm-svn: 23504
2005-09-28 22:28:18 +00:00
Chris Lattner 7fe6734dff Mark associative nodes as associative
llvm-svn: 23503
2005-09-28 20:58:39 +00:00
Chris Lattner b97b054ba7 Nate pointed out that mulh[us] are commutative as well. Thanks!
llvm-svn: 23500
2005-09-28 19:01:44 +00:00
Chris Lattner 89d168ceb3 expose commutativity information
llvm-svn: 23498
2005-09-28 18:27:58 +00:00
Chris Lattner fab48b3285 All (xor *) cases are autogenerated now
llvm-svn: 23497
2005-09-28 18:12:37 +00:00
Chris Lattner 037d69a404 add support for missed eqv tests
llvm-svn: 23496
2005-09-28 18:10:51 +00:00
Chris Lattner 33f8e08c8f Implement PowerPC/eqv-andc-orc-nor.ll:EQV3
llvm-svn: 23494
2005-09-28 18:04:52 +00:00
Chris Lattner 8cd7b88a88 learn to codegen NOT as NOR instead of xoris/xori
llvm-svn: 23490
2005-09-28 17:13:15 +00:00
Chris Lattner bb5939a436 These nodes are all autogenerated
llvm-svn: 23489
2005-09-28 17:07:09 +00:00
Chris Lattner ea7214b23d Constant fold llvm.sqrt
llvm-svn: 23487
2005-09-28 01:34:32 +00:00
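
At the source level, the effect is simply that a square root of a constant no longer survives to run time; a trivial illustration:

#include <cmath>

// Once the argument is a known constant, the call can be evaluated at
// compile time and no libm call remains.
double rootTwo() { return std::sqrt(2.0); }  // foldable to ~1.4142135623730951
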
Chris Lattner 3b63bb375c add a note about a way to improve this code further that I won't be getting
to right now.

llvm-svn: 23485
2005-09-27 22:44:59 +00:00
Chris Lattner eb953f0ef8 Fix a regression in my previous patch, fixing GlobalOpt/2005-09-27-Crash.ll
and PR632.

llvm-svn: 23484
2005-09-27 22:28:11 +00:00
Chris Lattner a028e7a39c Darwin, like many BSD systems, has a setjmp/longjmp which saves the signal mask
on setjmp calls and restores it on longjmp calls (both of which require syscalls).

This makes the calls REALLY slow.  Use _setjmp/_longjmp instead.  This speeds up
hexxagon from 120.31s to 15.68s: from 5.53x slower than GCC to 28% faster than GCC.

llvm-svn: 23482
2005-09-27 22:18:25 +00:00
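
A small sketch of the API difference being exploited, assuming a BSD-derived libc that provides _setjmp/_longjmp:

#include <setjmp.h>

// On BSD-derived systems (including Darwin), setjmp/longjmp save and restore
// the signal mask via syscalls; the _setjmp/_longjmp variants skip that and
// stay in user space, which is the cost difference measured above.
static jmp_buf Buf;

void bail() { _longjmp(Buf, 1); }

int protectedRun(void (*f)()) {
  if (_setjmp(Buf) != 0)  // no sigprocmask traffic, unlike setjmp
    return 1;             // reached via _longjmp from bail()
  f();
  return 0;
}
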
Chris Lattner 0fd8f9fbc9 If the target prefers it, _setjmp/_longjmp should be used instead of setjmp/longjmp for llvm.setjmp/llvm.longjmp.
llvm-svn: 23481
2005-09-27 22:15:53 +00:00
Chris Lattner 59dc1e082c initialize new flag
llvm-svn: 23480
2005-09-27 22:13:56 +00:00
Chris Lattner e285f5ed8f Avoid spilling stack slots... to stack slots.
llvm-svn: 23478
2005-09-27 21:33:12 +00:00
Chris Lattner 87eb249300 Completely rewrite 'correct' eh support. This changes how setjmp insertion
is performed so it is only at most once per function that contains an invoke
instead of once per invoke in the function.  This patch has the following perks:

1. It fixes PR631, which complains about slowness.
2. It fixes PR240, which complains about non-volatile vars being live across
   setjmp/longjmps.
3. It improves (but does not fix) the jmpbuf alignment issue on Itanium by not
   forcing the jmpbufs to always be 8-bytes off the alignment of the structure.
4. It speeds up 253.perlbmk from 338s to 13.70s (a 25x improvement!), making us
   now about 4% faster than GCC.

Further improvements are also possible.

llvm-svn: 23477
2005-09-27 21:18:17 +00:00
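
A very rough, hypothetical sketch of the "one setjmp per function" shape described above (names and details are illustrative, not the pass's actual output):

#include <setjmp.h>

extern void f();
extern void g();
extern void cleanupAfterF();
extern void cleanupAfterG();

void caller() {
  jmp_buf Buf;                // one buffer for the whole function
  volatile int CallSite = 0;  // which "invoke" is currently active
  if (setjmp(Buf)) {          // a single setjmp instead of one per invoke
    if (CallSite == 1) cleanupAfterF(); else cleanupAfterG();
    return;
  }
  CallSite = 1; f();
  CallSite = 2; g();
}
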
Chris Lattner 92233d2175 Make the pass name simpler
llvm-svn: 23476
2005-09-27 21:10:32 +00:00
Chris Lattner 5635cc069f fix CBackend/2005-09-27-VolatileFuncPtr.ll
llvm-svn: 23475
2005-09-27 20:52:44 +00:00
Chris Lattner 16cd356fb2 allow demotion to volatile values, add support for invoke
llvm-svn: 23473
2005-09-27 19:39:00 +00:00
Chris Lattner c628f00845 Make sure to clear the CodeGenMap after each basic block is selected to avoid
cross MBB pollution.

llvm-svn: 23470
2005-09-27 17:45:33 +00:00
Jim Laskey 63523f98d5 Remove some redundancies.
llvm-svn: 23469
2005-09-27 17:32:45 +00:00
Chris Lattner e7e139e8e8 Split SimpleConstantVal up into its components, so each Constant subclass gets a
different enum value. This allows 'classof' for these to be really simple, not needing
to call getType() anymore.
This speeds up isa/dyncast/etc for constants, and also makes them smaller.
For example, the text section of a release build of InstCombine.cpp shrinks
from 230037 bytes to 216363 bytes, a 6% reduction.

llvm-svn: 23467
2005-09-27 06:09:08 +00:00
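
A simplified, hypothetical sketch of the classof shape this enables (names are illustrative, not LLVM's actual declarations):

// Each Constant subclass gets its own enum value, so classof becomes a
// single integer compare with no call to getType().
enum ValueTy { ConstantIntVal, ConstantFPVal, ConstantArrayVal /* ... */ };

class Constant {
  ValueTy ID;
protected:
  explicit Constant(ValueTy VT) : ID(VT) {}
public:
  ValueTy getValueType() const { return ID; }
};

class ConstantInt : public Constant {
public:
  ConstantInt() : Constant(ConstantIntVal) {}
  static bool classof(const Constant *C) {
    return C->getValueType() == ConstantIntVal;  // one enum compare
  }
};
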
Chris Lattner 3d27e7f27f Add support for external calls that we know how to constant fold. This implements
ctor-list-opt.ll:CTOR8

llvm-svn: 23465
2005-09-27 05:02:43 +00:00
Chris Lattner 29b2780c8a Fix a bug where we would evaluate stores into linkonce objects which could be
potentially replaced at link-time.

llvm-svn: 23463
2005-09-27 04:50:03 +00:00
Chris Lattner 65a3a0918f Implement support for static constructors with calls in them. This is useful
because gccas runs globalopt before inlining.

This implements ctor-list-opt.ll:CTOR7

llvm-svn: 23462
2005-09-27 04:45:34 +00:00
Chris Lattner da1889b778 Refactor this code a bit, no functionality changes.
llvm-svn: 23460
2005-09-27 04:27:01 +00:00
Chris Lattner 54ec5f2089 Move the post-lsr simplify cfg pass after lowereh, so it can clean up after
eh lowering as well.

llvm-svn: 23459
2005-09-27 00:14:41 +00:00
Chris Lattner 4435b149a0 minor pattern shuffling
llvm-svn: 23458
2005-09-26 22:20:16 +00:00
Jim Laskey 5f2443c8a3 Addition of a simple two-pass scheduler. This version is currently hacked up
for testing and will require target machine info to do proper scheduling.
The simple scheduler can be turned on using -sched=simple (defaults
to -sched=none)

llvm-svn: 23455
2005-09-26 21:57:04 +00:00
Chris Lattner f2f89af69a Remove some dead code. ctor evaluation subsumes empty ctor elim
llvm-svn: 23453
2005-09-26 20:38:20 +00:00
Chris Lattner 6bf2cd5735 Add support for alloca, implementing ctor-list-opt.ll:CTOR6
llvm-svn: 23452
2005-09-26 17:07:09 +00:00
Chris Lattner 46d9ff081d Add a debug printout, fix a crash on kc++
llvm-svn: 23450
2005-09-26 07:34:35 +00:00
Chris Lattner 46af55e0e4 Implement loads/stores through GEP's of globals. This implements
ctor-list-opt.ll:CTOR5.

llvm-svn: 23449
2005-09-26 06:52:44 +00:00
Chris Lattner 61ff32cd70 Replace TraverseGEPInitializer with ConstantFoldLoadThroughGEPConstantExpr
llvm-svn: 23447
2005-09-26 05:34:07 +00:00
Chris Lattner 02ae21e1e0 Eliminate GetGEPGlobalInitializer in favor of the more powerful
ConstantFoldLoadThroughGEPConstantExpr function in the utils lib.

llvm-svn: 23446
2005-09-26 05:28:52 +00:00
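
A source-level illustration of the kind of load the shared helper can fold:

// A load through a constant GEP into a constant global's initializer
// becomes the element itself.
static const int Table[4] = {10, 20, 30, 40};
int third() { return Table[2]; }  // foldable to 'return 30;'
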
Chris Lattner 0b011ec8e2 Factor the GetGEPGlobalInitializer out of this pass and into Transforms/Utils
as ConstantFoldLoadThroughGEPConstantExpr.

llvm-svn: 23445
2005-09-26 05:28:06 +00:00
Chris Lattner c13c7b9376 Move the ConstantFoldLoadThroughGEPConstantExpr function out of the InstCombine
pass.

llvm-svn: 23444
2005-09-26 05:27:10 +00:00
Chris Lattner b009663e27 add a comment
llvm-svn: 23442
2005-09-26 05:16:34 +00:00
Chris Lattner 4b05c322d5 Add support for getelementptr, load, and correctly reject volatile stores.
llvm-svn: 23441
2005-09-26 05:15:37 +00:00
Chris Lattner 3e9ea5ffec Add support for br/brcond/switch and phi
llvm-svn: 23439
2005-09-26 04:57:38 +00:00
Chris Lattner 99e23fa74c Add a simple interpreter to this code, allowing us to statically evaluate
global ctors that are simple enough.  This implements ctor-list-opt.ll:CTOR2.

llvm-svn: 23437
2005-09-26 04:44:35 +00:00
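
An illustration of a constructor that is "simple enough" for such evaluation (hypothetical example, not from the test file):

// Only stores of computable values to globals, so the interpreter can run it
// at compile time, bake the result into the initializer, and drop the ctor.
static int Cached;
namespace {
struct Init {
  Init() { Cached = 6 * 7; }  // evaluates to 42 at compile time
} InitObj;
}
// After the pass, Cached is emitted already initialized to 42.
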
Chris Lattner 696beefabb factor some code into a InstallGlobalCtors method, add comments. No functionality change.
llvm-svn: 23435
2005-09-26 02:31:18 +00:00
Chris Lattner 838bdc1836 Make the global opt optimizer work on modules with a null terminator, by
accepting the null even with a non-65535 init prio

llvm-svn: 23434
2005-09-26 02:19:27 +00:00
Chris Lattner 41b6a5a693 Factor this code out into a few methods.
Implement the start of global ctor optimization.  It is currently smart
enough to remove the global ctor for cases like this:

struct foo {
  foo() {}
} x;

... saving a bit of startup time for the program.

llvm-svn: 23433
2005-09-26 01:43:45 +00:00
Chris Lattner f487768062 Fix some logic I broke that caused a regression on
SimplifyLibCalls/2005-05-20-sprintf-crash.ll

llvm-svn: 23430
2005-09-25 07:06:48 +00:00
Chris Lattner 0b3557f54a Move MaskedValueIsZero up.
Match a bunch of idioms for sign extensions, implementing InstCombine/signext.ll

llvm-svn: 23428
2005-09-24 23:43:33 +00:00
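
One source-level shape of the sign-extension idioms being matched (a sketch; it relies on the usual two's-complement shift behavior):

// Shifting left and then arithmetic-shifting right by the same amount
// sign-extends the low bits, which can be recognized directly as a sign
// extension of the narrow value.
int signExtendLowByte(unsigned x) {
  return static_cast<int>(x << 24) >> 24;  // behaves like (int)(signed char)(x & 0xff)
}
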
Chris Lattner 175463a165 Simplify this code a bit by relying on recursive simplification. Support
sprintf("%s", P)'s that have uses.

s/hasNUses(0)/use_empty()/

llvm-svn: 23425
2005-09-24 22:17:06 +00:00
Chris Lattner cc9c03386f Add support for a marker byte that indicates that we shouldn't add the user
prefix to a symbol name

llvm-svn: 23421
2005-09-24 08:24:28 +00:00
Chris Lattner 6736a6cdd2 Teach the dag isel generator how to construct arbitrary immediates. The
generated isel now tries li then lis, then lis+ori.

llvm-svn: 23418
2005-09-24 00:41:58 +00:00
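
A hypothetical sketch of the decision among the three forms, not the generated matcher itself:

#include <cstdint>
#include <cstdio>

// li takes a sign-extended 16-bit value, lis sets only the high 16 bits, and
// ori ORs in a zero-extended 16-bit value, so lis+ori needs no sign fixup.
void printMaterialization(int32_t C) {
  if (C == static_cast<int16_t>(C))
    std::printf("li  r, %d\n", C);
  else if ((C & 0xFFFF) == 0)
    std::printf("lis r, %d\n", C >> 16);
  else
    std::printf("lis r, %d ; ori r, r, %u\n", C >> 16,
                static_cast<unsigned>(C & 0xFFFF));
}
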
Chris Lattner 499e33646e remove some debugging code
llvm-svn: 23411
2005-09-23 18:49:09 +00:00
Chris Lattner c59a371d45 Fold two consecutive branches that share a common destination between them.
This implements SimplifyCFG/branch-fold.ll, and is useful on ?:/min/max heavy
code

llvm-svn: 23410
2005-09-23 18:47:20 +00:00
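
A source-level flavor of the fold, with illustrative function names:

// Two conditional branches in a row that share a destination become one
// branch on the OR of the conditions, which is common in ?:/min/max code.
extern void commonCase();
extern void rest();

void beforeFold(int a, int b) {
  if (a < 0) { commonCase(); return; }
  if (b < 0) { commonCase(); return; }
  rest();
}

void afterFold(int a, int b) {
  if ((a < 0) | (b < 0)) { commonCase(); return; }  // one branch instead of two
  rest();
}
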
Chris Lattner 3a978bf66d simplify some logic further
llvm-svn: 23408
2005-09-23 07:23:18 +00:00
Chris Lattner cc14ebc17b pull a bunch of logic out of SimplifyCFG into a helper fn
llvm-svn: 23407
2005-09-23 06:39:30 +00:00
Chris Lattner 1e3d3148bb speed up Archive::isBytecodeArchive in the case when the archive doesn't have
an llvm-ranlib symtab.  This speeds up gccld -native on an almost empty .o file
from 1.63s to 0.18s.

llvm-svn: 23406
2005-09-23 06:22:58 +00:00
Chris Lattner 59a05bdde6 Turn (X^C1) == C2 into X == C1^C2 iff X&~C1 = 0 (and move a function)
This happens all the time on PPC for bool values, e.g. eliminating a xori
in inverted-bool-compares.ll.

This should be added to the dag combiner as well.

llvm-svn: 23403
2005-09-23 00:55:52 +00:00
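
A worked instance of the transform for the boolean case mentioned above (C1 = 1, C2 = 0):

// The bit not covered by C1 is known zero, so the xori simply disappears.
bool beforeXorFold(bool x) { return (x ^ 1) == 0; }  // (X^C1) == C2
bool afterXorFold(bool x)  { return x == 1; }        // X == (C1^C2)
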
Chris Lattner b1f8982ff0 Expose the LiveInterval interfaces as public headers.
llvm-svn: 23400
2005-09-21 04:19:09 +00:00
Chris Lattner 6c70106053 Start threading across blocks with code in them, so long as the code does
not define a value that is used outside of its block.  This catches many
more simplifications, e.g. 854 in 176.gcc, 137 in vpr, etc.

This implements branch-phi-thread.ll:test3.ll

llvm-svn: 23397
2005-09-20 01:48:40 +00:00
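
A small illustration of the threading (function names are placeholders):

// Once control takes one side of the first test, the outcome of the second
// test on the same condition is already known, so the blocks can be stitched
// together (provided the intervening code defines no value used outside it).
extern void A();
extern void B();
extern void C();
extern void D();

void beforeThread(bool cond) {
  if (cond) A(); else B();
  if (cond) C(); else D();  // outcome already determined on each path
}

void afterThread(bool cond) {
  if (cond) { A(); C(); } else { B(); D(); }
}
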
Chris Lattner f0bd8d0107 Implement merging of blocks with the same condition if the block has multiple
predecessors.  This implements branch-phi-thread.ll::test1

llvm-svn: 23395
2005-09-20 00:43:16 +00:00
Chris Lattner 049cb4482f Reject a case we don't handle yet
llvm-svn: 23393
2005-09-19 23:57:04 +00:00
Chris Lattner a160924d57 remove debugging code :-/
llvm-svn: 23392
2005-09-19 23:50:15 +00:00
Chris Lattner 748f903046 Implement SimplifyCFG/branch-phi-thread.ll, the most trivial case of threading
control across branches with determined outcomes.  More generality to follow.
This triggers a couple thousand times in specint.

llvm-svn: 23391
2005-09-19 23:49:37 +00:00
Nate Begeman c760f80fed Stub out the rest of the DAG Combiner. Just need to fill in the
select_cc bits and then wrap it in a convenience function for use with
regular select.

llvm-svn: 23389
2005-09-19 22:34:01 +00:00
Chris Lattner 2f838f2192 Teach the local spiller to turn stack slot loads into register-register copies
when possible, avoiding the load (and avoiding the copy if the value is already
in the right register).

This patch came about when I noticed code like the following being generated:

  store R17 -> [SS1]
  ...blah...
  R4 = load [SS1]

This was causing an LSU reject on the G5.  This problem was due to the register
allocator folding spill code into a reg-reg copy (producing the load), which
prevented the spiller from being able to rewrite the load into a copy, despite
the fact that the value was already available in a register.  In the case
above, we now rip out the R4 load and replace it with a R4 = R17 copy.

This speeds up several programs on X86 (which spills a lot :) ), e.g.
smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from
68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc.  This may have a larger
impact in some cases on the G5 (by avoiding LSU rejects), though it probably
won't trigger as often (less spilling in general).

Targets that implement folding of loads/stores into copies should implement
the isLoadFromStackSlot hook to get this.

llvm-svn: 23388
2005-09-19 06:56:21 +00:00
Chris Lattner de3c87a2ab Implement the isLoadFromStackSlot interface
llvm-svn: 23387
2005-09-19 05:23:44 +00:00
Chris Lattner b4b2530a1a Refactor this code a bit and make it more general. This now compiles:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

To:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        slwi r3, r3, 6
        add r3, r4, r3
        rlwimi r3, r4, 0, 26, 14
        stw r3, 0(r2)
        blr


instead of:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 26, 21, 31
        add r3, r5, r3
        rlwimi r4, r3, 6, 15, 25
        stw r4, 0(r2)
        blr

by eliminating an 'and'.

I'm pretty sure this is as small as we can go :)

llvm-svn: 23386
2005-09-18 07:22:02 +00:00
Chris Lattner 797dee7705 Compile
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) {
  b.j += x;
}

to:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        and %ECX, 131008
        mov %EDX, DWORD PTR [%ESP + 4]
        shl %EDX, 6
        add %EDX, %ECX
        and %EDX, 131008
        and %EAX, -131009
        or %EDX, %EAX
        mov DWORD PTR [b], %EDX
        ret

instead of:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        shr %ECX, 6
        and %ECX, 2047
        add %ECX, DWORD PTR [%ESP + 4]
        shl %ECX, 6
        and %ECX, 131008
        and %EAX, -131009
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23385
2005-09-18 06:30:59 +00:00
Chris Lattner 01f56c68e9 Generalize this transform, using MaskedValueIsZero, allowing us to compile:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

To:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        add DWORD PTR [b], %EAX
        ret

instead of:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        mov %ECX, DWORD PTR [b]
        add %EAX, %ECX
        and %EAX, -131072
        and %ECX, 131071
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23384
2005-09-18 06:02:59 +00:00
Chris Lattner 4ebc8ab4e0 fix typo
llvm-svn: 23383
2005-09-18 05:25:20 +00:00
Chris Lattner e5b23a6d67 Remove unintentionally committed code
llvm-svn: 23382
2005-09-18 05:12:51 +00:00
Chris Lattner 27cb9dbd35 implement shift.ll:test25. This compiles:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) {
  b.k += x;
}

to:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r3, 0(r2)
        rlwinm r4, r3, 0, 0, 14
        add r4, r4, r3
        rlwimi r4, r3, 0, 15, 31
        stw r4, 0(r2)
        blr

instead of:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        srwi r5, r4, 17
        add r3, r5, r3
        slwi r3, r3, 17
        rlwimi r3, r4, 0, 15, 31
        stw r3, 0(r2)
        blr

llvm-svn: 23381
2005-09-18 05:12:10 +00:00
Chris Lattner af517574ce Implement add.ll:test29. Codegening:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus1 (unsigned int x) {
  b.i += x;
}

as:
_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        add r3, r4, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

instead of:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 0, 26, 31
        add r3, r5, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

llvm-svn: 23379
2005-09-18 04:24:45 +00:00
Chris Lattner 027eaf01cf remove debug output
llvm-svn: 23377
2005-09-18 03:50:25 +00:00
Chris Lattner 1521298993 Implement or.ll:test21. This teaches instcombine to be able to turn this:
struct {
   unsigned int bit0:1;
   unsigned int ubyte:31;
} sdata;

void foo() {
  sdata.ubyte++;
}

into this:

foo:
        add DWORD PTR [sdata], 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [sdata]
        mov %ECX, %EAX
        add %ECX, 2
        and %ECX, -2
        and %EAX, 1
        or %EAX, %ECX
        mov DWORD PTR [sdata], %EAX
        ret

llvm-svn: 23376
2005-09-18 03:42:07 +00:00
Chris Lattner 4d9cf68023 Implement hook for ppc
llvm-svn: 23374
2005-09-17 01:03:26 +00:00
Nate Begeman 24a7eca282 More DAG combining. Still need the branch instructions, and select_cc
llvm-svn: 23371
2005-09-16 00:54:12 +00:00
Chris Lattner 0ebec06671 disable this for now
llvm-svn: 23366
2005-09-15 21:44:00 +00:00
Chris Lattner 9e4a4ee3dc Give all operands names
llvm-svn: 23357
2005-09-14 21:11:13 +00:00
Chris Lattner 2e84be22a8 give all operands names
llvm-svn: 23356
2005-09-14 21:10:24 +00:00
Chris Lattner f006d15e7f Fix some issues exposed by more testing. XORIS had the wrong operands
specified.  The various *imm operands defined by PPC are really all i32,
even though the actual immediate is restricted to a smaller range in the instruction.

llvm-svn: 23352
2005-09-14 20:53:05 +00:00
Chris Lattner 6b013fc923 Fix some bugs noticed by new checking code
llvm-svn: 23350
2005-09-14 18:18:39 +00:00
Chris Lattner a393e4d4b3 Fix the regression from last night compiling povray
llvm-svn: 23348
2005-09-14 17:32:56 +00:00
Chris Lattner b42e962d23 fix a major regression from my patch this afternoon
llvm-svn: 23347
2005-09-14 06:06:45 +00:00