Chris Lattner
f3ebc3f3d2
Remove attribution from file headers, per discussion on llvmdev.
...
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Bill Wendling
ca77ecb40a
Mark the "isRemat" instruction as never having side effects.
...
llvm-svn: 45190
2007-12-19 06:07:48 +00:00
Evan Cheng
6e68381e02
Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled.
...
llvm-svn: 44960
2007-12-12 23:12:09 +00:00
Bill Wendling
fb706bc52b
Initial commit of the machine code LICM pass. It successfully hoists this:
...
_foo:
li r2, 0
LBB1_1: ; bb
li r5, 0
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
to:
_foo:
li r2, 0
li r5, 0
LBB1_1: ; bb
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
ZOMG!! :-)
Moar to come...
llvm-svn: 44687
2007-12-07 21:42:31 +00:00
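For reference, a C loop of roughly this shape would produce the code above; the function name and signature are assumptions, since the commit shows only the assembly. The loop-invariant "li r5, 0" (materializing the constant 0) is what the pass hoists out of the loop body.

/* Hypothetical source for the example above (an assumption, not from the commit). */
void foo(unsigned *p, unsigned n) {
  for (unsigned i = 0; i != n; ++i)   /* r2 = i, r4 = n */
    p[i] = 0;                         /* the "li r5, 0" feeding this store is loop-invariant */
}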
Bill Wendling
77b13af9a6
Unify the CALLSEQ_{START,END} stuff.
...
llvm-svn: 44045
2007-11-13 09:19:02 +00:00
Bill Wendling
f359fed9f9
Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
...
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).
This can only result in tears...
llvm-svn: 44037
2007-11-13 00:44:25 +00:00
Owen Anderson
933b5b7e62
Add a flag for indirect branch instructions.
...
Target maintainers: please check that the instructions for your target are correctly marked.
llvm-svn: 44012
2007-11-12 07:39:39 +00:00
Evan Cheng
ec271b104c
Temporary solution: added a different set of BCTRL_Macho / BCTRL_ELF with the right callee-saved defs set for ppc64.
...
llvm-svn: 43248
2007-10-23 06:42:42 +00:00
Dale Johannesen
666323eacd
Next PPC long double bits: ppcf128->i32 conversion.
...
Surprisingly complicated.
Adds getTargetNode for 2 outputs, no inputs (previously missing).
llvm-svn: 42822
2007-10-10 01:01:31 +00:00
Evan Cheng
3e18e504ae
Remove (somewhat confusing) Imp<> helper, use let Defs = [], Uses = [] instead.
...
llvm-svn: 41863
2007-09-11 19:55:27 +00:00
Evan Cheng
58c3c30921
Some out operands were incorrectly specified as input operands.
...
llvm-svn: 40697
2007-08-01 23:07:38 +00:00
Evan Cheng
ac1591be42
No more noResults.
...
llvm-svn: 40132
2007-07-21 00:34:19 +00:00
Evan Cheng
9081ab8127
Oops. These stores actually produce results.
...
llvm-svn: 40074
2007-07-20 00:20:46 +00:00
Evan Cheng
94b5a80b93
Change instruction description to split OperandList into OutOperandList and
...
InOperandList. This gives one piece of important information: # of results
produced by an instruction.
An example of the change:
def ADD32rr : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
"add{l} {$src2, $dst|$dst, $src2}",
[(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
=>
def ADD32rr : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
"add{l} {$src2, $dst|$dst, $src2}",
[(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
llvm-svn: 40033
2007-07-19 01:14:50 +00:00
Evan Cheng
76a97c5f8a
Do away with ImmutablePredicateOperand.
...
llvm-svn: 37961
2007-07-06 23:22:46 +00:00
Evan Cheng
ea4a82bcfb
PPC conditional branch predicate does not change after isel.
...
llvm-svn: 37893
2007-07-05 07:09:50 +00:00
Evan Cheng
d194a8603d
PredicateOperand can be used as a normal operand for isel.
...
llvm-svn: 36947
2007-05-08 21:06:08 +00:00
Nicolas Geoffray
fbfc451ba9
The ELF ABI specifies registers F1-F8 as argument registers for doubles, not
...
F1-F10. This affects only ELF, not MachO.
llvm-svn: 35622
2007-04-03 10:27:07 +00:00
Nicolas Geoffray
89d81878d2
Differentiate the CALL instruction between the MachO and ELF ABIs.
...
llvm-svn: 34667
2007-02-27 13:01:19 +00:00
Chris Lattner
535bd6d3ba
always lower to RETFLAG, never leave it as just ret.
...
llvm-svn: 34639
2007-02-26 19:44:02 +00:00
Chris Lattner
84ab9a556c
one important bugfix: PPC32 didn't have both ELF and MachO support for
...
external symbols and global addresses. Add the missing ones.
one important workaround: PPCISD::CALL is matched by both PPCcall_ELF
and PPCcall_Macho, so disable the _ELF patterns for now.
llvm-svn: 34601
2007-02-25 19:20:53 +00:00
Chris Lattner
43df5b335c
implement support for the linux/ppc function call ABI. Patch by
...
Nicolas Geoffray!
llvm-svn: 34574
2007-02-25 05:34:32 +00:00
Jim Laskey
f9e5445ed4
Make LABEL a builtin opcode.
...
llvm-svn: 33537
2007-01-26 14:34:52 +00:00
Chris Lattner
542dfd5510
Rewrite the branch selector to be correct in the face of large functions.
...
The algorithm it used before wasn't 100% correct, we now use an iterative
expansion model. This fixes assembler errors when compiling 403.gcc with
tail merging enabled.
Change the way the branch selector works overall: Now, the isel generates
PPC::BCC instructions (as it used to) directly, and these BCC instructions
are emitted to the output or jitted directly if branches don't need
expansion. Only if branches need expansion are instructions rewritten
and created. This should make the branch selector faster, and it eliminates
the Bxx instructions from the .td file.
llvm-svn: 31837
2006-11-18 00:32:03 +00:00
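A minimal C sketch of the iterative expansion idea described above, under the assumption that an out-of-range conditional branch is rewritten as an inverted short branch over an unconditional long branch; all names and the distance estimate are illustrative, not the actual LLVM pass or its APIs.

#include <stdbool.h>
#include <stdio.h>

struct Branch { int block; int target_block; int size; };   /* size in bytes */

/* Iterate until no conditional branch is farther than the ~32KB a BCC can reach. */
static void relax_branches(int *block_size, struct Branch *br, int nbr) {
  bool changed = true;
  while (changed) {
    changed = false;
    for (int i = 0; i < nbr; ++i) {
      if (br[i].size > 4)
        continue;                        /* already expanded to the long form */
      long dist = 0;                     /* rough byte distance to the target block */
      int lo = br[i].block < br[i].target_block ? br[i].block : br[i].target_block;
      int hi = br[i].block + br[i].target_block - lo;
      for (int b = lo; b < hi; ++b)
        dist += block_size[b];
      if (dist > 32767) {
        /* Rewrite as an inverted short branch over an unconditional long branch:
           the block grows, which may push other branches out of range, so loop. */
        block_size[br[i].block] += 4;
        br[i].size = 8;
        changed = true;
      }
    }
  }
}

int main(void) {
  int block_size[3] = { 8, 40000, 8 };        /* block 1 is a huge straight-line block */
  struct Branch br[1] = { { 0, 2, 4 } };      /* a BCC in block 0 targeting block 2 */
  relax_branches(block_size, br, 1);
  printf("branch size after relaxation: %d bytes\n", br[0].size);   /* prints 8 */
  return 0;
}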
Chris Lattner
33fc1d45e5
add encoding for BCC, after finally wrestling strange ppc/tblgen endianness
...
issues to the ground.
llvm-svn: 31836
2006-11-17 23:53:28 +00:00
Chris Lattner
be9377a1e3
convert PPC::BCC to use the 'pred' operand instead of separate predicate
...
value and CR reg #. This requires swapping the order of these everywhere
that touches BCC and requires us to write custom matching logic for
PPCcondbranch :(
llvm-svn: 31835
2006-11-17 22:37:34 +00:00
Chris Lattner
e0263794f4
rename PPC::COND_BRANCH to PPC::BCC
...
llvm-svn: 31834
2006-11-17 22:14:47 +00:00
Chris Lattner
8c6a41ea12
start using PPC predicates more consistently.
...
llvm-svn: 31833
2006-11-17 22:10:59 +00:00
Jim Laskey
48850c10c0
This is a general clean up of the PowerPC ABI. Address several problems and
...
bugs, including: making sure that the TOS links back to the previous frame;
not counting the maximum call frame size twice when using frame pointers;
no longer growing the frame on calls; eliminating the double store of SP;
and a cleaner/faster dynamic alloca.
llvm-svn: 31792
2006-11-16 22:43:37 +00:00
Chris Lattner
a7ff5162b0
fix broken encoding
...
llvm-svn: 31778
2006-11-16 01:01:28 +00:00
Chris Lattner
6f5840c409
add patterns for ppc32 preinc stores. ppc64 next.
...
llvm-svn: 31775
2006-11-16 00:41:37 +00:00
Chris Lattner
3a494989a6
switch these back to the 'bad old way'
...
llvm-svn: 31774
2006-11-16 00:33:34 +00:00
Chris Lattner
5771156be0
Stop using isTwoAddress, switching to operand constraints instead.
...
Tell the codegen emitter that specific operands are not to be encoded, fixing
JIT regressions w.r.t. pre-inc loads and stores (e.g. lwzu, which we generate
even when general preinc loads are not enabled).
llvm-svn: 31770
2006-11-15 23:24:18 +00:00
Chris Lattner
474b5b7c95
fix ldu/stu jit encoding. Switch 64-bit preinc load instrs to use memri
...
addrmodes.
llvm-svn: 31757
2006-11-15 19:55:13 +00:00
Chris Lattner
1396961e85
Switch loads over to use memri as the operand instead of a reg/imm operand
...
pair for cleanliness. Add instructions for PPC32 preinc-stores with commented
out patterns. More improvement is needed to enable the patterns, but we're
getting close.
llvm-svn: 31749
2006-11-15 02:43:19 +00:00
Chris Lattner
e79a451475
group load and store instructions together. No functionality change.
...
llvm-svn: 31736
2006-11-14 19:19:53 +00:00
Chris Lattner
44dbdbe5cf
Rework PPC64 calls. Now we have a LR8/CTR8 register which the PPC64 calls
...
clobber. This allows LR8 to be saved/restored correctly as a 64-bit quantity,
instead of handling it as a 32-bit quantity. This unbreaks ppc64 codegen when
the code is actually located above the 4G boundary.
llvm-svn: 31734
2006-11-14 18:44:47 +00:00
Chris Lattner
2ff632c54b
Mark operands as symbol lo instead of imm32 so that they print lo(x) around
...
globals.
llvm-svn: 31672
2006-11-11 04:51:36 +00:00
Chris Lattner
6c8656a6b1
dform 8/9 are identical to dform 1
...
llvm-svn: 31637
2006-11-10 17:51:02 +00:00
Chris Lattner
ce6455489a
add an initial cut at preinc loads for ppc32. This is broken for ppc64
...
(because the 64-bit reg target versions aren't implemented yet), doesn't
support r+r addr modes, and doesn't handle stores, but it works otherwise. :)
This is disabled unless -enable-ppc-preinc is passed to llc for now.
llvm-svn: 31621
2006-11-10 02:08:47 +00:00
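As a rough illustration (not from the commit; the function is hypothetical), pre-increment addressing pays off in pointer-walking loops like the one below, where a load such as lwzu can bump the pointer and load in a single instruction.

/* Hypothetical example: each iteration advances the pointer and then loads,
   the shape a pre-increment load (lwzu) covers in one instruction. */
unsigned sum_next(const unsigned *p, unsigned n) {
  unsigned s = 0;
  while (n--)
    s += *++p;       /* pre-increment the address, then load */
  return s;
}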
Chris Lattner
6a5a4f85d3
correct the (currently unused) pattern for lwzu.
...
llvm-svn: 31535
2006-11-08 02:13:12 +00:00
Chris Lattner
2959789c92
encode BLR predicate info for the JIT
...
llvm-svn: 31450
2006-11-04 05:42:48 +00:00
Chris Lattner
6be726048e
Go through all kinds of trouble to mark 'blr' as having a predicate operand
...
that takes a register and condition code. Print these pieces of BLR the
right way, even though it is currently set to 'always'.
Next up: get the JIT encoding right, then enhance branch folding to produce
predicated blr for simple examples.
llvm-svn: 31449
2006-11-04 05:27:39 +00:00
Chris Lattner
c8a68d08c3
Describe PPC predicates, which are a pair of CR# and condition.
...
llvm-svn: 31438
2006-11-03 23:53:25 +00:00
Chris Lattner
895d199348
remove dead vars
...
llvm-svn: 31433
2006-11-03 23:46:45 +00:00
Chris Lattner
d43e8a7429
Add intrinsics for the rest of the DCB* instructions.
...
llvm-svn: 31148
2006-10-24 01:08:42 +00:00
Evan Cheng
ab51cf2e78
Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
...
llvm-svn: 30945
2006-10-13 21:14:26 +00:00
Chris Lattner
cf56917053
set isBarrier correctly
...
llvm-svn: 30936
2006-10-13 19:10:34 +00:00
Chris Lattner
7374bc0577
mark adjcallstack up/down as clobbering and using the SP
...
llvm-svn: 30908
2006-10-12 17:56:34 +00:00
Evan Cheng
577ef7694e
Add properties to ComplexPattern.
...
llvm-svn: 30891
2006-10-11 21:03:53 +00:00
Evan Cheng
e71fe34d75
Reflects ISD::LOAD / ISD::LOADX / LoadSDNode changes.
...
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Chris Lattner
67f8cc51f4
Use abstract private/comment directives, to increase portability to ppc/linux
...
llvm-svn: 30621
2006-09-27 02:55:21 +00:00
Nate Begeman
d31efd190f
Fold AND and ROTL more often
...
llvm-svn: 30577
2006-09-22 05:01:56 +00:00
Evan Cheng
81b645a76b
CALLSEQ_* produces a chain even if it's not needed.
...
llvm-svn: 29603
2006-08-11 09:03:33 +00:00
Chris Lattner
4f8eb5ccaf
bswapped load/store instructions are only available in indexed addressing form.
...
As such, use xoaddr (indexed only), not xaddr for address selection.
This fixes CodeGen/PowerPC/2006-07-19-stwbrx-crash.ll, a crash compiling lencod.
llvm-svn: 29208
2006-07-19 17:15:36 +00:00
Chris Lattner
b00b6c2e86
Make the implicit def instructions look like other instrs.
...
llvm-svn: 29174
2006-07-18 16:33:26 +00:00
Chris Lattner
a7976d329e
Implement Regression/CodeGen/PowerPC/bswap-load-store.ll by folding bswaps
...
into i16/i32 load/stores.
llvm-svn: 29089
2006-07-10 20:56:58 +00:00
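A hedged C illustration of the kind of source this targets (the helpers and the use of __builtin_bswap32 are assumptions, not from the commit): a byte swap applied directly to a loaded or stored i32 value can be folded into a byte-reversed load/store such as lwbrx/stwbrx instead of a separate swap.

#include <stdint.h>

/* Hypothetical helpers: bswap feeding straight from a load and straight into
   a store, the shape that folds into byte-reversed load/store instructions. */
uint32_t load_swapped(const uint32_t *p)        { return __builtin_bswap32(*p); }
void     store_swapped(uint32_t *p, uint32_t v) { *p = __builtin_bswap32(v); }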
Chris Lattner
3b5873456e
Add 64-bit MTCTR so that indirect calls work.
...
llvm-svn: 28931
2006-06-27 18:36:44 +00:00
Chris Lattner
d48ce27532
Implement 64-bit undef, sub, shl/shr, srem/urem
...
llvm-svn: 28929
2006-06-27 18:18:41 +00:00
Chris Lattner
97b3da1519
Implement a bunch of 64-bit cleanliness work. With this, treeadd builds (but
...
doesn't work right).
llvm-svn: 28921
2006-06-27 00:04:13 +00:00
Chris Lattner
b6a65f4661
Remove two more definitions
...
llvm-svn: 28918
2006-06-26 22:47:37 +00:00
Chris Lattner
86e6046515
remove two unused instructions.
...
llvm-svn: 28917
2006-06-26 22:44:13 +00:00
Chris Lattner
1f1b096142
Make these predicates correct in 64-bit mode too.
...
llvm-svn: 28890
2006-06-20 23:21:20 +00:00
Chris Lattner
52a956da52
Rename OR4 -> OR. Move some PPC64-specific stuff to the 64-bit file
...
llvm-svn: 28889
2006-06-20 23:18:58 +00:00
Chris Lattner
5705d4d519
remove unused flag
...
llvm-svn: 28888
2006-06-20 23:15:07 +00:00
Chris Lattner
7a856a6d88
remove some unused patterns
...
llvm-svn: 28886
2006-06-20 23:11:36 +00:00
Chris Lattner
7e742e46ac
Add some 64-bit logical ops.
...
Split imm16Shifted into a sext/zext form for 64-bit support.
Add some patterns for immediate formation. For example, we now compile this:
static unsigned long long Y;
void test3() {
Y = 0xF0F00F00;
}
into:
_test3:
li r2, 3840
lis r3, ha16(_Y)
xoris r2, r2, 61680
std r2, lo16(_Y)(r3)
blr
GCC produces:
_test3:
li r0,0
lis r2,ha16(_Y)
ori r0,r0,61680
sldi r0,r0,16
ori r0,r0,3840
std r0,lo16(_Y)(r2)
blr
llvm-svn: 28883
2006-06-20 22:34:10 +00:00
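The constant materialization above works because li loads the low halfword and xoris flips bits in the high halfword; a quick C check of the arithmetic (written for illustration, not part of the commit):

#include <assert.h>

int main(void) {
  unsigned r2 = 3840;                 /* li r2, 3840         -> 0x00000F00 */
  r2 ^= 61680u << 16;                 /* xoris r2, r2, 61680 (0xF0F0) */
  assert(r2 == 0xF0F00F00u);          /* the value stored to Y */
  return 0;
}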
Chris Lattner
d6e160d14d
64-bit bugfix: 0xFFFF0000 cannot be formed with a single lis.
...
llvm-svn: 28880
2006-06-20 21:39:30 +00:00
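The reason, sketched in C (an illustration, not from the commit): lis places a sign-extended 16-bit immediate in the high halfword, so in 64-bit mode lis with 0xFFFF yields 0xFFFFFFFFFFFF0000 rather than 0x00000000FFFF0000.

#include <assert.h>

int main(void) {
  long long imm = (short)0xFFFF;                          /* sign-extended: -1 */
  unsigned long long lis_result = (unsigned long long)imm << 16;
  assert(lis_result == 0xFFFFFFFFFFFF0000ULL);            /* not 0x00000000FFFF0000 */
  return 0;
}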
Chris Lattner
868a75bec6
Remove some now-unneeded casts from instruction patterns. With the casts
...
removed, tblgen produces output identical to what it produced with them present.
llvm-svn: 28867
2006-06-20 00:39:56 +00:00
Chris Lattner
e8fe5e2bf4
In 64-bit mode, addr mode operands use G8RC instead of GPRC.
...
llvm-svn: 28840
2006-06-16 21:29:03 +00:00
Chris Lattner
a5190ae7a9
fix some assumptions that pointers can only be 32 bits. With this, we can
...
now compile:
static unsigned long X;
void test1() {
X = 0;
}
into:
_test1:
lis r2, ha16(_X)
li r3, 0
stw r3, lo16(_X)(r2)
blr
Totally amazing :)
llvm-svn: 28839
2006-06-16 21:01:35 +00:00
Chris Lattner
b429983988
Split 64-bit instructions out into a separate .td file
...
llvm-svn: 28838
2006-06-16 20:22:01 +00:00
Chris Lattner
006b2c6ab9
Fix a problem exposed by the local allocator. CALL instructions are not marked
...
as using incoming argument registers, so the local allocator would clobber them
between their set and use. To fix this, we give the call instructions a variable
number of uses in the CALL MachineInstr itself, so the live variables pass understands
the live ranges of these register arguments.
llvm-svn: 28744
2006-06-10 01:14:28 +00:00
Chris Lattner
c8587d4b81
Add PowerPC intrinsics to support dcbz[l]
...
llvm-svn: 28696
2006-06-06 21:29:23 +00:00
Chris Lattner
eb755fc1b3
Make PPC call lowering more aggressive, making the isel matching code simple
...
enough to be autogenerated.
llvm-svn: 28354
2006-05-17 19:00:46 +00:00
Chris Lattner
b1e9e37c58
Switch PPC over to a call-selection model where the lowering code creates
...
the copyto/fromregs instead of making the PPCISD::CALL selection code create
them. This vastly simplifies the selection code, and moves the ABI handling
parts into one place.
llvm-svn: 28346
2006-05-17 06:01:33 +00:00
Nate Begeman
4ca2ea5b43
JumpTable support! What this represents is working asm and jit support for
...
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.
llvm-svn: 27947
2006-04-22 18:53:45 +00:00
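A hedged C illustration of the "100% dense" case (the function is hypothetical): every value in a contiguous range has a case, so the switch can be compiled to an indexed load from a table of block addresses instead of a compare-and-branch chain.

/* Hypothetical dense switch: cases 0..4 with no gaps, ideal for a jump table. */
int classify(int x) {
  switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
  }
}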
Chris Lattner
34c901b50e
These are correctly encoded by the JIT. I checked :)
...
llvm-svn: 27810
2006-04-18 19:03:38 +00:00
Chris Lattner
9754d142a4
Implement an important entry from README_ALTIVEC:
...
If an altivec predicate compare is used immediately by a branch, don't
use a (serializing) MFCR instruction to read the CR6 register, which requires
a compare to get it back to CR's. Instead, just branch on CR6 directly. :)
For example, for:
void foo2(vector float *A, vector float *B) {
if (!vec_any_eq(*A, *B))
*B = (vector float){0,0,0,0};
}
We now generate:
_foo2:
mfspr r2, 256
oris r5, r2, 12288
mtspr 256, r5
lvx v2, 0, r4
lvx v3, 0, r3
vcmpeqfp. v2, v3, v2
bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
vxor v2, v2, v2
stvx v2, 0, r4
mtspr 256, r2
blr
LBB1_2: ; UnifiedReturnBlock
mtspr 256, r2
blr
instead of:
_foo2:
mfspr r2, 256
oris r5, r2, 12288
mtspr 256, r5
lvx v2, 0, r4
lvx v3, 0, r3
vcmpeqfp. v2, v3, v2
mfcr r3, 2
rlwinm r3, r3, 27, 31, 31
cmpwi cr0, r3, 0
beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
vxor v2, v2, v2
stvx v2, 0, r4
mtspr 256, r2
blr
LBB1_2: ; UnifiedReturnBlock
mtspr 256, r2
blr
This implements CodeGen/PowerPC/vec_br_cmp.ll.
llvm-svn: 27804
2006-04-18 17:59:36 +00:00
Chris Lattner
0a3d1bbca4
Add VRRC select support
...
llvm-svn: 27543
2006-04-08 22:45:08 +00:00
Chris Lattner
d7495ae7e9
Lower vector compares to VCMP nodes, just like we lower vector comparison
...
predicates to VCMPo nodes.
llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner
cb5ec07cc3
Use normal lvx for scalar_to_vector instead of lve*x. They do the exact
...
same thing and we have a dag node for the former.
llvm-svn: 27205
2006-03-28 01:43:22 +00:00
Chris Lattner
6961fc76bb
Codegen vector predicate compares.
...
llvm-svn: 27151
2006-03-26 10:06:40 +00:00
Chris Lattner
2a85fa1f79
Move all Altivec stuff out into a new PPCInstrAltivec.td file.
...
Add a bunch of patterns for different datatypes, e.g. bit_convert, undef and
zero vector support.
llvm-svn: 27117
2006-03-25 07:51:43 +00:00
Chris Lattner
1cb91b3cd9
Add some basic patterns for other datatypes
...
llvm-svn: 27116
2006-03-25 07:39:07 +00:00
Chris Lattner
f653cdd3f9
Add support for __builtin_altivec_vnmsubfp / vmaddfp
...
llvm-svn: 27112
2006-03-25 07:05:55 +00:00
Chris Lattner
2771e2c960
Codegen things like:
...
<int -1, int -1, int -1, int -1>
and
<int 65537, int 65537, int 65537, int 65537>
Using things like:
vspltisb v0, -1
and:
vspltish v0, 1
instead of using constant pool loads.
This implements CodeGen/PowerPC/vec_splat.ll:splat_imm_i{32|16}.
llvm-svn: 27106
2006-03-25 06:12:06 +00:00
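The two constants above line up with splatted element immediates; a small C check of the arithmetic (an illustration, not from the commit): splatting the signed byte -1 across the register makes every 32-bit lane -1, and splatting the signed halfword 1 makes every 32-bit lane 0x00010001, i.e. 65537.

#include <assert.h>

int main(void) {
  /* vspltish v0, 1: each 16-bit element is 1, so each 32-bit lane reads as 0x00010001. */
  assert(((1u << 16) | 1u) == 65537u);
  /* vspltisb v0, -1: every byte is 0xFF, so each 32-bit lane reads as -1. */
  assert((int)0xFFFFFFFFu == -1);
  return 0;
}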
Chris Lattner
d589dd1352
Fix a bad JIT encoding of VPERM. Why is VPERM D,A,B,C but vfmadd is D,A,C,B ??
...
llvm-svn: 27069
2006-03-24 18:24:43 +00:00
Chris Lattner
ab882abce8
add support for using vxor to build zero vectors. This implements
...
Regression/CodeGen/PowerPC/vec_zero.ll
llvm-svn: 27059
2006-03-24 07:48:08 +00:00
Chris Lattner
f5efddf80b
Gabor points out that we can't spell. :)
...
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Chris Lattner
81137629e0
Add PPC vector bit-convert support
...
llvm-svn: 26995
2006-03-23 19:54:27 +00:00
Chris Lattner
4a66d69433
When possible, custom lower 32-bit SINT_TO_FP to this:
...
_foo2:
extsw r2, r3
std r2, -8(r1)
lfd f0, -8(r1)
fcfid f0, f0
frsp f1, f0
blr
instead of this:
_foo2:
lis r2, ha16(LCPI2_0)
lis r4, 17200
xoris r3, r3, 32768
stw r3, -4(r1)
stw r4, -8(r1)
lfs f0, lo16(LCPI2_0)(r2)
lfd f1, -8(r1)
fsub f0, f1, f0
frsp f1, f0
blr
This speeds up Misc/pi from 2.44s->2.09s with LLC and from 3.01->2.18s
with llcbeta (16.7% and 38.1% respectively).
llvm-svn: 26943
2006-03-22 05:30:33 +00:00
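A plausible source for foo2 (an assumption; the commit shows only assembly): converting a signed 32-bit integer to float, which the new lowering does by sign-extending to 64 bits, storing to the stack, reloading as a double, and using fcfid plus frsp.

/* Hypothetical source matching the extsw/std/lfd/fcfid/frsp sequence above. */
float foo2(int x) {
  return (float)x;   /* 32-bit signed int -> float (SINT_TO_FP) */
}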
Chris Lattner
4e7371758f
Fix the JIT encoding of the VAForm_1 instructions, including vmaddfp
...
llvm-svn: 26935
2006-03-22 01:44:36 +00:00
Chris Lattner
d2132f87d7
When codegen'ing vector MUL using VFMADD, *add* the 0, don't *mul* the 0.
...
llvm-svn: 26913
2006-03-21 00:51:38 +00:00
Chris Lattner
a1bc294f0c
Fix a couple of bugs in permute/splat generation, thanks to Nate for actually
...
figuring these out! :)
llvm-svn: 26904
2006-03-20 18:26:51 +00:00
Chris Lattner
f96d523b8f
Fix the pattern for VADDUWM, add i32 splat
...
llvm-svn: 26901
2006-03-20 17:51:58 +00:00
Evan Cheng
89f3cff0f5
Use tblgen'd VECTOR_SHUFFLE selection code.
...
llvm-svn: 26900
2006-03-20 08:14:16 +00:00
Chris Lattner
a9a1313386
Add support for generating vspltw, instead of a vperm instruction with a
...
constant pool load. This generates significantly nicer code for splats.
When tblgen gets bugfixed, we can remove the custom selection code.
llvm-svn: 26898
2006-03-20 06:51:10 +00:00
Chris Lattner
382f356bd9
Check in some intermediate code that adds a skeleton for matching vsplt*
...
instructions
llvm-svn: 26894
2006-03-20 06:15:45 +00:00
Chris Lattner
93d99f9928
fix typo
...
llvm-svn: 26889
2006-03-20 05:05:55 +00:00