Chris Lattner bf3b57f221 optimize single MBB loops better. In particular, produce:
LBB1_57:        #bb207.i
        movl 72(%esp), %ecx
        movb (%ecx,%eax), %cl
        movl 80(%esp), %edx
        movb %cl, 1(%edx,%eax)
        incl %eax
        cmpl $143, %eax
        jne LBB1_57     #bb207.i
        jmp LBB1_64     #cond_next255.i

instead of:

LBB1_57:        #bb207.i
        movl 72(%esp), %ecx
        movb (%ecx,%eax), %cl
        movl 80(%esp), %edx
        movb %cl, 1(%edx,%eax)
        incl %eax
        cmpl $143, %eax
        je LBB1_64      #cond_next255.i
        jmp LBB1_57     #bb207.i

This eliminates a branch per iteration of the loop.  The old layout hurt PPC
in particular, because the extra branch meant another dispatch group for each
iteration of the loop.
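
For context, a loop of roughly this shape (a hypothetical reconstruction; the
actual source behind bb207.i is not part of this commit) lowers to a single
machine basic block like the one above:

        /* Hypothetical source sketch, not the real code for bb207.i:
           a simple byte-copy loop that becomes one machine basic block,
           so the back edge can be the conditional branch. */
        void copy_bytes(const char *src, char *dst) {
            for (int i = 0; i != 143; ++i)
                dst[i + 1] = src[i];
        }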

llvm-svn: 31530
2006-11-08 01:03:21 +00:00