into a temporary graph, remember it for later, then inline the tmp graph into
the call site.
When there are other call sites to the same set of functions, this permits
us to inline the temporary graph instead of all of the callees, turning an
N*M inlining situation into an N+M one.
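As a sketch of why the complexity drops, suppose there are M callees and N
call sites; Graph and inlineGraphInto below are hypothetical stand-ins for
the DSA interfaces, not the real API:

#include <vector>

struct Graph { /* stand-in for a DSA-style graph */ };

// Assumed helper: merges Src's nodes into Dst (hypothetical).
void inlineGraphInto(Graph &Dst, const Graph &Src) { (void)Dst; (void)Src; }

// Build the temporary graph once: M inline operations total.
Graph buildTmpGraph(const std::vector<Graph> &Callees) {
  Graph Tmp;
  for (const Graph &Callee : Callees)
    inlineGraphInto(Tmp, Callee);
  return Tmp;
}

// Reuse the temporary graph at every site: N more operations, N+M in all.
void inlineAtCallSites(std::vector<Graph> &Sites, const Graph &Tmp) {
  for (Graph &Site : Sites)
    inlineGraphInto(Site, Tmp);
}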
llvm-svn: 20036
This list does not provide the ability to go backwards (it's
more of an unordered collection, stored in the shape of a list).
This change means that use iterators are now only forward iterators, not
bidirectional.
This improves the memory usage of use lists from '5 + 4*#use' per value to
'1 + 4*#use'. While it would be better to reduce the multiplied factor,
I'm not smart enough to do so. This list also has slightly more efficient
operators for manipulating list nodes (a few fewer loads/stores), due to not
needing to iterate backwards through the list.
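A minimal sketch of such a forward-only intrusive list (field names are
illustrative, not the actual declarations): each node is four words, and
removal stays O(1) because Prev stores the address of the link that points
at the node, which is cheaper to maintain than a true backward pointer.

struct Value;
struct User;

struct Use {
  Value *Val;   // the value being used
  User  *Usr;   // the user that owns this operand
  Use   *Next;  // forward link: forward iteration only
  Use  **Prev;  // address of the pointer referencing this node
};

// Unlink U from its list: a few loads/stores, no backward traversal.
void removeFromList(Use *U) {
  *U->Prev = U->Next;
  if (U->Next)
    U->Next->Prev = U->Prev;
}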
This change reduces the memory footprint required to hold 176.gcc from
66.025M -> 57.687M, a 12.6% reduction. It also speeds up the compiler,
7.73% in the case of bytecode loading alone (release build loading 176.gcc).
llvm-svn: 19956
* Change the FunctionCalls and AuxFunctionCalls vectors into std::lists.
This makes many operations on these lists much more natural, and avoids
*extremely* expensive copying of DSCallSites (e.g. when moving nodes around
between lists, or erasing a node from anywhere but the end of the vector).
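The win comes from std::list's constant-time node operations; a hedged
illustration (DSCallSiteStub is a placeholder for DSCallSite):

#include <list>

struct DSCallSiteStub { int Data; };  // cheap stand-in for DSCallSite

void moveNode(std::list<DSCallSiteStub> &FunctionCalls,
              std::list<DSCallSiteStub> &AuxFunctionCalls) {
  if (FunctionCalls.empty()) return;
  // Move the front node into the other list: pointers are relinked and
  // no element is copied or destroyed.
  AuxFunctionCalls.splice(AuxFunctionCalls.end(), FunctionCalls,
                          FunctionCalls.begin());
}

void eraseAnywhere(std::list<DSCallSiteStub> &Calls,
                   std::list<DSCallSiteStub>::iterator I) {
  Calls.erase(I);  // O(1) wherever I points; a vector would shift elements
}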
With a profile build of analyze, this speeds up BU DS from 25.14s to
12.59s on 176.gcc. I expect that it would help TD even more, but I don't
have data for it.
This effectively eliminates removeIdenticalCalls and its children from the
profile, going from 6.53s to 0.27s.
llvm-svn: 19939
Based on the ilist changes, avoid allocating an entire Use object for the
end of the Use chain. This saves 8 bytes of memory for each Value allocated
in the program. For 176.gcc, this reduces us from 69.5M -> 66.0M, a 5.0%
memory savings.
llvm-svn: 19925
This file was schizophrenic when it came to representing sizes. In some
cases it represented them as 'unsigned's, which are not enough for 64-bit
hosts. In other cases, it represented them as uint64_t's, which are
inefficient for 32-bit hosts.
This patch unifies all of the sizes to use size_t instead.
llvm-svn: 19918
This file was schizophrenic when it came to representing sizes. In some
cases it represented them as 'unsigned's, which are not enough for 64-bit
hosts. In other cases, it represented them as uint64_t's, which are
inefficient for 32-bit hosts.
This patch unifies all of the sizes to use size_t instead.
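The trade-off, illustrated (this is commentary, not code from the patch):

#include <cstddef>
#include <cstdint>

unsigned TooSmall;   // commonly 32 bits even on 64-bit hosts: can overflow
uint64_t TooCostly;  // always wide enough, but two registers on 32-bit hosts
size_t   JustRight;  // pointer-sized on the host: wide enough and efficient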
llvm-svn: 19917
not invalidated on entry and on exit of the block. This fixes some N^2
behavior in common cases, and speeds up gcc another 5% to 22.35s.
llvm-svn: 19910
will cause us to miss cases where the input pointer to a load could be value
numbered to another load. Something like this:
%X = load int* %P1
%Y = load int* %P2
Those are obviously the same if P1/P2 are the same. The code this patch
removes attempts to handle that. However, since GCSE iterates, this doesn't
actually buy us anything: GCSE will first replace P1 or P2 with the other
one, then the load can be value numbered as equal.
Removing this code speeds up gcse a lot: on 176.gcc in debug mode, it goes
from 29.08s -> 25.73s, a 13% speedup.
llvm-svn: 19906
Adjust to changes in the User class; operand handling is very different.
PHI node and switch statements must handle explicit resizing of operand
lists.
llvm-svn: 19891
and num operands in the User class. This allows us to embed the operands
directly in the subclasses when possible. For example, for binary operators
we store the two operands in the derived class.
This has several effects:
1. it improves locality because the operands and instruction are together
2. it makes accesses to operands faster (one less load) if you access them
through the derived class pointer. For example this:
Value *GetBinaryOperatorOp(BinaryOperator *I, int i) {
  return I->getOperand(i);
}
Was compiled to:
_Z19GetBinaryOperatorOpPN4llvm14BinaryOperatorEi:
movl 4(%esp), %edx
movl 8(%esp), %eax
sall $4, %eax
movl 24(%edx), %ecx
addl %ecx, %eax
movl (%eax), %eax
ret
and is now compiled to:
_Z19GetBinaryOperatorOpPN4llvm14BinaryOperatorEi:
movl 8(%esp), %eax
movl 4(%esp), %edx
sall $4, %eax
addl %edx, %eax
movl 44(%eax), %eax
ret
Accesses through "Instruction*" are unmodified.
3. This reduces memory consumption (by about 3%) by eliminating 1 word of
vector overhead and a malloc header on a separate object.
4. This speeds up gccas about 10% (both debug and release builds) on
large things (such as 176.gcc). For example, it takes a debug build
from 172.9s -> 155.6s and a release gccas from 67.7s -> 61.8s.
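A hypothetical sketch of the layout change (illustrative names, not the
actual headers): the User records where its operands live, and a fixed-arity
subclass embeds that storage directly, so there is no separate allocation
and one less pointer chase when accessing operands through the derived type.

struct Value;
struct Use { Value *Val; /* plus use-list links */ };

struct User {
  Use      *OperandList;  // points at storage owned by the subclass
  unsigned  NumOperands;
};

struct BinaryOperatorSketch : User {
  Use Ops[2];             // embedded next to the instruction's own fields
  BinaryOperatorSketch() {
    OperandList = Ops;    // no vector header, no separate malloc
    NumOperands = 2;
  }
};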
llvm-svn: 19883
* Properly compile this:
struct a {};
int test() {
  struct a b[2];
  if (&b[0] != &b[1])
    abort ();
  return 0;
}
to 'return 0', not abort().
llvm-svn: 19875
If needed, this can be resurrected from CVS.
Note that several of the interfaces (e.g. the IPModRef ones) are subsumed
by generic AliasAnalysis interfaces that have been written since this code
was developed (and they are not DSA specific).
llvm-svn: 19864
large nested types over and over again to determine if they are sized or not.
Now, isSized() is able to make snap decisions about all concrete types, which
are a common occurrence (and includes all primitives).
On 177.mesa, this speeds up DSE from 39.5s -> 21.3s and GCSE from
13.2s -> 11.3s, reducing gccas time from 80s -> 61s (this is a debug build).
DSE and GCSE are still too slow on this testcase, but this is a simple
improvement.
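A hedged sketch of what a snap decision can look like (assumed kinds and
helper, not the actual Type API):

enum TypeKind { IntTy, FloatTy, StructTy, ArrayTy };

struct Type {
  TypeKind Kind;
  bool isPrimitive() const { return Kind == IntTy || Kind == FloatTy; }
};

// Stand-in for the expensive recursive walk over contained types.
bool isSizedSlow(const Type *Ty) { (void)Ty; return true; }

bool isSizedSketch(const Type *Ty) {
  if (Ty->isPrimitive())   // snap decision: primitives are always sized
    return true;
  return isSizedSlow(Ty);  // only aggregates pay for the full walk
}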
llvm-svn: 19800
all. This should speed up the X86 backend fairly significantly on integer
codes. Now if only we didn't have to compute livevar still... ;-)
llvm-svn: 19796
doesn't support certain directives, and symbols on cygwin are prefixed with
an underscore. This patch makes the necessary adjustments to the output.
llvm-svn: 19775
differences, which means that identical instructions (after stripping off
the first literal string) do not run any different code at all. On the X86,
this turns this code:
switch (MI->getOpcode()) {
case X86::ADC32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADC32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADD32mr: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::AND32mr: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mi: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mr: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mi: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mr: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::OR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ROL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::ROR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SAR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SBB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::SHL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHLD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SHR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHRD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SUB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mi: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST8mi: printOperand(MI, 4, MVT::i8); break;
case X86::XCHG32mr: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
}
into this:
switch (MI->getOpcode()) {
case X86::ADC32mi:
case X86::ADC32mr:
case X86::ADD32mi:
case X86::ADD32mr:
case X86::AND32mi:
case X86::AND32mr:
case X86::CMP32mi:
case X86::CMP32mr:
case X86::MOV32mi:
case X86::MOV32mr:
case X86::OR32mi:
case X86::OR32mr:
case X86::SBB32mi:
case X86::SBB32mr:
case X86::SHLD32mrCL:
case X86::SHRD32mrCL:
case X86::SUB32mi:
case X86::SUB32mr:
case X86::TEST32mi:
case X86::TEST32mr:
case X86::XCHG32mr:
case X86::XOR32mi:
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8:
case X86::ADD32mi8:
case X86::AND32mi8:
case X86::OR32mi8:
case X86::ROL32mi:
case X86::ROR32mi:
case X86::SAR32mi:
case X86::SBB32mi8:
case X86::SHL32mi:
case X86::SHR32mi:
case X86::SUB32mi8:
case X86::TEST8mi:
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
}
After this, the generated asmwriters look pretty much as though they were
generated by hand. This shrinks the X86 asmwriter.inc files from 55101 -> 39669
and 55429 -> 39551 bytes respectively, and PPC from 16766 -> 12859 bytes.
llvm-svn: 19760
strings starts out with a constant string, we emit the string first, using
a table lookup (instead of a switch statement).
Because this is usually the opcode portion of the asm string, the differences
between the instructions have now been greatly reduced. This allows many
more case statements to be grouped together.
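An illustrative sketch of the leading-string table (hypothetical table and
numbering, not the generated code): the constant prefix is fetched by opcode
index, so the remaining switch only covers what actually differs between
instructions.

#include <ostream>

static const char *const InstrPrefix[] = {
  "add ",   // opcode 0 (hypothetical numbering)
  "addc ",  // opcode 1
  "adde ",  // opcode 2
};

void emitPrefix(std::ostream &O, unsigned Opcode) {
  O << InstrPrefix[Opcode];  // a table lookup replaces one arm of the switch
}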
This patch also allows instruction cases to be grouped together when the
instruction patterns are exactly identical (common after the opcode string
has been ripped off), and when the differing operand is a MachineInstr
operand that needs to be formatted.
The end result of this is a mean and lean generated AsmPrinter!
llvm-svn: 19759
emitting code like this:
case PPC::ADD:  O << "add ";  printOperand(MI, 0, MVT::i64); O << ", ";
                printOperand(MI, 1, MVT::i64); O << ", ";
                printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDC: O << "addc "; printOperand(MI, 0, MVT::i64); O << ", ";
                printOperand(MI, 1, MVT::i64); O << ", ";
                printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDE: O << "adde "; printOperand(MI, 0, MVT::i64); O << ", ";
                printOperand(MI, 1, MVT::i64); O << ", ";
                printOperand(MI, 2, MVT::i64); O << '\n'; break;
...
Emit code like this:
case PPC::ADD:
case PPC::ADDC:
case PPC::ADDE:
...
  switch (MI->getOpcode()) {
  case PPC::ADD:  O << "add ";  break;
  case PPC::ADDC: O << "addc "; break;
  case PPC::ADDE: O << "adde "; break;
  ...
  }
  printOperand(MI, 0, MVT::i64);
  O << ", ";
  printOperand(MI, 1, MVT::i64);
  O << ", ";
  printOperand(MI, 2, MVT::i64);
  O << "\n";
  break;
This shrinks the PPC asm writer from 24785->15205 bytes (even though the new
asmwriter has much more whitespace than the old one), and the X86 printers shrink
quite a bit too. The important implication of this is that GCC no longer hits swap
when building the PPC backend in optimized mode. Thus this fixes PR448.
-Chris
llvm-svn: 19755
and more understandable. It also allows us to do simple things like fold
consecutive literal strings together. For example, instead of emitting this
for the X86 backend:
O << "adc" << "l" << " ";
we now generate this:
O << "adcl ";
*whoa* :)
This shrinks the X86 asmwriters from 62729->58267 and 65176->58644 bytes
for the intel/att asm writers respectively.
llvm-svn: 19749
since we are dirty, special case __main. This should fix the horrible
infinite-loop stuff that happens on linux-alpha when configuring llvm-gcc. It
might also help cygwin, who knows??
llvm-svn: 19729
The second folds operations into selects, e.g. (select C, (X+Y), (Y+Z))
-> (Y + (select C, X, Z)).
This occurs a few times across SPEC, e.g.:
            select  add/sub
  mesa:       83       0
  povray:      5       2
  gcc:         4       2
  parser:      0      22
  perlbmk:    13      30
  twolf:       0       3
llvm-svn: 19706
pressure, not decreases register pressure. Fix a problem where we accidentally
swapped the operands of SHLD, which caused fourinarow to fail; it now passes.
llvm-svn: 19697
typically cost 1 cycle) instead of shld/shrd instructions (which typically
take 6 or more cycles). This also saves code space.
For example, instead of emitting:
rotr:
mov %EAX, DWORD PTR [%ESP + 4]
mov %CL, BYTE PTR [%ESP + 8]
shrd %EAX, %EAX, %CL
ret
rotli:
mov %EAX, DWORD PTR [%ESP + 4]
shrd %EAX, %EAX, 27
ret
Emit:
rotr32:
mov %CL, BYTE PTR [%ESP + 8]
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, %CL
ret
rotli32:
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, 27
ret
We also emit byte rotate instructions which do not have a sh[lr]d counterpart
at all.
llvm-svn: 19692
select operations or to shifts that are by a constant. This automatically
implements (with no special code) all of the special cases for shift by 32,
shift by < 32 and shift by > 32.
llvm-svn: 19679
Add a hook to find out how the target handles shift amounts that are out of
range. Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc), or they extend the shift (PPC).
This defaults to undefined, which is conservatively correct.
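A sketch of such a hook with assumed names (the real hook lives on the
target description; these declarations are illustrative only):

enum ShiftAmountHandling {
  Undefined,  // default: no assumption about out-of-range shift amounts
  Mask,       // amount is masked to the register width (X86, Alpha, ...)
  Extend      // the shift extends past the register width (PPC)
};

struct TargetShiftInfoSketch {
  // Conservative default; targets override to enable better expansions.
  virtual ShiftAmountHandling getShiftAmountHandling() const {
    return Undefined;
  }
  virtual ~TargetShiftInfoSketch() {}
};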
llvm-svn: 19677
Add a hook to find out how the target handles shift amounts that are out of
range. Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc), or they extend the shift (PPC).
This defaults to undefined, which is conservatively correct.
llvm-svn: 19676
do it. This results in better code on X86 for floats (because if strict
precision is not required, we can elide some more expensive double -> float
conversions like the old isel did), and allows other targets to emit
CopyFromRegs that are not legal for arguments.
llvm-svn: 19668