1. The ST*UX instructions that store and update the stack pointer did not set the define/kill flags on R1. This became a problem when I activated post-RA scheduling (and had incorrectly adjusted the Frames-large test).
2. eliminateFrameIndex did not kill its scavenged temporary register, and this could cause the scavenger to exhaust all available registers (and its emergency spill slot) when there were a lot of CR values to spill. The 2010-02-12-saveCR test has been adjusted to check for this.
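For illustration only (a sketch, not the actual patch; the surrounding variables MBB, MBBI, II, dl, TII, RS, SPAdj, ScratchReg and Offset are assumed), this is roughly what both fixes look like at the MachineInstr level: a store-with-update such as STWUX must list R1 as a def because it modifies the stack pointer, and the last use of a scavenged temporary must carry a kill flag so the scavenger can reuse the register for the next CR spill.

    // Sketch only: the store-with-update defines R1 in addition to using it.
    BuildMI(MBB, MBBI, dl, TII.get(PPC::STWUX), PPC::R1) // R1 is defined (updated)
        .addReg(PPC::R0, RegState::Kill)                 // value being stored
        .addReg(PPC::R1)                                 // base register
        .addReg(ScratchReg, RegState::Kill);             // index register

    // Sketch only: kill the scavenged temporary at its last use so the
    // scavenger does not run out of registers (or its emergency spill slot).
    unsigned Tmp = RS->scavengeRegister(&PPC::GPRCRegClass, II, SPAdj);
    // ... materialize the CR value into Tmp ...
    BuildMI(MBB, II, dl, TII.get(PPC::STW))
        .addReg(Tmp, RegState::Kill)                     // last use of the temp
        .addImm(Offset)
        .addReg(PPC::R1);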
llvm-svn: 147359
Watch for empty symbol tables by doing much more error checking on
all mach-o symbol table load command values and the data obtained from them.
This avoids a crash that occurred when there was no string table.
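As a hedged sketch of the kind of checking involved (not the actual reader code; the struct mirrors mach-o's symtab_command fields and the helper name is made up):

    #include <cstdint>

    // Reject symbol-table load commands whose offsets or sizes do not fit in
    // the file, including the "symbols but no string table" case.
    struct SymtabCommand {          // mirrors mach-o's symtab_command layout
      uint32_t cmd, cmdsize;
      uint32_t symoff, nsyms;
      uint32_t stroff, strsize;
    };

    static bool isValidSymtab(const SymtabCommand &SC, uint64_t FileSize,
                              uint64_t NlistSize /* 12 or 16 bytes/entry */) {
      if (SC.nsyms == 0)
        return true;                // an empty symbol table is fine, just skip it
      if (SC.stroff == 0 || SC.strsize == 0)
        return false;               // symbols present but no string table
      if (SC.symoff >= FileSize || SC.stroff >= FileSize)
        return false;               // offsets point past the end of the file
      if (SC.nsyms > (FileSize - SC.symoff) / NlistSize)
        return false;               // symbol entries run past the end of the file
      if (SC.strsize > FileSize - SC.stroff)
        return false;               // string table runs past the end of the file
      return true;
    }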
llvm-svn: 147358
Fixed an issue where our new accelerator tables could cause a crash
when we got a full 32 bit hash match but a C string mismatch.
We had a member variable in DWARFMappedHash::Prologue named
"min_hash_data_byte_size" that would compute the byte size of HashData
so we could skip hash data efficiently. It started out with a byte size
value of 4. When we read the table in from disk, we would clear the
atom array and re-read it, but the byte size would still be set to 4.
Then, as we read each atom from disk, we would increment this count,
so the byte size of the HashData was off. That meant that when we got a
lookup whose 32 bit hash did match but whose C string did NOT match
(which is very rare), we would try to skip the data for that hash, add
an incorrect offset, get off in our parsing of the hash data, and crash.
To fix this I added a few safeguards:
1 - I now correctly clear the hash data byte size when we reset the atom array, using the new DWARFMappedHash::Prologue::ClearAtoms() function (see the sketch below).
2 - I now always let AppendAtom() calculate the byte size of the hash data (previously we sometimes computed it manually, which happened to be correct but was fragile).
3 - I also track whether each HashData entry has a fixed byte size, and "do the right thing" when we need to skip the data.
4 - If we do get off in the weeds, I make sure to return an error and stop any further parsing.
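A simplified sketch of the bookkeeping from points 1 through 3 (this is not the real DWARFMappedHash code; ByteSizeOfForm is a stand-in for whatever maps a DWARF form to its fixed encoding size):

    #include <cstdint>
    #include <vector>

    static uint32_t ByteSizeOfForm(uint16_t form) {
      return form == 0x06 /* DW_FORM_data4 */ ? 4 : 0;  // stub: 0 means variable size
    }

    struct Atom { uint16_t type; uint16_t form; };

    struct Prologue {
      std::vector<Atom> atoms;
      uint32_t hash_data_byte_size = 0;  // byte size of each HashData entry
      bool fixed_hash_data_size = true;  // false if any atom has a variable size

      void ClearAtoms() {
        atoms.clear();
        hash_data_byte_size = 0;         // the missing reset that caused the bug
        fixed_hash_data_size = true;
      }

      void AppendAtom(uint16_t type, uint16_t form) {
        atoms.push_back({type, form});
        const uint32_t size = ByteSizeOfForm(form);
        if (size == 0)
          fixed_hash_data_size = false;  // cannot blindly skip entries by size
        else
          hash_data_byte_size += size;   // always derived here, never set by hand
      }
    };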
llvm-svn: 147334
The capture tracker is now told about the specific use through which a value is
captured. This allows the tracker to look at that use, which may be
especially interesting for function calls.
Use this to fix 'nocapture' deduction in FunctionAttrs. The existing one does
not iterate until a fixpoint and does not guarantee that it produces the same
result regardless of iteration order. The new implementation builds up a graph
of how arguments are passed from function to function, and uses a bottom-up walk
on the argument-SCCs to assign nocapture. This gets us nocapture more often, and
does so rather efficiently and independently of iteration order.
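As an illustration (not from this change), a pair of mutually recursive functions shows why iteration order matters: neither argument can be proven nocapture by looking at one function in isolation, but a bottom-up walk over the argument-SCC can mark both at once, since the pointer only ever flows around the same cycle or is loaded through.

    // Both 'p' arguments should end up nocapture: each function only reads
    // through the pointer and passes it to the other function in the cycle.
    int even_sum(const int *p, int n);

    int odd_sum(const int *p, int n) {
      if (n <= 0)
        return 0;
      return p[n - 1] + even_sum(p, n - 1);  // 'p' flows back into the SCC
    }

    int even_sum(const int *p, int n) {
      if (n <= 0)
        return 0;
      return p[n - 1] + odd_sum(p, n - 1);
    }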
llvm-svn: 147327