This change replaces getTypeStoreSize with getTypeAllocSize in AddressSanitizer
instrumentation for stack allocations.
One case where the old behaviour produced undesired results is an optimization in
the InstCombine pass (PromoteCastOfAllocation), which can replace alloca(T) with
alloca(S), where S has the same AllocSize but a smaller StoreSize. Another
case is memcpy(long double => long double), where ASan will poison bytes 10-15
of a stack-allocated long double (StoreSize 10, AllocSize 16,
sizeof(long double) = 16).
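A minimal sketch of the size difference driving the change, using the modern DataLayout API (the class was still called TargetData at the time of this commit); the layout string below is an illustrative x86-64-style one:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

int main() {
  llvm::LLVMContext Ctx;
  // Illustrative x86-64-style layout string: f80 gets 128-bit ABI alignment.
  llvm::DataLayout DL("e-m:e-i64:64-f80:128-n8:16:32:64-S128");
  llvm::Type *LongDouble = llvm::Type::getX86_FP80Ty(Ctx);
  uint64_t Store = static_cast<uint64_t>(DL.getTypeStoreSize(LongDouble)); // 10
  uint64_t Alloc = static_cast<uint64_t>(DL.getTypeAllocSize(LongDouble)); // 16
  llvm::outs() << "StoreSize: " << Store << ", AllocSize: " << Alloc << "\n";
  // Redzones computed from StoreSize would poison bytes 10-15, which a
  // 16-byte memcpy of the long double legitimately touches.
  return 0;
}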
See http://llvm.org/bugs/show_bug.cgi?id=12047 for more context.
llvm-svn: 151887
hashable data. This matters when we have pair<T*, U*> as a key, which is
quite common in DenseMap, etc. To that end, we need to detect when this
is safe. The requirements on a generic std::pair<T, U> are:
1) Both T and U must satisfy the existing is_hashable_data trait. Note
that this includes the requirement that T and U have no internal
padding bits or other bits not contributing directly to equality.
2) The alignment constraints of std::pair<T, U> do not require padding
between consecutive objects.
3) The alignment constraints of U and the size of T do not conspire to
require padding between the first and second elements.
Grow two somewhat magical traits to detect this by forming a POD
structure and inspecting offset artifacts on it. Hopefully this won't
cause any compilers to panic.
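A rough sketch of the detection idea (the struct and trait names here are illustrative, not the actual ones in the hashing header):

#include <cstddef>

// Plain aggregate mirroring std::pair<T, U>'s layout.
template <typename T, typename U> struct pair_probe {
  T first;
  U second;
};

// The pair can be hashed as raw data only if 'second' starts right after
// 'first' (no interior padding) and there is no tail padding separating
// consecutive pairs.
template <typename T, typename U> struct pair_is_dense {
  typedef pair_probe<T, U> probe;
  static const bool value =
      sizeof(probe) == sizeof(T) + sizeof(U) &&
      offsetof(probe, second) == sizeof(T);
};

static_assert(pair_is_dense<int *, int *>::value, "pointer pairs pack densely");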
Added and adjusted tests now that pairs, even nested pairs, are treated
as just sequences of data.
Thanks to Jeffrey Yasskin for helping me sort through this and reviewing
the somewhat subtle traits.
llvm-svn: 151883
an open question of whether we can do better than this by treating pairs
as boring data containers and directly hashing the two subobjects. This
at least makes the API reasonable.
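A minimal usage sketch under the new overloads (assuming LLVM's ADT/Hashing.h; the wrapper function name is made up):

#include "llvm/ADT/Hashing.h"
#include <utility>

// With the std::pair overload in place, a pair key hashes directly, and
// nested pairs compose the same way.
llvm::hash_code hashPairKey(const std::pair<int *, unsigned> &Key) {
  return llvm::hash_value(Key);
}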
In order to make this change, I reorganized the header a bit. I lifted
the declarations of the hash_value functions up to the top of the header
with their doxygen comments as these are intended for users to interact
with. They shouldn't have to wade through implementation details. I then
defined them at the very end so that they could be defined in terms of
hash_combine or any other hashing infrastructure.
Added various pair-hashing unittests.
llvm-svn: 151882
In this instance we are generating the tail-call during legalizeDAG. The 2nd
floor call can't be a tail call because it clobbers %xmm1, which is defined by
the first floor call. The first floor call can't be a tail-call because it's
not in the tail position. The only reasonable way I could think to fix this
in a target-independent manner was to check for glue logic on the copy reg.
rdar://10930395
llvm-svn: 151877
the hash_code. I'm not sure what I was thinking here; the use cases for
special values are in the *keys*, not in the hashes of those keys.
We can always resurrect this if needed, or clients can accomplish the
same goal themselves. This makes the general case somewhat faster (~5
cycles faster on my machine) and smaller with less branching.
llvm-svn: 151865
The inline table needs to be constructed ahead of time so that it doesn't try to
create new strings while we're emitting everything.
This reverts commit a8ff9bccb399183cdd5f1c3cec2bda763664b4b0.
llvm-svn: 151864
floating point equality comparisons into integer ones with -ffast-math. The
issue is that the optimization caused +0.0 and -0.0 to compare as unequal.
Now the optimization is only done when one side is known to be 0.0. The other
side's sign bit is masked off for the comparison.
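A rough C++ sketch of the resulting comparison (illustrative, not the compiler's actual output): comparing against a known 0.0 becomes an integer test with the sign bit masked off, so +0.0 and -0.0 still compare equal to zero.

#include <cstdint>
#include <cstring>

bool floatIsZero(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof Bits); // reinterpret the float's bit pattern
  return (Bits & 0x7fffffffu) == 0;    // mask off the sign bit: +0.0 == -0.0
}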
rdar://10964603
llvm-svn: 151861
to do more invasive refactoring here to get FoldingSet to use size_t or
even hash_code directly, but for now this is a good first step to remove
Yet Another Hashing Algorithm from LLVM.
llvm-svn: 151859
This function could have r12 live across a function call when compiling
thumb1 code.
The test case for this is not included because it is very long. It must
provoke emergency spilling near a function call. The behavior is
provoked by MultiSource/Applications/JM/lencod, and it triggers an
assertion in the scavenger.
<rdar://problem/10963642>
llvm-svn: 151855
fixups that are being used to determine section offsets. Reduces
the total number of fixups by 50% for a non-trivial testcase.
Part of rdar://10413936
llvm-svn: 151852
and stores was added.
- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
floating point instructions.
- MIPS does not have support for f64 unaligned load or store instructions.
llvm-svn: 151843
so that the test will not fail when run on an Intel Atom
processor, where the Atom scheduler produces an instruction sequence
different from the normally expected one.
llvm-svn: 151832
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.
Some of the highlights of this change:
-- Significantly higher quality hashing algorithm with very well
distributed results, and extremely few collisions. Should be close to
a checksum for up to 64-bit keys. Very little clustering or clumping of
hash codes, to better distribute load on probed hash tables.
-- Built-in support for reserved values.
-- Simplified API that composes cleanly with other C++ idioms and APIs.
-- Better scaling performance as keys grow. This is the fastest
algorithm I've found and measured for moderately sized keys (such as
show up in some of the uniquing and folding use cases).
-- Support for enabling per-execution seeds to prevent table ordering
or other artifacts of the hashing algorithm from impacting the output of
LLVM. The seeding would make each run different and highlight these
problems during bootstrap.
This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, doing even better than the original
CityHash algorithm.
I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.
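A minimal usage sketch of the new interfaces (names as declared in LLVM's Hashing.h; the wrapper function is made up):

#include "llvm/ADT/Hashing.h"
#include <string>
#include <vector>

// hash_value hashes a single value, hash_combine mixes heterogeneous fields,
// and hash_combine_range folds a whole sequence.
llvm::hash_code hashRecord(int Id, const std::string &Name,
                           const std::vector<unsigned> &Operands) {
  return llvm::hash_combine(Id, Name,
                            llvm::hash_combine_range(Operands.begin(),
                                                     Operands.end()));
}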
My only immediate concern with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only faster algorithms produce
lower-quality results, but it is likely there is a better compromise
than the current one.
Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.
Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.
llvm-svn: 151822
Allows us to de-virtualize the function and provides access to it in
the instruction printer, which is useful for handling composite
physical registers (e.g., ARM register lists).
llvm-svn: 151815
This reverts commit 151760.
We want to move getSubReg() from TargetRegisterInfo into MCRegisterInfo,
but to do that, the type of the lookup table needs to be the same for
all targets.
llvm-svn: 151814
This allows us to make TRC non-polymorphic and value-initializable, eliminating a huge static
initializer and a ton of cruft from the generated code.
Shrinks ARMBaseRegisterInfo.o by ~100k.
llvm-svn: 151806
Simply treat bundles as instructions. Spill code is inserted between
bundles, never inside a bundle. Rewrite all operands in a bundle at
once.
Don't attempt any memory operand folding inside bundles.
llvm-svn: 151787
* Add begin_dynamic_table() / end_dynamic_table() private interface to ELFObjectFile.
* Add begin_libraries_needed() / end_libraries_needed() interface to ObjectFile, for grabbing the list of needed libraries for a shared object or dynamic executable.
* Implement this new interface completely for ELF, leave stubs for COFF and MachO.
* Add 'llvm-readobj' tool for dumping ObjectFile information.
llvm-svn: 151785
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().
Also provide a const version. C++ is great for touch typing practice.
llvm-svn: 151782
- The search bounds are constant; in the worst case (ARM target) it will scan over 30 uint16_ts.
- This method isn't very hot; I had trouble finding a testcase where it's called more than a dozen times (no perf impact).
llvm-svn: 151773
Instead of nested switch statements, use a lookup table. On ARM, this replaces
a 23k (x86_64 release build) function with a 16k table. It's not unlikely to
be faster, as well.
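An illustrative sketch of the shape of the change (made-up opcodes and sizes, not the actual ARM tables): a nested switch over two keys becomes one statically initialized table.

#include <cstdint>

enum { NumOpcodes = 3, NumVariants = 4 };

// Before: switch (Opcode) { ... switch (Variant) { ... } ... }
// After: a single lookup indexed by both keys.
static const uint8_t EncodingSize[NumOpcodes][NumVariants] = {
    {2, 2, 4, 4},
    {2, 4, 4, 4},
    {4, 4, 4, 4},
};

unsigned getEncodingSize(unsigned Opcode, unsigned Variant) {
  return EncodingSize[Opcode][Variant];
}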
llvm-svn: 151751
std::vector.
- Good for 1-2% speedup on writing PCH for Cocoa.h.
- Clang-side API match to follow shortly; there wasn't an easy way to make this
non-breaking.
llvm-svn: 151750
Rename ST_External to ST_Unknown, and slightly change its semantics. It now only indicates that the symbol's type
is unknown, not that the symbol is undefined. (For that, use ST_Undefined).
llvm-svn: 151696
This function does more or less the same as
MI::readsWritesVirtualRegister(), but it supports bundles as well.
It also determines if any constraint requires reading and writing
operands to use the same register. Most clients want to know.
Use the more modern MO.readsReg() instead of trying to sort out undefs
and partial redefines. Stop supporting the extra full <imp-def> operand
as an alternative to <def,undef> sub-register defines.
llvm-svn: 151690
Extract a base class and provide four specific sub-classes for iterating
over const/non-const bundles/instructions.
This eliminates the mystery bool constructor argument.
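A rough sketch of the pattern (class names are illustrative, not the actual MachineBasicBlock iterators): a shared base owns the traversal mode and thin subclasses fix it, so call sites no longer pass a bare bool.

// Base class owning the traversal logic; the flag is set only by subclasses.
class iterator_base {
protected:
  bool PerBundle;
  explicit iterator_base(bool PerBundle) : PerBundle(PerBundle) {}
  // ... shared increment / comparison logic ...
};

// Visits every instruction, including those inside bundles.
struct instr_iterator : iterator_base {
  instr_iterator() : iterator_base(false) {}
};

// Visits bundle heads only, treating each bundle as one unit.
struct bundle_iterator : iterator_base {
  bundle_iterator() : iterator_base(true) {}
};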
llvm-svn: 151684
find root names on Unix.
- This fixes make_absolute to not basically always call current_path() on
Unix systems.
- I think the API probably needs cleanup in this area, but I'll let Michael
handle that.
llvm-svn: 151681
Without this hook, functions w/ a completely empty body (including no
epilogue) will cause an MCEmitter assertion failure.
For example,
define internal fastcc void @empty_function() {
unreachable
}
rdar://10947471
llvm-svn: 151673
SlotIndexes are not assigned to instructions inside bundles, but it is
still valid to look up the index of those instructions.
The reverse getInstructionFromIndex() will return the first instruction
in the bundle.
llvm-svn: 151672
methods are no longer needed now that LinearScan has gone away.
(Contains tweaks to trivialSpillEverywhere to enable the removal of getNewVRegs.)
llvm-svn: 151658
debug info for assembly files. We were already doing the right thing when
producing debug info for C/C++.
ELF linkers don't know DWARF, so they depend on these relocations to produce
valid DWARF output.
llvm-svn: 151655
Clang builds. The detection logic for compilers that support the warning
isn't working. Rafael is going to investigate it, but didn't want people
to have to wade through build spam until then.
llvm-svn: 151649
To avoid problems with zero shifts when getting the bits that move between
words, we use a trick: first shift by amount-1, then do another shift by one.
When the amount is 0 (and the size is 32), we first shift by 31 and then by
one, instead of by 32.
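A host-side sketch of the trick (illustrative; the commit applies the same idea when legalizing 64-bit shifts into 32-bit operations):

#include <cstdint>

// Left-shift a 64-bit value held as two 32-bit words by Amt (0..31). The bits
// crossing between words would need "Lo >> (32 - Amt)", which is undefined
// when Amt == 0, so split it into a shift by (31 - Amt) followed by one more.
uint64_t shiftLeft64(uint32_t Lo, uint32_t Hi, unsigned Amt) {
  uint32_t Carry = (Lo >> (31 - Amt)) >> 1; // safe even when Amt == 0
  uint32_t NewHi = (Hi << Amt) | Carry;
  uint32_t NewLo = Lo << Amt;
  return (uint64_t(NewHi) << 32) | NewLo;
}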
Also fix a latent bug that emitted the low and high words in the wrong order
when shifting right.
Fixes PR12113.
llvm-svn: 151637
When the GEP index is a vector of pointers, the code that calculated the size
of the element started from the vector type, and not the contained pointer type.
As a result, instead of looking at the data element pointed by the vector, this
code used the size of the vector. This works for 32-bit members (on 32-bit
systems), but not for other types. Added code to peel the vector type and
added a test.
llvm-svn: 151626
the processor keeps a return address stack (RAS) which stores the address
and the instruction execution state of the instruction after a function-call
type branch instruction.
Calling a "noreturn" function with normal call instructions (e.g. bl) can
corrupt the RAS and cause 100% return misprediction, so LLVM should use an
unconditional branch instead, i.e.
mov lr, pc
b _foo
The "mov lr, pc" is issued in order to get proper backtrace.
rdar://8979299
llvm-svn: 151623
When an outgoing call takes more than 2k of arguments on the stack, we
don't allocate that call frame in the prolog, but adjust the stack
pointer immediately before the call instead.
This causes problems with the emergency spill slot because PEI can't
track stack pointer adjustments on the second pass, and if the outgoing
arguments are too big, SP can't be used to reach the emergency spill
slot at all.
Work around these problems by ensuring there is a base or frame pointer
that can be used to access the emergency spill slot.
<rdar://problem/10917166>
llvm-svn: 151604
We rely on the linker to resolve calls to the appropriate BL/BLX instruction
to make interworking function correctly. It uses the symbol in the
relocation to do that, so we need to be careful about being too clever.
To enable this for ARM mode, split the BL/BLX fixup kind off from the
unconditional-branch fixups.
rdar://10927209
llvm-svn: 151571
After the SlotIndex slot names were updated, it is possible to apply
stricter checks to live intervals.
Also treat bundles as bags of operands when checking live intervals.
llvm-svn: 151531
The MIOperands iterator can visit operands on a single instruction, or
all operands in a bundle. This simplifies code like the register
allocator that treats bundles as a set of operands.
llvm-svn: 151529
value numbers to be assigned when calculating any particular value number.
Enhance the logic that detects new value numbers to take this into account,
for a tiny compile time speedup. Fix a comment typo while there.
llvm-svn: 151522
%cmp (eg: A==B) we already replace %cmp with "true" under the true edge, and
with "false" under the false edge. This change enhances this to replace the
negated compare (A!=B) with "false" under the true edge and "true" under the
false edge. Reported to improve perlbench results by 1%.
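An illustrative source-level view of the extra propagation (made-up function, not from the patch):

// Under the true edge of (A == B), the negated compare (A != B) now folds to
// false; under the false edge it folds to true.
int example(int A, int B) {
  if (A == B) {
    if (A != B)   // folded to false on this edge
      return 1;   // becomes unreachable
    return 2;
  }
  return 3;       // here (A != B) is known to be true
}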
llvm-svn: 151517
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.
llvm-svn: 151469
Reverting this because it breaks static linking on ppc64. Specifically, it may be linkonce_odr functions that are the problem.
With this patch, if you link statically, calls to some functions end up calling their descriptor addresses instead
of their entry points. This causes the execution to fail with SIGILL (b/c the descriptor address just
has some pointers, not code).
llvm-svn: 151433
[Joe Groff] Hi everyone. My previous patch applied as r151382 had a few problems:
Clang raised a warning, and X86 LowerOperation would assert out for
fptoui f64 to i32 because it improperly lowered to an illegal
BUILD_PAIR. Here's a patch that addresses these issues. Let me know if
any other changes are necessary. Thanks.
llvm-svn: 151432
are optimization hints, but at -O0 we're not optimizing. This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594
llvm-svn: 151429
uses of the vreg, since the old kills may no longer be valid. This was causing
-verify-machineinstrs to complain about uses after kills, and could potentially
have been causing subtle register allocation issues, but I haven't come across a
test case yet.
llvm-svn: 151425
reserving a physical register ($gp or $28) for that purpose.
This will completely eliminate loads that restore the value of $gp after every
function call, if the register allocator assigns a callee-saved register, or
eliminate unnecessary loads if it assigns a temporary register.
example:
.cpload $25 // set $gp.
...
.cprestore 16 // store $gp to stack slot 16($sp).
...
jalr $25 // function call. clobbers $gp.
lw $gp, 16($sp) // not emitted if callee-saved reg is chosen.
...
lw $2, 4($gp)
...
jalr $25 // function call.
lw $gp, 16($sp) // not emitted if $gp is not live after this instruction.
...
llvm-svn: 151402
Add support for a missed case when the symbols in a difference
expression are in the same section but not the same fragment.
rdar://10924681
llvm-svn: 151345
variable declaration as an argument because we want that address
anyhow for our debug information.
This seems to fix rdar://9965111, at least we have more debug
information than before and from reading the assembly it appears
to be the correct location.
llvm-svn: 151335
I'll let the buildbots determine the compile time improvements from this
change, but 464.h264ref has 5% faster codegen at -O2.
This patch does cause some assembly changes. Branch folding can make
different decisions about calls with dead return values.
CriticalAntiDepBreaker may choose different registers because its
liveness tracking is affected. MachineCopyPropagation may sometimes
leave a dead copy behind.
llvm-svn: 151331
The tied source operand of tMUL is the second source operand, not the
first like every other two-address thumb instruction. Special case it
in the size reduction pass to make sure we create the tMUL instruction
properly.
llvm-svn: 151315
For historical reasons, the Interpreter's external entries had the prefix "lle_X_" with C linkage, even for well-known entries in EE/Interpreter.
Now, at least on ToT, they are resolved via the FuncNames[] mapper.
Their symbols are no longer expected to be exported.
Clang r150128 has introduced the warning <"%0 has C-linkage specified, but returns user-defined type %1 which is incompatible with C">.
llvm-svn: 151312
Assuming that a single std::set node adds 3 control words, a bitvector
can store (3*8+4)*8=224 registers in the allocated memory of a single
element in the std::set (x86_64). Also we don't have to call malloc
for every register added.
llvm-svn: 151269
rdar://10873652
As part of this I updated the llvm-mc disassembler C API to always call the
SymbolLookUp callback even if there is no getOpInfo callback. If there is a
getOpInfo callback, it is tried first, and if it produces no information then
SymbolLookUp is called. I also made the code more robust by memset(3)'ing the
LLVMOpInfo1 struct to zero and then setting SymbolicOp.Value before the call
to getOpInfo. Also, don't use any values from the LLVMOpInfo1 struct if
getOpInfo returns 0, and don't use the ReferenceType or ReferenceName values
from SymbolLookUp if it returns NULL. rdar://10873563 and rdar://10873683
For the X86 target, also fixed bugs so the annotations get printed.
Also fixed a few places in the ARM target that were not producing symbolic
operands for some instructions. rdar://10878166
llvm-svn: 151267
Before register allocation, instructions can be moved across calls in
order to reduce register pressure. After register allocation, we don't
gain a lot by moving callee-saved defs across calls. In fact, since the
scheduler doesn't have a good idea how registers are used in the callee,
it can't really make good scheduling decisions.
This changes the schedule in two ways: 1. Latencies to call uses and
defs are no longer accounted for, causing some random shuffling around
calls. This isn't really a problem since those uses and defs are
inaccurate proxies for what happens inside the callee. They don't
represent registers used by the call instruction itself.
2. Instructions are no longer moved across calls. This didn't happen
very often, and the scheduling decision was made on dubious information
anyway.
As with any scheduling change, benchmark numbers shift around a bit,
but there is no positive or negative trend from this change.
This makes the post-ra scheduler 5% faster for ARM targets.
The secret motivation for this patch is the introduction of register
mask operands representing call clobbers. The most efficient way of
handling regmasks in ScheduleDAGInstrs is to model them as barriers for
physreg live ranges, but not for virtreg live ranges. That's fine
pre-ra, but post-ra it would have the same effect as this patch.
llvm-svn: 151265
it with memcpy. This also fixes a problem on big-endian hosts, where
addUnaligned would return different results depending on the alignment
of the data.
llvm-svn: 151247
value is zero. Instead of a cmov + op, issue a conditional op. e.g.
cmp r9, r4
mov r4, #0
moveq r4, #1
orr lr, lr, r4
should be:
cmp r9, r4
orreq lr, lr, #1
That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).
It's possible to extend this to ADD and SUB but I don't think they are common.
rdar://8659097
llvm-svn: 151224