Problem: LLVM needs more function attributes than currently available (32 bits).
One such proposed attribute is "address_safety", which indicates that a function is being checked for address safety (by AddressSanitizer, SAFECode, etc.).
Solution:
- extend the Attributes from 32 bits to 64 bits
- wrap the object into a class so that unsigned is never erroneously used instead
- change "unsigned" to "Attributes" throughout the code, including one place in clang.
- the class has no "operator uint64_t()", but it does have "uint64_t Raw()" to support packing/unpacking.
- the class has a "safe operator bool()" to support the common idiom: if (Attributes attr = getAttrs()) useAttrs(attr); (see the sketch after this list)
- The CTOR from uint64_t is marked explicit, so I had to add a few explicit CTOR calls
- Add the new attribute "address_safety". Doing it in the same commit to check that attributes beyond first 32 bits actually work.
- Some of the functions from the Attribute namespace are worth moving inside the class, but I'd prefer to have it as a separate commit.
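A minimal sketch of the wrapper described above (hypothetical code, not the actual LLVM class; getAttrs/useAttrs are placeholders):

#include <cstdint>

class Attributes {
  uint64_t Bits;
public:
  Attributes() : Bits(0) {}
  explicit Attributes(uint64_t Val) : Bits(Val) {} // explicit: no silent unsigned -> Attributes
  uint64_t Raw() const { return Bits; }            // bits for packing/unpacking only
  // "Safe bool": usable in conditions without allowing conversion to integers.
  typedef void (Attributes::*SafeBool)() const;
  void IsNonEmpty() const {}
  operator SafeBool() const { return Bits ? &Attributes::IsNonEmpty : 0; }
};

// The idiom from the list above still compiles:
//   if (Attributes attr = getAttrs()) useAttrs(attr);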
Tested:
"make check" on Linux (32-bit and 64-bit) and Mac (10.6)
built and ran SPEC CPU 2006 on Linux with clang -O2.
This change will break the clang build in lib/CodeGen/CGCall.cpp.
The following patch will fix it.
llvm-svn: 148553
LSR has gradually been improved to more aggressively reuse existing code, particularly existing phi cycles. This exposed problems with the SCEVExpander's sloppy treatment of its insertion point. I applied some rigor to the insertion point problem that will hopefully avoid an endless bug cycle in this area. Changes:
- Always use properlyDominates to check safe code hoisting.
- The insertion point provided to SCEV is now considered a lower bound. This is usually a block terminator or the use itself. Under no circumstance may SCEVExpander insert below this point.
- LSR is responsible for finding a "canonical" insertion point across expansion of different expressions.
- Robust logic to determine whether IV increments are in "expanded" form and/or can be safely hoisted above some insertion point (see the sketch below).
Fixes PR11783: SCEVExpander assert.
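A hedged sketch of the hoisting rule above (illustrative only, not the actual SCEVExpander code; the function name and header paths are assumptions): an IV increment may be hoisted to the insertion point only if each of its instruction operands properly dominates that point.

#include "llvm/IR/Dominators.h"    // path differs in older trees
#include "llvm/IR/Instructions.h"

using namespace llvm;

static bool canHoistAbove(const DominatorTree &DT, const Instruction *IncV,
                          const Instruction *InsertPos) {
  for (unsigned i = 0, e = IncV->getNumOperands(); i != e; ++i) {
    const Instruction *OpInst = dyn_cast<Instruction>(IncV->getOperand(i));
    if (!OpInst)
      continue; // constants and arguments impose no constraint
    // Conservatively require strict block dominance of each operand's block.
    if (!DT.properlyDominates(OpInst->getParent(), InsertPos->getParent()))
      return false;
  }
  return true;
}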
llvm-svn: 148535
to the instruction right after the last instruction in the bundle.
- Add a finalizeBundle() variant that doesn't specify LastMI. Instead, the code
will find the last instruction in the bundle by following the 'InsideBundle'
marker. This is useful in case bundles are formed early (i.e. during MI
scheduling) but finalized later (i.e. after the register allocator has finished
rewriting virtual registers with physical registers). (See the usage sketch below.)
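A usage sketch of the two forms (signatures approximate those in llvm/CodeGen/MachineInstrBundle.h; a real caller uses one or the other):

#include "llvm/CodeGen/MachineInstrBundle.h"

using namespace llvm;

void finalizeEitherWay(MachineBasicBlock &MBB,
                       MachineBasicBlock::instr_iterator FirstMI,
                       MachineBasicBlock::instr_iterator AfterLastMI,
                       bool KnowsEnd) {
  if (KnowsEnd) {
    // Explicit form: the last argument points right *after* the final
    // instruction in the bundle (exclusive end).
    finalizeBundle(MBB, FirstMI, AfterLastMI);
  } else {
    // New variant: follow the 'InsideBundle' markers to find the end, useful
    // when the bundle was formed earlier (e.g. during MI scheduling).
    finalizeBundle(MBB, FirstMI);
  }
}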
llvm-svn: 148444
This SelectionDAG node will be attached to call nodes by LowerCall(),
and eventually becomes a MO_RegisterMask MachineOperand on the
MachineInstr representing the call instruction.
LowerCall() will attach a register mask that depends on the calling
convention.
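A hedged sketch of that flow (the helper is made up, and getCallPreservedMask's exact signature has varied across LLVM versions): LowerCall picks a convention-dependent mask and appends a register-mask operand to the call node.

#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/Target/TargetRegisterInfo.h" // llvm/CodeGen/TargetRegisterInfo.h in newer trees

using namespace llvm;

static void addCallPreservedMask(SelectionDAG &DAG, const TargetRegisterInfo &TRI,
                                 CallingConv::ID CC,
                                 SmallVectorImpl<SDValue> &Ops) {
  const uint32_t *Mask = TRI.getCallPreservedMask(CC); // depends on the convention
  Ops.push_back(DAG.getRegisterMask(Mask));            // lowered to MO_RegisterMask
}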
llvm-svn: 148436
When set, this bit indicates that a register is completely defined by
the value of its sub-registers.
Use the CoveredBySubRegs property to infer which super-registers are
call-preserved given a list of callee-saved registers. For example, the
ARM registers D8-D15 are callee-saved. This now automatically implies
that Q4-Q7 are call-preserved.
Conversely, Win64 callees save XMM6-XMM15, but the corresponding
YMM6-YMM15 registers are not call-preserved because they are not fully
defined by their sub-registers.
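A hypothetical illustration of the inference (standalone code, not TableGen's actual implementation):

#include <set>
#include <string>
#include <vector>

struct RegInfo {
  std::string Name;
  std::vector<std::string> SubRegs;
  bool CoveredBySubRegs; // e.g. true for ARM's Q4 (= D8+D9), false for YMM6
};

static bool isCallPreserved(const RegInfo &Super,
                            const std::set<std::string> &CalleeSaved) {
  if (!Super.CoveredBySubRegs || Super.SubRegs.empty())
    return false;                   // some part of the register is not covered
  for (size_t i = 0; i < Super.SubRegs.size(); ++i)
    if (!CalleeSaved.count(Super.SubRegs[i]))
      return false;                 // a covered part is clobbered
  return true;                      // e.g. Q4, because D8 and D9 are saved
}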
llvm-svn: 148363
Targets can now add CalleeSavedRegs defs to their *CallingConv.td file.
TableGen will use this to create a *_SaveList array suitable for
returning from getCalleeSavedRegs() as well as a *_RegMask bit mask
suitable for returning from getCallPreservedMask().
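A hypothetical sketch of the generated shape (names and values are illustrative, not real TableGen output):

#include <cstdint>

// Null-terminated list suitable for returning from getCalleeSavedRegs().
static const uint16_t CSR_Example_SaveList[] = {
  /* callee-saved register enum values ... */ 0
};

// Mask suitable for returning from getCallPreservedMask():
// one bit per register, 32 registers per 32-bit entry, bit set = preserved.
static const uint32_t CSR_Example_RegMask[] = {
  0x0000ff00, 0x00000000, 0x000000ff
};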
llvm-svn: 148346
BitVector uses the native word size for its internal representation.
That doesn't work well for literal bit masks in source code.
This patch adds BitVector operations to efficiently apply literal bit
masks specified as arrays of uint32_t. Since each array entry always
holds exactly 32 bits, these portable bit masks can be source code
literals, probably produced by TableGen.
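A usage sketch (method names per llvm/ADT/BitVector.h; the 64-bit example mask is made up):

#include "llvm/ADT/BitVector.h"

using namespace llvm;

void applyLiteralMask(BitVector &Regs) {
  // 64 bits expressed portably as two 32-bit literals, e.g. from TableGen.
  static const uint32_t Mask[] = { 0x0000ff00, 0x00000003 };
  Regs.resize(64);
  Regs.setBitsInMask(Mask, 2);      // set every bit that is set in Mask
  Regs.clearBitsNotInMask(Mask, 2); // clear every bit that is not set in Mask
}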
llvm-svn: 148272
Move to a by-section allocation and relocation scheme. This allows
better support for sections which do not contain externally visible
symbols.
Flesh out the relocation address vs. local storage address separation a
bit more as well. Remote process JITs use this to tell the relocation
resolution code where the code will live when it executes.
The startFunctionBody/endFunctionBody interfaces to the JIT and the
memory manager are deprecated. They'll stick around for as long as the
old JIT does, but the MCJIT doesn't use them anymore.
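A hypothetical record illustrating that separation (names are made up, not the actual MCJIT data structures):

#include <cstddef>
#include <cstdint>

struct SectionEntry {
  uint8_t *LocalAddress; // where the JIT wrote the section's bytes
  uint64_t LoadAddress;  // where the bytes will live when they execute
  size_t Size;
};

// Relocations are resolved against the load address, not the local one, so a
// remote-process JIT can emit code locally and run it elsewhere.
static uint64_t relocationTarget(const SectionEntry &S, uint64_t Offset) {
  return S.LoadAddress + Offset;
}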
llvm-svn: 148258
Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers, everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
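That convention as a tiny self-contained check (the helper name is made up; this mirrors the semantics described above):

#include <cstdint>

// Bit set = call-preserved; bit clear = clobbered by the call.
static bool isPreservedByCall(const uint32_t *RegMask, unsigned PhysReg) {
  return (RegMask[PhysReg / 32] & (uint32_t(1) << (PhysReg % 32))) != 0;
}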
By abandoning ImplicitDefs for call-clobbered registers, it also becomes
possible to share call instruction descriptions between calling
conventions, and we can get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before finally the fixed
clobber lists can be removed from call instruction descriptions.
llvm-svn: 148250
or Clang is using this, and it would be hard to use it correctly given
the thread hostility of the function. Also, it never checked the return
value, which is rather dangerous with chdir. If someone was in fact using
this, please let me know, as well as what the use case actually is, so that
I can add it back and make it more correct and secure to use. (That
said, it's never going to be "safe" per se, but we could at least
document the risks...)
llvm-svn: 148211
The hook returns a bit-mask of call-preserved registers that will
eventually replace the current list of implicit defs on call
instructions. This will make it possible to support multiple calling
conventions without duplicating call instruction descriptors.
The call-preserved mask is slightly different from the list returned by
the getCalleeSavedRegs() hook: it includes all aliases that are
preserved by calls.
The hook takes a CallingConv::ID argument instead of a MachineFunction
pointer, so it can provide information about calls to extern functions,
and even indirect function calls.
TRI::getCalleeSavedRegs() returns information about the function
currently being compiled. TRI::getCallPreservedMask() returns
information about the functions it is calling.
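A hedged sketch of a target-side implementation (the CSR_* symbols and the function are hypothetical; the hook's exact signature has varied across LLVM versions):

#include <cstdint>
#include "llvm/IR/CallingConv.h" // llvm/CallingConv.h in trees of this era

// Hypothetical TableGen-generated masks, as in the 148346 commit above.
extern const uint32_t CSR_MyTarget_RegMask[];
extern const uint32_t CSR_MyTarget_FastCC_RegMask[];

static const uint32_t *myTargetCallPreservedMask(llvm::CallingConv::ID CC) {
  // Only the convention matters, not the callee, so extern and indirect
  // calls get the same answer as direct ones.
  if (CC == llvm::CallingConv::Fast)
    return CSR_MyTarget_FastCC_RegMask;
  return CSR_MyTarget_RegMask;
}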
llvm-svn: 148165
Consider this code:
int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}
The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.
SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.
Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().
<rdar://problem/10664933>
llvm-svn: 147912
Delete the alternative implementation in LiveIntervalAnalysis.
These functions computed the same thing, but SplitAnalysis caches the
result.
llvm-svn: 147911
with other symbols.
An object in the __cfstring section is supposed to be filled with CFString
objects, which have a pointer to ___CFConstantStringClassReference followed by a
pointer to a __cstring. If we allow the object in the __cstring section to be
merged with another global, then it could end up in any section. Because the
linker is going to remove these symbols in the final executable, we shouldn't
bother to merge them.
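A rough layout sketch of such an object (the flags and length fields and all names are assumptions about the usual constant-CFString layout, not taken from this commit):

struct ConstantCFString {
  const void *isa;  // &___CFConstantStringClassReference
  int flags;        // encoding flags (assumed)
  const char *str;  // points at a literal in __cstring
  long length;      // byte length of the literal (assumed)
};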
<rdar://problem/10564621>
llvm-svn: 147899