(This time I believe I've checked all the -Wreturn-type warnings from GCC & added the couple of llvm_unreachables necessary to silence them. If I've missed any, I'll happily fix them as soon as I know about them)
llvm-svn: 148262
Move to a by-section allocation and relocation scheme. This allows
better support for sections which do not contain externally visible
symbols.
Flesh out the relocation address vs. local storage address separation a
bit more as well. Remote process JITs use this to tell the relocation
resolution code where the code will live when it executes.
The startFunctionBody/endFunctionBody interfaces to the JIT and the
memory manager are deprecated. They'll stick around for as long as the
old JIT does, but the MCJIT doesn't use them anymore.
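As a rough sketch of the relocation-address/local-address split described above (hypothetical structure and helper names, and a deliberately simplified 32-bit PC-relative fixup; the real interfaces differ in detail):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical per-section record: bytes are written through the local
       pointer, but relocations are resolved against the load address where
       the target process will execute the code. */
    typedef struct {
      uint8_t *LocalAddress; /* storage in the JIT's own process             */
      uint64_t LoadAddress;  /* where the section will live when it executes */
    } SectionEntry;

    /* Simplified 32-bit PC-relative fixup: compute the delta in the target's
       address space, then patch the bytes through the local pointer. */
    static void resolvePCRel32(SectionEntry *Section, uint64_t Offset,
                               uint64_t TargetLoadAddress) {
      uint64_t FixupLoadAddress = Section->LoadAddress + Offset;
      int32_t Delta = (int32_t)(TargetLoadAddress - FixupLoadAddress - 4);
      memcpy(Section->LocalAddress + Offset, &Delta, sizeof Delta);
    }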
llvm-svn: 148258
Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers; everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
Abandoning ImplicitDefs for call-clobbered registers also makes it
possible to share call instruction descriptions between calling
conventions and to get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before the fixed clobber
lists can finally be removed from call instruction descriptions.
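For illustration only, a minimal sketch of querying such a mask, assuming the usual packing of one bit per physical register into 32-bit words (the helper name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    /* A set bit means the register is preserved across the call; a clear
       bit means it is clobbered. */
    static bool isRegPreservedByCall(const uint32_t *Mask, unsigned PhysReg) {
      return (Mask[PhysReg / 32] & (1u << (PhysReg % 32))) != 0;
    }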
llvm-svn: 148250
- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other load and store
- Add a __atomic_init() intrinsic, which does a non-atomic store to an _Atomic() type. This is needed for the corresponding C11 stdatomic.h function.
- Enable the relevant __has_feature() checks. The feature isn't 100% complete yet, but it's far enough along that we want people testing it (a short usage sketch follows the to-do list below).
Still to do:
- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, not a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases
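A small usage sketch of what the above enables, based purely on the description in this message; __atomic_init() is assumed to mirror the C11 atomic_init(object, value) signature:

    /* _Atomic() and __atomic_init() as named above; spellings illustrative. */
    _Atomic(int) counter;

    void touch(void) {
      /* Initialisation: a plain, non-atomic store via the new intrinsic. */
      __atomic_init(&counter, 0);

      /* Arithmetic on an atomic object becomes an atomic read-modify-write
         (currently lowered to a cmpxchg loop, per the to-do list above). */
      counter++;

      /* Every other load or store of the object is atomic. */
      int observed = counter;
      (void)observed;
    }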
llvm-svn: 148242
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type legalizer sign-extends the i1 type, which
LLVM IR uses for the condition.
Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.
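A tiny standalone check of why the rewrite is safe when only the MSB is demanded (plain C modelling a 32-bit lane):

    #include <assert.h>
    #include <stdint.h>

    /* Sign-extending the i1 condition replicates bit 0 into every bit;
       shifting bit 0 up to bit 31 produces the same MSB, so the cheaper
       shift is equivalent for the demanded (sign) bit. */
    static uint32_t msb(uint32_t v)           { return v >> 31; }
    static uint32_t sext_inreg_i1(uint32_t v) { return (uint32_t)-(int32_t)(v & 1); }
    static uint32_t shl_by_31(uint32_t v)     { return v << 31; }

    int main(void) {
      for (uint32_t v = 0; v < 16; ++v)
        assert(msb(sext_inreg_i1(v)) == msb(shl_by_31(v)));
      return 0;
    }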
llvm-svn: 148225
class/Objective-C protocol suffices to get all of the redeclarations of
that declaration wired to the definition, we no longer need to record
the identity of the definition in every declaration. Instead, just
record a bit to indicate whether a particular declaration is the
definition.
llvm-svn: 148224
protocol, record the definition pointer in the canonical declaration
for that entity, and then propagate that definition pointer from the
canonical declaration to all other deserialized declarations. This
approach works well even when deserializing declarations that didn't
know about the original definition, which can occur with modules.
A nice bonus from this definition-deserialization approach is that we
no longer need update records when a definition is added, because the
redeclaration chains ensure that if any declaration is loaded, the
definition will also get loaded.
llvm-svn: 148223
chains, again. The prior implementation was very linked-list oriented, and
the list-splicing logic was fairly convoluted (when loading from
multiple modules) and failed to preserve a reasonable ordering for the
redeclaration chains.
This new implementation uses a simpler strategy, where we store the
ordered redeclaration chains in an array-like structure (indexed based
on the first declaration), and use that ordering to add individual
deserialized declarations to the end of the existing chain. That way,
the chain mimics the ordering from its modules, and a bug somewhere is
far less likely to result in a broken linked list.
llvm-svn: 148222
Message for r148132:
LoopUnswitch: All helper data collected during loop-unswitch iterations has been moved to a separate class (LUAnalysisCache).
llvm-svn: 148215