We can't mark partially undefined registers, so we have to allow reading
a register in the machine verifier if just parts of a register are
defined.
llvm-svn: 223896
In the subregister liveness tracking case we do not create implicit
reads on partial register writes anymore; still, we need to produce a new
SSA value for partial writes, so the live segment has to end.
llvm-svn: 223895
Adding the implicit defs/uses to the superregisters is semantically questionable
but was not dangerous before, as the register allocator never assigned the same
register to two overlapping LiveIntervals even when the actually live
subregisters did not overlap. With subregister liveness tracking enabled this
does actually happen and leads to subsequent bugs if we don't stop adding
the superregister defs/uses.
llvm-svn: 223892
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
llvm-svn: 223873
clang does not yet support MS-ABI record layout for externally-sourced
ASTs. As a result, attempting to format something that requires data
layout results in undefined behavior in clang, in this case an assert.
http://llvm.org/pr21800 tracks fixing this on the clang side.
llvm-svn: 223868
The complicated situation is when we have to keep an alias but drop a GV
that is part of the aliasee.
We used to clone the dropped GV and make the clone internal. This is wasteful
as we know the original will be dropped.
With this patch we instead set the linkage of the original to internal and
replace all uses (except the one in the alias) with a new declaration that
takes the name of the old GV. This saves us from having to copy the body.
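As a rough illustration of the idea (the helper name, parameters, and present-day
LLVM C++ API spellings below are mine and may not match the code in this revision;
alias-lookthrough and error handling are omitted):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/GlobalAlias.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Demote the original global to internal linkage and point every use outside
// the kept alias at a fresh external declaration that takes over its name,
// instead of cloning the body.
static void internalizeButKeepAlias(Module &M, GlobalVariable *GV,
                                    GlobalAlias *KeptAlias) {
  GV->setLinkage(GlobalValue::InternalLinkage);

  // Declaration only (no initializer); it inherits the old name so external
  // references keep resolving to the same symbol.
  auto *Decl = new GlobalVariable(M, GV->getValueType(), GV->isConstant(),
                                  GlobalValue::ExternalLinkage,
                                  /*Initializer=*/nullptr);
  Decl->takeName(GV);

  // Rewrite every use except the one inside the kept alias.  (The real code
  // also has to look through constant expressions in the aliasee.)
  SmallVector<Use *, 8> ToRewrite;
  for (Use &U : GV->uses())
    if (U.getUser() != KeptAlias)
      ToRewrite.push_back(&U);
  for (Use *U : ToRewrite)
    U->set(Decl);
}
```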
llvm-svn: 223863
We used to combine only intrinsics, turning them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.
We can now do the same thing for generic loads and stores.
Note that we can only combine the first load/store + add pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each add
computing the address of the next load from the address of the last
one) into independent additions (common base pointer + this load's
offset).
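As a hypothetical source-level illustration (not taken from the patch), a plain
wide-vector copy like the one below is the kind of input this now covers:
legalization splits the v16f32 access into several v4f32 loads/stores with an
incremented base pointer, and no NEON intrinsics are involved.

```cpp
// Built for ARM with NEON, the v16f32 load/store is split into four v4f32
// accesses whose addresses form the base-pointer addition chain described
// above; the first load/store + add pair can now become VLD1_UPD/VST1_UPD.
typedef float v16f32 __attribute__((vector_size(64)));

void copy_v16f32(v16f32 *dst, const v16f32 *src) {
  *dst = *src; // generic (non-intrinsic) wide load and store
}
```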
Differential Revision: http://reviews.llvm.org/D6585
llvm-svn: 223862
This way, the step generating SVNVersion.inc gets rerun every time someone
changes GetSVN.cmake (which is the file that decides how the contents of
SVNVersion.inc look). This makes hacking on GetSVN.cmake a bit easier.
llvm-svn: 223861
LinkageSpecDecls. This is relevant when LLDB
wants to import Decls from non-C++ modules,
since many declarations are in extern "C"
blocks.
llvm-svn: 223860
In the current implementation, GCStrategy is a part of the ownership structure for the gc metadata which describes a Module. It also contains a reference to the module in question. As a result, GCStrategy instances are essentially Module specific.
I plan to transition away from this design. Instead, a GCStrategy will be owned by the LLVMContext. It will be a lightweight policy object which contains no information about the Modules or Functions involved, but can be easily reached given a Function.
The first step in this transition is to remove the direct Module reference from GCStrategy. This also requires removing the single user of this reference, the GCMetadataPrinter hierarchy. In theory, this will allow the lifetime of the printers to be scoped to the LLVMContext as well, but in practice, I'm not actually changing that. (Yet?)
An alternate design would have been to move the direct Module reference into the GCMetadataPrinter and change the keying of the owning maps to explicitly key off both GCStrategy and Module. I'm open to doing it that way instead, but didn't see much value in preserving the per Module association for GCMetadataPrinters.
The next change in this sequence will be to start unwinding the intertwined ownership between GCStrategy, GCModuleInfo, and GCFunctionInfo.
Differential Revision: http://reviews.llvm.org/D6566
llvm-svn: 223859
There were two major problems with `MDNode` memory management.
1. `MDNode::operator new()` called a placement array constructor for
`MDOperand`. What? Each operand needs to be placed individually.
2. `MDNode::operator delete()` failed to destruct the `MDOperand`s at
all.
Frankly it's hard to understand how this worked locally, how this
survived an LTO bootstrap, or how it worked on most of the bots.
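For illustration only, a minimal self-contained sketch of the co-allocation
pattern in question (hypothetical `Node`/`Operand` types, not LLVM's actual
classes): operator new must placement-construct each co-allocated operand
individually, and operator delete must destruct each one before freeing the
block.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

struct Operand {
  void *Value = nullptr;
  ~Operand() { /* e.g. unregister from a use list here */ }
};

class Node {
  unsigned NumOperands;

public:
  explicit Node(unsigned N) : NumOperands(N) {}

  // Allocate the operands immediately before the node and construct each one
  // individually with placement new (problem 1 above: an array placement-new
  // over the raw block is not equivalent).
  static void *operator new(std::size_t Size, unsigned NumOps) {
    void *Mem = std::malloc(Size + NumOps * sizeof(Operand));
    Operand *Ops = static_cast<Operand *>(Mem);
    for (unsigned I = 0; I != NumOps; ++I)
      new (Ops + I) Operand();
    return Ops + NumOps; // the Node object lives after its operands
  }

  // Destruct every co-allocated operand before releasing the memory
  // (problem 2 above: skipping this leaks whatever ~Operand cleans up).
  static void operator delete(void *Mem) {
    Node *N = static_cast<Node *>(Mem);
    Operand *Ops = reinterpret_cast<Operand *>(N) - N->NumOperands;
    for (unsigned I = 0, E = N->NumOperands; I != E; ++I)
      Ops[I].~Operand();
    std::free(Ops);
  }
};

// Usage: Node *N = new (3u) Node(3); ...; delete N;
```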
llvm-svn: 223858
in debugger mode) to accept @import declarations
and pass them to the debugger.
In the preprocessor, accept import declarations
if the debugger is enabled, but don't actually
load the module, just pass the import path on to
the preprocessor callbacks.
In the Objective-C parser, if it sees an import
declaration in statement context (usual for LLDB),
ignore it and return a NullStmt.
llvm-svn: 223855
The issue with Thumb IT (if/then) instructions is that the IT instruction precedes up to four instructions that are made conditional. If a breakpoint is placed on one of the conditional instructions, the breakpoint opcode either needs to match the Thumb opcode size (2 or 4 bytes) or a BKPT instruction needs to be used, as BKPT is always unconditional (even inside an IT block). If BKPT instructions are used, then we might end up stopping on an instruction that won't get executed. So if we do stop at a BKPT instruction, we need to continue if the condition is not true.
Using BKPT instructions is easy in that you don't need to detect the size of the breakpoint opcode when setting a breakpoint, even in a Thumb IT block. The bad part is you will now always stop at the opcode location and let LLDB determine if it should auto-continue. If the BKPT instruction is used, the BKPT that is used for ARM code should be something that also triggers the BKPT instruction in Thumb, in case you set a breakpoint in the middle of code and the code is actually Thumb code. A value of 0xE120BE70 will work, since its lower 16 bits, 0xBE70, happen to be a Thumb BKPT instruction.
The alternative is to use trap or illegal instructions that the kernel will translate into breakpoint hits. On Mac this was 0xE7FFDEFE for ARM and 0xDEFE for Thumb. The Darwin kernel currently doesn't recognize any 32-bit Thumb instruction as an instruction that will get turned into a breakpoint exception (EXC_BREAKPOINT), so we had to use the BKPT instruction on Mac. The Linux kernel recognizes both a 16-bit and a 32-bit instruction as valid Thumb breakpoint opcodes. The benefit of using 16- or 32-bit instructions is you don't stop on opcodes in an IT block when the condition doesn't match.
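A hypothetical sketch of that opcode choice (not debugserver's actual code, and assuming a little-endian target):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

enum class ISA { ARM, Thumb };

// Write the BKPT opcode bytes for the given instruction set into `out` and
// return the opcode size.  The ARM encoding 0xE120BE70 is chosen so that its
// low halfword, 0xBE70, is itself a Thumb BKPT, in case the address really
// holds Thumb code.
static std::size_t GetBKPTOpcode(ISA isa, uint8_t out[4]) {
  if (isa == ISA::ARM) {
    const uint32_t bkpt_arm = 0xE120BE70; // ARM-state BKPT
    std::memcpy(out, &bkpt_arm, sizeof(bkpt_arm));
    return sizeof(bkpt_arm);
  }
  const uint16_t bkpt_thumb = 0xBE70;     // Thumb-state BKPT
  std::memcpy(out, &bkpt_thumb, sizeof(bkpt_thumb));
  return sizeof(bkpt_thumb);
}
```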
To further complicate things, single stepping on ARM is often implemented by modifying the BCR/BVR registers and setting the processor to stop when the PC is not equal to the current value. This means single stepping is another way the ARM target can stop on instructions that won't get executed.
This patch does the following:
1 - Fix the internal debugserver for Apple to use the BKPT instruction for ARM and Thumb
2 - Fix LLDB to catch when we stop in the middle of a Thumb IT instruction and continue if we stop at an instruction that won't execute
3 - Fixes this in a way that will work for any target on any platform as long as it is ARM/Thumb
4 - Adds a patch for ignoring conditions that don't match when in ARM mode (see below)
This patch also provides the code that implements the same thing for ARM instructions, though it is disabled for now. The ARM patch will check the condition of the instruction in ARM mode and continue if the condition isn't true (and therefore the instruction would not be executed). Again, this is not enabled, but the code for it has been added.
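As a rough sketch of that check (a hypothetical helper, not LLDB's actual code): take the 4-bit condition field of the ARM instruction (or, for Thumb, the condition implied by the instruction's slot in the IT block) and test it against the NZCV flags in the CPSR; if it fails, the instruction won't execute and the debugger can auto-continue.

```cpp
#include <cstdint>

// Returns true if an instruction with the given ARM condition code would
// execute, given the current CPSR value.
static bool ConditionPasses(uint32_t cond, uint32_t cpsr) {
  const bool N = cpsr & (1u << 31); // negative
  const bool Z = cpsr & (1u << 30); // zero
  const bool C = cpsr & (1u << 29); // carry
  const bool V = cpsr & (1u << 28); // overflow
  switch (cond & 0xF) {
  case 0x0: return Z;            // EQ
  case 0x1: return !Z;           // NE
  case 0x2: return C;            // CS/HS
  case 0x3: return !C;           // CC/LO
  case 0x4: return N;            // MI
  case 0x5: return !N;           // PL
  case 0x6: return V;            // VS
  case 0x7: return !V;           // VC
  case 0x8: return C && !Z;      // HI
  case 0x9: return !C || Z;      // LS
  case 0xA: return N == V;       // GE
  case 0xB: return N != V;       // LT
  case 0xC: return !Z && N == V; // GT
  case 0xD: return Z || N != V;  // LE
  default:  return true;         // AL (and the unconditional encoding)
  }
}
```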
<rdar://problem/19145455>
llvm-svn: 223851