Once the intrinsics are marked as having a custom inserter, the code
generator will call this method to emit the dispatch table into the
machine function.
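As a rough, hedged illustration (the class, opcode, and helper names below are hypothetical, and the hook's exact signature has varied across LLVM releases), a pseudo instruction whose definition sets usesCustomInserter gets handed back to the target roughly like this:
    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/Support/ErrorHandling.h"
    using namespace llvm;
    // Sketch only: MyTargetLowering and the pseudo opcode are illustrative.
    MachineBasicBlock *
    MyTargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI,
                                                  MachineBasicBlock *MBB) const {
      switch (MI.getOpcode()) {
      default:
        llvm_unreachable("unexpected instruction with a custom inserter");
      case MyTarget::EH_SJLJ_DISPATCH: // hypothetical pseudo opcode
        // Expand the pseudo into the dispatch table and the basic blocks it
        // branches to, then return the block where lowering should resume.
        return emitDispatchTable(MI, MBB); // hypothetical helper
      }
    }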
llvm-svn: 142245
It really doesn't, but when r141929 removed the hasSideEffects flag from
this instruction, it caused miscompilations. I am guessing that it got
moved across a stack pointer update.
Also clear isRematerializable after checking that this instruction is
in fact never rematerialized in the nightly test suite.
llvm-svn: 142030
rdar://10288916 is tracking this fix.
In the past, instcombine and other passes were promoting alloca alignment past
the natural alignment, resulting in dynamic stack realignment. Lang's work now
prevents this from happening (LLVM commit r141599). Now that this really
shouldn't happen, report a fatal error rather than silently generating bad code.
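A hedged sketch of that fail-loudly behavior (the check's real location and message differ; hasStackRealignment() is the current API spelling, older trees called it needsStackRealignment()):
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/TargetRegisterInfo.h"
    #include "llvm/CodeGen/TargetSubtargetInfo.h"
    #include "llvm/Support/ErrorHandling.h"
    // Sketch only: abort loudly instead of silently emitting bad code when
    // the function would need dynamic stack realignment.
    static void rejectDynamicRealignment(const llvm::MachineFunction &MF) {
      const llvm::TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
      if (TRI->hasStackRealignment(MF))
        llvm::report_fatal_error("dynamic stack realignment is not supported here");
    }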
llvm-svn: 142028
The callee-saved registers cannot be live across an invoke call because the
control flow may continue along the exceptional edge. When this happens, all of
the callee-saved registers are no longer valid.
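One conventional way to express that constraint at the MachineInstr level, given purely as a hedged sketch (the actual fix may differ in detail), is to attach implicit defs of the callee-saved registers to the dispatch instruction so the register allocator never keeps values in them across it:
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/CodeGen/TargetRegisterInfo.h"
    #include "llvm/CodeGen/TargetSubtargetInfo.h"
    using namespace llvm;
    // Sketch only: mark every callee-saved register as clobbered (defined
    // and immediately dead) by the instruction being built.
    static void clobberCalleeSavedRegs(MachineInstrBuilder &MIB,
                                       const MachineFunction &MF) {
      const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
      for (const MCPhysReg *CSR = TRI->getCalleeSavedRegs(&MF); CSR && *CSR; ++CSR)
        MIB.addReg(*CSR, RegState::ImplicitDefine | RegState::Dead);
    }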
llvm-svn: 142018
TableGen infers unmodeled side effects on instructions without a
pattern. Fix some instruction definitions where that was overlooked.
Also raise an error if a rematerializable instruction has unmodeled side
effects. That doesn't make any sense.
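The check itself went into TableGen, but the same invariant can be stated against the MCInstrDesc flags TableGen emits; a hedged sketch (this helper is not a real LLVM function):
    #include "llvm/MC/MCInstrDesc.h"
    #include "llvm/MC/MCInstrInfo.h"
    #include "llvm/Support/ErrorHandling.h"
    // Sketch only: a rematerializable instruction must be free of unmodeled
    // side effects, because rematerialization re-executes it at new points.
    static void verifyRematHasNoSideEffects(const llvm::MCInstrInfo &MCII) {
      for (unsigned Opc = 0, E = MCII.getNumOpcodes(); Opc != E; ++Opc) {
        const llvm::MCInstrDesc &D = MCII.get(Opc);
        if (D.isRematerializable() && D.hasUnmodeledSideEffects())
          llvm::report_fatal_error(
              "rematerializable instruction has unmodeled side effects");
      }
    }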
llvm-svn: 141929
When widening a copy, we are reading a larger register that may not be
live. Use an <undef> flag to tell the register scavenger and machine
code verifier that we know the value isn't defined.
We now widen:
    %S6<def> = COPY %S4<kill>, %D3<imp-def>
into:
    %D3<def> = VMOVD %D2<undef>, pred:14, pred:%noreg, %S4<imp-use,kill>
This also keeps the <kill> flag on %S4 so we don't inadvertently kill a
live value in %S5.
Finally, ensure that ARMBaseInstrInfo::setExecutionDomain() preserves
the <undef> flag when converting VMOVD to VORR.
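For illustration, a hedged sketch of how the widened copy above can be built so it carries exactly those flags (the helper and its parameters are made up; the real ARM code also derives the D/S register mapping):
    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/IR/DebugLoc.h"
    #include "llvm/MC/MCInstrDesc.h"
    using namespace llvm;
    // Sketch only: emit "DstD = VMOVD SrcD<undef>" plus an implicit use of
    // the original S-register, preserving its <kill> flag without killing
    // the neighbouring S-register.
    static void emitWidenedCopy(MachineBasicBlock &MBB,
                                MachineBasicBlock::iterator InsertPt,
                                const DebugLoc &DL,
                                const MCInstrDesc &VMOVDDesc, // e.g. TII.get(ARM::VMOVD)
                                unsigned DstD, unsigned SrcD, unsigned SrcS,
                                bool SrcSIsKill) {
      BuildMI(MBB, InsertPt, DL, VMOVDDesc, DstD)
          .addReg(SrcD, RegState::Undef)  // wide source may not be fully live
          .addImm(14)                     // predicate: ARMCC::AL
          .addReg(0)                      // no predicate register (%noreg)
          .addReg(SrcS, RegState::Implicit | getKillRegState(SrcSIsKill));
    }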
llvm-svn: 141746
Fill out the rest of the encoding information, mark the LDC/STC instructions
as predicable and the LDC2/STC2 instructions as not predicable, and adjust the
parser accordingly.
llvm-svn: 141721
The VMOVS widening needs to look at the implicit COPY operands. Trying
to dig out the COPY instruction from an iterator in copyPhysReg() is the
wrong approach.
The expandPostRAPseudo() hook gets to look at COPY instructions before
they are converted to copyPhysReg() calls.
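A hedged sketch of what the override can look like (MyInstrInfo and widenCopyIfNeeded are hypothetical; older releases passed an iterator instead of a MachineInstr reference):
    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;
    // Sketch only: COPY instructions still carry their implicit operands
    // when they reach this hook, so they can be inspected directly here
    // rather than reconstructed inside copyPhysReg().
    bool MyInstrInfo::expandPostRAPseudo(MachineInstr &MI) const {
      if (MI.isCopy())
        return widenCopyIfNeeded(MI); // hypothetical helper
      return false;                   // leave everything else alone
    }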
llvm-svn: 141619
promoting allocas to preferred alignments that exceed the natural
alignment. This avoids some potentially expensive dynamic stack realignments.
The natural stack alignment is set in target data strings via the "S<size>"
option. Size is in bits and must be a multiple of 8. The natural stack alignment
defaults to "unspecified" (represented by a zero value), and the "unspecified"
value does not prevent any alignment promotions. Target maintainers that care
about avoiding promotions should explicitly add the "S<size>" option to their
target data strings.
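As a hedged illustration, the option can be exercised through the DataLayout API like this (the layout string is made up, and the accessor spelling follows current LLVM; older releases returned a plain unsigned):
    #include "llvm/IR/DataLayout.h"
    #include "llvm/Support/raw_ostream.h"
    int main() {
      // "S128" requests a natural stack alignment of 128 bits (16 bytes).
      llvm::DataLayout DL("e-p:64:64:64-i32:32:32-i64:64:64-S128");
      llvm::outs() << "natural stack alignment: "
                   << DL.getStackAlignment().value() << " bytes\n";
      return 0;
    }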
llvm-svn: 141599
block. E.g., if we have:
    movs r1, r1
    rsb r1, 0
    movs r2, r2
    rsb r2, 0
we don't want this to be converted to:
    movs r1, r1
    movs r2, r2
    itt mi
    rsb r1, 0
    rsb r2, 0
PR11107 & <rdar://problem/10259534>
llvm-svn: 141589
ARMII::AddrModeT1_s, we need to take into account that the number of offset
bits is 8 if the frame register is ARM::SP and 5 if it is not.
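In other words (a hedged sketch; the real frame-index rewriting code also has to apply the addressing mode's operand scaling):
    // Sketch only: offset-field width for ARMII::AddrModeT1_s references.
    namespace {
    // FrameRegIsSP: true when the frame register is ARM::SP.
    unsigned t1sOffsetBits(bool FrameRegIsSP) { return FrameRegIsSP ? 8 : 5; }
    // Mask of immediate values representable in that field.
    unsigned t1sOffsetMask(bool FrameRegIsSP) {
      return (1u << t1sOffsetBits(FrameRegIsSP)) - 1;
    }
    } // namespace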
llvm-svn: 141529
successor. Remove the old landing pad from their successor list, because it's
now the successor of the dispatch block. Now that the landing pad blocks are no
longer the destination of invokes, we can mark them as normal basic blocks
instead of landing pads.
This more closely resembles what the CFG is actually doing.
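A hedged sketch of that CFG surgery in MachineBasicBlock terms (setIsEHPad() is the current spelling; trees of that era called it setIsLandingPad()):
    #include "llvm/CodeGen/MachineBasicBlock.h"
    using namespace llvm;
    // Sketch only: the invoke's block now flows to the dispatch block, the
    // dispatch block flows to the old landing pad, and the landing pad
    // becomes an ordinary basic block.
    static void rewireLandingPad(MachineBasicBlock *InvokeBB,
                                 MachineBasicBlock *LandingPad,
                                 MachineBasicBlock *DispatchBB) {
      InvokeBB->replaceSuccessor(LandingPad, DispatchBB);
      DispatchBB->addSuccessor(LandingPad);
      LandingPad->setIsEHPad(false);
    }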
llvm-svn: 141436
Consider:
    mov r8, r11 fred
Previously, we issued the not very informative:
    x.s:6:1: error: unexpected token in argument list
    ^
Now we generate:
    x.s:5:14: error: unexpected token in argument list
    mov r8, r11 fred
                ^
llvm-svn: 141380
merging an lsl #2 that has multiple uses on A9. This shift is free, so there is
no problem merging it in multiple places. Other unprofitable shifts will not be
merged.
llvm-svn: 141247
This is a first pass at generating the jump table for the sjlj dispatch. It
currently generates something plausible, but hasn't been tested thoroughly.
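A hedged sketch of the table-creation half, using the generic jump-table machinery (the entry kind and the ARM branch sequence that consumes the index are simplified away):
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineJumpTableInfo.h"
    #include <vector>
    using namespace llvm;
    // Sketch only: one jump-table entry per landing pad; the dispatch block
    // indexes the table with the call-site value recovered at longjmp time.
    static unsigned createDispatchJumpTable(
        MachineFunction &MF,
        const std::vector<MachineBasicBlock *> &LandingPads) {
      MachineJumpTableInfo *JTI =
          MF.getOrCreateJumpTableInfo(MachineJumpTableInfo::EK_Inline);
      return JTI->createJumpTableIndex(LandingPads);
    }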
llvm-svn: 141140
using llvm's public 'C' disassembler API, now including annotations.
Hooked this up to Darwin's otool(1) so it can again print things like branch
targets, for example this:
    blx _puts
instead of this:
    blx #-36
It also includes support for annotations on branches to symbol stubs, like:
    bl 0x40 @ symbol stub for: _puts
and for annotations on pc-relative loads, like this:
    ldr r3, #8 @ literal pool for: Hello, world!
It can also again print the expressions encoded in Mach-O relocation entries
for things like this:
    movt r0, :upper16:((_foo-_bar)+1234)
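For reference, a minimal client of that public 'C' API looks roughly like this (a hedged sketch: the op-info and symbol-lookup callbacks that drive the annotations are passed as null here, whereas a real client such as otool(1) supplies them; the triple and byte sequence are just examples):
    #include "llvm-c/Disassembler.h"
    #include "llvm-c/Target.h"
    #include <cstdint>
    #include <cstdio>
    int main() {
      LLVMInitializeAllTargetInfos();
      LLVMInitializeAllTargetMCs();
      LLVMInitializeAllDisassemblers();
      LLVMDisasmContextRef DC =
          LLVMCreateDisasm("armv7-apple-darwin", /*DisInfo=*/nullptr,
                           /*TagType=*/0, /*GetOpInfo=*/nullptr,
                           /*SymbolLookUp=*/nullptr);
      if (!DC)
        return 1;
      uint8_t Bytes[] = {0x00, 0x00, 0xa0, 0xe1}; // ARM "mov r0, r0"
      char Text[128];
      if (LLVMDisasmInstruction(DC, Bytes, sizeof(Bytes), /*PC=*/0, Text,
                                sizeof(Text)))
        std::printf("%s\n", Text);
      LLVMDisasmDispose(DC);
      return 0;
    }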
llvm-svn: 141129
This code will replace the version in ARMAsmPrinter.cpp. It creates a new
machine basic block, which is the dispatch for the return from a longjmp
call. It then shoves the address of that machine basic block into the correct
place in the function context so that the EH runtime will jump to it directly
instead of having to go through a compare-and-jump-to-the-dispatch bit. This
should be more efficient in the common case.
llvm-svn: 141031
It's documented as a separate instruction to line up with the Thumb1
encodings, for which it really is a distinct instruction encoding.
llvm-svn: 141020
* Add a couple of Create methods to the ARMConstantPoolConstant class,
* Add its own version of getExistingMachineCPValue, and
* Modify hasSameValue to return false if the object isn't an ARMConstantPoolConstant.
llvm-svn: 140935
This uses less memory and it reduces the complexity of sub-class
operations:
- hasSubClassEq() and friends become O(1) instead of O(N).
- getCommonSubClass() becomes O(N) instead of O(N^2).
In the future, TableGen will infer register classes. This makes it
cheap to add them.
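A hedged, self-contained sketch of the representation idea (the names and the single-word mask are illustrative; the generated tables use arrays of words):
    #include <cstdint>
    // Sketch only: each register class carries a bit set of the classes
    // that are sub-classes of it (or equal to it). With at most 64 classes
    // one word is enough for this illustration.
    struct RegClassInfo {
      unsigned ID;
      uint64_t SubClassMask; // bit N set => class N is a sub-class-or-equal
    };
    // O(1): hasSubClassEq() becomes a single bit test.
    inline bool hasSubClassEq(const RegClassInfo &A, const RegClassInfo &B) {
      return (A.SubClassMask >> B.ID) & 1;
    }
    // O(N): the common sub-classes of two classes are the intersection of
    // their masks; choosing the best one is a linear scan over the bits.
    inline uint64_t commonSubClassMask(const RegClassInfo &A,
                                       const RegClassInfo &B) {
      return A.SubClassMask & B.SubClassMask;
    }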
llvm-svn: 140898
Remove an assert that expected to see only the relevant 16-bit portion of the
value for the fixup being handled. Also kill some dead code in the T2 handling.
rdar://9653509
llvm-svn: 140861