Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
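For illustration, here is a minimal sketch of emitting the new form through
IRBuilder. This uses today's CreateMemCpy overload rather than the 2010-era
API, and the helper function name is hypothetical:

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    // Emit llvm.memcpy with explicit alignments; the trailing argument
    // corresponds to the new i1 isVolatile operand.
    static void emitCopy(IRBuilder<> &B, Value *Dst, Value *Src, Value *Len) {
      B.CreateMemCpy(Dst, MaybeAlign(4), Src, MaybeAlign(4), Len,
                     /*isVolatile=*/false);
    }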
llvm-svn: 99928
--- Reverse-merging r98889 into '.':
U lib/Target/ARM/ARMInstrNEON.td
U lib/Target/ARM/ARMISelLowering.h
U lib/Target/ARM/ARMInstrInfo.td
U lib/Target/ARM/ARMInstrVFP.td
U lib/Target/ARM/ARMISelLowering.cpp
U lib/Target/ARM/ARMInstrFormats.td
llvm-svn: 99010
Place the LSDA into the TEXT section for ARM platforms. This involves making the
encoding indirect, pcrel, and sdata4 instead of an absolute pointer. The
references to the type infos are then non-lazy pointers. Revision 98019 changed
the encoding of non-lazy pointers to add the symbol to the non-lazy pointer
definition if it's a local symbol (otherwise, it's external and set to '0' so
that the loader can adjust it to the real value). This paved the way for this
change to work on ARM.
llvm-svn: 98068
greater-than-or-equal SELECT_CCs to NEON vmin/vmax instructions. This is
only allowed when UnsafeFPMath is set or when at least one of the operands
is known to be nonzero, since vmin/vmax do not return the same result as a
compare-and-select when comparing +0.0 with -0.0.
llvm-svn: 97065
Even if they are supported by the core, they can be disabled
(this is just a configuration bit inside some register).
Allow unaligned memops on Darwin and conservatively disallow them otherwise.
llvm-svn: 94889
Target independent isel should always pass along the "tail call" property. Change
target hook LowerCall's parameter "isTailCall" into a reference. If the target
decides it's impossible to honor the tail call request, it should set isTailCall
to false to make target independent isel happy.
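The shape of the new contract, as a self-contained stand-in (the real
LowerCall signature takes many more parameters; everything here is
hypothetical):

    // Target-independent isel requests a tail call by passing
    // isTailCall = true; the target may only clear the flag, never set it.
    struct CallInfo { bool HasByValArgs; };

    static void lowerCall(const CallInfo &CI, bool &isTailCall) {
      if (isTailCall && CI.HasByValArgs) // hypothetical target restriction
        isTailCall = false;              // decline the tail call request
    }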
llvm-svn: 94626
This is consistent with llvm-gcc's arm/constraints.md.
Certain instructions (e.g. CBZ, CBNZ) require a low register, even in Thumb2
mode.
llvm-svn: 93436
return partial registers. This affected the back-end lowering code some.
Also patch up some places I missed before in the "get" functions.
llvm-svn: 91880
MI was not being used but it was also not being deleted, so it was kept in the garbage list. The memory itself was freed once the function code gen was done.
Once in a while the codegen of another function would create an instruction on the same address. Adding it to the garbage group would work once, but when another pointer was added it would cause an assert as "Cache" was about to be pushed to Ts.
For a patch that makes us detect problems like this earlier, take a look at
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20091214/092758.html
With that patch we assert as soon as the new instruction is added to the garbage set.
llvm-svn: 91691
The change in SelectionDAGBuilder is needed to allow using bitcasts to convert
between f64 (the default type for ARM "d" registers) and 64-bit Neon vector
types. Radar 7457110.
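At the source level, the reinterpretation involved looks roughly like this
(a hypothetical C++20 sketch using the GCC/Clang vector extension, not code
from the patch):

    #include <bit>

    typedef float v2f32 __attribute__((vector_size(8))); // 64-bit vector

    // Reinterpret the bits of an f64 ("d" register) as a 64-bit vector.
    static v2f32 asVector(double d) { return std::bit_cast<v2f32>(d); }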
llvm-svn: 91649
just issues an error for the moment. The front end won't yet generate these
intrinsics for ARM, so this is behind the scenes until complete.
llvm-svn: 91200
both source operands. In the canonical form, the 2nd operand is changed to an
undef and the shuffle mask is adjusted to only reference elements from the 1st
operand. Radar 7434842.
llvm-svn: 90417
than doing the same via constpool:
1. Load from constpool costs 3 cycles on A9, movt/movw pair - just 2.
2. Load from constpool might stall up to 300 cycles due to cache miss.
3. Movt/movw does not use load/store unit.
4. Less constpool entries => better compiler performance.
This is only enabled on ELF systems, since Darwin does not have the needed
relocations (yet).
llvm-svn: 89720
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.
llvm-svn: 89711
slots. The AsmPrinter will use this information to determine whether to
print a spill/reload comment.
Remove default argument values. It's too easy to pass a wrong argument
value when multiple arguments have default values. Make everything
explicit to trap bugs early.
Update all targets to adhere to the new interfaces.
llvm-svn: 87022
void f (int a1, int a2, int a3, int a4, int a5,...)
In ARMTargetLowering::LowerFormalArguments, if the function has 4 or
more regular arguments we used to set VarArgsFrameIndex using an
offset of 0, which is only correct if the function has exactly 4
regular arguments. With five regular arguments, as above, a5 already
occupies stack offset 0, so the varargs area actually begins at offset 4.
llvm-svn: 85590
I can see with the original code was that I forgot that this runs after
type legalization and hence the result type will always be i32. (Custom
legalization of EXTRACT_VECTOR_ELT is only enabled for vector types with
8- and 16-bit elements.)
Regarding the FIXME comment: any information about sign and zero-extension
should be captured by separate extension operations. The DAG combiner should
handle those to produce either VGETLANEu or VGETLANEs, and that seems to be
working now. If there are cases that we're missing, let me know.
llvm-svn: 84218
makes an egregious hack somewhat more palatable. Bringing the LSDA forward
and making it a GV available for reference would be even better, but is
beyond the scope of what I'm looking to solve at this point.
Objective-C++ code could generate function names that broke the previous
scheme. This fixes that.
llvm-svn: 80649
The instructions can be selected directly from the intrinsics. We will need
to add some ARM-specific nodes for VLD/VST of 3 and 4 128-bit vectors, but
those are not yet implemented.
llvm-svn: 80117
This is derived from a patch by Anton Korzh. I modified it to recognize
the VEXT shuffles during legalization and lower them to a target-specific
DAG node.
llvm-svn: 79428
support unaligned mem access only for certain types. (Should it be size
instead?)
ARM v7 supports unaligned access for i16 and i32, some v6 variants support it
as well.
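A sketch of what such a per-type hook can look like (simplified and
hypothetical: the class name, the HasV7Ops predicate, and the exact
signature are made up for illustration):

    // Allow unaligned i16/i32 accesses only when the subtarget permits them.
    bool MyTargetLowering::allowsUnalignedMemoryAccesses(EVT VT) const {
      switch (VT.getSimpleVT().SimpleTy) {
      case MVT::i16:
      case MVT::i32:
        return HasV7Ops; // hypothetical subtarget feature flag
      default:
        return false;
      }
    }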
llvm-svn: 79127
libcall. Take advantage of this in the ARM backend to rectify broken
choice of CC when hard float is in effect. PIC16 may want to see if
it could be of use in MakePIC16Libcall, which works unchanged.
Patch by Sandeep!
llvm-svn: 79033
target-specific VDUPLANE nodes. This allows the subreg handling for the
quad-register version to be done easily with Pats in the .td file, instead
of with custom code in ARMISelDAGToDAG.cpp.
llvm-svn: 78993
the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
llvm-svn: 78646
and short. Well, it's kinda short. Definitely nasty and brutish.
The front-end generates the register/unregister calls into the SjLj runtime,
call-site indices and landing pad dispatch. The back end fills in the LSDA
with the call-site information provided by the front end. Catch blocks are
not yet implemented.
Built on Darwin and verified no llvm-core "make check" regressions.
llvm-svn: 78625
This patch takes pains to ensure all the PEI lowering code does the right thing when lowering frame indices, inserting code to manipulate stack pointers, etc. It also custom-lowers dynamic stack allocation into pseudo instructions so we can insert the right instructions at scheduling time.
This fixes PR4659 and PR4682.
llvm-svn: 78361
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerReturn, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between the target-independent portions and the target-dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
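As a rough stand-in for the new interface shape (the hook names are real;
the stripped-down signatures are not, since the actual ones carry
calling-convention, argument, and DAG parameters):

    // Targets override these instead of custom-lowering special nodes.
    struct TargetLoweringSketch {
      virtual ~TargetLoweringSketch() = default;
      virtual void LowerFormalArguments() = 0; // incoming arguments -> vregs
      virtual void LowerCall() = 0;            // outgoing call sequence
      virtual void LowerReturn() = 0;          // return value copies
    };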
llvm-svn: 78142
Get rid of yesterday's code to fix the register usage during isel.
Select the new DAG nodes to machine instructions. The new pre-alloc pass
to choose adjacent registers for these results is not done, so the
results of this will generally not assemble yet.
llvm-svn: 78136
instructions for calls since BL and BLX are always 32-bit long and BX is always
16-bit long.
Also, we should be using BLX to call external function stubs.
llvm-svn: 77756
it is highly specific to the object file that will be generated in the end,
this introduces a new TargetLoweringObjectFile interface that is implemented
for each of ELF/MachO/COFF/Alpha/PIC16 and XCore.
Though this is still a brutal and ugly refactoring, it is a major step
towards goodness.
This patch also:
1. fixes a bunch of dangling pointer problems in the PIC16 backend.
2. disables the TargetLowering copy ctor which PIC16 was accidentally using.
3. gets us closer to xcore having its own crazy target section flags and
pic16 not having to shadow sections with its own objects.
4. fixes weirdness where ELF targets would set CStringSection but not
CStringSection_. Factor the code better.
5. fixes some bugs in string lowering on ELF targets.
llvm-svn: 77294
Before:
adr r12, #LJTI3_0_0
ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
.long LBB3_24
.long LBB3_30
.long LBB3_31
.long LBB3_32
After:
adr r12, #LJTI3_0_0
add pc, r12, +r0, lsl #2
LJTI3_0_0:
b.w LBB3_24
b.w LBB3_30
b.w LBB3_31
b.w LBB3_32
This has several advantages.
1. This will make it easier to optimize this to a TBB / TBH instruction +
(smaller) table.
2. This eliminates the need for the ugly asm printer hack to force the
addresses into thumb addresses (bit 0 is one).
3. Same codegen for pic and non-pic.
4. This eliminates the need to align the table so the constantpool island pass
won't have to over-estimate the size.
Based on my calculation, the latter is probably slightly faster as well since
ldr pc with a shifter address is very slow. That is, it should be a win as long
as the HW implementation can do a reasonable job of branch-predicting the second
branch.
llvm-svn: 77024
This adds location info for all llvm_unreachable calls (which is a macro now) in
!NDEBUG builds.
In NDEBUG builds, location info and the message are omitted (it only prints
"UNREACHABLE executed").
llvm-svn: 75640
Make llvm_unreachable take an optional string, thus moving the cerr<< out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
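Typical usage looks like this (the surrounding function is hypothetical;
the message string is what gets dropped in NDEBUG builds):

    #include "llvm/Support/ErrorHandling.h"

    static int classify(int Kind) {
      switch (Kind) {
      case 0: return 1;
      case 1: return 2;
      }
      llvm_unreachable("unexpected Kind"); // message compiled out in NDEBUG
    }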
llvm-svn: 75379
With the SVR4 ABI on PowerPC, vector arguments for vararg calls are passed differently depending on whether they are a fixed or a variable argument: variable vector arguments always go into memory, while fixed vector arguments are put
into vector registers. If there are no free vector registers available, fixed vector arguments are put on the stack.
The NumFixedArgs attribute makes it possible to decide, for an argument in a vararg call, whether it belongs to the fixed or the variable portion of the parameter list.
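Concretely, for a prototype like the following (a hypothetical C++ example
using the vector extension), the first vector argument belongs to the fixed
portion and the second to the variable portion:

    typedef float v4f32 __attribute__((vector_size(16)));

    void f(v4f32 fixed, ...); // 'fixed' counts toward NumFixedArgs

    void call(v4f32 a, v4f32 b) {
      f(a, b); // 'a' -> vector register; 'b' is variable -> memory
    }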
llvm-svn: 74764
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.
This enables future work such as precise no-op placement and the like.
llvm-svn: 74564
to run last because it needs to know the exact size and position of every
basic block. Currently CodePlacementOpt is set up to run last. It might be
worthwhile to investigate reordering these passes, but for now, let's just
make it work.
llvm-svn: 72037
llvm.eh.sjlj.* for better clarity as to their purpose and scope. Add
a description of llvm.eh.sjlj.setjmp to ExceptionHandling.html.
(llvm.eh.sjlj.longjmp documentation coming when that implementation is
added).
llvm-svn: 71758
a supporting preliminary patch for GCC-compatible SjLJ exception handling. Note that these intrinsics are not designed to be invoked directly by the user, but
rather used by the front-end as target hooks for exception handling.
llvm-svn: 71610
between registers and the stack may be required with the APCS ABI, but it
isn't tied to using a particular version of the ARM architecture.
llvm-svn: 69978
chained and "flagged" together. I also made a few changes to handle the
chain and flag values more consistently. I found these problems by
inspection so I'm not aware of anything that breaks because of them
(thus no testcase).
llvm-svn: 69977
in the MachineFunction class, renaming it to addLiveIn for consistency with
the same method in MachineBasicBlock. Thanks to Anton for suggesting this.
llvm-svn: 69615
When compiling in Thumb mode, only the low (R0-R7) registers are available
for most instructions. Breaking the low registers into a new register class
handles this. Uses of R12, SP, etc., are handled explicitly where needed
with copies inserted to move results into low registers where the rest of
the code generator can deal with them.
llvm-svn: 68545
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with plain alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, so the offsets are no longer valid; nevertheless, the asm printer uses them to determine the size of padding between entries.
5. The asm printer uses an expensive data structure, a multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.
Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep a non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field. It's replaced with an alignment field. Offsets are not computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. Change JIT code to compute entry offsets on the fly.
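The log2 mix-up in problem 3 is easy to reproduce in isolation (a
hypothetical demonstration with made-up numbers, not from the patch):

    #include <cstdio>

    int main() {
      unsigned Log2Align = 4;  // log2 encoding: means 1 << 4 = 16 bytes
      unsigned Misused   = 16; // a raw 16 accidentally stored as log2
      std::printf("intended: %u bytes\n", 1u << Log2Align); // 16
      std::printf("misread:  %u bytes\n", 1u << Misused);   // 65536
    }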
llvm-svn: 66875
related transformations out of target-specific dag combine into the
ARM backend. These were added by Evan in r37685 with no testcases
and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).
Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently
with the recently added cp constant select optimization, but is a
very general xform. For example, we now compile the second example
in const-select.ll to:
_test:
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
seta %al
movzbl %al, %eax
movl 4(%esp), %ecx
movsbl (%ecx,%eax,4), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
leal 4(%eax), %ecx
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
cmovbe %eax, %ecx
movsbl (%ecx), %eax
ret
This passes multisource and dejagnu.
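The scalar identity being exploited, written out as plain C++ (a
hypothetical demo, not part of the patch):

    // Both functions compute the same value; the second avoids a branch or
    // conditional move by shifting the zero-extended condition.
    unsigned select8(bool cond) { return cond ? 8u : 0u; }
    unsigned shifted(bool cond) { return static_cast<unsigned>(cond) << 3; }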
llvm-svn: 66779
and extern_weak_odr. These are the same as the non-odr versions,
except that they indicate that the global will only be overridden
by an *equivalent* global. In C, a function with weak linkage can
be overridden by a function which behaves completely differently.
This means that IP passes have to skip weak functions, since any
deductions made from the function definition might be wrong, since
the definition could be replaced by something completely different
at link time. This is not allowed in C++, thanks to the ODR
(One-Definition-Rule): if a function is replaced by another at
link-time, then the new function must be the same as the original
function. If a language knows that a function or other global can
only be overridden by an equivalent global, it can give it the
weak_odr linkage type, and the optimizers will understand that it
is alright to make deductions based on the function body. The
code generators on the other hand map weak and weak_odr linkage
to the same thing.
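For example (hedged: the exact linkage chosen is up to the front end), a
C++ explicit template instantiation is a natural client of weak_odr, since
the ODR guarantees any other definition of it is equivalent:

    template <typename T> T square(T x) { return x * x; }

    // An explicit instantiation definition can be emitted as weak_odr:
    // another TU may also provide the symbol, but only an equivalent one.
    template int square<int>(int);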
llvm-svn: 66339
Many targets build placeholder nodes for special operands, e.g.
GlobalBaseReg on X86 and PPC for the PIC base. There's no
sensible way to associate debug info with these. I've left
them built with getNode calls with explicit DebugLoc::getUnknownLoc operands.
I'm not too happy about this but don't see a good improvement;
I considered adding a getPseudoOperand or something, but it
seems to me that'll just make it harder to read.
llvm-svn: 63992