define void @f() {
...
call i32 @g()
...
}
define void @g() {
...
}
The hazards are:
- @f and @g have GC, but their GCs differ. Inlining is invalid. This
may never occur.
- @f has no GC, but @g does. @g's GC must be propagated to @f.
The other scenarios are safe:
- @f and @g have the same GC.
- @f and @g have no GC.
- @g has no GC.
This patch adds inliner checks for the first two scenarios.
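A minimal C++ sketch of the two checks, using the collector API introduced
in r44769 below (the helper name and its placement in the inliner are
assumptions, not the actual patch):
#include "llvm/Function.h" // 2007-era header path
using namespace llvm;

static bool canInlineWithCollectors(Function &Caller, const Function &Callee) {
  if (!Callee.hasCollector())
    return true;                                // @g has no GC: safe
  if (!Caller.hasCollector()) {
    Caller.setCollector(Callee.getCollector()); // propagate @g's GC to @f
    return true;
  }
  // Same collector is safe; differing collectors make inlining invalid.
  return Caller.getCollector() == Callee.getCollector();
}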
llvm-svn: 45351
as on functions. Make it verify invokes and not just
ordinary calls. As a (desired) side-effect, it is no
longer legal to have call attributes on arguments that
are being passed to the varargs part of a varargs
function (llvm-as drops them on the floor anyway).
llvm-svn: 45286
return attributes on the floor. In the case of a call
to a varargs function where the varargs arguments are
being removed, any call attributes on those arguments
need to be dropped. I didn't do this because I plan to
make it illegal to have such attributes (see next patch).
With this change, compiling the gcc filter2 eh test at -O0
and then running opt -std-compile-opts on it results in
a correctly working program (compiling at -O1 or higher
results in the test failing due to a problem with how we
output eh info into the IR).
llvm-svn: 45285
calls 'nounwind'. It is important for correct C++
exception handling that nounwind markings do not get
lost, so this transformation is actually needed for
correctness.
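A hedged sketch of the marking step, written against today's API names
rather than the 2007 ones (the helper is hypothetical, not the actual patch):
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"

// If the callee is known not to unwind, record that on the call site
// itself so later passes cannot lose the nounwind marking.
static void markCallNoUnwind(llvm::CallInst &CI) {
  if (const llvm::Function *Callee = CI.getCalledFunction())
    if (Callee->doesNotThrow())
      CI.setDoesNotThrow();
}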
llvm-svn: 45218
calls. Remove special casing of inline asm from the
inliner. There is a potential problem: the verifier
rejects invokes of inline asm (not sure why). If an
asm call is not marked "nounwind" in some .ll, and
instcombine is not run, but the inliner is run, then
an illegal module will be created. This is bad but
I'm not sure what the best approach is. I'm tempted
to remove the check in the verifier...
llvm-svn: 45073
endianness of the target, not of the host. Done by the
simple expedient of reversing bytes for primitive types
if the host and target endianness don't match. This is
correct for integer and pointer types. I don't know if
it is correct for floating point types.
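A self-contained sketch of that expedient (the helper name and parameters
are invented for illustration):
#include <algorithm>
#include <cstddef>

// Copy a primitive value, reversing its bytes when host and target
// endianness disagree; the reverse is skipped when they match.
static void storeWithTargetEndianness(const void *Src, void *Dst, size_t Size,
                                      bool HostIsLittle, bool TargetIsLittle) {
  const char *S = static_cast<const char *>(Src);
  char *D = static_cast<char *>(Dst);
  std::copy(S, S + Size, D);
  if (HostIsLittle != TargetIsLittle)
    std::reverse(D, D + Size);
}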
llvm-svn: 45039
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.
llvm-svn: 44954
2. Using the zero-extended value of Scale and unsigned division is safe provided
that Scale doesn't have the sign bit set.
Previously these 2 instructions:
%p = bitcast [100 x {i8,i8,i8}]* %x to i8*
%q = getelementptr i8* %p, i32 -4
were combined into:
%q = getelementptr [100 x { i8, i8, i8 }]* %x, i32 0,
i32 1431655764, i32 0
which was incorrect.
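The bogus index above is exactly the zero-extended offset divided unsigned
by the element size; a small standalone program reproduces it:
#include <cstdint>
#include <cstdio>

int main() {
  int32_t Offset = -4;  // byte offset from the getelementptr
  uint32_t EltSize = 3; // size of {i8,i8,i8}
  uint32_t BadIndex = uint32_t(Offset) / EltSize;
  printf("%u\n", BadIndex); // prints 1431655764
  return 0;
}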
llvm-svn: 44936
regions of memory that have a target-specific relationship, as described in the
Embedded C Technical Report.
This also implements the 2007-12-11-AddressSpaces test,
which demonstrates how address space attributes can be used in LLVM IR.
In addition, this patch changes the bitcode signature for stores (in a backwards
compatible manner), such that the pointer type, rather than the pointee type, is
encoded. This permits type information in the pointer (e.g. address space) to be
preserved for stores.
LangRef updates are forthcoming.
llvm-svn: 44858
possible before resorting to pextrw and pinsrw.
- Better codegen for v4i32 shuffles masquerading as v8i16 or v16i8 shuffles.
- Improves (i16 extract_vector_element 0) codegen by recognizing that
(i32 extract_vector_element 0) does not require a pextrw.
llvm-svn: 44836
Thompson. Usage should be something like this:
open Llvm
open Llvm_bitreader

match read_bitcode_file fn with
| Bitreader_failure msg ->
    prerr_endline msg
| Bitreader_success m ->
    ...;
    dispose_module m
Compile with: ocamlc llvm.cma llvm_bitreader.cma
ocamlopt llvm.cmxa llvm_bitreader.cmxa
llvm-svn: 44824
Reimplement the xform in Analysis/ConstantFolding.cpp where we can use
targetdata to validate that it is safe. While I'm in there, fix some const
correctness issues and generalize the interface to the "operand folder".
llvm-svn: 44817
using the minimum possible number of bytes. For little
endian targets run on little endian machines, apints are
stored in memory from LSB to MSB as before. For big endian
targets on big endian machines they are stored from MSB to
LSB, which wasn't always the case before (if the target and
host endianness didn't match, values were stored according
to the host's endianness). Doing this requires knowing the
endianness of the host, which is determined when configuring -
thanks go to Anton for this. Only having access to little
endian machines I was unable to properly test the big endian
part, which is also the most complicated...
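A rough sketch of the storage rule (hypothetical helper; the real code
operates on APInt words and the configure-detected host endianness):
#include <cstdint>
#include <vector>

// Store the low BitWidth bits of Val in the minimum number of bytes:
// LSB-first for little endian targets, MSB-first for big endian ones.
static std::vector<uint8_t> storeTargetBytes(uint64_t Val, unsigned BitWidth,
                                             bool TargetIsLittle) {
  unsigned NumBytes = (BitWidth + 7) / 8;
  std::vector<uint8_t> Bytes(NumBytes);
  for (unsigned i = 0; i != NumBytes; ++i) {
    unsigned Idx = TargetIsLittle ? i : NumBytes - 1 - i;
    Bytes[Idx] = uint8_t(Val >> (8 * i)); // byte i, counted from the LSB
  }
  return Bytes;
}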
llvm-svn: 44796
methods are new to Function:
bool hasCollector() const;
const std::string &getCollector() const;
void setCollector(const std::string &);
void clearCollector();
The assembly representation is as such:
define void @f() gc "shadow-stack" { ...
The implementation uses an on-the-side table to map Functions to
collector names, such that there is no overhead. A StringPool is
further used to unique collector names, which are extremely
likely to be unique per process.
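A short usage sketch (this API was later renamed to
hasGC/getGC/setGC/clearGC, so treat the calls as historical):
void example(llvm::Function &F) {
  F.setCollector("shadow-stack"); // the name is uniqued in the StringPool
  if (F.hasCollector() && F.getCollector() == "shadow-stack") {
    // the printer emits: define void @f() gc "shadow-stack" { ...
  }
  F.clearCollector(); // drops the on-the-side table entry
}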
llvm-svn: 44769
_foo:
movl $12, %eax
andl 4(%esp), %eax
movl _array(%eax), %eax
ret
instead of:
_foo:
movl 4(%esp), %eax
shrl $2, %eax
andl $3, %eax
movl _array(,%eax,4), %eax
ret
As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:
- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax
- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx
Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:
- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp
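For reference, a source-level function that produces the first pattern above
might look like this (a hypothetical reconstruction; array shows up as
_array in the Darwin asm):
extern int array[4];

int foo(unsigned x) {
  // index (x >> 2) & 3 scaled by 4 equals byte offset x & 12,
  // so the shift folds away entirely
  return array[(x >> 2) & 3];
}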
llvm-svn: 44656
the function type; instead they belong to functions
and function calls. This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully
a bitcode guru (who might that be? :) ) will fix it.
llvm-svn: 44359
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.
llvm-svn: 44341
trivial difference in function attributes, allow calls to it to
be converted to direct calls. Based on a patch by Török Edwin.
While there, move the various lists of mutually incompatible
parameters etc out of the verifier and into ParameterAttributes.h.
llvm-svn: 44315
sometimes emit "zero" and "all one" vectors multiple times,
for example:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
pcmpeqd %mm0, %mm0
movq %mm0, _M2
ret
instead of:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
movq %mm0, _M2
ret
This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type. This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.
This patch makes the following changes:
1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
immAllZerosV in the wrong form now use *_bc to match them with a
bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle
bitcast'd zero vectors, which actually simplifies the code.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
is legal, instead of generating one that is illegal and expecting
a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.
This patch is definite goodness, but has the potential to cause random
code quality regressions. Please be on the lookout for these and let
me know if they happen.
llvm-svn: 44310
OnlyReadsMemoryFns tables are dead! We
get more, and more accurate, information
from gcc via the readnone and readonly
function attributes.
llvm-svn: 44288
node A gets back into the DAG again because it was hiding in
one of the node maps: make sure that node replacement happens
in those maps too.
llvm-svn: 44263
or getTypeSizeInBits as appropriate in ScalarReplAggregates.
The right change to make was not always obvious, so it would
be good to have an sroa guru review this. While there I noticed
some bugs, and fixed them: (1) arrays of x86 long double have
holes due to alignment padding, but this wasn't being spotted
by HasStructPadding (renamed to HasPadding). The same goes
for arrays of oddly sized ints. Vectors also suffer from this;
in fact the problem for vectors is much worse because basic
vector assumptions seem to be broken by vector types with
alignment padding. I didn't try to fix any of these vector
problems. (2) The code for extracting smaller integers from
larger ones (in the "int union" case) was wrong on big-endian
machines for integers with size not a multiple of 8, like i1.
Probably this is impossible to hit via llvm-gcc, but I fixed
it anyway while there and added a testcase. I also got rid of
some trailing whitespace and changed a function name which
had an obvious typo in it.
llvm-svn: 43672
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86 mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
llvm-svn: 43662
transformation. Previously, it was restricted by requiring that the load have
exactly one use. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
llvm-svn: 43465
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on
unaligned pointers.
llvm-svn: 43398
and the comparison is against a constant value, try to eliminate the stride
by moving the compare instruction to another stride and changing its
constant operand accordingly. e.g.
loop:
...
v1 = v1 + 3
v2 = v2 + 1
if (v2 < 10) goto loop
=>
loop:
...
v1 = v1 + 3
if (v1 < 30) goto loop
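At the source level the rewrite looks roughly like this (hypothetical C++,
assuming both induction variables start at 0; the constant 10 is scaled by
the stride ratio 3/1 to give 30):
void body();

void before() {
  for (int v1 = 0, v2 = 0; v2 < 10; v1 += 3, v2 += 1)
    body(); // exit test on the stride-1 IV
}

void after() {
  for (int v1 = 0; v1 < 30; v1 += 3)
    body(); // same trip count; v2 dies if it has no other uses
}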
llvm-svn: 43336