This fix is very lightweight. The same fix already existed for AddRec
but was missing for NAry expressions.
This is obviously an improvement, but I'm unsure how to test compile-time
problems.
Patch by Xiaoyi Guo!
llvm-svn: 187475
For a testcase like the following:
typedef unsigned long uint64_t;
typedef struct {
  uint64_t lo;
  uint64_t hi;
} blob128_t;

void add_128_to_128(const blob128_t *in, blob128_t *res) {
  asm ("PAND %1, %0" : "+Q"(*res) : "Q"(*in));
}
where we'll fail to allocate the register for the output constraint,
our matching input constraint will not find a register to match,
and could try to search past the end of the current operands array.
On the idea that we'd like to attempt to keep compilation going
to find more errors in the module, change the error cases when
we're visiting inline asm IR to return immediately and avoid
trying to create a node in the DAG. This leaves us with only
a single error message per inline asm instruction, but allows us
to safely keep going in the general case.
llvm-svn: 187470
One form would accept a vector of pointers, and the other did not.
Make both accept vectors of pointers, and add an assertion
for the number of elements.
llvm-svn: 187464
The unix one was returning no_such_file_or_directory, but the windows one
was returning success.
Update the one caller that was depending on the old behavior.
llvm-svn: 187463
This avoids constant folding bitcast/ptrtoint/inttoptr combinations
that have illegal bitcasts between differently sized address spaces.
llvm-svn: 187455
Call into ComputeMaskedBits to figure out which bits are set on both add
operands and determine if the value is a power-of-two-or-zero or not.
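As a rough illustration of the terminology (plain C, the helper name is mine and
this is not the in-tree code): a value is "power-of-two-or-zero" exactly when
clearing its lowest set bit leaves zero, and an add whose operands share no set
bits behaves like an or, which is what lets known-bit information about the
operands carry over to the sum.

  #include <stdbool.h>
  #include <stdint.h>

  /* Power of two or zero: at most one bit set. */
  static bool isPowerOfTwoOrZero(uint64_t v) {
    return (v & (v - 1)) == 0;
  }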
llvm-svn: 187445
It will now only convert the arguments / return value and call
the underlying function if the types are able to be bitcasted.
This avoids using fp<->int conversions that would occur before.
llvm-svn: 187444
When registers must be live throughout the scheduling region, increase
the limit for the register class. Once we exceed the original limit,
they will be spilled, and there's no point further reducing pressure.
This isn't a perfect heuristic, but it avoids a situation where the
scheduler could become trapped by trying to achieve the impossible.
llvm-svn: 187436
When simplifying a (or (and B A) (and C ~A)) to a (VBSL A B C), ensure that the
bitwidths of the second operands to both ands match before comparing the negation
of the values.
Split the check of the value of the second operands to the ands. Move the cast
and variable declaration slightly higher to make it slightly easier to follow.
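For context only (illustrative C; the function name is mine, not from the patch),
the (or (and B A) (and C ~A)) pattern is the bitwise select that VBSL implements:

  #include <stdint.h>

  /* Per-bit select: take the bit from b where a is set, from c where a is clear. */
  static uint32_t bitwise_select(uint32_t a, uint32_t b, uint32_t c) {
    return (b & a) | (c & ~a);
  }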
Bug-Id: 16700
Signed-off-by: Saleem Abdulrasool <compnerd@compnerd.org>
llvm-svn: 187404
This is the first of many upcoming patches for PowerPC fast
instruction selection support. This patch implements the minimum
necessary for a functional (but extremely limited) FastISel pass. It
allows the table-generated portions of the selector to be created and
used, but in most cases selection will fall back to the DAG selector.
None of the block terminator instructions are implemented yet, and
most interesting instructions require some special handling.
Therefore there aren't any new test cases with this patch. There will
be quite a few tests coming with future patches.
This patch adds the make/CMake support for the new code (including
tablegen -gen-fast-isel) and creates the FastISel object for PPC64 ELF
only. It instantiates the necessary virtual functions
(TargetSelectInstruction, TargetMaterializeConstant,
TargetMaterializeAlloca, tryToFoldLoadIntoMI, and FastLowerArguments),
but of these, only TargetMaterializeConstant contains any useful
implementation. This is present since the table-generated code
requires the ability to materialize integer constants for some
instructions.
This patch has been tested by building and running the
projects/test-suite code with -O0. All tests passed with the
exception of a couple of long-running tests that time out using -O0
code generation.
llvm-svn: 187399
This patch prevents the following combine when the input vector is used more
than once.
insert_vector_elt (build_vector elt0, ..., eltN), NewEltIdx, idx
=>
build_vector elt0, ..., NewEltIdx, ..., eltN
The reasons are:
- Building a vector may be expensive, so try to reuse the existing part of a
vector instead of creating a new one (think big vectors).
- elt0 to eltN now have two users instead of one. This may prevent some other
optimizations.
llvm-svn: 187396
update testcase to make sure we generate debug info for walrus
by adding a non-trivial constructor and verify that we don't
emit an ODR signature for the type.
llvm-svn: 187393
32-bit symbols have "_" as a global prefix, but when forming the name of
COMDAT sections this prefix is ignored. The current behavior assumes that
this prefix is always present, which is not the case for 64-bit, and names
are truncated.
llvm-svn: 187356
If no other operation is specified, 's' becomes an operation instead of a
modifier. The s operation just creates a symbol table. It is the same as
running ranlib.
We assume the archive was created by a sane ar (like llvm-ar or gnu ar) and
if the symbol table is present, then it is current. We use that to optimize
the most common case: a broken build system that thinks it has to run ranlib.
llvm-svn: 187353
infrastructure to do promotion without a domtree the same smarts about
looking through GEPs, bitcasts, etc., that I just taught mem2reg about.
This way, if SROA chooses to promote an alloca which still has some
noisy instructions this code can cope with them.
I've not used as principled of an approach here for two reasons:
1) This code doesn't really need it as we were already set up to zip
through the instructions used by the alloca.
2) I view the code here as more of a hack, and hopefully a temporary one.
The SSAUpdater path in SROA is a real sore point for me. It doesn't make
a lot of architectural sense for many reasons:
- We're likely to end up needing the domtree anyways in a subsequent
pass, so why not compute it earlier and use it.
- In the future we'll likely end up needing the domtree for parts of the
inliner itself.
- If we need to we could teach the inliner to preserve the domtree. Part
of the re-work of the pass manager will allow this to be very powerful
even in large SCCs with many functions.
- Ultimately, computing a domtree has gotten significantly faster since
the original SSAUpdater-using code went into ScalarRepl. We no longer
use domfrontiers, and much of domtree is lazily done based on queries
rather than eagerly.
- At this point keeping the SSAUpdater-based promotion saves a total of
0.7% on a build of the 'opt' tool for me. That's not a lot of
performance given the complexity!
So I'm leaving this a bit ugly in the hope that eventually we just
remove all of this nonsense.
I can't even readily test this because this code isn't reachable except
through SROA. When I re-instate the patch that fast-tracks allocas
already suitable for promotion, I'll add a testcase there that failed
before this change. Before that, SROA will fix any test case I give it.
llvm-svn: 187347
standards for LLVM. Remove duplicated comments on the interface from the
implementation file (implementation comments are left there of course).
Also clean up, re-word, and fix a few typos and errors in the comments
spotted along the way.
This is in preparation for changes to these files and to keep the
uninteresting tidying in a separate commit.
llvm-svn: 187335
uses of an alloca, we can pre-compute promotability while analyzing an
alloca for splitting in SROA. That lets us short-circuit the common case
of a bunch of trivially promotable allocas. This cuts 20% to 30% off the
run time of SROA for typical frontend-generated IR sequences I'm seeing.
It gets the new SROA to within 20% of ScalarRepl for such code. My
current benchmark for these numbers is PR15412, but it fits the general
pattern of IR emitted by Clang so it should be widely applicable.
llvm-svn: 187323
The tests !defined(__ppc__) && !defined(__powerpc__) are not needed
or helpful when verifying that code is being compiled for a 64-bit
target. The simpler test provided by this revision is sufficient to
tell if the target is 64-bit.
llvm-svn: 187318
IEEE-754R 1.4 Exclusions states that IEEE-754R does not specify the
interpretation of the sign of NaNs. In order to remove an irrelevant
variable that most floating point implementations do not use,
standardize add, sub, mul, div, mod so that operating anything with
NaN always yields a positive NaN.
In a later commit I am going to update the APIs for creating NaNs so
that one cannot even create a negative NaN.
llvm-svn: 187314
Zeroing the significand of a floating point number does not necessarily cause it
to become finite and non-zero. For instance, if one has a NaN, zeroing the
significand will cause it to become +/- infinity.
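A small C illustration of the point (the helper name and bit handling are mine;
this assumes IEEE-754 binary64): a NaN has an all-ones exponent and a non-zero
significand, so clearing the 52 significand bits leaves the sign and all-ones
exponent, i.e. +/- infinity rather than a finite value.

  #include <stdint.h>
  #include <string.h>

  static double clearSignificand(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    bits &= ~((1ULL << 52) - 1);   /* keep only the sign and exponent bits */
    memcpy(&d, &bits, sizeof d);
    return d;
  }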
llvm-svn: 187313
do in the SDag when lowering references to the GOT: use
ARMConstantPoolSymbol rather than creating a dummy global variable. The
computation of the alignment still feels weird (it uses IR types and
datalayout) but it preserves the exact previous behavior. This change
fixes the memory leak of the global variable detected on the valgrind
leak checking bot.
Thanks to Benjamin Kramer for pointing me at ARMConstantPoolSymbol to
handle this use case.
llvm-svn: 187303
me) should start watching this bot more as it's catching lots of bugs.
The fix here is to not construct the global if we aren't going to need
it. That's cheaper anyways, and globals have highly predictable types in
practice. I've added an assert to catch skew between our manual testing
of the type and the actual type just for paranoia's sake.
Note that this pattern is actually fine in most globals because when you
build a global with a module it automatically is moved to be owned by
that module. But here, we're in isel and don't really want to do that.
The solution of not creating a global is simpler anyways.
llvm-svn: 187302
There doesn't appear to be any reason to put this variable on the heap.
I'm suspicious of the LexicalScope above that we stuff in a map and then
delete afterward, but I'm just trying to get the valgrind bot clean.
llvm-svn: 187301
than once, and the second time through we leaked memory. Found thanks to
the vg-leak bot, but I can't locally reproduce it with valgrind. The
debugger confirms that it is in fact leaking here.
This whole code is totally gross. Why is initialize being called on each
runOnFunction??? Why aren't these OwningPtr<>s, and why aren't their
lifetimes better defined? Anyways, this is just a surgical change to
help out the leak checking bots.
llvm-svn: 187299
their being optimized out in debug mode. Realistically, this just isn't
going to be the slow part anyways. This also fixes unused variable
warnings that are breaking LLD build bots. =/ I didn't see these at
first, and kept losing track of the fact that they were broken.
llvm-svn: 187297
analysis of the alloca. We don't need to visit all the users twice for
this. We build up a kill list during the analysis and then just process
it afterward. This recovers the tiny bit of performance lost by moving
to the visitor based analysis system as it removes one entire use-list
walk from mem2reg. In some cases, this is now faster than mem2reg was
previously.
llvm-svn: 187296
Also always add DIType, DISubprogram and DIGlobalVariable to the list
in DebugInfoFinder without checking them, so we can verify them later
on.
llvm-svn: 187285
Adds unit tests for it too.
Split BasicBlockUtils into an analysis-half and a transforms-half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG.
llvm-svn: 187283
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation
is guarded by a target hook, and is currently enabled only for +R600,
but the correctness has been tested on X86 target using a variety of
CPU benchmarks.
Patch by: Mei Ye
llvm-svn: 187278
This change makes tests with RUN lines like
RUN: opt ... | FileCheck
fail if opt fails, even if it prints what FileCheck wants. Enabling this
found some interesting cases of broken tests that were not being noticed
because opt (or some other tool) was crashing late.
Pipefail is used when the shell supports it or when using the internal
python based tester.
llvm-svn: 187261
Both GCC and LLVM will implicitly define __ppc__ and __powerpc__ for
all PowerPC targets, whether 32- or 64-bit. They will both implicitly
define __ppc64__ and __powerpc64__ for 64-bit PowerPC targets, and not
for 32-bit targets. We cannot be sure that all other possible
compilers used to compile Clang/LLVM define both __ppc__ and
__powerpc__, for example, so it is best to check for both when relying
on either inside the Clang/LLVM code base.
This patch makes sure we always check for both variants. In addition,
it fixes one unnecessary check in lib/Target/PowerPC/PPCJITInfo.cpp.
(At least one of __ppc__ and __powerpc__ should always be defined when
compiling for a PowerPC target, no matter which compiler is used, so
testing for them is unnecessary.)
There are some places in the compiler that check for other variants,
like __POWERPC__ and _POWER, and I have left those in place. There is
no need to add them elsewhere. This seems to be in Apple-specific
code, and I won't take a chance on breaking it.
There is no intended change in behavior; thus, no test cases are
added.
llvm-svn: 187248
We used to call Verify before adding DICompileUnit to the list, and now we
remove the check and always add DICompileUnit to the list in DebugInfoFinder,
so we can verify them later on.
llvm-svn: 187237
These were reverted in r167222 along with the rest
of the last different address space pointer size attempt.
These will be used in later commits.
llvm-svn: 187223
type units.
Initially this support is used in the computation of an ODR checker
for C++. For now we're attaching it to the DIE, but in the future
it will be attached to the type unit.
This also starts breaking out types into the separation for type
units, but without actually splitting the DIEs.
In preparation for hashing the DIEs this adds a DIEString type
that contains a StringRef with the string contained at the label.
llvm-svn: 187213
On Windows, this improves clean cmake configuration time on my
workstation from 1m58s to 1m32s, which is pretty significant. There's
probably more that can be done here, but this is the low hanging fruit.
Eric volunteered to regenerate ./configure for me.
llvm-svn: 187209
* Remove LLVM_ENABLE_CRT_REPORT. LLVM_DISABLE_CRASH_REPORT made it redundant.
* set Return to 1, so that we get a stack trace on failure.
* don't call _exit, so that we get a negative exit value and "not --crash"
correctly differentiates crashes and regular errors.
This is a bit experimental since the documentation on this interface is sparse.
It doesn't bring up a dialog on my windows setup, but feel free to revert
if it causes problem for your setup (and let me know what it is so that I
can try to fix this patch).
llvm-svn: 187206
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
Attempt to fix the buildbots by making the X86 test I just added platform independent
llvm-svn: 187202
This reverts commit 187198. It broke the bots.
The soft float test probably needs a -triple because of name differences.
On the hard float test I am getting a "roundss $1, %xmm0, %xmm0", instead of
"vroundss $1, %xmm0, %xmm0, %xmm0".
llvm-svn: 187201
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
llvm-svn: 187198
robust. It now uses an InstVisitor and worklist to actually walk the
uses of the Alloca transitively and detect the pattern which we can
directly promote: loads & stores of the whole alloca and instructions we
can completely ignore.
Also, with this new implementation teach both the predicate for testing
whether we can promote and the promotion engine itself to use the same
code so we no longer have strange divergence between the two code paths.
I've added some silly test cases to demonstrate that we can handle
slightly more degenerate code patterns now. See below for why this
is even interesting.
Performance impact: roughly 1% regression in the performance of SROA or
ScalarRepl on a large C++-ish test case where most of the allocas are
basically ready for promotion. The reason is because of silly redundant
work that I've left FIXMEs for and which I'll address in the next
commit. I wanted to separate this commit as it changes the behavior.
Once the redundant work in removing the dead uses of the alloca is
fixed, this code appears to be faster than the old version. =]
So why is this useful? Because the previous requirement for promotion
required a *specific* visit pattern of the uses of the alloca to verify:
we *had* to look for no more than 1 intervening use. The end goal is to
have SROA automatically detect when an alloca is already promotable and
directly hand it to the mem2reg machinery rather than trying to
partition and rewrite it. This is a 25% or more performance improvement
for SROA, and a significant chunk of the delta between it and
ScalarRepl. To get there, we need to make mem2reg actually capable of
promoting allocas which *look* promotable to SROA without having SROA do
tons of work to massage the code into just the right form.
This is actually the tip of the iceberg. There are tremendous potential
savings we can realize here by de-duplicating work between mem2reg and
SROA.
llvm-svn: 187191
The bitcode representation that attribute kinds are encoded into / decoded from
should be independent of the current set of LLVM attributes and their position
in the AttrKind enum. This patch explicitly encodes attributes to fixed bitcode
values.
With this patch applied, LLVM does not silently misread attributes written by
LLVM 3.3. We also enhance the decoding slightly such that an error message is
printed if an unknown AttrKind encoding was detected.
Bonus: Dropping bitcode attributes from AttrKind is now easy, as old AttrKinds
do not need to be kept to support the Bitcode reader.
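A hypothetical sketch of the idea in C (the enum name and the numeric values
below are made up for illustration; the real codes live in the bitcode
reader/writer): the on-disk encodings are fixed constants, decoupled from the
in-memory AttrKind enum, so reordering or dropping AttrKinds cannot silently
change what old bitcode means.

  /* Hypothetical fixed on-disk codes; never renumbered even if the in-memory
     AttrKind enum changes. */
  enum BitcodeAttributeEncoding {
    BC_ATTR_KIND_ALIGNMENT = 1,
    BC_ATTR_KIND_NO_INLINE = 2,
    BC_ATTR_KIND_NO_RETURN = 3
  };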
llvm-svn: 187186
This patch provides basic support for powerpc64le as an LLVM target.
However, use of this target will not actually generate little-endian
code. Instead, use of the target will cause the correct little-endian
built-in defines to be generated, so that code that tests for
__LITTLE_ENDIAN__, for example, will be correctly parsed for
syntax-only testing. Code generation will otherwise be the same as
powerpc64 (big-endian), for now.
The patch leaves open the possibility of creating a little-endian
PowerPC64 back end, but there is no immediate intent to create such a
thing.
The LLVM portions of this patch simply add ppc64le coverage everywhere
that ppc64 coverage currently exists. There is nothing of any import
worth testing until such time as little-endian code generation is
implemented. In the corresponding Clang patch, there is a new test
case variant to ensure that correct built-in defines for little-endian
code are generated.
llvm-svn: 187179
Back in r140220 we removed the autoconf code that would set LLVMCC_OPTION
since it was only used by the test-suite. This patch now removes code
that would only be used if LLVMCC_OPTION was set.
llvm-svn: 187154
The previous change to local live range allocation also suppressed
eviction of local ranges. In rare cases, this could result in more
expensive register choices. This commit actually revives a feature
that I added long ago: check if live ranges can be reassigned before
eviction. But now it only happens in rare cases of evicting a local
live range because another local live range wants a cheaper register.
The benefit is improved code size for some benchmarks on x86 and armv7.
I measured no significant compile time increase and performance
changes are noise.
llvm-svn: 187140
Also avoid locals evicting locals just because they want a cheaper register.
Problem: MI Sched knows exactly how many registers we have and assumes
they can be colored. In cases where we have large blocks, usually from
unrolled loops, greedy coloring fails. This is a source of
"regressions" from the MI Scheduler on x86. I noticed this issue on
x86 where we have long chains of two-address defs in the same live
range. It's easy to see this in matrix multiplication benchmarks like
IRSmk and even the unit test misched-matmul.ll.
A fundamental difference between the LLVM register allocator and
conventional graph coloring is that in our model a live range can't
discover its neighbors, it can only verify its neighbors. That's why
we initially went for greedy coloring and added eviction to deal with
the hard cases. However, for singly defined and two-address live
ranges, we can optimally color without visiting neighbors simply by
processing the live ranges in instruction order.
Other beneficial side effects:
It is much easier to understand and debug regalloc for large blocks
when the live ranges are allocated in order. Yes, global allocation is
still very confusing, but it's nice to be able to comprehend what
happened locally.
Heuristics could be added to bias register assignment based on
instruction locality (think late register pairing, banks...).
Intuitively this will make some test cases that are on the threshold
of register pressure more stable.
llvm-svn: 187139
For two intrinsics 'llvm.nvvm.texsurf.handle' and 'llvm.nvvm.texsurf.handle.internal',
TableGen was emitting matching code like:
if (Name.startswith("llvm.nvvm.texsurf.handle")) ...
if (Name.startswith("llvm.nvvm.texsurf.handle.internal")) ...
We can never match "llvm.nvvm.texsurf.handle.internal" here because it will
always be erroneously matched by the first condition.
The fix is to sort the intrinsic names and emit them in reverse order.
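Illustrative C only (the helper is mine; the intrinsic names are from the
message): with prefix matching, the longer name has to be tested first or it can
never be reached.

  #include <string.h>

  static int classify(const char *Name) {
    /* The longer, more specific prefix must come first. */
    if (strncmp(Name, "llvm.nvvm.texsurf.handle.internal",
                strlen("llvm.nvvm.texsurf.handle.internal")) == 0)
      return 2;
    if (strncmp(Name, "llvm.nvvm.texsurf.handle",
                strlen("llvm.nvvm.texsurf.handle")) == 0)
      return 1;
    return 0;
  }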
llvm-svn: 187119
Before the patch we took advantage of the fact that the compare and
branch are glued together in the selection DAG and fused them together
(where possible) while emitting them. This seemed to work well in practice.
However, fusing the compare so early makes it harder to remove redundant
compares in cases where CC already has a suitable value. This patch
therefore uses the peephole analyzeCompare/optimizeCompareInstr pair of
functions instead.
No behavioral change intended, but it paves the way for a later patch.
llvm-svn: 187116
These instructions are allowed to trap even if the condition is false,
so for now they are only used for "*ptr = (cond ? x : *ptr)"-style
constructs.
llvm-svn: 187111
Make sure the context and type fields are MDNodes. We will generate
verification errors if those fields are non-empty strings.
Fix testing cases to make them pass the verifier.
llvm-svn: 187106
The language reference says that:
"If a symbol appears in the @llvm.used list, then the compiler,
assembler, and linker are required to treat the symbol as if there is
a reference to the symbol that it cannot see"
Since even the linker cannot see the reference, we must assume that
the reference can be using the symbol table. For example, a user can add
__attribute__((used)) to a debug helper function like dump and use it from
a debugger.
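For instance (illustrative C; the function name is made up), a helper that
nothing in the program calls but that must survive for use from a debugger:

  /* No code references this, so without the attribute it could be dropped;
     the only "reference" is a user typing its name into a debugger. */
  __attribute__((used)) static void dump_state(void) {
    /* print internal state here */
  }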
llvm-svn: 187103
There's no need to specify a flag to omit frame pointer elimination on non-leaf
nodes...(Honestly, I can't parse that option out.) Use the function attribute
stuff instead.
llvm-svn: 187093
Prior to this patch, IfConverter could widen the cases where a sequence of
instructions was executed because of the way it uses nested predicates. This
results in incorrect execution.
For instance, let A be a basic block that flows conditionally into B, and let B be
a predicated block.
B can be predicated with A.BrToBPredicate into A iff B.Predicate is less
"permissive" than A.BrToBPredicate, i.e., iff A.BrToBPredicate subsumes
B.Predicate.
The IfConverter was checking the opposite: B.Predicate subsumes
A.BrToBPredicate.
<rdar://problem/14379453>
llvm-svn: 187071
Before this patch we would strdup each argument. If one was a response file,
we would replace it with the response file contents, leaking the original
strdup result.
We now don't strdup the originals and let StringSaver free any memory it
allocated. This also saves a bit of malloc traffic when response files are
not used.
Leak found by the valgrind build bot.
llvm-svn: 187042
The Binary constructor takes ownership of the memory buffer. This is a fairly
unfortunate interface, but for now make createObjectFile consistent with it
by also deleting the buffer if it fails.
Fixes a leak in llvm-ar found by the valgrind bots.
llvm-svn: 187039
schedule an alloca for another iteration in SROA. This only showed up
with a mixture of promotable and unpromotable selects and phis. Added
a test case for this.
llvm-svn: 187031
pending speculation for a phi node. The problem here is that we were
using growth of the speculation set as an indicator of whether
speculation would occur, and if the phi node is already in the set we
don't see it grow. This is a symptom of the fact that this signal is
a total hack.
Unfortunately, I couldn't really come up with a non-hacky way of
signaling that promotion remains valid *after* speculation occurs, such
that we only speculate when all else looks good for promotion. In the
end, I went with at least a much more explicit approach of doing the
work of queuing inside the phi and select processing and setting
a preposterously named flag to convey that we're in the special state of
requiring speculating before promotion.
Thanks to Richard Trieu and Nick Lewycky for the excellent work reducing
a testcase for this from a pretty giant, nasty assert in a big
application. =] The testcase was excellent.
llvm-svn: 187029
This removes the need to store the asm variant in each row of the single table that existed before. Shaves ~16K off the size of X86AsmParser.o.
llvm-svn: 187026
Similar to ARM change r182800, the dynamic linker will read bits/addends from
the original object rather than from the object that might have been patched
previously. For the purpose of relocations for MCJIT stubs on MIPS, we
internally use otherwise unused MIPS relocations.
The change also enables MCJIT unit tests for MIPS (EL/BE), and the following
two tests now pass:
- MCJITTest.return_global and
- MCJITTest.multiple_functions.
These issues have been tracked as Bug 16250.
Patch by Petar Jovanovic.
llvm-svn: 187019
These are really the same address space in hardware. The only
difference is that CONSTANT_ADDRESS uses a special cache for faster
access. When we are unable to use the constant kcache for some reason
(e.g. smaller types or lack of indirect addressing) then the instruction
selector must use GLOBAL_ADDRESS loads instead.
llvm-svn: 187006
When vectors are built from a single value, the ARM lowering issues a
scalar_to_vector node.
This node is then always morphed into a move from the general purpose unit to
the vector unit.
When the value comes from a load, this can be simplified into a vector load to
the right lane.
This patch changes the lowering of insert_vector_elt to expose a vector
friendly pattern in this situation.
This is a step toward fixing <rdar://problem/14170854>.
llvm-svn: 186999
The main observation is that we never need both the filesize and the map size.
When mapping a slice of a file, it doesn't make sense to request a null
terminator and that would be the only case where the filesize would be used.
There are other cleanups that should be done in this area:
* A client should not have to pass the size (even an explicit -1) to say if
it wants a null terminator or not, so we should probably swap the argument
order.
* The default should be to not require a null terminator. Very few clients
require this, but many end up asking for it just because it is the default.
llvm-svn: 186984
The gold plugin was passing the desired map size as the file size. This was
working for two reasons:
* Recent version of gold provide the get_view callback, so this code was not
used.
* In older versions, getOpenFile was called, but the file size is never used
if we don't require null terminated buffers and map size defaults to the
file size.
Thanks to Eli Bendersky for noticing this.
I will try to make this api a bit less error prone.
llvm-svn: 186978
The symbol table has forward references in the file. Instead of allocating
a temporary buffer or counting the size and then writing, this implementation
writes a dummy value first and patches it once the final value is known.
There is room for performance improvement. I will implement them as soon as I
get some other features (like a ranlib mode) in.
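A minimal backpatching sketch in plain C (standard I/O only; the helper names
are mine, this is not the llvm-ar code): write a placeholder where the forward
reference goes, then seek back and overwrite it once the real value is known.

  #include <stdint.h>
  #include <stdio.h>

  static long writePlaceholder(FILE *f) {
    long pos = ftell(f);
    uint32_t dummy = 0;
    fwrite(&dummy, sizeof dummy, 1, f);
    return pos;
  }

  static void patchPlaceholder(FILE *f, long pos, uint32_t value) {
    long end = ftell(f);
    fseek(f, pos, SEEK_SET);
    fwrite(&value, sizeof value, 1, f);   /* overwrite the dummy in place */
    fseek(f, end, SEEK_SET);              /* restore the write position */
  }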
llvm-svn: 186934
This increases the number of opportunities we have for folding. With the
previous implementation we were unable to fold into any instructions
other than the first when multiple instructions were selected from a
single SDNode.
Reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 186919
A side-effect of this is that now the compiler expects kernel arguments
to be 4-byte aligned.
Reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 186916
This makes them consistent with 'bt' which already had this handling. gas has the same behavior. There have been discussions on the mailing list about determining size based on the immediate, but my goal here was just to remove the inconsistency.
llvm-svn: 186904
MDNodes used by DbgDeclareInst and DbgValueInst.
Another 16 testing cases failed and they are disabled with
-disable-debug-info-verifier.
A total of 34 cases are disabled with -disable-debug-info-verifier and will be
corrected.
llvm-svn: 186902
It only didn't use it before because it seems InstAlias handling in the asm printer fails to count tied operands, so it tried to find an xor with 2 operands instead of the 3 it actually has.
llvm-svn: 186900
Use the function attributes to pass along the stack protector buffer size.
Now that we have robust function attributes, don't use a command line option to
specify the stack protector buffer size.
llvm-svn: 186863
Enable parsing all 32 floating point control registers $0-31 and stop trying to
parse floating point condition code register $fcc0. Also, return ParseFail if
the operand being parsed is not in the expected format.
llvm-svn: 186861
There already exist two "dead" functions, initialize{IPO|IPA}, defined for a
similar purpose. I decided not to call these two functions for two reasons:
o. they don't cover all LTO passes (which will soon be separated into IPO
and post-IPO passes)
o. We have not yet figured out the right passes and the ordering for IPO
and post-IPO stages, meaning this change is only for the time being.
Since LTO passes are registered, we are now able to print IR before and
after particular point.
For OSX users:
--------------
"...-Wl,-mllvm -Wl,-print-after=<pass-name>" will print IR after the
specified pass.
For Other UNIX with GNU gold linker:
------------------------------------
"-Wl,-plugin-opt=-print-after=<pass-name>" should work.
(NOTE: no need for "-Wl,-mllvm")
Strip "-Wl," if flags are fed directly to linker instead of clang/clang++.
llvm-svn: 186853
Variadic MC instructions don't note whether the variable operands
are uses or defs, so mayAffectControlFlow() must conservatively
assume they are defs and return true if the PC is in the operand
list.
rdar://14488628
llvm-svn: 186846
We don't have tests for the effect of if-converting loops because it requires a big test (one that includes if-converted loops) and it is difficult to find and balance a loop to do the right thing.
llvm-svn: 186845
Option aliases in option groups were previously disallowed by an assert.
As far as I can tell, there was no technical reason for this, and I would
like to be able to put cl.exe compatible options in their own group for Clang,
so let's change the assert.
llvm-svn: 186838
instructions. With this patch:
1. ldr.n is recognized as mnemonic for the short encoding
2. ldr.w is recognized as mnemonic for the long encoding
3. ldr will map to either short or long encodings depending on the size of the offset
llvm-svn: 186831
This matches gnu archive behavior and since archive member order can change
which member is used, not changing the order on replacement looks like the
right thing to do.
This patch also refactors the logic for which archive member to keep and
whether to move it to a helper function (computeInsertAction). The
nesting in computeNewArchiveMembers was getting a bit confusing.
llvm-svn: 186829
GNU ar, when not given the a or b modifiers, replaces archive members in the
same location as the old ones. I am about to implement that in llvm-ar. For
now, just don't depend on the current llvm-ar behavior on this test.
llvm-svn: 186823
After Ulrich's r180677 (thanks!) TableGen is intelligent enough to
handle tied constraints involving complex operands properly, so
virtually all of the ARM custom converters are now unnecessary.
llvm-svn: 186810
helper function. This leaves both trivial cases handled entirely in
helper functions and merely manages the list of allocas to process in
the run method.
The next step will be to handle all of the trivial promotion work prior
to even creating the core class and the subsequent simplifications that
enables.
llvm-svn: 186784
a single block into the helper routine. This takes advantage of the fact
that we can directly replace uses prior to any store with undef to
simplify matters and unconditionally promote allocas only used within
one block.
I've removed the special handling for the case of no stores existing.
This has no semantic effect but might slow things down. I'll fix that in
a later patch when I refactor this entire thing to be easier to manage
the different cases.
llvm-svn: 186783
handles the general cases.
The hope is to refactor this so that we don't end up building the entire
class for the trivial cases. I also want to lift a lot of the early
pre-processing in the initial segment of run() into a separate routine,
and really none of it needs to happen inside the primary promotion
class.
These routines in particular used none of the actual state in the
promotion class, so they don't really make sense as members.
llvm-svn: 186781
This struct is nicely independent of everything else, and we already
needed a forward declaration here. It's simpler to just define it
immediately.
llvm-svn: 186780
GlobalOpt simplifies llvm.compiler.used by removing any members that are also
in the more strict llvm.used. Handle the special case where llvm.compiler.used
becomes empty.
llvm-svn: 186778
Simplify DIxxx:Verify to not call Verify on an operand. Instead, we use
DebugInfoFinder to list all MDNodes that should be a DIScope and all MDNodes
that should be a DIType and we will call Verify on those lists.
llvm-svn: 186737
indirect branches correctly. Under some circumstances, this led to the deletion
of basic blocks that were the destination of indirect branches. In that case it
left indirect branches to nowhere in the code.
This patch replaces, and is more general than either of the previous fixes for
indirect-branch-analysis issues, r181161 and r186461.
For other branches (not indirect) this refactor should have *almost* identical
behavior to the previous version. There are some corner cases where this
refactor is able to analyze blocks that the previous version could not (e.g.
this necessitated the update to thumb2-ifcvt2.ll).
<rdar://problem/14464830>
llvm-svn: 186735
The original change was rolled back in r186627 because of test
failures on the big endian machine. I believe I fixed the issue
so re-submitting.
llvm-svn: 186734
We were incorrectly using compiler_used instead of compiler.used. Unfortunately
the passes using the broken name had tests also using the broken name.
llvm-svn: 186705
Summary:
This allows the clang driver to put MSVC compatible options in the same
enumerator space as its normal options but exclude them from normal
option parsing.
Also changes the standard ParseArgs() method to consider unknown
arguments with a leading slash as being inputs rather than flags.
High level discussion for clang-cl is here:
http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-June/030404.html
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1049
llvm-svn: 186703
The current machinery using KeyboardInterrupt for canceling doesn't work
with multiple threads on Windows, as it just cancels the currently running tests
but the runners continue.
We install a handler for Ctrl-C which stops the provider from providing any
more tests to the runners. Together with aborting all currently running
tests, this brings lit to a halt.
llvm-svn: 186695
The atomic tests assume the two-operand forms, so I've restricted them to z10.
Running and-01.ll, or-01.ll and xor-01.ll for z196 as well as z10 shows why
using convertToThreeAddress() is better than exposing the three-operand forms
first and then converting back to two operands where possible (which is what
I'd originally tried). Using the three-operand form first stops us from
taking advantage of NG, OG and XG for spills.
llvm-svn: 186683
This first step just adds definitions for SLLK, SRLK and SRAK.
The next patch will actually make use of them during codegen.
insn-bad.s tests that some form of error is reported when using these
instructions on z10. More work is needed to get the "instruction requires:
distinct-ops" that we'd ideally like, so I've stubbed that part out for now.
I'll come back and make it mandatory once the necessary changes are in.
llvm-svn: 186680
Somehow forgot to git rm these two files. I believe I left the remaining
invalid* tests intentionally, though whether my reasons were sound is a
different matter.
llvm-svn: 186663
The tests were checking for barriers which the ARM ARM says they must execute
as a full system DMB/DSB, rather than that they're UNDEFINED and LLVM does in
fact represent them.
The tests happened to be passing because they were using a non-versioned ARM
triple which didn't have *any* DMB/DSB instructions.
llvm-svn: 186662
This allows "llvm-mc -disassemble" to accept two new features:
+ Using comma as a byte separator
+ Grouping bytes with '[' and ']' pairs.
The behaviour outside a [...] group is unchanged. But within the group once
llvm-mc encounters a true error, it stops rather than trying to resynchronise
the stream at the next byte. This is more useful for disassembly tests, where
we have an almost-instruction in mind and don't care what the misaligned
interpretation would be. Particularly if it means llvm-mc won't actually see
the next intended almost-instruction.
As a side effect, this means llvm-mc can disassemble its own -show-encoding
output if copy-pasted.
llvm-svn: 186661
implementation of the SROA algorithm. We were using the term 'partition'
in many places that no longer ever represented an actual partition, but
rather just an arbitrary slice of an alloca.
No functionality change intended here. Mostly just renaming of types,
functions, variables, and rewording of comments. Several comments were
rewritten to make a lot more sense in the new structure of things.
The stats are still weird and not reflective of how this really works.
I'll fix those up in a separate patch as it is a touch more semantic of
a change...
llvm-svn: 186659
SROA.
The crux of the issue is that now we track uses of a partition of the
alloca in two places: the iterators over the partitioning uses and the
previously collected split uses vector. We weren't accounting for the
fact that the split uses might invalidate integer widening in ways other
than due to their width (in this case due to being volatile).
Further reduced testcase added to the tests.
llvm-svn: 186655
1> Use DebugInfoFinder to find debug info MDNodes.
2> Add disable-debug-info-verifier to disable verifying debug info.
3> Disable verifying for testing cases that fail (will update the testing cases
later on).
4> MDNodes generated by clang can have empty filename for TAG_inheritance and
TAG_friend, so DIType::Verify is modified accordingly.
Note that DebugInfoFinder does not list all debug info MDNode.
For example, clang can generate:
metadata !{i32 786468}, which will fail to verify.
This MDNode is used by debug info but not included in DebugInfoFinder.
This MDNode is generated as a temporary node in DIBuilder::createFunction
Value *TElts[] = { GetTagConstant(VMContext, DW_TAG_base_type) };
MDNode::getTemporary(VMContext, TElts)
llvm-svn: 186634
All changes were made by the following bash script:
find test/CodeGen -name "*.ll" | \
while read NAME; do
  echo "$NAME"
  grep -q "^; *RUN: *llc.*debug" $NAME && continue
  grep -q "^; *RUN:.*llvm-objdump" $NAME && continue
  grep -q "^; *RUN: *opt.*" $NAME && continue
  TEMP=`mktemp -t temp`
  cp $NAME $TEMP
  sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
  while read FUNC; do
    sed -i '' "s/;\([A-Za-z0-9_-]*\)\([A-Za-z0-9_-]*\):\( *\)$FUNC[:]* *\$/;\1\2-LABEL:\3$FUNC:/g" $TEMP
  done
  sed -i '' "s/;\(.*\)-LABEL-LABEL:/;\1-LABEL:/" $TEMP
  sed -i '' "s/;\(.*\)-NEXT-LABEL:/;\1-NEXT:/" $TEMP
  sed -i '' "s/;\(.*\)-NOT-LABEL:/;\1-NOT:/" $TEMP
  sed -i '' "s/;\(.*\)-DAG-LABEL:/;\1-DAG:/" $TEMP
  mv $TEMP $NAME
done
This script catches a superset of the cases caught by the script associated with commit r186280. It initially found some false positives due to unusual constructs in a minority of tests; all such cases were disambiguated first in commit r186621.
llvm-svn: 186624
Summary:
Dump optional data directory entries in the PE/COFF header, so that
we can test the output of LLD linker. This patch updates the test binary
file, but the source of the binary is the same. I just re-linked the file.
I don't know how the previous file was linked, but the previous file did
not have any data directory entries for some reason.
Reviewers: rafael
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1148
llvm-svn: 186623
* assert that the return value is one of the documented values on msdn.
* on FILE_TYPE_UNKNOWN, check GetLastError.
Unfortunately I can't think of a way to get a FILE_TYPE_UNKNOWN on a test.
llvm-svn: 186595