identify_magic is not free, so we should avoid calling it twice. The argument
also makes it cheap for createBinary to just forward to createObjectFile.
llvm-svn: 199813
This actually totally breaks and causes the machine verifier to cry in several cases, one of which being:
%RAX<def> = COPY %RCX<kill>
%ECX<def> = COPY %EAX<kill>, %RAX<imp-use,kill>
These subregister copies are together identified as no-ops, so both are removed. However, the second one, because it has an imp-use, gets converted into a KILL:
%ECX<def> = KILL %EAX<kill>, %RAX<imp-use,kill>
As the original COPY has been removed, the verifier goes into tears at the use of undefined EAX and RAX.
There are several hacky solutions to this hacky problem (which is all to do with imp-use/def weirdnesses), but the least hacky I've come up with is to *always* remove COPYs by converting to KILLs. KILLs are no-ops to the code generator so the generated code doesn't change (which is why they were partially used in the first place), but using them also keeps the def/use and imp-def/imp-use chains alive:
%RAX<def> = KILL %RCX<kill>
%ECX<def> = KILL %EAX<kill>, %RAX<imp-use,kill>
The patch passes all test cases including the ones that check the removal of MOVs in this circumstance, along with an extra test I added to check subregister behaviour (which made the machine verifier fall over before my patch).
The patch also adds some DEBUG() statements because the file hadn't got any.
llvm-svn: 199797
inconsistent results for different orderings of alloca slices. The
fundamental issue is that it is just always a mistake to return early
from this function. There is no effective early exit to leverage. This
patch stops trying to do so and simplifies the code a bit as
a consequence.
Original diagnosis and patch by James Molloy with some name tweaks by me
in part reflecting feedback from Duncan Smith on the mailing list.
llvm-svn: 199771
The constructors of classes deriving from Binary normally take an error_code
as an argument to the constructor. My original intent was to change them
to have a trivial constructor and move the initial parsing logic to a static
method returning an ErrorOr. I changed my mind because:
* A constructor with an error_code out parameter is extremely convenient from
the implementation side. We can incrementally construct the object and give
up when we find an error.
* It is very efficient when constructing on the stack or when there is no
error. The only inefficient case is when we are heap allocating and an error is
found (we have to free the memory).
The result is that this is a much smaller patch. It just standardizes the
create* helpers to return an ErrorOr.
Almost no functionality change: The only difference is that this found that
we were trying to read past the end of a COFF import library but ignoring the
error.
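A minimal sketch of the standardized shape (the class and helper names here are illustrative, not the real Object library API; the tree at the time used OwningPtr and llvm::error_code, shown here with std::unique_ptr and std::error_code for brevity):

#include "llvm/Support/ErrorOr.h"
#include "llvm/Support/MemoryBuffer.h"
#include <memory>
#include <system_error>

// Illustrative stand-in for a Binary subclass: the constructor reports
// failure through an error_code out parameter while building incrementally.
struct MyBinary {
  MyBinary(llvm::MemoryBuffer *Source, std::error_code &EC) {
    if (!Source || Source->getBufferSize() == 0)
      EC = std::make_error_code(std::errc::invalid_argument);
  }
};

// The create* helper wraps construction and returns ErrorOr.
llvm::ErrorOr<std::unique_ptr<MyBinary>>
createMyBinary(llvm::MemoryBuffer *Source) {
  std::error_code EC;
  std::unique_ptr<MyBinary> Ret(new MyBinary(Source, EC));
  if (EC)
    return EC; // the heap allocation is wasted only in the error case
  return std::move(Ret);
}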
llvm-svn: 199770
Fix a crash in SjLjEHPrepare::lowerIncomingArguments caused by treating
VectorType like an aggregate. It's first-class!
<rdar://problem/15854596>
llvm-svn: 199768
For PPC64 SVR4 (and Darwin), the stores that take byval aggregate parameters
from registers into the stack frame had MachinePointerInfo objects with
incorrect offsets. These offsets are relative to the object itself, not to the
stack frame base.
This fixes self hosting on PPC64 when compiling with -enable-aa-sched-mi.
llvm-svn: 199763
This implements the unwind_raw directive for the ARM IAS. The unwind_raw
directive takes the form of a stack offset value followed by one or more bytes
representing the opcodes to be emitted. The opcodes emitted will be interpreted as
if they were assembled by the opcode assembler via the standard unwinding
directives.
Thanks to Logan Chien for an extra test!
llvm-svn: 199707
The .personalityindex directive is equivalent to the .personality directive with
the ARM EABI personality with the specific index (0, 1, 2). Both of these
directives indicate personality routines, so enhance the personality directive
handling to take into account personalityindex.
Bonus fix: flush the UnwindContext at the beginning of a new function.
Thanks to Logan Chien for additional tests!
llvm-svn: 199706
It was committed as r199628 but reverted in r199628 because it caused
regression test failures. That was because of an old version of the patch
that I committed. Sorry for the mistake.
llvm-svn: 199704
to not guess at a symbol name in some cases.
The problem is that in object files assembled starting at address 0, when
trying to symbolicate something that starts like this:
% cat x.s
_t1:
vpshufd $0x0, %xmm1, %xmm0
the symbolic disassembly can end up like this:
% otool -tV x.o
x.o:
(__TEXT,__text) section
_t1:
0000000000000000 vpshufd $_t1, %xmm1, %xmm0
which in this case produced incorrect symbolication.
But it is useful in some cases to use the SymbolLookUp() callback
to guess at some immediate values. For example, one like this
that does not have an external relocation entry:
% cat y.s
_t1:
movl $_d1, %eax
.data
_d1: .long 0
% clang -c -arch i386 y.s
% otool -tV y.o
y.o:
(__TEXT,__text) section
_t1:
0000000000000000 movl $_d1, %eax
% otool -rv y.o
y.o:
Relocation information (__TEXT,__text) 1 entries
address pcrel length extern type scattered symbolnum/value
00000001 False long False VANILLA False 2 (__DATA,__data)
So the change is based on the observation that an immediate value coming from
an instruction field with a width of 1 byte, other than branches and items
with relocation entries, is not likely to be a symbol address.
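A hedged sketch of that heuristic (the helper and parameter names are purely illustrative; the real check lives in the Mach-O symbolizer callbacks):

#include <cstdint>

// Only ask the SymbolLookUp() callback to guess a symbol name for wide
// immediate fields, branch targets, or operands that carry a relocation.
bool shouldGuessSymbol(uint64_t FieldWidthInBytes, bool IsBranch, bool HasReloc) {
  return FieldWidthInBytes > 1 || IsBranch || HasReloc;
}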
With the change the first case above simply becomes:
% otool -tV x.o
x.o:
(__TEXT,__text) section
_t1:
0000000000000000 vpshufd $0x0, %xmm1, %xmm0
and the second case continues to work as expected.
rdar://14863405
llvm-svn: 199698
when used with symbolic disassembly, add a check that the operand
is an immediate and has not been symbolicated into an MCExpr operand.
I’m trying to enable the ‘C’ disassembly API option
LLVMDisassembler_Option_SetInstrComments for darwin’s
otool(1) that uses the llvm disassembler API. The problem is
that the disassembler API can change an immediate operand to
an MCExpr operand if it symbolicates it with the call backs.
And if it does the code in llvm::EmitAnyX86InstComments()
will crash when it assumes these operands are immediates.
The fix for this is straightforward: just protect the call to getImm()
with a check of isImm(). So if the immediate for
an instruction is symbolicated, it simply doesn't get the X86
verbose assembly comments:
% otool -tV test_asm.o
test_asm.o:
(__TEXT,__text) section
_t1:
0000000000000000 vpshufd $_t1, %xmm1, %xmm0
0000000000000005 retq
0000000000000006 nopw %cs:_t1(%rax,%rax)
_t2:
0000000000000010 vpshufd $-0x1, %xmm0, %xmm0 ## xmm0 = xmm0[3,3,3,3]
0000000000000015 retq
0000000000000016 nopw %cs:_t1(%rax,%rax)
_t3:
0000000000000020 vpshufd $_t1, %xmm1, %xmm0
0000000000000025 retq
0000000000000026 nopw %cs:_t1(%rax,%rax)
_t4:
0000000000000030 vpshufd $0x2d, %xmm0, %xmm0 ## xmm0 = xmm0[1,3,2,0]
0000000000000035 retq
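A minimal sketch of that guard (illustrative helper; the real change sits in the X86 instruction-comment printer):

#include "llvm/MC/MCInst.h"
#include <cstdint>

// Only read the immediate when the operand really is one; if the disassembler
// symbolicated it into an MCExpr, skip the verbose comment instead of crashing.
bool getCommentImmediate(const llvm::MCInst &MI, unsigned OpNo, int64_t &Imm) {
  const llvm::MCOperand &Op = MI.getOperand(OpNo);
  if (!Op.isImm())
    return false; // symbolicated operand: emit no comment
  Imm = Op.getImm();
  return true;
}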
The fact that the immediate $0x0 is being symbolicated at
all in this case is a different problem which my next patch
will address.
rdar://10989286
llvm-svn: 199697
StackProtector keeps a ValueMap of alloca instructions to layout kind tags for
use by PEI and other later passes. When stack coloring replaces one alloca with
a bitcast to another one, the key replacement in this map does not work.
Instead, provide an interface to manage this updating directly. This seems like
an improvement over the old behavior, where the layout map would not get
updated at all when the stack slots were merged. In practice, however, there is
likely no observable difference because PEI only did anything special with
'large array' kinds, and if one large array is merged with another, then the
replacement should already have been a large array.
This is an attempt to unbreak the clang-x86_64-darwin11-RA builder.
llvm-svn: 199684
Add target specific rules for combining vselect dag nodes into movss/movsd
when possible.
If the vector type of the vselect dag node in input is either MVT::v4i32 or
MVT::v4f32, then try to fold according to rules:
1) fold (vselect (build_vector (0, -1, -1, -1)), A, B) -> (movss A, B)
2) fold (vselect (build_vector (-1, 0, 0, 0)), A, B) -> (movss B, A)
If the vector type of the vselect dag node in input is either MVT::v2i64 or
MVT::v2f64 (and we have SSE2), then try to fold according to rules:
3) fold (vselect (build_vector (0, -1)), A, B) -> (movsd A, B)
4) fold (vselect (build_vector (-1, 0)), A, B) -> (movsd B, A)
llvm-svn: 199683
optional DWARF sections, so compiling with -g does not result in
different code being generated for PC-relative loads.
This is reapplying a diet r197922 (__TEXT-only).
llvm-svn: 199681
The way that stack coloring updated MMOs when merging stack slots, while
correct, is suboptimal, and is incompatible with the use of AA during
instruction scheduling. The solution, which involves the use of const_cast (and
more importantly, updating the IR from within an MI-level pass), obviously
requires some explanation:
When the stack coloring pass was originally committed, the code in
ScheduleDAGInstrs::buildSchedGraph tracked possible alias sets by using
GetUnderlyingObject, and all load/store and store/store memory control
dependencies were added between SUs at the object level (where only one
object, that returned by GetUnderlyingObject, was used to identify the object
associated with each MMO). When stack coloring merged stack slots, it would
replace MMOs derived from the remapped alloca with the alloca with which the
remapped alloca was being replaced. Because ScheduleDAGInstrs only used single
objects, and tracked alias sets at the object level, this was a fine solution.
In r169744, (Andy and) I updated the code in ScheduleDAGInstrs to use
GetUnderlyingObjects, and track alias sets using, potentially, multiple
underlying objects for each MMO. This was done, primarily, to provide the
ability to look through PHIs, and provide better scheduling for
induction-variable-dependent loads and stores inside loops. At this point, the
MMO-updating code in stack coloring became suboptimal, because it would clear
the MMOs for (i.e. completely pessimize) all instructions for which r169744
might help in scheduling. Updating the IR directly is the simplest fix for this
(and the one with, by far, the least compile-time impact), but others are
possible (we could give each MMO a small vector of potential values, or make
use of a remapping table, constructed from MFI, inside ScheduleDAGInstrs).
Unfortunately, replacing all MMO values derived from the remapped alloca with
the base replacement alloca fundamentally breaks our ability to use AA during
instruction scheduling (which is critical to performance on some targets). The
reason is that the original MMO might have had an offset (either constant or
dynamic) from the base remapped alloca, and that offset is not present in the
updated MMO. One possible way around this would be to use
GetPointerBaseWithConstantOffset, and update not only the MMO's value, but also
its offset based on the original offset. Unfortunately, this solution would
only handle constant offsets, and for safety (because AA is not completely
restricted to deducing relationships with constant offsets), we would need to
clear all MMOs without constant offsets over the entire function. This would be
an even worse pessimization than the current single-object restriction. Any
other solution would involve passing around a vector of remapped allocas, and
teaching AA to use it, introducing additional complexity and overhead into AA.
Instead, when remapping an alloca, we replace all IR uses of that alloca as
well (optionally inserting a bitcast as necessary). This is even more efficient
than the old MMO-updating code in the stack coloring pass (because it removes
the need to call GetUnderlyingObject on all MMO values), removes the
single-object pessimization in the default configuration, and enables the
correct use of AA during instruction scheduling (all without any additional
overhead).
LLVM now no longer miscompiles itself on x86_64 when using -enable-misched
-enable-aa-sched-mi -misched-bottomup=0 -misched-topdown=0 -misched=shuffle!
Fixed PR18497.
Because the alloca replacement is now done at the IR level, unless the MMO
directly refers to the remapped alloca, the change cannot be seen at the MI
level. As a result, there is no good way to fix test/CodeGen/X86/pr14090.ll.
llvm-svn: 199658
When using AA to break false chain dependencies, we need to track multiple
stores per object in ScheduleDAGInstrs. Historically, we tracked potential alias
chains at the object level, and so all loads of an object would retain
dependencies on any store to that object. With AA, however, this is not
sufficient: non-overlapping stores and loads to the same object all need to be
tested for dependencies separately; we cannot test all loads of an object
against only the last store (see PR18497 for an explicit example).
To mitigate any unwelcome compile-time impact when not using AA, only one store
is kept in the list per object when not using AA.
This, along with a stack coloring change to come shortly, will provide a test
case, fix PR18497 (and allow LLVM to compile itself using -enable-aa-sched-mi
on x86-64).
llvm-svn: 199657
The addition of IC_OPSIZE_ADSIZE in r198759 wasn't quite complete. It
also turns out to have been unnecessary. The disassembler handles the
AdSize prefix for itself, and doesn't care about the difference between
(e.g.) MOV8ao8 and MOV8ao8_16 definitions. So just let them coexist and
don't worry about it.
llvm-svn: 199654
The disassembler has a special case for 'L' vs. 'W' in its heuristic for
checking for 32-bit and 16-bit equivalents. We could expand the heuristic,
but better just to be consistent in using the 'L' suffix.
llvm-svn: 199652
When disassembling in 16-bit mode the meaning of the OpSize bit is
inverted. Instructions found in the IC_OPSIZE context will actually
*not* have the 0x66 prefix, and instructions in the IC context will
have the 0x66 prefix. Make use of the existing special-case handling
for the 0x66 prefix being in the wrong place, to cope with this.
llvm-svn: 199650
Aside from cleaning up the code, this also adds support for the -code16
environment and actually enables the MODE_16BIT mode that was previously
not accessible.
There is no point adding any testing for 16-bit yet though; basically
nothing will work because we aren't handling the OpSize prefix correctly
for 16-bit mode.
llvm-svn: 199649
various opt verifier commandline options.
Mostly mechanical wiring of the verifier to the new pass manager.
Exercises one of the more unusual aspects of it -- a pass can be either
a module or function pass interchangeably. If this is ever problematic,
we can make things more constrained, but for things like the verifier
where there is an "obvious" applicability at both levels, it seems
convenient.
This is the next-to-last piece of basic functionality left to make the
opt commandline driving of the new pass manager minimally functional for
testing and further development. There is still a lot to be done there
(notably the factoring into .def files to kill the current boilerplate
code) but it is relatively uninteresting. The only interesting bit left
for minimal functionality is supporting the registration of analyses.
I'm planning on doing that on top of the .def file switch mostly because
the boilerplate for the analyses would be significantly worse.
llvm-svn: 199646
ADDITIONAL_HEADERS is intended to add header files for IDEs as a hint.
For example:
add_llvm_library(LLVMSupport
Host.cpp
ADDITIONAL_HEADERS
Unix/Host.inc
Windows/Host.inc
)
llvm-svn: 199639
type units were enabled. The crux of the issue is that the
addDwarfTypeUnitType routine can end up being indirectly recursive. In
this case, the reference into the dense map (TU) became invalid by the
time we popped all the way back and used it to add the DIE type
signature.
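A minimal illustration of the underlying hazard (this is not the DwarfDebug code itself, just the DenseMap behaviour):

#include "llvm/ADT/DenseMap.h"
#include <cstdint>

llvm::DenseMap<uint64_t, uint64_t> Signatures;

void recordSignature(uint64_t Key, uint64_t Sig) {
  // Holding a reference into a DenseMap across an operation that may insert
  // into the same map is unsafe: insertion can rehash and move the buckets.
  uint64_t &Slot = Signatures[Key];
  // ... a recursive call that inserts into Signatures would invalidate Slot ...
  Slot = Sig; // only safe because nothing was inserted in between
}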
Instead, use an early return in the case where we can bypass the recursive
step and the creation of a type unit. Then use the pointer to the new type unit
to set up the DIE type signature in the case where we have to.
I tried really hard to reduce a testcase for this, but it's really
annoying. You have to get this to be mid-recursion when the densemap
grows. Even if we got a test case for this today, it'd be very unlikely
to continue exercising this pattern.
llvm-svn: 199630
This logic hadn't been updated to handle FastMathFlags, and it took me a while to detect it because it doesn't show up in a simple search for CreateFAdd.
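For context, here is roughly how fast-math flags get attached to newly created FP instructions via IRBuilder (an illustration of the mechanism, not the exact code this commit touched):

#include "llvm/IR/IRBuilder.h"

llvm::Value *addWithFlags(llvm::IRBuilder<> &B, llvm::Value *L, llvm::Value *R) {
  llvm::FastMathFlags FMF;
  FMF.setNoNaNs(); // example flag; the "all fast" spelling varies across versions
  B.setFastMathFlags(FMF);
  return B.CreateFAdd(L, R); // the created instruction now carries the flags
}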
llvm-svn: 199629
For FCMEQ, FCMGE, FCMGT, FCMLE and FCMLT, floating point zero will be
printed as #0.0 instead of #0. To support historical code that uses #0,
we let the asm parser accept both #0.0 and #0.
llvm-svn: 199621
intrinsics.
Reported on the list by Evan with a couple of attempts to fix, but it
took a while to dig down to the root cause. There are two overlapping
bugs here, both centering around the circumstance of discovering
a memcpy operand which is known to be completely outside the bounds of
the alloca.
First, we need to kill the *other* side of the memcpy if it was added to
this alloca. Otherwise we'll factor it into our slicing and try to
rewrite it even though we know for a fact that it is dead. This is made
more tricky because we can visit the sides in either order. So we have
to both kill the other side and skip instructions marked as dead. The
latter really should be goodness in every case, but here it is a matter of
correctness.
Second, we need to actually remove the *uses* of the alloca by the
memcpy when queuing it for later deletion. Otherwise it may still be
using the alloca when we go to promote it (if the rewrite re-uses the
existing alloca instruction). Do this by factoring out the
use-clobbering used for nixing a Phi argument and re-using it
across the operands of a to-be-deleted instruction.
llvm-svn: 199590
Ensure that the tag types are reflected on a replacement. This is particularly
important for the compatibility tag which has multiple representations where the
last definition wins.
llvm-svn: 199577
This moves the ARM build attributes definitions and support routines into the
Support library. The support routines simply permit the conversion of the value
to and from a string representation.
The movement is prompted in order to permit access to the constants and string
representations from readobj in order to facilitate decoding of the attributes
section.
llvm-svn: 199575
a reduction.
Really. Under certain circumstances (the use list of an instruction has to be
set up right - hence the extra pass in the test case) we would not recognize
when a value in a potential reduction cycle was used multiple times by the
reduction cycle.
Fixes PR18526.
radar://15851149
llvm-svn: 199570
This makes the 'verifyFunction' and 'verifyModule' functions totally
independent operations on the LLVM IR. It also cleans up their API a bit
by lifting the abort behavior into their clients and just using an
optional raw_ostream parameter to control printing.
The implementation of the verifier is now just an InstVisitor with no
multiple inheritance. It also is significantly more const-correct, and
hides the const violations internally. The two layers that force us to
break const correctness are building a DomTree and dispatching through
the InstVisitor.
A new VerifierPass is used to implement the legacy pass manager
interface in terms of the other pieces.
The error messages produced may be slightly different now, and we may
have slightly different short circuiting behavior with different usage
models of the verifier, but generally everything works equivalently and
this unblocks wiring the verifier up to the new pass manager.
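A hedged sketch of calling the standalone entry points after this change:

#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

bool checkModule(const llvm::Module &M) {
  // verifyModule returns true if the module is broken; printing goes to the
  // optional raw_ostream, and aborting is now the caller's decision.
  if (llvm::verifyModule(M, &llvm::errs()))
    return false; // handle the broken module however the client wants
  return true;
}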
llvm-svn: 199569
one, but not create one. This is useful in the verifier when we want to
query the constant if it exists but not create one. To be used in an
upcoming commit.
llvm-svn: 199568
Summary:
The only current use of this flag is to mark the alloca as dynamic, even
if it's in the entry block. The stack adjustment for the alloca can
never be folded into the prologue because the call may clear it and it
has to be allocated at the top of the stack.
Reviewers: majnemer
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2571
llvm-svn: 199525
This patch adds two new target-independent calling conventions for runtime
calls - PreserveMost and PreserveAll.
The target-specific implementation for X86-64 is defined as follows:
- Arguments are passed as for the default C calling convention
- The same applies for the return value(s)
- PreserveMost preserves all GPRs - except R11
- PreserveAll preserves all GPRs and all XMMs/YMMs - except R11
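From C++, tagging a function with one of the new conventions looks roughly like this (in textual IR the same conventions are spelled preserve_mostcc and preserve_allcc):

#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Function.h"

void markAsRuntimeCall(llvm::Function &F) {
  // Callers of F then only need to save/restore the few registers that the
  // convention does not preserve (R11 on X86-64).
  F.setCallingConv(llvm::CallingConv::PreserveMost);
}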
Reviewed by Lang and Philip
llvm-svn: 199508
Summary:
$rs and $rt were the wrong way round in the .td and the testcase wasn't
strict enough to detect the mistake.
Reviewers: matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D2554
llvm-svn: 199498
IIImul -> II_MUL
IIImult -> II_MULT, II_MULTU, II_MADD, II_MADDU, II_MSUB, II_MSUBU, II_DMULT, II_DMULTU
No functional change since the InstrItinData's have been duplicated.
llvm-svn: 199495
Fix MLA defs to use register class GPRnopc.
Add encoding tests for multiply instructions.
(Alias for MUL/SMLAL/UMLAL added by r199026.)
Patch by Zhaoshi.
llvm-svn: 199491
and tweak comments prior to more invasive surgery. Also clean up some
other non-doxygen comments, and run clang-format over the parts that are
going to change dramatically in subsequent commits so that those don't
get cluttered with formatting changes.
No functionality changed.
llvm-svn: 199489
the verifier after ensuring the CFG is at least usefully formed.
This fixes a number of problems:
1) The PreVerifier was missing the controls the Verifier provides over
*how* an invalid module is handled -- it just aborted the program!
Now it uses the same logic as the Verifier which is significantly
more library-friendly.
2) The DominatorTree used previously could have been cached and not
updated due to bugs in prior passes and we would silently use the
stale tree. This could cause dominance errors to not be as quickly
diagnosed.
3) We can now (in the next patch) pull the functionality of the verifier
apart from the pass infrastructure so that you can verify IR without
having any form of pass manager. This in turn frees the code to share
logic between old and new pass manager variants.
Along the way I fixed at least one annoying bug -- the state for
'Broken' wasn't being cleared from run to run causing all functions
visited after the first broken function to be marked as broken
regardless of whether *they* were a problem. Fortunately, I don't really
know much of a way to observe this peculiarity.
In case folks are worried about the runtime cost, it's negligible.
I looked at running the entire regression test suite (which should be
a relatively good use of the verifier) before and after but was unable
to even measure the time spent on the verifier and there was no
regression from before to after. I checked both with debug builds and
optimized builds.
llvm-svn: 199487
This makes things a lot easier, because we can now talk about the
"argument allocation", which allocates all the memory for the call in
one shot.
The only functional change is to the verifier for a feature that hasn't
shipped yet.
llvm-svn: 199434
When registering a pass, a pass can now specify a second constructor that takes as
argument a pointer to TargetMachine.
The PassInfo class has been updated to reflect that possibility.
If such a constructor exists opt will use it instead of the default constructor
when instantiating the pass.
Since such IR passes are supposed to be rare, no specific support has been
added in this commit to allow easy registration of such a pass.
In other words, for such a pass, the initialization function has to be
hand-written (see CodeGenPrepare for instance).
Now, codegenprepare can be tested using opt:
opt -codegenprepare -mtriple=mytriple input.ll
llvm-svn: 199430
with raw_ostream's write_escaped() method.
For example darwin's otool(1) program that uses the llvm
disassembler now produces disassembly like this:
leaq 0x7b(%rip), %rdi ## literal pool for: "%f\ntoto\n"
and does not print the raw newlines, which would mess up the output.
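A small sketch of the escaping (illustrative helper, not the exact otool plumbing):

#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

void printLiteralPoolComment(llvm::raw_ostream &OS, llvm::StringRef Str) {
  OS << "## literal pool for: \"";
  OS.write_escaped(Str); // "%f\ntoto\n" stays on one line instead of wrapping
  OS << '"';
}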
rdar://15145300
llvm-svn: 199407
This is necessary because the classes are shared between all implementations.
No functional change since the InstrItinData's have been duplicated.
llvm-svn: 199394
IIArith -> II_ADD, II_ADDU, II_AND, II_CL[ZO], II_DADDIU, II_DADDU,
II_DROTR, II_DROTR32, II_DROTRV, II_DSLL, II_DSLL32, II_DSLLV,
II_DSR[AL], II_DSR[AL]32, II_DSR[AL]V, II_DSUBU, II_LUI, II_MOV[ZFNT],
II_NOR, II_OR, II_RDHWR, II_ROTR, II_ROTRV, II_SLL, II_SLLV, II_SR[AL],
II_SR[AL]V, II_SUBU, II_XOR
No functional change since the InstrItinData's have been duplicated.
This is necessary because the classes are shared between all schedulers.
Once this patch series is committed there will be an InstrItinClass for
each mnemonic with minimal grouping. This does increase the size of the
itinerary tables for each MIPS scheduler but we have a few options for dealing
with that later. These options include reducing the number of classes once
we see the best way to simplify them, or extending tablegen to be able
to compress the table by eliminating duplicate entries, etc.
llvm-svn: 199391
Affects:
DMULT, DMULTu, MADD, MADD_MM, MADDU, MADDU_MM, MSUB, MSUB_MM, MSUBU,
MSUBU_MM, MULT, MULTu
Does not affect MULT_MM, MULTu_MM since they are currently miscategorised
as IIImul.
llvm-svn: 199381
There are two attempted optimisations in reMaterializeTrivialDef, trying to
avoid promoting the size of a register too much when rematerializing.
Unfortunately, both appear to be flawed. First, we see if the original register
would have worked, but this is inadequate. Consider:
v1 = SOMETHING (v1 is QQ)
v2:Q0 = COPY v1:Q1 (v1, v2 are QQ)
...
uses of v2
In this case even though v2 *could* be used directly as the output of
SOMETHING, this would set the wrong bits of the QQ register involved. The
correct rematerialization must be:
v2:Q0_Q1 = SOMETHING (v2 promoted to QQQ)
...
uses of v2:Q1_Q2
For the second optimisation, if the correct remat is "v2:idx = SOMETHING" then
we can't necessarily expect v2 itself to be valid for SOMETHING, but we do try
to hunt for a class between v1 and v2 that works. Unfortunately, this is also
wrong:
v1 = SOMETHING (v1 is QQ)
v2:Q0_Q1 = COPY v1 (v1 is QQ, v2 is QQQ)
...
uses of v2 as a QQQ
The canonical rematerialization here is "v2:Q0_Q1 = SOMETHING". However current
logic would decide that v2 could be a QQ (no interest is taken in later uses).
This patch, therefore, always accepts the widened register class without trying
to be clever. Generally there is no penalty to this (e.g. in the common GR32 <
GR64 case, expanding the width doesn't matter because it's not like you were
going to do anything else with the high bits of a GR32 register). It can
increase register pressure in cases like the ARM VFP regs though (multiple
non-overlapping but equivalent subregisters). This situation can be
spotted by the fact that both source and destination in the
not-quite-coalesced pair have a sub-register index and
rematerialisation is skipped in that situation.
Unfortunately, no in-tree targets actually expose this as far as I can tell
(there are so few isAsCheapAsAMove instructions for it to trigger on) so I've
been unable to produce a test. It was exposed in our ARM64 SPEC tests though,
and I will be adding a test there that we should be able to contribute
soon(TM).
rdar://problem/15775279
llvm-svn: 199376
flag from clang, and disable zero-base shadow support on all platforms
where it is not the default behavior.
- It is completely unused, as far as we know.
- It is ABI-incompatible with non-zero-base shadow, which means all
objects in a process must be built with the same setting. Failing to
do so results in a segmentation fault at runtime.
- It introduces a backward dependency of compiler-rt on user code,
which is uncommon and complicates testing.
This is the LLVM part of a larger change.
llvm-svn: 199371
This patch adds the capability to dump export table contents. An example
output is this:
Export Table:
Ordinal RVA Name
5 0x2008 exportfn1
6 0x2010 exportfn2
By adding this feature to llvm-objdump, we will be able to use it to check
export table contents in LLD's tests. Currently we are doing binary
comparison in the tests, which is fragile and not readable to humans.
llvm-svn: 199358
Move copying of global initializers below the cloning of functions.
The BlockAddress doesn't have access to the correct basic blocks until the
functions have been cloned. This causes the BlockAddress to point to the old
values. Just wait until the functions have been cloned before copying the
initializers.
PR13163
llvm-svn: 199354
DataRefImpl (a union of two integers and a pointer) is not the ideal data type
to represent a reference to an import directory entity. We should just use the
pointer to the import table and an offset instead to simplify. No functionality
change.
llvm-svn: 199349
than it needs to be by 1 bit but I need to finish some other things so
that all the boundary cases will work in that situation. constpool.c
in test-suite will fail to assemble under our new internal test-suite sync
without this change.
llvm-svn: 199343
ARM assembly syntax uses @ for a comment, except for the second
parameter of the .symver directive, which requires @ as part of the
symbol name. This commit fixes the parsing of this directive by
adding a special case for ARM for this one argument.
To make the change we had to move the AllowAtInIdentifier variable
to the MCAsmLexer interface (from AsmLexer) and expose a setter for
the value. The ELFAsmParser then toggles this value when parsing
the second argument to the .symver directive for a target that
uses @ as a comment symbol.
llvm-svn: 199339
Add a hook in the C API of LTO so that clients of the code generator can set
their own handler for the LLVM diagnostics.
The handler is defined like this:
typedef void (*lto_diagnostic_handler_t)(lto_codegen_diagnostic_severity_t
severity, const char *diag, void *ctxt)
- severity says how bad this is.
- diag is a string that contains the diagnostic message.
- ctxt is the registered context for this handler.
This hook is more general than lto_get_error_message, since that function
keeps only the latest message and can only be queried when something went wrong
(there is no way to get a warning, for instance).
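A hedged usage sketch from the client side (the setter name below is assumed to match the hook described above):

#include "llvm-c/lto.h"
#include <cstdio>

static void handleDiag(lto_codegen_diagnostic_severity_t Severity,
                       const char *Diag, void *Ctxt) {
  // Ctxt is whatever pointer was registered alongside the handler.
  std::fprintf(stderr, "lto diagnostic (severity %d): %s\n", (int)Severity, Diag);
}

void installHandler(lto_code_gen_t CG) {
  lto_codegen_set_diagnostic_handler(CG, handleDiag, nullptr);
}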
<rdar://problem/15517596>
llvm-svn: 199338
This fixes a regression introduced by r199135.
Revision 199135 tried to simplify part of the logic in method
DAGCombiner::SimplifyVBinOp introducing calls to method BuildVectorSDNode::isConstant().
However, that revision wrongly changed the check performed by method
SimplifyVBinOp to identify dag nodes that can be folded.
Before revision 199135, that method only tried to simplify vector binary operations
if both operands were build_vectors consisting solely of Constant/ConstantFP/Undef.
After revision 199135, method SimplifyVBinOp also tried to
simplify vector binary operations with only one constant operand.
This fixes the problem restoring the old behavior of SimplifyVBinOp.
llvm-svn: 199328
I did write a version returning ErrorOr<OwningPtr<Binary> >, but it is too
cumbersome to use without std::move. I will keep the patch locally and submit
when we switch to c++11.
llvm-svn: 199326
MSVC on x64 requires that we create image relative symbol
references to refer to RTTI data. Seeing as how there is no way to
explicitly make reference to a given relocation type in LLVM IR, pattern
match expressions of the form &foo - &__ImageBase.
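At the source level, the kind of expression that produces this pattern looks roughly like the following (Windows/MSVC-only sketch; __ImageBase is provided by the linker, and the pointer subtraction is exactly the &foo - &__ImageBase shape mentioned above):

extern "C" char __ImageBase; // PE image base, defined by the MSVC linker

static const int SomeRTTIStandIn = 42; // stand-in for an RTTI object

// A 32-bit image-relative reference (RVA) to the object.
unsigned long rvaOfSomeRTTIStandIn() {
  return static_cast<unsigned long>(
      reinterpret_cast<const char *>(&SomeRTTIStandIn) - &__ImageBase);
}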
Differential Revision: http://llvm-reviews.chandlerc.com/D2523
llvm-svn: 199312
There has been an old FIXME to find the right cut-off for when it's worth
analyzing and potentially transforming a switch to a lookup table.
The switches always have two or more cases. I could not measure any speed-up
by transforming a switch with two cases. A switch with three cases gets a nice
speed-up, and I couldn't measure any compile-time regression, so I think this
is the right threshold.
In a Clang self-host, this causes 480 new switches to be transformed,
and reduces the final binary size by 8 KB.
llvm-svn: 199294
The GNU as behavior is a bit different and very strange. It will mark any
label that contains an instruction. We can implement that, but using the
type looks more natural since gas will not mark a function if a .word is
used to output the instructions!
llvm-svn: 199287
When expanding NEON pseudo stores, the expansion may miss the implicit uses of
sub-registers, which may cause the post-RA scheduler to reorder instructions
in a way that breaks an anti-dependency.
For example:
VST1d64QPseudo %R0<kill>, 16, %Q9_Q10, pred:14, pred:%noreg
will be expanded to
VST1d64Q %R0<kill>, 16, %D18, pred:14, pred:%noreg;
An instruction that defines %D20 may be scheduled before the store by
mistake.
This patch adds implicit uses for such cases. For the example above, it
emits:
VST1d64Q %R0<kill>, 8, %D18, pred:14, pred:%noreg, %Q9_Q10<imp-use>
llvm-svn: 199282
The changes caused by folding an sp-adjustment into a "pop" previously
disrupted the forward search for the final real instruction in a
terminating block. This switches to a backward search (skipping debug
instrs).
This fixes PR18399.
Patch by Zhaoshi.
llvm-svn: 199266
We should set them to expand for now since there are no patterns
dealing with them. Actually, there are no instructions either so I
doubt they'll ever be acceptable.
llvm-svn: 199265
promotion code, Tablegen will now select FPExt for floating point promotions
(previously it had returned AExt, which is not valid for floating point types).
Any out-of-tree targets that were relying on AExt being returned for FP
promotions will need to update their code check for FPExt instead.
llvm-svn: 199252
This also fixes the placement of the function label comment. It was being
placed next to the mips16 directive instead of next to the label.
llvm-svn: 199245
Reapply r199191, reverted in r199197 because it carelessly broke
Other/link-opts.ll. The problem was that calling
createInternalizePass("main") would select
createInternalizePass(bool("main")) instead of
createInternalizePass(ArrayRef<const char *>("main")). This commit
fixes the bug.
The original commit message follows.
Add API to LTOCodeGenerator to specify a strategy for the -internalize
pass.
This is a new attempt at Bill's change in r185882, which he reverted in
r188029 due to problems with the gold linker. This puts the onus on the
linker to decide whether (and what) to internalize.
In particular, running internalize before outputting an object file may
change a 'weak' symbol into an internal one, even though that symbol
could be needed by an external object file --- e.g., with arclite.
This patch enables three strategies:
- LTO_INTERNALIZE_FULL: the default (and the old behaviour).
- LTO_INTERNALIZE_NONE: skip -internalize.
- LTO_INTERNALIZE_HIDDEN: only -internalize symbols with hidden
visibility.
LTO_INTERNALIZE_FULL should be used when linking an executable.
Outputting an object file (e.g., via ld -r) is more complicated, and
depends on whether hidden symbols should be internalized. E.g., for
ld -r, LTO_INTERNALIZE_NONE can be used when -keep_private_externs, and
LTO_INTERNALIZE_HIDDEN can be used otherwise. However,
LTO_INTERNALIZE_FULL is inappropriate, since the output object file will
eventually need to link with others.
lto_codegen_set_internalize_strategy() sets the strategy for subsequent
calls to lto_codegen_write_merged_modules() and lto_codegen_compile*().
<rdar://problem/14334895>
llvm-svn: 199244
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
llvm-svn: 199218
Sorry, I don't understand why the warning is generated (a gcc
bug?). Anyhow, the change should improve readability. No functionality
change intended.
llvm-svn: 199214
This fixes a regression introduced by r198113.
Revision r198113 introduced an algorithm that tries to fold a vector shift
by immediate count into a build_vector if the input vector is a known vector
of constants.
However the algorithm only worked under the assumption that the input vector
type and the shift type are exactly the same.
This patch disables the folding of vector shift by immediate count if the
input vector type and the shift value type are not the same.
llvm-svn: 199213
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
llvm-svn: 199204
fastcall requires @ as global prefix instead of _ but getNameWithPrefix
wrongly assumes the OutName buffer is empty and replaces at index 0.
For imported functions this buffer is pre-filled with "__imp_" resulting
in broken "@_imp_foo@0" mangling.
Instead replace at the proper index. We also never have to prepend the
@-prefix because this fastcall mangling is only used on 32-bit Windows
targets which have _ as global prefix.
llvm-svn: 199203
Add API to LTOCodeGenerator to specify a strategy for the -internalize
pass.
This is a new attempt at Bill's change in r185882, which he reverted in
r188029 due to problems with the gold linker. This puts the onus on the
linker to decide whether (and what) to internalize.
In particular, running internalize before outputting an object file may
change a 'weak' symbol into an internal one, even though that symbol
could be needed by an external object file --- e.g., with arclite.
This patch enables three strategies:
- LTO_INTERNALIZE_FULL: the default (and the old behaviour).
- LTO_INTERNALIZE_NONE: skip -internalize.
- LTO_INTERNALIZE_HIDDEN: only -internalize symbols with hidden
visibility.
LTO_INTERNALIZE_FULL should be used when linking an executable.
Outputting an object file (e.g., via ld -r) is more complicated, and
depends on whether hidden symbols should be internalized. E.g., for
ld -r, LTO_INTERNALIZE_NONE can be used when -keep_private_externs, and
LTO_INTERNALIZE_HIDDEN can be used otherwise. However,
LTO_INTERNALIZE_FULL is inappropriate, since the output object file will
eventually need to link with others.
lto_codegen_set_internalize_strategy() sets the strategy for subsequent
calls to lto_codegen_write_merged_modules() and lto_codegen_compile*().
<rdar://problem/14334895>
llvm-svn: 199191
When creating a virtual register for a def, the value type should be
used to pick the register class. If we only use the register class
constraint on the instruction, we might pick a too large register class.
Some registers can store values of different sizes. For example, the x86
xmm registers can hold f32, f64, and 128-bit vectors. The three
different value sizes are represented by register classes with identical
register sets: FR32, FR64, and VR128. These register classes have
different spill slot sizes, so it is important to use the right one.
The register class constraint on an instruction doesn't necessarily care
about the size of the value it's defining. The value type determines
that.
This fixes a problem where InstrEmitter was picking 32-bit register
classes for 64-bit values on SPARC.
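A hedged sketch of the idea (header locations and the exact return type of createVirtualRegister have shifted across LLVM versions):

#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetLowering.h"

unsigned createVRegForValue(llvm::MachineRegisterInfo &MRI,
                            const llvm::TargetLowering &TLI, llvm::MVT VT) {
  // Pick the class from the value type (FR32 for f32, FR64 for f64, VR128 for
  // 128-bit vectors on x86) rather than from the instruction's register-class
  // constraint, which may be a larger class with a different spill-slot size.
  const llvm::TargetRegisterClass *RC = TLI.getRegClassFor(VT);
  return MRI.createVirtualRegister(RC);
}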
llvm-svn: 199187
The already allocatable DPair superclass contains odd-even D register
pairs in addition to the even-odd pairs in the QPR register class. There
is no reason to constrain the set of D register pairs that can be used
for NEON values. Any NEON instructions that require a Q register will
automatically constrain the register class to QPR.
The allocation order for DPair begins with the QPR registers, so
register allocation is unlikely to change much.
llvm-svn: 199186
We need to ensure that StackSlotColoring.cpp does not reuse stack
spill slots in functions that call "returns_twice" functions such as
setjmp(), otherwise this can lead to miscompiled code, because a stack
slot would be clobbered when it's still live.
This was already handled correctly for functions that call setjmp()
(though this wasn't covered by a test), but not for functions that
invoke setjmp().
We fix this by changing callsFunctionThatReturnsTwice() to check for
invoke instructions.
This fixes PR18244.
llvm-svn: 199180
This will allow it to be called from target independent parts of the main
streamer that don't know if there is a registered target streamer or not. This
in turn will allow targets to perform extra actions at specified points in the
interface: add extra flags for some labels, extra work during finalization, etc.
llvm-svn: 199174
This commit teaches DAG to reassociate vector ops, which in turn enables
constant folding of vector op chains that appear later on during custom lowering
and DAG combine.
Reviewed by Andrea Di Biagio
llvm-svn: 199135
This is a very confusing option for a feature that will go away.
-enable-misched is exposed instead to help triage issues with the new
scheduler.
llvm-svn: 199133
The issue is caused when the post-RA scheduler reorders a bundle instruction
(IT block). However, it only flips the CPSR liveness of the bundle instruction
and leaves the instructions inside the bundle unchanged, which causes
inconsistency and crashes Thumb2SizeReduction.cpp::ReduceMBB().
llvm-svn: 199127
APInt only knows how to compare values with the same BitWidth and asserts
in all other cases.
With this fix, function PerformORCombine does not use the APInt equality
operator if the APInt values returned by 'isConstantSplat' differ in BitWidth.
In that case they are different and no comparison is needed.
llvm-svn: 199119
...into (ashr (shl (anyext X), ...), ...), which requires one fewer
instruction. The (anyext X) can sometimes be simplified too.
I didn't do this in DAGCombiner because widening shifts isn't a win
on all targets.
llvm-svn: 199114
Previously we only used GPR for the destination placeholder in "ldr rD, [pc,
incorrect codegen under the integrated assembler.
This should fix both issues (which probably only affect MachO targets at the
moment).
rdar://problem/15800156
llvm-svn: 199108
This finishes the job started in r198756, and creates separate opcodes for
64-bit vs. 32-bit versions of the rest of the RET instructions too.
LRETL/LRETQ are interesting... I can't see any justification for their
existence in the SDM. There should be no 'LRETL' in 64-bit mode, and no
need for a REX.W prefix for LRETQ. But this is what GAS does, and my
Sandybridge CPU and an Opteron 6376 concur when tested as follows:
asm __volatile__("pushq $0x1234\nmovq $0x33,%rax\nsalq $32,%rax\norq $1f,%rax\npushq %rax\nlretl $8\n1:");
asm __volatile__("pushq $1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
asm __volatile__("pushq $0x33\npushq $1f\nlretq\n1:");
asm __volatile__("pushq $0x1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
cf. PR8592 and commit r118903, which added LRETQ. I only added LRETIQ to
match it.
I don't quite understand how the Intel syntax parsing for ret
instructions is working, despite r154468 allegedly fixing it. Aren't the
explicitly sized 'retw', 'retd' and 'retq' supposed to work? I have at
least made the 'lretq' work with (and indeed *require*) the 'q'.
llvm-svn: 199106
can be used by both the new pass manager and the old.
This removes it from any of the virtual mess of the pass interfaces and
lets it derive cleanly from the DominatorTreeBase<> template. In turn,
tons of boilerplate interface can be nuked and it turns into a very
straightforward extension of the base DominatorTree interface.
The old analysis pass is now a simple wrapper. The names and style of
this split should match the split between CallGraph and
CallGraphWrapperPass. All of the users of DominatorTree have been
updated to match using many of the same tricks as with CallGraph. The
goal is that the common type remains the resulting DominatorTree rather
than the pass. This will make subsequent work toward the new pass
manager significantly easier.
Also in numerous places things became cleaner because I switched from
re-running the pass (!!! midway through some other pass's run !!!) to
directly recomputing the domtree.
llvm-svn: 199104
trees into the Support library.
These are all expressed in terms of the generic GraphTraits and CFG,
with no reliance on any concrete IR types. Putting them in support
clarifies that and makes the fact that the static analyzer in Clang uses
them much more sane. When moving the Dominators.h file into the IR
library I claimed that this was the right home for it but not something
I planned to work on. Oops.
So why am I doing this? It happens to be one step toward breaking the
requirement that IR verification can only be performed from inside of
a pass context, which completely blocks the implementation of
verification for the new pass manager infrastructure. Fixing it will
also allow removing the concept of the "preverify" step (WTF???) and
allow the verifier to cleanly flag functions which fail verification in
a way that precludes even computing dominance information. Currently,
that results in a fatal error even when you ask the verifier to not
fatally error. It's awesome like that.
The yak shaving will continue...
llvm-svn: 199095
Very sorry, this was a premature patch that I still need to investigate and
finish off (for some reason beyond me at the moment it doesn't actually fix the
issue in all cases).
This reverts commit r199091.
llvm-svn: 199093
There are two attempted optimisations in reMaterializeTrivialDef, trying to
avoid promoting the size of a register too much when rematerializing.
Unfortunately, both appear to be flawed. First, we see if the original register
would have worked, but this is inadequate. Consider:
v1 = SOMETHING (v1 is QQ)
v2:Q0 = COPY v1:Q1 (v1, v2 are QQ)
...
uses of v2
In this case even though v2 *could* be used directly as the output of
SOMETHING, this would set the wrong bits of the QQ register involved. The
correct rematerialization must be:
v2:Q0_Q1 = SOMETHING (v2 promoted to QQQ)
...
uses of v2:Q1_Q2
For the second optimisation, if the correct remat is "v2:idx = SOMETHING" then
we can't necessarily expect v2 itself to be valid for SOMETHING, but we do try
to hunt for a class between v1 and v2 that works. Unfortunately, this is also
wrong:
v1 = SOMETHING (v1 is QQ)
v2:Q0_Q1 = COPY v1 (v1 is QQ, v2 is QQQ)
...
uses of v2 as a QQQ
The canonical rematerialization here is "v2:Q0_Q1 = SOMETHING". However current
logic would decide that v2 could be a QQ (no interest is taken in later uses).
This patch, therefore, always accepts the widened register class without trying
to be clever. Generally there is no penalty to this (e.g. in the common GR32 <
GR64 case, expanding the width doesn't matter because it's not like you were
going to do anything else with the high bits of a GR32 register). It can
increase register pressure in cases like the ARM VFP regs though (multiple
non-overlapping but equivalent subregisters). Hopefully this situation is rare
enough that it won't matter.
Unfortunately, no in-tree targets actually expose this as far as I can tell
(there are so few isAsCheapAsAMove instructions for it to trigger on) so I've
been unable to produce a test. It was exposed in our ARM64 SPEC tests though,
and I will be adding a test there that we should be able to contribute
soon(TM).
llvm-svn: 199091
directory. These passes are already defined in the IR library, and it
doesn't make any sense to have the headers in Analysis.
Long term, I think there is going to be a much better way to divide
these matters. The dominators code should be fully separated into the
abstract graph algorithm and have that put in Support where it becomes
obvious that even Clang's CFGBlocks can use it. Then the verifier can
manually construct dominance information from the Support-driven
interface while the Analysis library can provide a pass which both
caches, reconstructs, and supports a nice update API.
But those are very long term, and so I don't want to leave the really
confusing structure until that day arrives.
llvm-svn: 199082
This moves the old pass creation functionality to its own header and
updates the callers of that routine. Then it adds a new PM supporting
bitcode writer to the header file, and wires that up in the opt tool.
A test is added that round-trips code into bitcode and back out using
the new pass manager.
llvm-svn: 199078
This patch covered 2 more scenarios:
1. Two operands of shuffle_vector are the same, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> %a, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
2. One of operands is undef, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> undef, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
After this patch, perm instructions will have a chance to be emitted instead of lots of INS.
llvm-svn: 199069
The target specific parser should return `false' if the target AsmParser handles
the directive, and `true' if the generic parser should handle the directive.
Many of the target specific directive handlers would `return Error' which does
not follow these semantics. This change simply changes the target specific
routines to conform to the semantics of ParseDirective.
Conformance to the semantics improves diagnostics emitted for the invalid
directives. X86 is taken as a sample to ensure that multiple diagnostics are
not presented for a single error.
llvm-svn: 199068
Targets like SPARC and MIPS have delay slots and normally bundle the
delay slot instruction with the corresponding terminator.
Teach isBlockOnlyReachableByFallthrough to find any MBB operands on
bundled terminators so SPARC doesn't need to specialize this function.
llvm-svn: 199061
This implements the legacy passes in terms of the new ones. It adds
basic testing using explicit runs of the passes. Next up will be wiring
the basic output mechanism of opt up when the new pass manager is
engaged unless bitcode writing is requested.
llvm-svn: 199049
API is exposed.
This removes the support for deleting the ostream, switches the member
and constructor order around to be consistent with the creation
routines, and switches to using references.
llvm-svn: 199047
Nothing was using the ability of the pass to delete the raw_ostream it
printed to, and nothing was trying to pass it a pointer to the
raw_ostream. Also, the function variant had a different order of
arguments from all of the others which was just really confusing. Now
the interface accepts a reference, doesn't offer to delete it, and uses
a consistent order. The implementation of the printing passes haven't
been updated with this simplification, this is just the API switch.
llvm-svn: 199044
name to match the source file which I got earlier. Update the include
sites. Also modernize the comments in the header to use the more
recommended doxygen style.
llvm-svn: 199041
An improper qualifier would result in a superfluous error due to the parser not
consuming the remainder of the statement. Simply consume the remainder of the
statement to avoid the error.
llvm-svn: 199035
The implicit immediate 0 forms are assembly aliases, not distinct instruction
encodings. Fix the initial implementation introduced in r198914 to an alias to
avoid two separate instruction definitions for the same encoding.
An InstAlias is insufficient in this case due to the need to
add a new additional operand for the implicit zero. By using an AsmPseudoInst,
we fall back to the C++ code to transform the instruction into the equivalent
_POST_IMM form, inserting the additional implicit immediate 0.
llvm-svn: 199032
This is different from the argument passing convention which puts the
first float argument in %f1.
With this patch, all returned floats are treated as if the 'inreg' flag
were set. This means multiple float return values get packed in %f0,
%f1, %f2, ...
Note that when returning a struct in registers, clang will set the
'inreg' flag on the return value, so that behavior is unchanged. This
also happens when returning a float _Complex.
llvm-svn: 199028
case when the lookup table doesn't have any holes.
This means we can build a lookup table for switches like this:
switch (x) {
case 0: return 1;
case 1: return 2;
case 2: return 3;
case 3: return 4;
default: exit(1);
}
The default case doesn't yield a constant result here, but that doesn't matter,
since a default result is only necessary for filling holes in the lookup table,
and this table doesn't have any holes.
This makes us transform 505 more switches in a clang bootstrap, and shaves 164 KB
off the resulting clang binary.
llvm-svn: 199025
A 32-bit immediate value can be formed from a constant expression and loaded
into a register. Add support to emit this into an object file. Because this
value is a constant, a relocation must *not* be produced for it.
llvm-svn: 199023
mode that can be used to debug the execution of everything.
No support for analyses here, that will come later. This already helps
show parts of the opt commandline integration that aren't working. Tests
of that will start using it as the bugs are fixed.
llvm-svn: 199004