intrinsics.
Reported on the list by Evan with a couple of attempts to fix, but it
took a while to dig down to the root cause. There are two overlapping
bugs here, both centering around the circumstance of discovering
a memcpy operand which is known to be completely outside the bounds of
the alloca.
First, we need to kill the *other* side of the memcpy if it was added to
this alloca. Otherwise we'll factor it into our slicing and try to
rewrite it even though we know for a fact that it is dead. This is made
more tricky because we can visit the sides in either order. So we have
to both kill the other side and skip instructions marked as dead. The
latter really should be goodness in every case, but here it is a matter of
correctness.
Second, we need to actually remove the *uses* of the alloca by the
memcpy when queuing it for later deletion. Otherwise it may still be
using the alloca when we go to promote it (if the rewrite re-uses the
existing alloca instruction). Do this by factoring out the
use-clobbering code used when nixing a Phi argument and re-using it
across the operands of a to-be-deleted instruction.
llvm-svn: 199590
Ensure that the tag types are reflected on a replacement. This is particularly
important for the compatibility tag which has multiple representations where the
last definition wins.
llvm-svn: 199577
This moves the ARM build attributes definitions and support routines into the
Support library. The support routines simply permit the conversion of the value
to and from a string representation.
The movement is prompted in order to permit access to the constants and string
representations from readobj in order to facilitate decoding of the attributes
section.
llvm-svn: 199575
a reduction.
Really. Under certain circumstances (the use list of an instruction has to be
set up right - hence the extra pass in the test case) we would not recognize
when a value in a potential reduction cycle was used multiple times by the
reduction cycle.
Fixes PR18526.
radar://15851149
llvm-svn: 199570
This makes the 'verifyFunction' and 'verifyModule' functions totally
independent operations on the LLVM IR. It also cleans up their API a bit
by lifting the abort behavior into their clients and just using an
optional raw_ostream parameter to control printing.
The implementation of the verifier is now just an InstVisitor with no
multiple inheritance. It also is significantly more const-correct, and
hides the const violations internally. The two layers that force us to
break const correctness are building a DomTree and dispatching through
the InstVisitor.
A new VerifierPass is used to implement the legacy pass manager
interface in terms of the other pieces.
The error messages produced may be slightly different now, and we may
have slightly different short circuiting behavior with different usage
models of the verifier, but generally everything works equivalently and
this unblocks wiring the verifier up to the new pass manager.
llvm-svn: 199569
one, but not create one. This is useful in the verifier when we want to
query the constant if it exists but not create one. To be used in an
upcoming commit.
llvm-svn: 199568
Summary:
The only current use of this flag is to mark the alloca as dynamic, even
if it's in the entry block. The stack adjustment for the alloca can
never be folded into the prologue because the call may clear it and it
has to be allocated at the top of the stack.
Reviewers: majnemer
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2571
llvm-svn: 199525
This patch adds two new target-independent calling conventions for runtime
calls - PreserveMost and PreserveAll.
The target-specific implementation for X86-64 is defined as following:
- Arguments are passed as for the default C calling convention
- The same applies for the return value(s)
- PreserveMost preserves all GPRs - except R11
- PreserveAll preserves all GPRs and all XMMs/YMMs - except R11
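For illustration only (not part of the patch, and assuming the clang-level
preserve_most attribute spelling that maps onto the new convention), a call
site might look like:
  __attribute__((preserve_most)) void runtime_notify(void *Obj);

  void hotPath(void *Obj) {
    // Only R11 (and, under preserve_most, the XMM/YMM state) has to be
    // treated as clobbered here; every other GPR survives the call.
    runtime_notify(Obj);
  }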
Reviewed by Lang and Philip
llvm-svn: 199508
Summary:
$rs and $rt were the wrong way round in the .td and the testcase wasn't
strict enough to detect the mistake.
Reviewers: matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D2554
llvm-svn: 199498
IIImul -> II_MUL
IIImult -> II_MULT, II_MULTU, II_MADD, II_MADDU, II_MSUB, II_MSUBU, II_DMULT, II_DMULTU
No functional change since the InstrItinData's have been duplicated.
llvm-svn: 199495
Fix MLA defs to use register class GPRnopc.
Add encoding tests for multiply instructions.
(Alias for MUL/SMLAL/UMLAL added by r199026.)
Patch by Zhaoshi.
llvm-svn: 199491
and tweak comments prior to more invasive surgery. Also clean up some
other non-doxygen comments, and run clang-format over the parts that are
going to change dramatically in subsequent commits so that those don't
get cluttered with formatting changes.
No functionality changed.
llvm-svn: 199489
the verifier after ensuring the CFG is at least usefully formed.
This fixes a number of problems:
1) The PreVerifier was missing the controls the Verifier provides over
*how* an invalid module is handled -- it just aborted the program!
Now it uses the same logic as the Verifier which is significantly
more library-friendly.
2) The DominatorTree used previously could have been cached and not
updated due to bugs in prior passes and we would silently use the
stale tree. This could cause dominance errors to not be as quickly
diagnosed.
3) We can now (in the next patch) pull the functionality of the verifier
apart from the pass infrastructure so that you can verify IR without
having any form of pass manager. This in turn frees the code to share
logic between old and new pass manager variants.
Along the way I fixed at least one annoying bug -- the state for
'Broken' wasn't being cleared from run to run causing all functions
visited after the first broken function to be marked as broken
regardless of whether *they* were a problem. Fortunately, I don't really
know much of a way to observe this peculiarity.
In case folks are worried about the runtime cost, it's negligible.
I looked at running the entire regression test suite (which should be
a relatively good use of the verifier) before and after but was unable
to even measure the time spent on the verifier and there was no
regression from before to after. I checked both with debug builds and
optimized builds.
llvm-svn: 199487
This makes things a lot easier, because we can now talk about the
"argument allocation", which allocates all the memory for the call in
one shot.
The only functional change is to the verifier for a feature that hasn't
shipped yet.
llvm-svn: 199434
When registering a pass, a pass can now specify a second constructor that
takes a pointer to TargetMachine as an argument.
The PassInfo class has been updated to reflect that possibility.
If such a constructor exists opt will use it instead of the default constructor
when instantiating the pass.
Since such IR passes are supposed to be rare, no specific support has been
added to this commit to allow an easy registration of such a pass.
In other words, for such pass, the initialization function has to be
hand-written (see CodeGenPrepare for instance).
Now, codegenprepare can be tested using opt:
opt -codegenprepare -mtriple=mytriple input.ll
llvm-svn: 199430
with raw_ostream's write_escaped() method.
For example darwin's otool(1) program that uses the llvm
disassembler now produces disassembly like this:
leaq 0x7b(%rip), %rdi ## literal pool for: "%f\ntoto\n"
and does not print the newlines, which would otherwise mess up the output.
rdar://15145300
llvm-svn: 199407
This is necessary because the classes are shared between all implementations.
No functional change since the InstrItinData's have been duplicated.
llvm-svn: 199394
IIArith -> II_ADD, II_ADDU, II_AND, II_CL[ZO], II_DADDIU, II_DADDU,
II_DROTR, II_DROTR32, II_DROTRV, II_DSLL, II_DSLL32, II_DSLLV,
II_DSR[AL], II_DSR[AL]32, II_DSR[AL]V, II_DSUBU, II_LUI, II_MOV[ZFNT],
II_NOR, II_OR, II_RDHWR, II_ROTR, II_ROTRV, II_SLL, II_SLLV, II_SR[AL],
II_SR[AL]V, II_SUBU, II_XOR
No functional change since the InstrItinData's have been duplicated.
This is necessary because the classes are shared between all schedulers.
Once this patch series is committed there will be an InstrItinClass for
each mnemonic with minimal grouping. This does increase the size of the
itinerary tables for each MIPS scheduler but we have a few options for dealing
with that later. These options include reducing the number of classes once
we see the best way to simplify them, or by extending tablegen to be able
to compress the table by eliminating duplicates entries, etc.
llvm-svn: 199391
Affects:
DMULT, DMULTu, MADD, MADD_MM, MADDU, MADDU_MM, MSUB, MSUB_MM, MSUBU,
MSUBU_MM, MULT, MULTu
Does not affect MULT_MM, MULTu_MM since they are currently miscategorised
as IIImul.
llvm-svn: 199381
There are two attempted optimisations in reMaterializeTrivialDef, trying to
avoid promoting the size of a register too much when rematerializing.
Unfortunately, both appear to be flawed. First, we see if the original register
would have worked, but this is inadequate. Consider:
v1 = SOMETHING (v1 is QQ)
v2:Q0 = COPY v1:Q1 (v1, v2 are QQ)
...
uses of v2
In this case even though v2 *could* be used directly as the output of
SOMETHING, this would set the wrong bits of the QQ register involved. The
correct rematerialization must be:
v2:Q0_Q1 = SOMETHING (v2 promoted to QQQ)
...
uses of v2:Q1_Q2
For the second optimisation, if the correct remat is "v2:idx = SOMETHING" then
we can't necessarily expect v2 itself to be valid for SOMETHING, but we do try
to hunt for a class between v1 and v2 that works. Unfortunately, this is also
wrong:
v1 = SOMETHING (v1 is QQ)
v2:Q0_Q1 = COPY v1 (v1 is QQ, v2 is QQQ)
...
uses of v2 as a QQQ
The canonical rematerialization here is "v2:Q0_Q1 = SOMETHING". However current
logic would decide that v2 could be a QQ (no interest is taken in later uses).
This patch, therefore, always accepts the widened register class without trying
to be clever. Generally there is no penalty to this (e.g. in the common GR32 <
GR64 case, expanding the width doesn't matter because it's not like you were
going to do anything else with the high bits of a GR32 register). It can
increase register pressure in cases like the ARM VFP regs though (multiple
non-overlapping but equivalent subregisters). This situation can be
spotted by the fact that both source and destination in the
not-quite-coalesced pair have a sub-register index, and
rematerialisation is therefore skipped in that case.
Unfortunately, no in-tree targets actually expose this as far as I can tell
(there are so few isAsCheapAsAMove instructions for it to trigger on) so I've
been unable to produce a test. It was exposed in our ARM64 SPEC tests though,
and I will be adding a test there that we should be able to contribute
soon(TM).
rdar://problem/15775279
llvm-svn: 199376
flag from clang, and disable zero-base shadow support on all platforms
where it is not the default behavior.
- It is completely unused, as far as we know.
- It is ABI-incompatible with non-zero-base shadow, which means all
objects in a process must be built with the same setting. Failing to
do so results in a segmentation fault at runtime.
- It introduces a backward dependency of compiler-rt on user code,
which is uncommon and complicates testing.
This is the LLVM part of a larger change.
llvm-svn: 199371
This patch adds the capability to dump export table contents. An example
output is this:
Export Table:
Ordinal RVA Name
5 0x2008 exportfn1
6 0x2010 exportfn2
By adding this feature to llvm-objdump, we will be able to use it to check
export table contents in LLD's tests. Currently we are doing binary
comparison in the tests, which is fragile and not readable to humans.
llvm-svn: 199358
Move copying of global initializers below the cloning of functions.
The BlockAddress doesn't have access to the correct basic blocks until the
functions have been cloned. This causes the BlockAddress to point to the old
values. Just wait until the functions have been cloned before copying the
initializers.
PR13163
llvm-svn: 199354
DataRefImpl (a union of two integers and a pointer) is not the ideal data type
to represent a reference to an import directory entity. We should just use the
pointer to the import table and an offset instead to simplify. No functionality
change.
llvm-svn: 199349
than it needs to be by 1 bit but I need to finish some other things so
that all the boundary cases will work in that situation. constpool.c
in test-suite will fail to assemble under our new internal test-suite sync
without this change.
llvm-svn: 199343
ARM assembly syntax uses @ for a comment, except for the second
parameter of the .symver directive which requires @ as part of the
symbol name. This commit fixes the parsing of this directive by
adding a special case for ARM for this one argument.
To make the change we had to move the AllowAtInIdentifier variable
to the MCAsmLexer interface (from AsmLexer) and expose a setter for
the value. The ELFAsmParser then toggles this value when parsing
the second argument to the .symver directive for a target that
uses @ as a comment symbol.
llvm-svn: 199339
Add a hook in the C API of LTO so that clients of the code generator can set
their own handler for the LLVM diagnostics.
The handler is defined like this:
typedef void (*lto_diagnostic_handler_t)(lto_codegen_diagnostic_severity_t
severity, const char *diag, void *ctxt)
- severity says how bad this is.
- diag is a string that contains the diagnostic message.
- ctxt is the registered context for this handler.
This hook is more general than the lto_get_error_message, since this function
keeps only the latest message and can only be queried when something went wrong
(no warning for instance).
<rdar://problem/15517596>
llvm-svn: 199338
This fixes a regression introduced by r199135.
Revision 199135 tried to simplify part of the logic in method
DAGCombiner::SimplifyVBinOp introducing calls to method BuildVectorSDNode::isConstant().
However, that revision wrongly changed the check performed by method
SimplifyVBinOp to identify dag nodes that can be folded.
Before revision 199135, that method only tried to simplify vector binary operations
if both operands were build_vector of Constant/ConstantFP/Undef only.
After revision 199135, method SimplifyVBinOp tried to
simplify also vector binary operations with only one constant operand.
This fixes the problem restoring the old behavior of SimplifyVBinOp.
llvm-svn: 199328
I did write a version returning ErrorOr<OwningPtr<Binary> >, but it is too
cumbersome to use without std::move. I will keep the patch locally and submit
when we switch to c++11.
llvm-svn: 199326
MSVC on x64 requires that we create image relative symbol
references to refer to RTTI data. Seeing as how there is no way to
explicitly make reference to a given relocation type in LLVM IR, pattern
match expressions of the form &foo - &__ImageBase.
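For reference, the source-level idiom being matched looks roughly like this
(illustrative C++; __ImageBase is the MSVC linker-provided image base symbol,
and the helper name is made up):
  extern "C" char __ImageBase;
  extern int foo;

  // Compute an image-relative offset ("RVA") for foo; the patch lets the
  // backend lower this subtraction to an image-relative reference.
  unsigned rvaOfFoo() {
    return static_cast<unsigned>(reinterpret_cast<char *>(&foo) - &__ImageBase);
  }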
Differential Revision: http://llvm-reviews.chandlerc.com/D2523
llvm-svn: 199312
There has been an old FIXME to find the right cut-off for when it's worth
analyzing and potentially transforming a switch to a lookup table.
The switches always have two or more cases. I could not measure any speed-up
by transforming a switch with two cases. A switch with three cases gets a nice
speed-up, and I couldn't measure any compile-time regression, so I think this
is the right threshold.
In a Clang self-host, this causes 480 new switches to be transformed,
and reduces the final binary size by 8 KB.
llvm-svn: 199294
The GNU as behavior is a bit different and very strange. It will mark any
label that contains an instruction. We can implement that, but using the
type looks more natural since gas will not mark a function if a .word is
used to output the instructions!
llvm-svn: 199287
When expanding NEON pseudo stores, we may miss the implicit uses of sub
registers, which may cause the post-RA scheduler to reorder instructions in
a way that breaks anti-dependencies.
For example:
VST1d64QPseudo %R0<kill>, 16, %Q9_Q10, pred:14, pred:%noreg
will be expanded to
VST1d64Q %R0<kill>, 16, %D18, pred:14, pred:%noreg;
An instruction that defines %D20 may be scheduled before the store by
mistake.
This patches adds implicit uses for such case. For the example above, it
emits:
VST1d64Q %R0<kill>, 8, %D18, pred:14, pred:%noreg, %Q9_Q10<imp-use>
llvm-svn: 199282
The changes caused by folding an sp-adjustment into a "pop" previously
disrupted the forward search for the final real instruction in a
terminating block. This switches to a backward search (skipping debug
instrs).
This fixes PR18399.
Patch by Zhaoshi.
llvm-svn: 199266
We should set them to expand for now since there are no patterns
dealing with them. Actually, there are no instructions either so I
doubt they'll ever be acceptable.
llvm-svn: 199265
promotion code, Tablegen will now select FPExt for floating point promotions
(previously it had returned AExt, which is not valid for floating point types).
Any out-of-tree targets that were relying on AExt being returned for FP
promotions will need to update their code to check for FPExt instead.
llvm-svn: 199252
This also fixes the placement of the function label comment. It was being
placed next to the mips16 directive instead of next to the label.
llvm-svn: 199245
Reapply r199191, reverted in r199197 because it carelessly broke
Other/link-opts.ll. The problem was that calling
createInternalizePass("main") would select
createInternalizePass(bool("main")) instead of
createInternalizePass(ArrayRef<const char *>("main")). This commit
fixes the bug.
The original commit message follows.
Add API to LTOCodeGenerator to specify a strategy for the -internalize
pass.
This is a new attempt at Bill's change in r185882, which he reverted in
r188029 due to problems with the gold linker. This puts the onus on the
linker to decide whether (and what) to internalize.
In particular, running internalize before outputting an object file may
change a 'weak' symbol into an internal one, even though that symbol
could be needed by an external object file --- e.g., with arclite.
This patch enables three strategies:
- LTO_INTERNALIZE_FULL: the default (and the old behaviour).
- LTO_INTERNALIZE_NONE: skip -internalize.
- LTO_INTERNALIZE_HIDDEN: only -internalize symbols with hidden
visibility.
LTO_INTERNALIZE_FULL should be used when linking an executable.
Outputting an object file (e.g., via ld -r) is more complicated, and
depends on whether hidden symbols should be internalized. E.g., for
ld -r, LTO_INTERNALIZE_NONE can be used when -keep_private_externs, and
LTO_INTERNALIZE_HIDDEN can be used otherwise. However,
LTO_INTERNALIZE_FULL is inappropriate, since the output object file will
eventually need to link with others.
lto_codegen_set_internalize_strategy() sets the strategy for subsequent
calls to lto_codegen_write_merged_modules() and lto_codegen_compile*().
<rdar://problem/14334895>
llvm-svn: 199244
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
llvm-svn: 199218
Sorry, I don't understand why the warning is generated (a gcc
bug?). Anyhow, the change should improve readability. No functionality
change intended.
llvm-svn: 199214
This fixes a regression introduced by r198113.
Revision r198113 introduced an algorithm that tries to fold a vector shift
by immediate count into a build_vector if the input vector is a known vector
of constants.
However the algorithm only worked under the assumption that the input vector
type and the shift type are exactly the same.
This patch disables the folding of vector shift by immediate count if the
input vector type and the shift value type are not the same.
llvm-svn: 199213
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
llvm-svn: 199204
fastcall requires @ as global prefix instead of _ but getNameWithPrefix
wrongly assumes the OutName buffer is empty and replaces at index 0.
For imported functions this buffer is pre-filled with "__imp_" resulting
in broken "@_imp_foo@0" mangling.
Instead replace at the proper index. We also never have to prepend the
@-prefix because this fastcall mangling is only used on 32-bit Windows
targets, which have _ as the global prefix.
llvm-svn: 199203
Add API to LTOCodeGenerator to specify a strategy for the -internalize
pass.
This is a new attempt at Bill's change in r185882, which he reverted in
r188029 due to problems with the gold linker. This puts the onus on the
linker to decide whether (and what) to internalize.
In particular, running internalize before outputting an object file may
change a 'weak' symbol into an internal one, even though that symbol
could be needed by an external object file --- e.g., with arclite.
This patch enables three strategies:
- LTO_INTERNALIZE_FULL: the default (and the old behaviour).
- LTO_INTERNALIZE_NONE: skip -internalize.
- LTO_INTERNALIZE_HIDDEN: only -internalize symbols with hidden
visibility.
LTO_INTERNALIZE_FULL should be used when linking an executable.
Outputting an object file (e.g., via ld -r) is more complicated, and
depends on whether hidden symbols should be internalized. E.g., for
ld -r, LTO_INTERNALIZE_NONE can be used when -keep_private_externs, and
LTO_INTERNALIZE_HIDDEN can be used otherwise. However,
LTO_INTERNALIZE_FULL is inappropriate, since the output object file will
eventually need to link with others.
lto_codegen_set_internalize_strategy() sets the strategy for subsequent
calls to lto_codegen_write_merged_modules() and lto_codegen_compile*().
<rdar://problem/14334895>
llvm-svn: 199191
When creating a virtual register for a def, the value type should be
used to pick the register class. If we only use the register class
constraint on the instruction, we might pick a too large register class.
Some registers can store values of different sizes. For example, the x86
xmm registers can hold f32, f64, and 128-bit vectors. The three
different value sizes are represented by register classes with identical
register sets: FR32, FR64, and VR128. These register classes have
different spill slot sizes, so it is important to use the right one.
The register class constraint on an instruction doesn't necessarily care
about the size of the value it is defining. The value type determines
that.
This fixes a problem where InstrEmitter was picking 32-bit register
classes for 64-bit values on SPARC.
llvm-svn: 199187
The already allocatable DPair superclass contains odd-even D register
pairs in addition to the even-odd pairs in the QPR register class. There
is no reason to constrain the set of D register pairs that can be used
for NEON values. Any NEON instructions that require a Q register will
automatically constrain the register class to QPR.
The allocation order for DPair begins with the QPR registers, so
register allocation is unlikely to change much.
llvm-svn: 199186
We need to ensure that StackSlotColoring.cpp does not reuse stack
spill slots in functions that call "returns_twice" functions such as
setjmp(), otherwise this can lead to miscompiled code, because a stack
slot would be clobbered when it's still live.
This was already handled correctly for functions that call setjmp()
(though this wasn't covered by a test), but not for functions that
invoke setjmp().
We fix this by changing callsFunctionThatReturnsTwice() to check for
invoke instructions.
This fixes PR18244.
llvm-svn: 199180
This will allow it to be called from target independent parts of the main
streamer that don't know if there is a registered target streamer or not. This
in turn will allow targets to perform extra actions at specified points in the
interface: add extra flags for some labels, extra work during finalization, etc.
llvm-svn: 199174
This commit teaches DAG to reassociate vector ops, which in turn enables
constant folding of vector op chains that appear later on during custom lowering
and DAG combine.
Reviewed by Andrea Di Biagio
llvm-svn: 199135
This is a very confusing option for a feature that will go away.
-enable-misched is exposed instead to help triage issues with the new
scheduler.
llvm-svn: 199133
The issue is caused when the post-RA scheduler reorders a bundle instruction
(IT block). However, it only flips the CPSR liveness of the bundle instruction
and leaves the instructions inside the bundle unchanged, which causes an
inconsistency and crashes Thumb2SizeReduction.cpp::ReduceMBB().
llvm-svn: 199127
APInt only knows how to compare values with the same BitWidth and asserts
in all other cases.
With this fix, function PerformORCombine does not use the APInt equality
operator if the APInt values returned by 'isConstantSplat' differ in BitWidth.
In that case they are different and no comparison is needed.
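A minimal sketch of the guarded comparison (not the exact code in
PerformORCombine; the helper name is illustrative):
  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  // APInt::operator== asserts that both values have the same bit width, so
  // only compare the splat values when the widths agree; otherwise they are
  // simply considered different.
  static bool splatValuesMatch(const APInt &A, const APInt &B) {
    return A.getBitWidth() == B.getBitWidth() && A == B;
  }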
llvm-svn: 199119
...into (ashr (shl (anyext X), ...), ...), which requires one fewer
instruction. The (anyext X) can sometimes be simplified too.
I didn't do this in DAGCombiner because widening shifts isn't a win
on all targets.
llvm-svn: 199114
Previously we only used GPR for the destination placeholder in "ldr rD, [pc,
incorrect codegen under the integrated assembler.
This should fix both issues (which probably only affect MachO targets at the
moment).
rdar://problem/15800156
llvm-svn: 199108
This finishes the job started in r198756, and creates separate opcodes for
64-bit vs. 32-bit versions of the rest of the RET instructions too.
LRETL/LRETQ are interesting... I can't see any justification for their
existence in the SDM. There should be no 'LRETL' in 64-bit mode, and no
need for a REX.W prefix for LRETQ. But this is what GAS does, and my
Sandybridge CPU and an Opteron 6376 concur when tested as follows:
asm __volatile__("pushq $0x1234\nmovq $0x33,%rax\nsalq $32,%rax\norq $1f,%rax\npushq %rax\nlretl $8\n1:");
asm __volatile__("pushq $1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
asm __volatile__("pushq $0x33\npushq $1f\nlretq\n1:");
asm __volatile__("pushq $0x1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
cf. PR8592 and commit r118903, which added LRETQ. I only added LRETIQ to
match it.
I don't quite understand how the Intel syntax parsing for ret
instructions is working, despite r154468 allegedly fixing it. Aren't the
explicitly sized 'retw', 'retd' and 'retq' supposed to work? I have at
least made the 'lretq' work with (and indeed *require*) the 'q'.
llvm-svn: 199106
can be used by both the new pass manager and the old.
This removes it from any of the virtual mess of the pass interfaces and
lets it derive cleanly from the DominatorTreeBase<> template. In turn,
tons of boilerplate interface can be nuked and it turns into a very
straightforward extension of the base DominatorTree interface.
The old analysis pass is now a simple wrapper. The names and style of
this split should match the split between CallGraph and
CallGraphWrapperPass. All of the users of DominatorTree have been
updated to match using many of the same tricks as with CallGraph. The
goal is that the common type remains the resulting DominatorTree rather
than the pass. This will make subsequent work toward the new pass
manager significantly easier.
Also in numerous places things became cleaner because I switched from
re-running the pass (!!! mid way through some other passes run!!!) to
directly recomputing the domtree.
llvm-svn: 199104
trees into the Support library.
These are all expressed in terms of the generic GraphTraits and CFG,
with no reliance on any concrete IR types. Putting them in support
clarifies that and makes the fact that the static analyzer in Clang uses
them much more sane. When moving the Dominators.h file into the IR
library I claimed that this was the right home for it but not something
I planned to work on. Oops.
So why am I doing this? It happens to be one step toward breaking the
requirement that IR verification can only be performed from inside of
a pass context, which completely blocks the implementation of
verification for the new pass manager infrastructure. Fixing it will
also allow removing the concept of the "preverify" step (WTF???) and
allow the verifier to cleanly flag functions which fail verification in
a way that precludes even computing dominance information. Currently,
that results in a fatal error even when you ask the verifier to not
fatally error. It's awesome like that.
The yak shaving will continue...
llvm-svn: 199095
Very sorry, this was a premature patch that I still need to investigate and
finish off (for some reason beyond me at the moment it doesn't actually fix the
issue in all cases).
This reverts commit r199091.
llvm-svn: 199093
There are two attempted optimisations in reMaterializeTrivialDef, trying to
avoid promoting the size of a register too much when rematerializing.
Unfortunately, both appear to be flawed. First, we see if the original register
would have worked, but this is inadequate. Consider:
v1 = SOMETHING (v1 is QQ)
v2:Q0 = COPY v1:Q1 (v1, v2 are QQ)
...
uses of v2
In this case even though v2 *could* be used directly as the output of
SOMETHING, this would set the wrong bits of the QQ register involved. The
correct rematerialization must be:
v2:Q0_Q1 = SOMETHING (v2 promoted to QQQ)
...
uses of v2:Q1_Q2
For the second optimisation, if the correct remat is "v2:idx = SOMETHING" then
we can't necessarily expect v2 itself to be valid for SOMETHING, but we do try
to hunt for a class between v1 and v2 that works. Unfortunately, this is also
wrong:
v1 = SOMETHING (v1 is QQ)
v2:Q0_Q1 = COPY v1 (v1 is QQ, v2 is QQQ)
...
uses of v2 as a QQQ
The canonical rematerialization here is "v2:Q0_Q1 = SOMETHING". However current
logic would decide that v2 could be a QQ (no interest is taken in later uses).
This patch, therefore, always accepts the widened register class without trying
to be clever. Generally there is no penalty to this (e.g. in the common GR32 <
GR64 case, expanding the width doesn't matter because it's not like you were
going to do anything else with the high bits of a GR32 register). It can
increase register pressure in cases like the ARM VFP regs though (multiple
non-overlapping but equivalent subregisters). Hopefully this situation is rare
enough that it won't matter.
Unfortunately, no in-tree targets actually expose this as far as I can tell
(there are so few isAsCheapAsAMove instructions for it to trigger on) so I've
been unable to produce a test. It was exposed in our ARM64 SPEC tests though,
and I will be adding a test there that we should be able to contribute
soon(TM).
llvm-svn: 199091
directory. These passes are already defined in the IR library, and it
doesn't make any sense to have the headers in Analysis.
Long term, I think there is going to be a much better way to divide
these matters. The dominators code should be fully separated into the
abstract graph algorithm and have that put in Support where it becomes
obvious that even Clang's CFGBlocks can use it. Then the verifier can
manually construct dominance information from the Support-driven
interface while the Analysis library can provide a pass which both
caches, reconstructs, and supports a nice update API.
But those are very long term, and so I don't want to leave the really
confusing structure until that day arrives.
llvm-svn: 199082
This moves the old pass creation functionality to its own header and
updates the callers of that routine. Then it adds a new PM supporting
bitcode writer to the header file, and wires that up in the opt tool.
A test is added that round-trips code into bitcode and back out using
the new pass manager.
llvm-svn: 199078
This patch covered 2 more scenarios:
1. Two operands of shuffle_vector are the same, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> %a, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
2. One of operands is undef, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> undef, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
After this patch, perm instructions will have a chance to be emitted instead of lots of INS.
llvm-svn: 199069
The target specific parser should return `false' if the target AsmParser handles
the directive, and `true' if the generic parser should handle the directive.
Many of the target specific directive handlers would `return Error' which does
not follow these semantics. This change simply updates the target specific
routines to conform to the semantics of ParseDirective.
Conformance to the semantics improves diagnostics emitted for the invalid
directives. X86 is taken as a sample to ensure that multiple diagnostics are
not presented for a single error.
llvm-svn: 199068
Targets like SPARC and MIPS have delay slots and normally bundle the
delay slot instruction with the corresponding terminator.
Teach isBlockOnlyReachableByFallthrough to find any MBB operands on
bundled terminators so SPARC doesn't need to specialize this function.
llvm-svn: 199061
This implements the legacy passes in terms of the new ones. It adds
basic testing using explicit runs of the passes. Next up will be wiring
the basic output mechanism of opt up when the new pass manager is
engaged unless bitcode writing is requested.
llvm-svn: 199049
API is exposed.
This removes the support for deleting the ostream, switches the member
and constructor order arround to be consistent with the creation
routines, and switches to using references.
llvm-svn: 199047
Nothing was using the ability of the pass to delete the raw_ostream it
printed to, and nothing was trying to pass it a pointer to the
raw_ostream. Also, the function variant had a different order of
arguments from all of the others which was just really confusing. Now
the interface accepts a reference, doesn't offer to delete it, and uses
a consistent order. The implementation of the printing passes haven't
been updated with this simplification, this is just the API switch.
llvm-svn: 199044
name to match the source file which I got earlier. Update the include
sites. Also modernize the comments in the header to use the more
recommended doxygen style.
llvm-svn: 199041
An improper qualifier would result in a superfluous error due to the parser not
consuming the remainder of the statement. Simply consume the remainder of the
statement to avoid the error.
llvm-svn: 199035
The implicit immediate 0 forms are assembly aliases, not distinct instruction
encodings. Fix the initial implementation introduced in r198914 to an alias to
avoid two separate instruction definitions for the same encoding.
An InstAlias is insufficient in this case due to the need to add an
additional operand for the implicit zero. By using an AsmPseudoInst, we
fall back to the C++ code to transform the instruction into the equivalent
_POST_IMM form, inserting the additional implicit immediate 0.
llvm-svn: 199032
This is different from the argument passing convention which puts the
first float argument in %f1.
With this patch, all returned floats are treated as if the 'inreg' flag
were set. This means multiple float return values get packed in %f0,
%f1, %f2, ...
Note that when returning a struct in registers, clang will set the
'inreg' flag on the return value, so that behavior is unchanged. This
also happens when returning a float _Complex.
llvm-svn: 199028
case when the lookup table doesn't have any holes.
This means we can build a lookup table for switches like this:
switch (x) {
case 0: return 1;
case 1: return 2;
case 2: return 3;
case 3: return 4;
default: exit(1);
}
The default case doesn't yield a constant result here, but that doesn't matter,
since a default result is only necessary for filling holes in the lookup table,
and this table doesn't have any holes.
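Roughly what the lookup-table form amounts to for this example (illustrative
C++, not the generated IR):
  #include <cstdlib>

  static const int Table[4] = {1, 2, 3, 4};

  int lookup(unsigned x) {
    if (x < 4)
      return Table[x]; // the table is dense, so no default result is needed
    std::exit(1);      // original, non-constant default case
  }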
This makes us transform 505 more switches in a clang bootstrap, and shaves 164 KB
off the resulting clang binary.
llvm-svn: 199025
A 32-bit immediate value can be formed from a constant expression and loaded
into a register. Add support to emit this into an object file. Because this
value is a constant, a relocation must *not* be produced for it.
llvm-svn: 199023
mode that can be used to debug the execution of everything.
No support for analyses here, that will come later. This already helps
show parts of the opt commandline integration that isn't working. Tests
of that will start using it as the bugs are fixed.
llvm-svn: 199004
Use separate callee-save masks for XMM and YMM registers for anyregcc on X86 and
select the proper mask depending on the target cpu we compile for.
llvm-svn: 198985
1- Use the line_iterator class to read profile files.
2- Allow comments in profile file. Lines starting with '#'
are completely ignored while reading the profile.
3- Add parsing support for discriminators and indirect call samples.
Our external profiler can emit more profile information that we are
currently not handling. This patch does not add new functionality to
support this information, but it allows profile files to provide it.
I will add actual support later on (for at least one of these
features, I need support for DWARF discriminators in Clang).
A sample line may contain the following additional information:
Discriminator. This is used if the sampled program was compiled with
DWARF discriminator support
(http://wiki.dwarfstd.org/index.php?title=Path_Discriminators). This
is currently only emitted by GCC and we just ignore it.
Potential call targets and samples. If present, this line contains a
call instruction. This models both direct and indirect calls. Each
called target is listed together with the number of samples. For
example,
130: 7 foo:3 bar:2 baz:7
The above means that at relative line offset 130 there is a call
instruction that calls one of foo(), bar() and baz(). With baz()
being the relatively more frequent call target.
Differential Revision: http://llvm-reviews.chandlerc.com/D2355
4- Simplify format of profile input file.
This implements earlier suggestions to simplify the format of the
sample profile file. The symbol table is not necessary and function
profiles do not need to know the number of samples in advance.
Differential Revision: http://llvm-reviews.chandlerc.com/D2419
llvm-svn: 198973
This adds a propagation heuristic to convert instruction samples
into branch weights. It implements a similar heuristic to the one
implemented by Dehao Chen on GCC.
The propagation proceeds in 3 phases:
1- Assignment of block weights. All the basic blocks in the function
are initially assigned the same weight as their most frequently
executed instruction.
2- Creation of equivalence classes. Since samples may be missing from
blocks, we can fill in the gaps by setting the weights of all the
blocks in the same equivalence class to the same weight. To compute
the concept of equivalence, we use dominance and loop information.
Two blocks B1 and B2 are in the same equivalence class if B1
dominates B2, B2 post-dominates B1 and both are in the same loop.
3- Propagation of block weights into edges. This uses a simple
propagation heuristic. The following rules are applied to every
block B in the CFG:
- If B has a single predecessor/successor, then the weight
of that edge is the weight of the block.
- If all the edges are known except one, and the weight of the
block is already known, the weight of the unknown edge will
be the weight of the block minus the sum of all the known
edges. If the sum of all the known edges is larger than B's weight,
we set the unknown edge weight to zero.
- If there is a self-referential edge, and the weight of the block is
known, the weight for that edge is set to the weight of the block
minus the weight of the other incoming edges to that block (if
known).
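As an illustration of the second rule above (a hypothetical helper, not the
actual pass code):
  #include <cstdint>

  // Weight of the single unknown edge, given the block weight and the sum of
  // the already-known edge weights; clamps to zero if the known edges already
  // exceed the block weight.
  uint64_t unknownEdgeWeight(uint64_t BlockWeight, uint64_t SumKnownEdges) {
    return SumKnownEdges >= BlockWeight ? 0 : BlockWeight - SumKnownEdges;
  }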
Since this propagation is not guaranteed to finalize for every CFG, we
only allow it to proceed for a limited number of iterations (controlled
by -sample-profile-max-propagate-iterations). It currently uses the same
GCC default of 100.
Before propagation starts, the pass builds (for each block) a list of
unique predecessors and successors. This is necessary to handle
identical edges in multiway branches. Since we visit all blocks and all
edges of the CFG, it is cleaner to build these lists once at the start
of the pass.
Finally, the patch fixes the computation of relative line locations.
The profiler emits lines relative to the function header. To discover
it, we traverse the compilation unit looking for the subprogram
corresponding to the function. The line number of that subprogram is the
line where the function begins. That becomes line zero for all the
relative locations.
llvm-svn: 198972
for (i = 0; i < N; ++i)
A[i * Stride1] += B[i * Stride2];
We take loops like this and check that the symbolic strides 'Stride1/2' are one
and drop to the scalar loop if they are not.
This is currently disabled by default and hidden behind the flag
'enable-mem-access-versioning'.
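Conceptually, the versioning produces something like the following
(illustrative C++ with made-up element types, not the actual transformed IR):
  void addStrided(float *A, const float *B, int N, int Stride1, int Stride2) {
    if (Stride1 == 1 && Stride2 == 1) {
      // Fast path: strides proven to be one at runtime, so this loop can be
      // vectorized.
      for (int i = 0; i < N; ++i)
        A[i] += B[i];
    } else {
      // Fallback: the original scalar loop.
      for (int i = 0; i < N; ++i)
        A[i * Stride1] += B[i * Stride2];
    }
  }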
radar://13075509
llvm-svn: 198950
The disassembler would no longer be able to disambiguate between the two
variants (explicit immediate #0 vs implicit, omitted #0) for the ldrt, strt,
ldrbt, strbt mnemonics as both versions indicated the disassembler routine.
llvm-svn: 198944
The GNU assembler supports prefixing the expression with a '#' to indicate that
the value being moved is in fact a constant. This improves the
compatibility of the integrated assembler's parser for this.
llvm-svn: 198916
The GNU assembler has an extension that allows for the elision of the paired
register (dt2) for the LDRD and STRD mnemonics. Add support for this in the
assembly parser. Canonicalise the usage during the instruction parsing from
the specified version.
llvm-svn: 198915
The ARM ARM indicates the mnemonics as follows:
ldrbt{<c>}{<q>} <Rt>, [<Rn>], {, #+/-<imm>}
ldrt{<c>}{<q>} <Rt>, [<Rn>] {, #+/-<imm>}
strbt{<c>}{<q>} <Rt>, [<Rn>] {, #<imm>}
strt{<c>}{<q>} <Rt>, [<Rn>] {, #+/-<imm>}
This improves the parser to deal with the implicit immediate 0 for the mnemonics
as per the specification.
Thanks to Joerg Sonnenberger for the tests!
llvm-svn: 198914
This reverts commit r198865 which reverts r198851.
ASan identified a use-of-uninitialized of the DwarfTypeUnit::Ty variable
in skeleton type units.
llvm-svn: 198908
The zext handling added in r197802 wasn't right for RNSBG. This patch
restricts it to ROSBG, RXSBG and RISBG. (The tests for RISBG were added
in r197802 since RISBG was the motivating example.)
llvm-svn: 198862
At the moment we expect rotates to have the form:
(or (shl X, Y), (shr X, Z))
where Y == bitsize(X) - Z or Z == bitsize(X) - Y. This form means that
the (or ...) is undefined for Y == 0 or Z == 0. This undefinedness can
be avoided by using Y == (C * bitsize(X) - Z) & (bitsize(X) - 1) or
Z == (C * bitsize(X) - Y) & (bitsize(X) - 1) for any integer C
(including 0, the most natural choice).
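For a 32-bit rotate, the masked form corresponds to the familiar safe rotate
idiom (plain C++ illustration, using the C = 1 spelling):
  #include <cstdint>

  // 32-bit rotate left written so that a shift count of zero stays defined;
  // the second shift amount matches the Z == (C * 32 - Y) & 31 form.
  uint32_t rotl32(uint32_t X, unsigned N) {
    return (X << (N & 31)) | (X >> ((32 - N) & 31));
  }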
llvm-svn: 198861
InstCombine converts (sub 32, (add X, C)) into (sub 32-C, X),
so a rotate left of a 32-bit Y by X+C could appear as either:
(or (shl Y, (add X, C)), (shr Y, (sub 32, (add X, C))))
without InstCombine or:
(or (shl Y, (add X, C)), (shr Y, (sub 32-C, X)))
with it.
We already matched the first form. This patch handles the second too.
llvm-svn: 198860
Since we'll now also need the split dwarf file name along with the
language in DwarfTypeUnits, just use the whole DICompileUnit rather than
explicitly handling each field needed.
llvm-svn: 198842
operand into the Value interface just like the core print method is.
That gives a more conistent organization to the IR printing interfaces
-- they are all attached to the IR objects themselves. Also, update all
the users.
This removes the 'Writer.h' header which contained only a single function
declaration.
llvm-svn: 198836
In the stackmap format we advertise the constant field as signed.
However, we were determining whether to promote to a 64-bit constant
pool based on an unsigned comparison.
This fix allows -1 to be encoded as a small constant.
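A sketch of the signed range check the fix amounts to (hypothetical helper,
not the actual stackmap emission code):
  #include <cstdint>

  // A constant goes into the small (signed 32-bit) constant field if it is
  // representable as an int32_t; otherwise it is promoted to the 64-bit
  // constant pool. Comparing unsigned here would wrongly promote -1.
  bool fitsSmallStackMapConstant(int64_t C) {
    return C >= INT32_MIN && C <= INT32_MAX;
  }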
llvm-svn: 198816
This makes it easier to write a test that's mostly shared between
fission and non-fission (using FileCheck's multiple prefix support).
llvm-svn: 198806
MIsNeedChainEdge, which is used by -enable-aa-sched-mi (AA in misched), had an
llvm_unreachable when -enable-aa-sched-mi is enabled and we reach an
instruction with multiple MMOs. Instead, return a conservative answer. This
allows testing -enable-aa-sched-mi on x86.
Also, this moves the check above the isUnsafeMemoryObject checks.
isUnsafeMemoryObject is currently correct only for instructions with one MMO
(as noted in the comment in isUnsafeMemoryObject):
// We purposefully do no check for hasOneMemOperand() here
// in hope to trigger an assert downstream in order to
// finish implementation.
The problem with this is that, had the candidate edge passed the
"!MIa->mayStore() && !MIb->mayStore()" check, the hoped-for assert would never
happen (which could, in theory, lead to incorrect behavior if one of these
secondary MMOs was volatile, for example).
llvm-svn: 198795
We can't do a perfect job here. We *have* to allow (%dx) even in 64-bit
mode, for example, because it might be used for an unofficial form of
the in/out instructions. We actually want to do a better job of validation
*later*. Perhaps *instead* of doing it where we are at the moment.
But for now, doing what validation we *can* do in the place that the code
already has its validation, is an improvement.
llvm-svn: 198760
It seems there is no separate instruction class for having AdSize *and*
OpSize bits set, which is required in order to disambiguate between all
these instructions. So add that to the disassembler.
Hm, perhaps we do need an AdSize16 bit after all?
llvm-svn: 198759
Where "where possible" means that it's an immediate value and it's below
0x10000. In fact GAS will either truncate or error with larger values,
and will insist on using the addr32 prefix to get 32-bit addressing. So
perhaps we should do that, in a later patch.
llvm-svn: 198758
JCXZ should have the 0x67 prefix only if we're in 32-bit mode, so make that
appropriately conditional. And JECXZ needs the prefix instead.
llvm-svn: 198757
I couldn't see how to do this sanely without splitting RETQ from RETL.
Eric says: "sad about the inability to roundtrip them now, but...".
I have no idea what that means, but perhaps it wants preserving in the
commit comment.
llvm-svn: 198756
This fixes the bulk of 16-bit output, and the corresponding test case
x86-16.s now looks mostly like the x86-32.s test case that it was
originally based on. A few irrelevant instructions have been dropped,
and there are still some corner cases to be fixed in subsequent patches.
llvm-svn: 198752
Modern versions of OSX/Darwin's ld (ld64 > 97.17) have an optimisation present that allows the back end to omit relocations (and replace them with an absolute difference) for some FDE text section refs.
This patch allows a backend to opt-in to this behaviour by setting "DwarfFDESymbolsUseAbsDiff". At present, this is only enabled for modern x86 OSX ports.
Test changes by David Fang.
llvm-svn: 198744
I believe the bot failures on linux systems were due to overestimating the
alignment of object-files within archives, which are only guaranteed to be
two-byte aligned. I have reduced the alignment in
RuntimeDyldELF::createObjectImageFromFile accordingly.
llvm-svn: 198737
Operands which involved label arithmetic would previously fail to parse. This
corrects that by adding the additional case for the shift operand validation.
llvm-svn: 198735
take type from the new symbol but merge them so that the type
is never "downgraded".
This is probably quite rare, except for IFUNC symbols which
we used to misassemble, losing the IFUNC type.
Fixes #18372.
llvm-svn: 198706
With the gnu objc runtime private strings are used. Since we only need to
produce a unique label, the fix is to just drop the asserts.
llvm-svn: 198701
This commit adds the pre-UAL aliases of fconsts and fconstd for
vmov.f32 and vmov.f64. They use an InstAlias rather than a
MnemonicAlias to properly support the predicate operand.
We need to support encoded 8-bit constants in order to implement the
pre-UAL fconsts/fconstd aliases for vmov.f32/vmov.f64, so this
commit also fixes parsing of encoded floating point constants used
in vmov.f32/vmov.f64 instructions. Now we can support assembly code
like this:
fconsts s0, #0x70
which is equivalent to vmov.f32 s0, #1.0.
Most of the code was already in place to support this feature.
Previously the code was trying to accept encoded 8-bit float
constants for the vmov.f32/vmov.f64 instructions. It looks like the
support for parsing encoded floats was lost in a refactoring in
commit r148556 and we did not have any tests in place to catch it.
The change in this commit is to keep the parsed value as a 32-bit
float instead of a 64-bit double because that is what the isFPImm()
function expects to find. There is no loss of precision by using a
32-bit float here because we are still limited to an 8-bit encoded
value in the end.
Additionally, we explicitly reject encoded 8-bit floats for
vmovf.32/64. This is the same as the current behavior, but we now do
it explicitly rather than accidentally.
llvm-svn: 198697
are part of the core IR library in order to support dumping and other
basic functionality.
Rename the 'Assembly' include directory to 'AsmParser' to match the
library name and the only functionality left there -- printing has been
in the core IR library for quite some time.
Update all of the #includes to match.
All of this started because I wanted to have the layering in good shape
before I started adding support for printing LLVM IR using the new pass
infrastructure, and commandline support for the new pass infrastructure.
llvm-svn: 198688
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.
Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.
llvm-svn: 198685
This doesn't seem to have actually broken anything. It was paranoia
on my part. Trying again now that bots are more stable.
This is a follow up of the r198338 commit that added truncates for
lcssa phi nodes. Sinking the truncates below the phis cleans up the
loop and simplifies subsequent analysis within the indvars pass.
llvm-svn: 198678
Move the unwinding context for the ARM IAS into a helper class. This is purely
a structural refactoring. A follow up change allows for recording additional
depth to improve diagnostics.
llvm-svn: 198664
Parse tag names as well as expressions. The former is part of the
specification, the latter is for improved compatibility with the GNU assembler.
Fix attribute value handling to be conformant with the specification.
llvm-svn: 198662
Introduce a new virtual method Note into the AsmParser. This complements the
existing Warning and Error methods. Use the new method to clean up the output
of the unwind routines in the ARM AsmParser.
llvm-svn: 198661
This is a follow up of the r198338 commit that added truncates for
lcssa phi nodes. Sinking the truncates below the phis cleans up the
loop and simplifies subsequent analysis within the indvars pass.
llvm-svn: 198654
This patch adds .abicalls and .set pic0 support which
affects the ELF ABI and its flags. In addition the patch uses
a common interface for both the MipsTargetSteamer and
MipsObjectStreamer that both the integrated and standalone
assemblers will use for the output for these directives.
llvm-svn: 198646
SymbolLookUp() call back to return a demangled C++ name to
be used as a comment.
For example, darwin's otool(1) program that uses the llvm
disassembler can now produce disassembly like:
callq __ZNK4llvm6Target20createMCDisassemblerERKNS_15MCSubtargetInfoE ## llvm::Target::createMCDisassembler(llvm::MCSubtargetInfo const&) const
Also fix a bug in LLVMDisasmInstruction() that was not flushing
the raw_svector_ostream for the disassembled instruction string
before copying it to the output buffer that was causing truncation
of the output.
rdar://10173828
llvm-svn: 198637
Now with a fix for PR18384: ValueHandleBase::ValueIsDeleted.
We need to invalidate SCEV's loop info when we delete a block, even if no values are hoisted.
llvm-svn: 198631
The ARM backend has been using most of the MachO related subtarget
checks almost interchangeably, and since the only target it's had to
run on has been IOS (which is all three of MachO, Darwin and IOS) it's
worked out OK so far.
But we'd like to support embedded targets under the "*-*-none-macho"
triple, which means everything starts falling apart and inconsistent
behaviours emerge.
This patch should pick a reasonably sensible set of behaviours for the
new triple (and any others that come along, with luck). Some choices
were debatable (notably FP == r7 or r11), but we can revisit those
later when deficiencies become apparent.
llvm-svn: 198617
This requires a knowledge of the stack size which is not known until
the frame is complete, hence the need for the XCoreFTAOElim pass
which lowers the XCoreISD::FRAME_TO_ARGS_OFFSET instruction into its
final form.
llvm-svn: 198614
We also narrow the liveness of FP & LR during the prologue to
reflect the actual usage of the registers.
I have been unable to construct a test to prove the previous live
range was too large.
llvm-svn: 198611
Longer term, we want to move users to "*-*-*-macho" for embedded work, but for
now people are relying on the last thing we told them, which is unfortunately
"*-*-darwin-eabi".
rdar://problem/15703934
llvm-svn: 198602
The 0x66 prefix toggles between 16-bit and 32-bit operand size.
So in 32-bit mode it is used to switch to 16-bit operand size for the
following instruction, while in 16-bit mode it is the other way round: it is
used to switch to 32-bit operand size instead.
Thus, emit the 0x66 prefix byte for OpSize only in 32-bit (and 64-bit) mode,
and introduce a new OpSize16 bit which is used in 16-bit mode instead.
This is just the basic infrastructure for that change; a subsequent patch
will add the new OpSize16 bit to the 32-bit instructions that need it.
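In rough pseudo-logic (not the actual emitter code), the decision becomes:
  // The 0x66 prefix flips the mode's default operand size, so whether an
  // instruction needs it depends on the mode being assembled for.
  bool needsOpSizePrefix(bool In16BitMode, bool HasOpSize, bool HasOpSize16) {
    return In16BitMode ? HasOpSize16 : HasOpSize;
  }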
Patch from David Woodhouse.
llvm-svn: 198586
This is not really expected to work right yet. Mostly because we will
still emit the OpSize (0x66) prefix in all the wrong places, along with
a number of other corner cases. Those will all be fixed in the subsequent
commits.
Patch from David Woodhouse.
llvm-svn: 198584