Summary:
This was a longstanding FIXME and is a necessary precursor to cases
where foldOperandImpl may have to create more than one instruction
(e.g. to constrain a register class). These are the split-out NFC changes from
D6262.
Reviewers: pete, ributzka, uweigand, mcrosier
Reviewed By: mcrosier
Subscribers: mcrosier, ted, llvm-commits
Differential Revision: http://reviews.llvm.org/D10174
llvm-svn: 239336
on a per-function basis.
Previously some of the passes were conditionally added to ARM's pass pipeline
based on the target machine's subtarget. This patch adds those passes
unconditionally and executes them conditionally based on the predicate
functor passed to the pass constructors. This enables running different sets of
passes for different functions in the module.
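A minimal sketch of the pattern (the pass name and wiring are illustrative,
not the actual ARM passes):
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include <functional>

// "ExamplePass" stands in for a pass that is now always added to the pipeline.
class ExamplePass : public llvm::FunctionPass {
  std::function<bool(const llvm::Function &)> PredicateFtor;

public:
  static char ID;
  ExamplePass(std::function<bool(const llvm::Function &)> Ftor = nullptr)
      : FunctionPass(ID), PredicateFtor(std::move(Ftor)) {}

  // The predicate passed to the constructor decides per function whether the
  // pass actually runs.
  bool runOnFunction(llvm::Function &F) override {
    if (PredicateFtor && !PredicateFtor(F))
      return false;
    // ... the actual transformation ...
    return true;
  }
};
char ExamplePass::ID = 0;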
rdar://problem/20542263
Differential Revision: http://reviews.llvm.org/D8717
llvm-svn: 239325
The global-merge pass was crashing because it assumes that all ConstantExprs
(reached via the global variables that they use) have at least one user.
I haven't worked out a way to test this, as an unused ConstantExpr cannot be
represented by serialised IR, and global-merge can only be run in llc, which
does not run any passes which can make a ConstantExpr dead.
This (reduced to the point of silliness) C code triggers this bug when compiled
for arm-none-eabi at -O1:
static a = 7;
static volatile b[10] = {&a};
c;
main() {
  c = 0;
  for (; c < 10;)
    printf(b[c]);
}
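A hypothetical guard against the failure mode described above (the walk over
GV's users is illustrative, not the actual global-merge code):
// When walking from a global through the ConstantExprs that use it, don't
// assume every ConstantExpr has at least one user.
for (User *U : GV->users())
  if (auto *CE = dyn_cast<ConstantExpr>(U)) {
    if (CE->use_empty())
      continue; // dead ConstantExpr -- skip instead of dereferencing a user
    // ... inspect CE's users as before ...
  }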
Differential Revision: http://reviews.llvm.org/D10314
llvm-svn: 239308
the overloaded version of addPass which takes Pass*.
This change enables inserting the machine printer pass when the overloaded
version of addPass that takes Pass* is called to add a pass, instead of the
one which takes AnalysisID. I need this to prevent make-check tests from
failing when I commit another patch later.
llvm-svn: 239192
For targets with a free fneg, this fold is always a net loss if it
ends up duplicating the multiply, so definitely avoid it.
This might be true for some targets without a free fneg too, but
I'll leave that for future investigation.
llvm-svn: 239167
Also, moved test cases from CodeGen/X86/fold-buildvector-bug.ll into
CodeGen/X86/buildvec-insertvec.ll and regenerated CHECK lines using
update_llc_test_checks.py.
llvm-svn: 239142
gc.statepoint intrinsics with a far immediate call target
were lowered incorrectly as pc-rel32 calls.
This change fixes the problem, and generates an indirect call
via a scratch register.
For example:
Intrinsic:
%safepoint_token = call i32 (i64, i32, void ()*, i32, i32, ...) @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 0, i32 0, void ()* inttoptr (i64 140727162896504 to void ()*), i32 0, i32 0, i32 0, i32 0)
Old Incorrect Lowering:
callq 140727162896504
New Correct Lowering:
movabsq $140727162896504, %rax
callq *%rax
In lowerCallFromStatepoint(), the callee-target was modified and
represented as a "TargetConstant" node, rather than a "Constant" node.
Undoing this modification enabled LowerCall() to generate the
correct CALL instruction.
llvm-svn: 239114
The big/small ordering here is based on signed values so SmallValue will
be INT_MIN and BigValue 0. This shouldn't be a problem but the code
assumed that BigValue always had more bits set than SmallValue.
We used to just miss the transformation, but a recent refactoring of
mine turned this into an assertion failure.
llvm-svn: 239105
Basic block selection involves checking successor BBs for PHI nodes
that depend on the current BB. If such a BB is found, the value
being selected is a constant, and that constant already exists in the
current BB, then its value is reused.
This might lead to wrong locations in some situations, especially if
same constant value ends up being materialized twice in two different
ways, which discards that sharing and leaves us with wrong debug
location in the successor BB.
In code this involves the following sequence of calls:
SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks ->
SelectionDAGBuilder::CopyValueToVirtualRegister ->
SelectionDAGBuilder::getNonRegisterValue
llvm-svn: 239089
Now that we can look at users, we can trivially do this: when we would
have otherwise disabled GlobalMerge (currently -O<3), we can just run
it for minsize functions, as it's usually a codesize win.
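Roughly the kind of check involved (a hedged sketch; the helper name and its
exact placement inside GlobalMerge are invented):
// Even when GlobalMerge would otherwise be disabled (below -O3), still enable
// it for functions carrying the minsize attribute.
static bool shouldMergeForFunction(const Function &F, bool EnableGlobalMerge) {
  return EnableGlobalMerge || F.hasFnAttribute(Attribute::MinSize);
}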
Differential Revision: http://reviews.llvm.org/D10054
llvm-svn: 239087
Method 'visitBUILD_VECTOR' in the DAGCombiner knows how to combine a
build_vector of a bunch of extract_vector_elt nodes and constant zero nodes
into a shuffle blend with a zero vector.
However, method 'visitBUILD_VECTOR' forgot that a floating point
build_vector may contain negative zero as well as positive zero.
Example:
define <2 x double> @example(<2 x double> %A) {
entry:
%0 = extractelement <2 x double> %A, i32 0
%1 = insertelement <2 x double> undef, double %0, i32 0
%2 = insertelement <2 x double> %1, double -0.0, i32 1
ret <2 x double> %2
}
Before this patch, llc (with -mattr=+sse4.1) wrongly generated
movq %xmm0, %xmm0 # xmm0 = xmm0[0],zero
So, the sign bit of the negative zero was effectively lost.
This patch fixes the problem by adding explicit checks for positive zero.
With this patch, llc produces the following code for the example above:
movhpd .LCPI0_0(%rip), %xmm0
where .LCPI0_0 refers to a 'double -0'.
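The added check is roughly of this shape (an illustrative sketch, not the
exact DAGCombiner code):
// Only a *positive* floating-point zero may be treated as coming from a zero
// vector; -0.0 differs in its sign bit and must be materialized explicitly.
bool IsFoldableZero = false;
if (auto *CFP = dyn_cast<ConstantFPSDNode>(Elt.getNode()))
  IsFoldableZero = CFP->isZero() && !CFP->isNegative();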
llvm-svn: 239070
When checking (High - Low + 1).sle(BitWidth), BitWidth would be truncated
to the size of the left-hand side. In the case of this PR, the left-hand
side was i4, so BitWidth=64 got truncated to 0 and the assert failed.
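For illustration, the pitfall looks roughly like this (values taken from the
description above, variable names are hypothetical):
// With an i4 left-hand side, the raw 64 on the right is first wrapped into a
// 4-bit APInt -- i.e. it becomes 0 -- so the comparison no longer checks what
// it was meant to check.
APInt Range = High - Low + 1;     // an i4 APInt in the PR's test case
bool Fits = Range.sle(BitWidth);  // BitWidth == 64 is compared as 0 here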
llvm-svn: 239048
If the compare in a select pattern has another use then it can't be removed, so we'd just
be creating repeated code if we created a min/max node.
Spotted by Matt Arsenault!
llvm-svn: 239037
Summary:
LLVM's MI level notion of invariant_load is different from LLVM's IR
level notion of invariant_load with respect to dereferenceability. The
IR notion of invariant_load only guarantees that all *non-faulting*
invariant loads result in the same value. The MI notion of invariant
load guarantees that the load can be legally moved to any location
within its containing function. The MI notion of invariant_load is
stronger than the IR notion of invariant_load -- an MI invariant_load is
an IR invariant_load + a guarantee that the location being loaded from
is dereferenceable throughout the function's lifetime.
Reviewers: hfinkel, reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10075
llvm-svn: 238881
This creates an MCSymbolELF class and moves SymbolSize into it, since only ELF
needs a size expression.
This reduces the size of MCSymbol from 56 to 48 bytes.
llvm-svn: 238801
For a dead instruction we may have a last-use not only in the main live
range but also in a subregister range if subregisters are tracked. We
need to partially rebuild live ranges in both cases.
The testcase only broke when subregister liveness was enabled. I
committed it in the current form because there is currently no flag to
enable/disable subregister liveness.
This fixes PR23720.
llvm-svn: 238785
This is important because of different addressing modes
depending on the address space for GPU targets.
This only adds the argument, and does not update
any of the uses to provide the correct address space.
llvm-svn: 238723
r238503 fixed the problem of too-small shift types by promoting them
during legalization, but the correct solution is to promote only the
operands that actually demand promotion.
This fixes a crash on an out-of-tree target caused by trying to
promote an operand that can't be promoted.
llvm-svn: 238632
If the type isn't trivially moveable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
hasArgument(0, bindTemporaryExpr(
hasType(recordDecl(hasNonTrivialDestructor())),
has(constructExpr()))),
unless(isInTemplateInstantiation()))
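For illustration, the kind of call site this finds and the resulting cleanup
(type and variable names are made up):
// Before: constructs a temporary Foo and then moves/copies it into the vector.
Widgets.push_back(Foo(A, B));
// After: constructs the Foo in place inside the vector's storage.
Widgets.emplace_back(A, B);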
No functional change intended.
llvm-svn: 238602
For some history here see the commit messages of r199797 and r169060.
The original intent was to fix cases like:
%EAX<def> = COPY %ECX<kill>, %RAX<imp-def>
%RCX<def> = COPY %RAX<kill>
where simply removing the copies would leave RCX undefined, since in terms of
machine operands only the ECX part of it is defined. The machine
verifier would complain about this, so r169060 changed such COPY
instructions into KILL instructions so that some super-register imp-defs
would be preserved. In r199797 it was finally decided to always do this
regardless of super-register defs.
But this is wrong, consider:
R1 = COPY R0
...
R0 = COPY R1
getting changed to:
R1 = KILL R0
...
R0 = KILL R1
It now looks like R0 dies at the first KILL and won't be alive until the
second KILL, while in reality R0 is alive and must not change in this
part of the program.
As this only happens after register allocation, there is not much code
still performing liveness queries, so the issue was not noticed. In fact
I didn't manage to create a testcase for this without unrelated changes
I am working on at the moment.
The fix is simple: As of r223896 the MachineVerifier allows reads from
partially defined registers, so the whole transforming COPY->KILL thing
is not necessary anymore. This patch also changes a similar (but more
benign case as the def and src are the same register) case in the
VirtRegRewriter.
Differential Revision: http://reviews.llvm.org/D10117
llvm-svn: 238588
This commit translates the line and column numbers for LLVM IR
errors from the numbers in the YAML block scalar to the numbers
in the MIR file so that the MIRParser users can report LLVM IR
errors with the correct line and column numbers.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10108
llvm-svn: 238576
Small (really small!) C++ exception handling examples work on 32-bit x86
now.
This change disables the use of .seh_* directives in WinException when
CFI is not in use. It also uses absolute symbol references in the tables
instead of imagerel32 relocations.
Also fixes a cache invalidation bug in MMI personality classification.
llvm-svn: 238575
MIOperands/ConstMIOperands are classes iterating over the MachineOperand
of a MachineInstr, however MachineInstr::mop_iterator does the same
thing.
I assume these two iterators exist to have a uniform interface to
iterate over the operands of a machine instruction bundle and a single
machine instruction. However in practice I find it more confusing to have 2
different iterator classes, so this patch transforms (nearly all) the
code to use mop_iterators.
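The transformation is essentially this (sketched; the old loop is shown as a
comment):
// Before (roughly): the dedicated iterator class.
//   for (MIOperands MO(MI); MO.isValid(); ++MO)
//     if (MO->isReg())
//       ...
// After: MachineInstr's own operand range / mop_iterator.
for (MachineOperand &MO : MI->operands()) {
  if (!MO.isReg())
    continue;
  // ... same per-operand work as before ...
}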
The only exception being MIOperands::analyzePhysReg() and
MIOperands::analyzeVirtReg() still needing an equivalent, I leave that
as an exercise for the next patch.
Differential Revision: http://reviews.llvm.org/D9932
This version is slightly modified from the proposed revision in that it
introduces MachineInstr::getOperandNo to avoid the extra counting
variable in the few loops that previously used MIOperands::getOperandNo.
llvm-svn: 238539
About pristine registers:
Pristine registers "hold a value that is useless to the current
function, but that must be preserved - they are callee saved registers
that have not been saved." This concept saves compile time as it frees
the prologue/epilogue inserter from adding every such register to every
basic blocks live-in list.
However the current code in getPristineRegs is formulated in a
complicated way: Inside the function prologue and epilogue all callee
saves are considered pristine, while in the rest of the code only the
non-saved ones are considered pristine. This requires logic to
differentiate between prologue/epilogue and the rest and in the presence
of shrink-wrapping this even becomes complicated/expensive. It's also
unnecessary because the prologue epilogue inserters already mark
callee-save registers that are saved/restored properly in the respective
blocks in the prologue/epilogue (see updateLiveness() in
PrologueEpilogueInserter.cpp). So only declaring non-saved/restored
callee saved registers as pristine just works.
Differential Revision: http://reviews.llvm.org/D10101
llvm-svn: 238524
This commit introduces a serializable structure called
'llvm::yaml::MachineFunction' that stores the machine
function's name. This structure will mirror the machine
function's state in the future.
This commit prints machine functions as YAML documents
containing a YAML mapping that stores the state of a machine
function. This commit also parses the YAML documents
that contain the machine functions.
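A rough sketch of what such a serializable structure and its YAML mapping look
like (the field set shown is illustrative):
#include "llvm/Support/YAMLTraits.h"
#include <string>

namespace llvm {
namespace yaml {

// Illustrative: the real structure grows to mirror more of the machine
// function's state over time.
struct MachineFunction {
  std::string Name;
};

template <> struct MappingTraits<MachineFunction> {
  static void mapping(IO &YamlIO, MachineFunction &MF) {
    YamlIO.mapRequired("name", MF.Name);
  }
};

} // end namespace yaml
} // end namespace llvm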
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D9841
llvm-svn: 238519
This moves all the state numbering code for C++ EH to WinEHPrepare so
that we can call it from the X86 state numbering IR pass that runs
before isel.
Now we just call the same state numbering machinery and insert a bunch
of stores. It also populates MachineModuleInfo with information about
the current function.
llvm-svn: 238514
DIEAbbrev contains a SmallVector that can leak for overly large abbrevs. They
used to be owned by the DIE, but after the recent refactoring DWARFFile
allocates its own abbrevs.
Leak found by asan.
llvm-svn: 238418
Change `DIE::addChild()` to return a reference to the just-added node,
and update consumers to use it directly. An upcoming commit will
abstract away (and eventually change) the underlying storage of
`DIE::Children`.
llvm-svn: 238372
Stop storing a `DIEAbbrev` in `DIE`, since the data fits neatly inside
the `DIEValue` list. Besides being a cleaner data structure (avoiding
the parallel arrays), this gives us more freedom to rearrange the
`DIEValue` list.
This fixes the temporary memory regression from 845 MB up to 879 MB, and
drops it further to 829 MB for a net memory decrease of around 1.9%
(incremental decrease around 5.7%).
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238364
This reverts commit r238350, effectively reapplying r238349 after fixing
(all?) the problems, all somehow related to how I was using
`AlignedCharArrayUnion<>` inside `DIEValue`:
- MSVC can only handle `sizeof()` on types, not values. Change the
assert.
- GCC doesn't know the `is_trivially_copyable` type trait. Instead of
asserting it, add destructors.
- Call placement new even when constructing POD (i.e., the pointers).
- Instead of copying the char buffer, copy the casted classes.
I've left in a couple of `static_assert`s that I think both MSVC and GCC
know how to handle. If the bots disagree with me, I'll remove them.
- Check that the constructed type is either standard layout or a
pointer. This protects against a programming error: we really want
the "small" `DIEValue`s to be small and simple, so don't
accidentally change them not to be.
- Similarly, check that the size of the buffer is no bigger than a
`uint64_t` or a pointer. (I thought checking against
`sizeof(uint64_t)` would be good enough, but Chandler suggested that
pointers might sometimes be bigger than that in the context of
sanitizers.)
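Putting the bullets above together, the storage pattern being described is
roughly the following (simplified, with invented names; assumes
llvm/Support/AlignOf.h and <type_traits>):
// A buffer sized/aligned for the small payload types, with placement new and
// compile-time checks on what may be stored in it.
struct SmallValueStorage {
  AlignedCharArrayUnion<uint64_t, void *> Buf;

  template <class T> void set(const T &V) {
    static_assert(std::is_standard_layout<T>::value ||
                      std::is_pointer<T>::value,
                  "small payloads must stay simple");
    static_assert(sizeof(T) <= sizeof(uint64_t) || std::is_pointer<T>::value,
                  "small payloads fit in a uint64_t or a pointer");
    new (Buf.buffer) T(V); // placement new even for PODs
  }

  template <class T> T &get() { return *reinterpret_cast<T *>(Buf.buffer); }
};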
I've also committed r238359 in the meantime, which introduces a
DIEValue.def to simplify dispatching between the various types (thanks
to a review comment by David Blaikie). Without that, this commit would
be almost unintelligible.
Here's the original commit message:
--
Change `DIEValue` to be stored/passed/etc. by value, instead of
reference. It's now a discriminated union, with a `Val` field storing
the actual type. The classes that used to inherit from `DIEValue` no
longer do. There are two categories of these:
- Small values fit in a single pointer and are stored by value.
- Large values require auxiliary storage, and are stored by reference.
The only non-mechanical change is to tools/dsymutil/DwarfLinker.cpp. It
was relying on `DIEInteger`s being passed around by reference, so I
replaced that assumption with a `PatchLocation` type that stores a safe
reference to where the `DIEInteger` lives instead.
This commit causes a temporary regression in memory usage, since I've
left merging `DIEAbbrevData` into `DIEValue` for a follow-up commit. I
measured an increase from 845 MB to 879 MB, around 3.9%. The follow-up
drops it lower than the starting point, and I've only recently brought
the memory this low anyway, so I'm committing these changes separately
to keep them incremental. (I also considered swapping the commits, but
the other one first would cause a lot more code churn.)
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
--
llvm-svn: 238362
This reverts commit r238349, since it caused some errors on bots:
- std::is_trivially_copyable isn't available until GCC 5.0.
- It was complaining about strict aliasing with my use of
AlignedCharArrayUnion.
llvm-svn: 238350
Change `DIEValue` to be stored/passed/etc. by value, instead of
reference. It's now a discriminated union, with a `Val` field storing
the actual type. The classes that used to inherit from `DIEValue` no
longer do. There are two categories of these:
- Small values fit in a single pointer and are stored by value.
- Large values require auxiliary storage, and are stored by reference.
The only non-mechanical change is to tools/dsymutil/DwarfLinker.cpp. It
was relying on `DIEInteger`s being passed around by reference, so I
replaced that assumption with a `PatchLocation` type that stores a safe
reference to where the `DIEInteger` lives instead.
This commit causes a temporary regression in memory usage, since I've
left merging `DIEAbbrevData` into `DIEValue` for a follow-up commit. I
measured an increase from 845 MB to 879 MB, around 3.9%. The follow-up
drops it lower than the starting point, and I've only recently brought
the memory this low anyway, so I'm committing these changes separately
to keep them incremental. (I also considered swapping the commits, but
the other one first would cause a lot more code churn.)
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238349
This commit is a 3rd attempt at committing the initial MIR serialization patch.
The first commit (r237708) was reverted in r237730. Then the second commit
(r237954) was reverted in r238007, as the MIR library under CodeGen caused
a circular dependency where the CodeGen library depended on MIR and MIR
library depended on CodeGen.
This commit has fixed the dependencies between CodeGen and MIR by
reorganizing the MIR serialization code - the code that prints out
MIR has been moved to CodeGen, and the MIR library has been renamed
to MIRParser. Now the CodeGen library doesn't depend on the
MIRParser library, thus the circular dependency no longer exists.
--Original Commit Message--
MIR Serialization: print and parse LLVM IR using MIR format.
This commit is the initial commit for the MIR serialization project.
It creates a new library under CodeGen called 'MIR'. This new
library adds a new machine function pass that prints out the LLVM IR
using the MIR format. This pass is then added as a last pass when a
'stop-after' option is used in llc. The new library adds the initial
functionality for parsing of MIR files as well. This commit also
extends the llc tool so that it can recognize and parse MIR input files.
Reviewers: Duncan P. N. Exon Smith, Matthias Braun, Philip Reames
Differential Revision: http://reviews.llvm.org/D9616
llvm-svn: 238341
v2: TargetLoweringBase:: -> TargetLowering::
Use Ops array
v3: Explicitly use value 0 for ?DIV
Remove redundant newline
Differential revision: http://reviews.llvm.org/D7803
reviewer: ab
llvm-svn: 238336
- Clean documentation comment
- Change the API to accept an iterator so you can actually pass
MachineBasicBlock::end() now.
- Add more "const".
llvm-svn: 238288
remove ExecutionEngine's dependence on CodeGen. NFC.
This is a follow-up to r238080.
Differential Revision: http://reviews.llvm.org/D9830
llvm-svn: 238244
Stop creating symbols we don't need in `DwarfStringPool`. The consumers
only call `DwarfStringPoolEntryRef::getSymbol()` when DWARF is
relocatable, so this just stops creating the unused symbols when it's
not. This drops memory usage from 851 MB to 845 MB, around 0.7%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238122
Mint a new function, `AsmPrinter::emitDwarfStringOffset()`, which takes
a `DwarfStringPoolEntryRef`. When DWARF is relocatable across sections,
this defers to `emitSectionOffset()` and emits the `MCSymbol`;
otherwise, just emit the offset directly, without using any intermediate
symbols.
`EmitLabelDifference()` is already optimized to emit absolute label
differences cheaply when possible, so there aren't any major memory
savings here (853 MB down to 851 MB, or 0.2%). However, it prepares for
making the `MCSymbol`s in the `DwarfStringPool` optional.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238119
Expose the `DwarfStringPool` entry in a header, and store a pointer to
it directly in `DIEString`. Instead of choosing at creation time how to
emit it, use the `dwarf::Form` to determine that at emission time.
Besides avoiding the other `DIEValue`, this shaves two pointers off of
`DIEString`; the data is now a single pointer. This is a nice cleanup
on its own -- and drops memory usage from 861 MB down to 853 MB, around
0.9% -- but it's also preparation for passing `DIEValue`s by value.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238117
Extract out `DwarfStringPoolEntry` and `DwarfStringPoolRef` from
`DwarfStringPool` so that downstream users can start using
`DwarfStringPool::getEntry()` directly. This will allow users to delay
the decision between emitting a symbol or an offset until later.
llvm-svn: 238116
Change `DwarfStringPool` to calculate byte offsets on-the-fly, and
update `DwarfUnit::getLocalString()` to use a `DIEInteger` instead of a
`DIEDelta` when Dwarf doesn't use relocations (i.e., Mach-O). This
eliminates another call to `EmitLabelDifference()`, and drops memory
usage from 865 MB down to 861 MB, around 0.5%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238114
Move `DwarfStringPool`'s `getEntry()` to the header (and make it a
member function) in preparation for calculating symbol offsets
on-the-fly.
llvm-svn: 238112
On GPU targets, materializing constants is cheap and stores are
expensive, so only doing this for zero vectors was silly.
Most of the new testcases aren't optimally merged, and are for
later improvements.
llvm-svn: 238108
Remove all virtual functions from `DIEValue`, dropping the vtable
pointer from its layout. Instead, create "impl" functions on the
subclasses, and use the `DIEValue::Type` to implement the dynamic
dispatch.
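The dispatch ends up looking roughly like this (method and enumerator names
are simplified, not necessarily the exact ones in DIE.h):
// A switch over DIEValue::Type calls the non-virtual *Impl method on the
// concrete subclass instead of going through a vtable.
void DIEValue::EmitValue(const AsmPrinter *AP, dwarf::Form Form) const {
  switch (getType()) {
  case isInteger:
    return static_cast<const DIEInteger *>(this)->EmitValueImpl(AP, Form);
  case isString:
    return static_cast<const DIEString *>(this)->EmitValueImpl(AP, Form);
  // ... one case per DIEValue::Type ...
  default:
    llvm_unreachable("unhandled DIEValue type");
  }
}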
This is necessary -- obviously not sufficient -- for passing `DIEValue`s
around by value. However, this change stands on its own: we make tons
of these. I measured a drop in memory usage from 888 MB down to 860 MB,
or around 3.2%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 238084
This is part of the work to remove TargetMachine::resetTargetOptions.
In this patch, instead of updating global variable NoFramePointerElim in
resetTargetOptions, its use in DisableFramePointerElim is replaced with a call
to TargetFrameLowering::noFramePointerElim. This function determines on a
per-function basis if frame pointer elimination should be disabled.
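A hedged sketch of such a per-function hook (the exact signature and override
structure in TargetFrameLowering are not reproduced here):
// Decide per function whether frame-pointer elimination must be disabled,
// based on the function attribute rather than a global TargetOption.
// (The actual implementation also lets the -disable-fp-elim option override
// the attribute.)
bool noFramePointerElim(const MachineFunction &MF) const {
  const Function *F = MF.getFunction();
  return F->getFnAttribute("no-frame-pointer-elim").getValueAsString() ==
         "true";
}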
There is no change in functionality except that cl:opt option "disable-fp-elim"
can now override function attribute "no-frame-pointer-elim".
llvm-svn: 238080
The usual CodeGenPrepare trickery, on a target-specific intrinsic.
Without this, the expansion of atomics will usually have the zext
be hoisted out of the loop, defeating the various patterns we have
to catch this precise case.
Differential Revision: http://reviews.llvm.org/D9930
llvm-svn: 238054
This change to VirtRegRewriter::addMBBLiveIns adds live-in registers to each
MachineBasicBlock's LiveIns set without isLiveIn checks as they are being added,
because those checks are expensive. After all live-in registers are added, the LiveIn
vectors are sorted and uniqued.
llvm-svn: 238008
Previously `SDDbgValue`s used the general allocator that lives for all
of `SelectionDAG`. Instead, give them their own allocator, and reset it
whenever `SDDbgInfo::clear()` is called, plugging a spiritual leak.
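Conceptually (member names are illustrative):
// SDDbgValues get their own BumpPtrAllocator, and clearing the debug info
// resets it, releasing all SDDbgValue memory at once.
class SDDbgInfo {
  BumpPtrAllocator Alloc;
  // ... lists of SDDbgValue*, maps, etc. ...
public:
  BumpPtrAllocator &getAlloc() { return Alloc; }
  void clear() {
    // ... clear the lists/maps ...
    Alloc.Reset();
  }
};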
This drops `SelectionDAGBuilder::visitIntrinsicCall()` off of my heap
profile (was at around 2% of `llc` for codegen of `-flto -g`). Thanks
to Pete Cooper for spotting the problem and suggesting the fix.
llvm-svn: 237998
Cleanup how `SDDbgValue` is initialized, and rearrange the fields to
save two pointers in the struct layout. No real functionality change
though (and I doubt the memory savings would show up in a profile).
llvm-svn: 237997
Prior to this patch, we could update the operand of another MI in the same
bundle.
Longer version:
Before InlineSpiller rematerializes a vreg, it iterates over operands of each MI
in a bundle, collecting all (MI, OpNo) pairs that reference that vreg.
Then if it does rematerialize, it goes through the pair list and replaces the
operands with the new (rematerialized) vreg. The problem is, it tries to
replace all of these operands in the main MI! This works fine for single MIs.
However, if we are processing a bundle of MIs and the list contains multiple
pairs - the rematerialization will either crash trying to access a non-existing
operand of the main MI, or silently corrupt one of the existing ones. It will
also ignore other MIs in the bundle.
The obvious fix is to use the MI pointers saved in collected (MI, OpNo) pairs.
This must have been the original intent of the pair list but somehow these
pointers got lost.
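The fix therefore amounts to something like this (container and register names
are illustrative):
// Use the MachineInstr recorded in each (MI, operand index) pair, not the
// main MI of the bundle.
for (const std::pair<MachineInstr *, unsigned> &OpPair : Ops) {
  MachineInstr *OpMI = OpPair.first;
  MachineOperand &MO = OpMI->getOperand(OpPair.second);
  if (MO.isReg() && MO.getReg() == OldVReg)
    MO.setReg(NewVReg);
}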
Patch by Dmitri Shtilman <dshtilman@icloud.com>!
Differential revision: http://reviews.llvm.org/D9904
<rdar://problem/21002163>
llvm-svn: 237964
This commit is a 2nd attempt at committing the initial MIR serialization patch.
The first commit (r237708) made the incremental buildbots unstable and was
reverted in r237730. The original commit didn't add a terminating null
character to the LLVM IR source which was passed to LLParser, and this
sometimes caused the test 'llvmIR.mir' to fail with a parsing error because
the LLVM IR source didn't have a null character immediately after the end
and thus LLLexer encountered some garbage characters that ultimately caused
the error.
This commit also includes the other test fixes I committed in
r237712 (llc path fix) and r237723 (remove target triple) which
also got reverted in r237730.
--Original Commit Message--
MIR Serialization: print and parse LLVM IR using MIR format.
This commit is the initial commit for the MIR serialization project.
It creates a new library under CodeGen called 'MIR'. This new
library adds a new machine function pass that prints out the LLVM IR
using the MIR format. This pass is then added as a last pass when a
'stop-after' option is used in llc. The new library adds the initial
functionality for parsing of MIR files as well. This commit also
extends the llc tool so that it can recognize and parse MIR input files.
Reviewers: Duncan P. N. Exon Smith, Matthias Braun, Philip Reames
Differential Revision: http://reviews.llvm.org/D9616
llvm-svn: 237954
This starts merging MCSection and MCSectionData.
There are a few issues with the current split between MCSection and
MCSectionData.
* It optimizes the not-as-important case. We want the production
of .o files to be really fast, but the split puts the information used
for .o emission in a separate data structure.
* The ELF/COFF/MachO hierarchy is not represented in MCSectionData,
leading to some ad-hoc ways to represent the various flags.
* It makes it harder to remember where each item is.
The attached patch starts merging the two by moving the alignment from
MCSectionData to MCSection.
Most of the patch is actually just dropping 'const', since
MCSectionData is mutable, but MCSection was not.
llvm-svn: 237936
This patch improves support for sign extension of the lower lanes of vectors of integers by making use of the SSE41 pmovsx* sign extension instructions where possible, and optimizing the sign extension by shifts on pre-SSE41 targets (avoiding the use of i64 arithmetic shifts which require scalarization).
It converts SIGN_EXTEND nodes to SIGN_EXTEND_VECTOR_INREG where necessary, that more closely matches the pmovsx* instruction than the default approach of using SIGN_EXTEND_INREG which splits the operation (into an ANY_EXTEND lowered to a shuffle followed by shifts) making instruction matching difficult during lowering. Necessary support for SIGN_EXTEND_VECTOR_INREG has been added to the DAGCombiner.
Differential Revision: http://reviews.llvm.org/D9848
llvm-svn: 237885
Create a low-overhead path for `EmitLabelDifference()` that emits an
absolute number when (1) the output is an object stream and (2)
the two symbols are in the same data fragment.
This drops memory usage on Mach-O from 975 MB down to 919 MB (5.8%).
The only call is when `!doesDwarfUseRelocationsAcrossSections()` --
i.e., on Mach-O -- since otherwise an absolute offset from the start of
the section needs a relocation. (`EmitLabelDifference()` is cheaper on
ELF anyway, since it creates 1 fewer temp symbol, and it gets called far
less often. It's not clear to me if this is even a bottleneck there.)
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
llvm-svn: 237876
The ByteStreamer here wasn't taking account of whether the asm streamer was text based and verbose. Only with that combination should we emit comments.
This change makes sure that we only actually convert a Twine to a string using Twine::str() if we need the comment. This saves about 10000 small allocations on a test case involving the verify-use_list-order bitcode going through llc with debug info.
Note, this is NFC as the comments would ultimately never be emitted unless required.
Reviewed by Duncan Exon Smith and David Blaikie.
llvm-svn: 237851
This reverts commit 0037b6bcbc874aa1b93d7ce3ad8dba3753ee2d9d (r237827).
David Blaikie suggested some alternatives to this which are better. Reverting to apply a better solution later.
llvm-svn: 237849
DebugLocDwarfExpression::EmitOp was creating temporary strings by concatenating Twines.
When emitting to object files, these comments are thrown away.
This commit adds a boolean to the constructor of the DwarfExpression to control whether it will actually emit
any comments. This prevents it from even generating the temporary comments which would have been thrown away anyway.
llvm-svn: 237827
DAG.FoldConstantArithmetic() can fail even though both operands are
Constants if OpaqueConstants are involved. Continue trying other combine
possibilities in this case.
Differential Revision: http://reviews.llvm.org/D6946
Somewhat related to PR21801 / rdar://19211454
llvm-svn: 237822
Summary:
During icmp lowering it can happen that a constant value is larger than expected (see the code around the change).
APInt::getMinSignedBits() must be checked again as the shift before can change the constant sign to positive.
I'm not sure it is the best fix possible though.
Test Plan: Regression test included.
Reviewers: resistor, chandlerc, spatel, hfinkel
Reviewed By: hfinkel
Subscribers: hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D9147
llvm-svn: 237812
Now that Intrinsic::ID is a typed enum, we can forward declare it and so return it from this method.
This updates all users which were either using an unsigned to store it, or had a now unnecessary cast.
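The forward declaration this enables is simply (a sketch; the consumer struct
is made up):
namespace llvm {
namespace Intrinsic {
// Forward declaration is now legal because the enum has a fixed underlying
// type (C++11); no need to pull in the full intrinsic list.
enum ID : unsigned;
} // end namespace Intrinsic

// "Handle" is a hypothetical consumer: it can store and return Intrinsic::ID
// directly, without an unsigned detour or a cast.
struct Handle {
  Intrinsic::ID IID;
  Intrinsic::ID getIntrinsicID() const { return IID; }
};
} // end namespace llvm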
llvm-svn: 237810
Summary:
For N32/N64, private labels begin with '.L' but for O32 they begin with '$'.
MCAsmInfo now has an initializer function which can be used to provide information from the TargetMachine to control the assembly syntax.
Reviewers: vkalintiris
Reviewed By: vkalintiris
Subscribers: jfb, sandeep, llvm-commits, rafael
Differential Revision: http://reviews.llvm.org/D9821
llvm-svn: 237789
This change implements support for lowering of the gc.relocates tied to the invoke statepoint.
This is accomplished by storing frame indices of the lowered values in the "StatepointRelocatedValues" map inside FunctionLoweringInfo instead of storing them in the per-basic-block structure StatepointLowering.
After this change StatepointLowering is used only during "LowerStatepoint" call and it is not necessary to store it as a field in SelectionDAGBuilder anymore.
Differential Revision: http://reviews.llvm.org/D7798
llvm-svn: 237786
This change adds a new GC strategy for supporting the CoreCLR runtime.
This strategy is currently identical to Statepoint-example GC,
but is necessary for several upcoming changes specific to CoreCLR, such as:
1. Base-pointers not explicitly reported for interior pointers
2. Different format for stack-map encoding
3. Location of Safe-point polls: polls are only needed before loop-back edges and before tail-calls (not needed at function-entry)
4. Runtime specific handshake between calls to managed/unmanaged functions.
llvm-svn: 237753
The incremental buildbots entered a pass-fail cycle where during the fail
cycle one of the tests from this commit fails for an unknown reason. I
have reverted this commit and will investigate the cause of this problem.
llvm-svn: 237730
This commit is the initial commit for the MIR serialization project.
It creates a new library under CodeGen called 'MIR'. This new
library adds a new machine function pass that prints out the LLVM IR
using the MIR format. This pass is then added as a last pass when a
'stop-after' option is used in llc. The new library adds the initial
functionality for parsing of MIR files as well. This commit also
extends the llc tool so that it can recognize and parse MIR input files.
Reviewers: Duncan P. N. Exon Smith, Matthias Braun, Philip Reames
Differential Revision: http://reviews.llvm.org/D9616
llvm-svn: 237708
This cleans up the FoldConstantArithmetic code by factoring out the case
of two ConstantSDNodes into an own function. This avoids unnecessary
complexity for many callers who already have ConstantSDNode arguments.
This also avoids an intermediate SmallVector data structure and a loop
over that data structure.
llvm-svn: 237651
This was previously returning int. However there are no negative opcode
numbers and more importantly this was needlessly different from
MCInstrDesc::getOpcode() (which even is the value returned here) and
SDValue::getOpcode()/SDNode::getOpcode().
llvm-svn: 237611
At the present time, we don't have a way to represent general dependency
relationships, so everything is represented using memory dependency. In order
to preserve the data dependency of a READ_REGISTER on WRITE_REGISTER, we need
to model WRITE_REGISTER as writing (which we had been doing) and model
READ_REGISTER as reading (which we had not been doing). Fix this, and also the
way that the chain operands were generated at the SDAG level.
Patch by Nicholas Paul Johnson, thanks! Test case by me.
llvm-svn: 237584
This patch implements LLVM support for the ACLE special register intrinsics in
section 10.1, __arm_{w,r}sr{,p,64}.
This patch is intended to lower the read/write_register intrinsics, used to
implement the special register intrinsics in the clang patch for special
register intrinsics (see http://reviews.llvm.org/D9697), to ARM specific
instructions MRC,MCR,MSR etc. to allow reading and writing of coprocessor
registers in AArch32 and AArch64. This is done by inspecting the register
string passed to the intrinsic and then lowering to the appropriate
instruction.
Patch by Luke Cheeseman.
Differential Revision: http://reviews.llvm.org/D9699
llvm-svn: 237579
In CombineToPreIndexedLoadStore, when the offset is a constant, we have code
that looks for other uses of the pointer which are constant offset computations
so that they can be rewritten in terms of the updated pointer so that we don't
need to keep a copy of the base pointer to compute these constant offsets.
Unfortunately, when it iterated over the uses, it did so by SDNodes, and so we
could confuse ourselves if the base pointer was produced by a node that had
multiple results (because we would not immediately exclude uses of the other
node results). This was reported as PR22755. Unfortunately, we don't have a
test case (and I've also been unable to produce one thus far), but at least the
mistake is clear. The right way to fix this problem is to make use of the information
contained in the use iterators to filter out any uses of other results of the
node producing the base pointer.
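In code, that filtering looks roughly like this (a sketch, not the exact
DAGCombiner loop):
// Walk uses via use_iterator so we can ask which result each use refers to,
// and skip uses of other results of the node that produces the base pointer.
for (SDNode::use_iterator UI = BasePtr.getNode()->use_begin(),
                          UE = BasePtr.getNode()->use_end();
     UI != UE; ++UI) {
  SDUse &Use = UI.getUse();
  if (Use.getResNo() != BasePtr.getResNo())
    continue; // a use of some other result, not of the pointer value itself
  // ... consider rewriting this use in terms of the updated pointer ...
}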
This should be mostly NFC, but should also fix PR22755 (for which,
unfortunately, we have no in-tree test case).
llvm-svn: 237576
Currently whenever we sink any instruction, we do clearKillFlags for
every use of every use operand for that instruction; apparently there
is a lot of duplication, and therefore a compile-time penalty.
This patch collects all the interesting registers first and does
clearKillFlags for all of them together at the end, so we only need to do
clearKillFlags once per register and the duplication is avoided.
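Schematically (the container name is illustrative):
// Collect the registers of interest while sinking...
SmallSetVector<unsigned, 8> RegsToClearKillFlags;
// ... RegsToClearKillFlags.insert(MO.getReg()) for each interesting operand ...

// ...and clear kill flags once per register afterwards.
for (unsigned Reg : RegsToClearKillFlags)
  MRI->clearKillFlags(Reg);
RegsToClearKillFlags.clear();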
Patch by Lawrence Hu!
Differential Revision: http://reviews.llvm.org/D9719
llvm-svn: 237510
I intended this loop to only unwrap SplitVector actions, but it
was more broad than that, such as unwrapping WidenVector actions,
which makes operations seem legal when they're not.
llvm-svn: 237457
This adds new SDNodes for signed/unsigned min/max. These nodes are built from
select/icmp pairs matched at SDAGBuilder stage.
This patch adds the nodes, as well as legalization support and sets them to
be "expand" for all targets.
NFC for now; this will be tested when I switch AArch64 to using these new
nodes.
llvm-svn: 237423
Instead of doing that, create a temporary copy of MCTargetOptions and reset its
SanitizeAddress field based on the function's attribute every time an InlineAsm
instruction is emitted in AsmPrinter::EmitInlineAsm.
This is part of the work to remove TargetMachine::resetTargetOptions (the FIXME
added to TargetMachine.cpp in r236009 explains why this function has to be
removed).
Differential Revision: http://reviews.llvm.org/D9570
llvm-svn: 237412
Other targets probably should as well. Since r237161, compiler-rt has
both, but I don't see why anything other than gnueabi would use a
gnueabi naming scheme.
llvm-svn: 237324
Several updates for [DebugInfo] Add debug locations to constant SD nodes (r235989).
Includes:
* re-enabling the change (disabled recently);
* missing change for FP constants;
* resetting debug location of constant node if it's used in more than one place
to prevent emission of wrong locations in case of coalesced constants;
* a couple of additional tests.
Now all lookups in CSEMap are wrapped by an additional method.
Comment in D9084 suggests that debug locations aren't useful for "target constants",
so there might be one more change related to this API (namely, dropping debug
locations for getTarget*Constant methods).
Differential Revision: http://reviews.llvm.org/D9604
llvm-svn: 237237
Summary:
This change adds two new parameters to the statepoint intrinsic, `i64 id`
and `i32 num_patch_bytes`. `id` gets propagated to the ID field
in the generated StackMap section. If the `num_patch_bytes` is
non-zero then the statepoint is lowered to `num_patch_bytes` bytes of
nops instead of a call (the spill and reload code remains unchanged).
A non-zero `num_patch_bytes` is useful in situations where a language
runtime requires complete control over how a call is lowered.
This change brings statepoints one step closer to patchpoints. With
some additional work (that is not part of this patch) it should be
possible to get rid of `TargetOpcode::STATEPOINT` altogether.
PlaceSafepoints generates `statepoint` wrappers with `id` set to
`0xABCDEF00` (the old default value for the ID reported in the stackmap)
and `num_patch_bytes` set to `0`. This can be made more sophisticated
later.
Reviewers: reames, pgavlin, swaroop.sridhar, AndyAyers
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9546
llvm-svn: 237214
We already had a method to iterate over all the incoming values of a PHI. This just changes all eligible code to use it.
Ineligible code included anything which cared about the index, or was also trying to get the i'th incoming BB.
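The change is of this shape (a sketch; 'visit' stands in for whatever the
caller does with each value):
// Before: indexed loop, even when the index and incoming block were unused.
//   for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
//     visit(PN->getIncomingValue(i));
// After: iterate the incoming values directly.
for (Value *V : PN->incoming_values())
  visit(V);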
llvm-svn: 237169