Currently, the MachineBlockPlacement pass rotates each loop so that the best exit becomes the last BB in the loop chain, to maximize fall-through from the loop to the outside. With profile data, we can measure the cost, in missed fall-through opportunities, of each rotation of a loop chain and select the best one. There are three kinds of cost to consider for each rotation:
1. The possibly missed fall-through edge (if it exists) from a BB outside the loop to the loop header.
2. The possibly missed fall-through edges (if they exist) from the loop exits to BBs outside the loop.
3. The missed fall-through edge (if it exists) from the last BB to the first BB in the loop chain.
The cost of a given rotation is therefore the sum of the costs listed above, and we select the rotation with the smallest cost. This is done only in PGO mode, where we have more precise edge frequencies.
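As a minimal sketch of the selection step (the struct, field names, and plain integer frequencies below are illustrative only; the real pass works with profile-derived block and edge frequencies):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-rotation costs; in the real pass these are derived
// from profile-based block and edge frequencies.
struct RotationCosts {
  uint64_t HeaderFallThrough; // (1) missed fall-through into the loop header
  uint64_t ExitFallThrough;   // (2) missed fall-throughs from the loop exits
  uint64_t WrapAround;        // (3) missed last-BB to first-BB fall-through
};

// Pick the rotation whose total missed-fall-through cost is smallest.
size_t selectBestRotation(const std::vector<RotationCosts> &Costs) {
  size_t Best = 0;
  uint64_t BestCost = UINT64_MAX;
  for (size_t R = 0; R < Costs.size(); ++R) {
    uint64_t Total = Costs[R].HeaderFallThrough +
                     Costs[R].ExitFallThrough + Costs[R].WrapAround;
    if (Total < BestCost) {
      BestCost = Total;
      Best = R;
    }
  }
  return Best;
}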
Differential revision: http://reviews.llvm.org/D10717
llvm-svn: 250754
As usual, this is a polymorphic hierarchy without polymorphic ownership,
so simply make the dtor protected non-virtual, protected default copy
ctor/assign, and make derived classes final. The derived classes will
pick up correct default public copy ops (and dtor) implicitly.
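In code, the pattern looks like this (a sketch with made-up class names, not the actual hierarchy):

class Base {
public:
  virtual void apply() = 0; // polymorphic use is still fine...
protected:
  ~Base() = default; // ...but a non-virtual, protected dtor forbids
                     // `delete basePtr`: no polymorphic ownership.
  Base() = default;
  Base(const Base &) = default;            // protected default copy ops:
  Base &operator=(const Base &) = default; // no accidental slicing from outside
};

class Derived final : public Base {
public:
  void apply() override {}
  // Public copy ctor/assign and dtor are generated implicitly and are
  // correct because Derived is final.
};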
(wish I could add -Wdeprecated to the build, but last time I tried it
triggered on some system headers I still need to look into/figure out)
llvm-svn: 250747
Allow LLVM to optimize the sequence like the following:
%inc = add nsw i32 %i, 1
%cmp = icmp slt i32 %n, %inc
into:
%cmp = icmp sle i32 %n, %i
This case was not handled previously due to the complexity of the computation of %n, which prevented LLVM from swapping the operands of the icmp accordingly.
llvm-svn: 250746
Besides the usual, I finally added an overload to
`BasicBlock::splitBasicBlock()` that accepts an `Instruction*` instead
of `BasicBlock::iterator`. Someone can go back and remove this overload
later (after updating the callers I'm going to skip going forward), but
the most common call seems to be
`BB->splitBasicBlock(BB->getTerminator(), ...)`, and I'm not sure adding
`->getIterator()` to every such call is better than having the overload.
It's pretty hard to get the usage wrong.
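For instance (a usage sketch, assuming the usual LLVM headers):

#include "llvm/IR/BasicBlock.h"
using namespace llvm;

void splitAtTerminator(BasicBlock *BB) {
  // With the new overload, the common call site stays terse:
  BB->splitBasicBlock(BB->getTerminator(), "split");
  // Without it, every caller would need the explicit conversion:
  //   BB->splitBasicBlock(BB->getTerminator()->getIterator(), "split");
}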
llvm-svn: 250745
This was originally checked in at r250527, but reverted at r250570 because of PR25222.
There were at least 2 problems:
1. The cost check was checking for an instruction with an exact cost of TCC_Expensive;
that should have been >=.
2. The cause of the clang stage 1 failures was illegally sinking 'call' instructions;
we can't sink instructions that may have side effects / are not safe to execute speculatively.
Fixed those conditions in sinkSelectOperand() and added test cases.
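The corrected predicate amounts to roughly the following (a sketch reconstructed from the description above, not the verbatim source; see sinkSelectOperand() in CodeGenPrepare.cpp):

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

static bool sinkSelectOperandSketch(const TargetTransformInfo *TTI, Value *V) {
  auto *I = dyn_cast<Instruction>(V);
  // Sink only operands that are expensive enough to be worth a branch
  // (>=, not ==) and that are safe to execute speculatively, i.e. have
  // no side effects, so executing them conditionally is legal.
  return I && I->hasOneUse() && isSafeToSpeculativelyExecute(I) &&
         TTI->getUserCost(I) >= TargetTransformInfo::TCC_Expensive;
}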
Original commit message:
This is a follow-up to the discussion in D12882.
Ideally, we would like SimplifyCFG to be able to form select instructions even when the operands
are expensive (as defined by the TTI cost model) because that may expose further optimizations.
However, we would then like a later pass like CodeGenPrepare to undo that transformation if the
target would likely benefit from not speculatively executing an expensive op (this patch).
Once we have this safety mechanism in place, we can adjust SimplifyCFG to restore its
select-formation behavior that changed with r248439.
Differential Revision: http://reviews.llvm.org/D13297
llvm-svn: 250743
Convert two halfword loads into a single 32-bit word load with bitfield extract
instructions. For example:
ldrh w0, [x2]
ldrh w1, [x2, #2]
becomes
ldr w0, [x2]
ubfx w1, w0, #16, #16
and w0, w0, #0xffff
llvm-svn: 250719
- Isolate the check for the existence of a stack frame into hasFP.
- Implement getFrameIndexReference for DWARF address computation.
- Use getFrameIndexReference for offset computation in eliminateFrameIndex.
- Preserve debug information for dynamically allocated stack objects.
- Prefer FP to access local objects at -O0.
- Add experimental code to skip allocframe when not strictly necessary
(disabled by default).
llvm-svn: 250718
Emit the CFI instructions after all code transformations have been done.
This will avoid any interference between CFI instructions and packetization.
llvm-svn: 250714
Emit function stubs directly into the JIT target's memory, rather than
representing the stubs in IR. Update the CompileOnDemand layer to use this
functionality.
Directly emitting stubs is much cheaper than building them in IR and codegen'ing
them (see below). It also plays well with remote JITing - stubs can be emitted
directly in the target process, rather than having to send them over the wire.
The downsides are:
(1) Care must be taken when resolving symbols, as stub symbols are held in a
separate symbol table. This is only a problem for layer writers and other
people using this API directly. The CompileOnDemand layer hides this detail.
(2) Aliases of function stubs can't be symbolic any more (since there's no
symbol definition in IR), but must be converted into a constant pointer
expression. This means that modules containing aliases of stubs cannot be
cached. In practice this is unlikely to be a problem: There's no benefit to
caching such a module anyway.
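Concretely, (2) means rewriting uses of the alias with a constant pointer expression built from the stub's known address, along these lines (a sketch; the helper name is ours for illustration):

#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

// Build a constant pointer expression for a stub at a known address,
// usable wherever a symbolic alias of the stub previously appeared.
Constant *makeStubPointer(LLVMContext &Ctx, uint64_t StubAddr,
                          FunctionType *FnTy) {
  Constant *Addr = ConstantInt::get(Type::getInt64Ty(Ctx), StubAddr);
  return ConstantExpr::getIntToPtr(Addr, PointerType::getUnqual(FnTy));
}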
On balance I think the extra performance is more than worth the trade-offs: In a
simple stress test with 10000 dummy functions requiring stubs and a single
executed "hello world" main function, directly emitting stubs reduced user time
for JITing / executing by over 90% (1.5s for IR stubs vs 0.1s for direct
emission).
llvm-svn: 250712
The mapping of these two intrinsics in ARMInstrInfo.td had a small
omission which led to their operands not being validated/transformed
before being lowered into usat and ssat instructions. This can cause
incorrect instructions to be emitted.
I've also added tests for the remaining two saturating arithmetic
intrinsics, @llvm.arm.qadd and @llvm.arm.qsub, as they were missing
codegen tests.
llvm-svn: 250697
We were keeping a reference to an object in a DenseMap and then mutating it. At the end of the function we attempted to clone that reference into other keys of the DenseMap, but DenseMap may well decide to resize its hashtable in the process, which would invalidate the reference!
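In miniature, the hazard looks like this (illustrative types, not the original code):

#include "llvm/ADT/DenseMap.h"

void hazard() {
  llvm::DenseMap<int, int> Map;
  int &Entry = Map[0]; // reference into the map's hashtable storage
  Entry = 42;          // mutating through the reference is fine so far...
  Map[1] = 1;          // ...but this insertion may grow and rehash the table,
  // Entry = 43;       // ...after which Entry dangles; this would be UB.
}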
It took an extremely complex testcase to catch this; many thanks to Zhendong Su for catching it in PR25225.
This fixes PR25225.
llvm-svn: 250692
Originally I planned to use the same interface for masked gather/scatter and set isConsecutive to "false" in that case.
Now that I'm implementing masked gather/scatter, I see that this interface is inconvenient, so I want to add the interfaces isLegalMaskedGather() / isLegalMaskedScatter() instead of reusing the existing interfaces with a "Consecutive" parameter.
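The new hooks would have roughly this shape (a sketch following the description above; see TargetTransformInfo.h in D13850 for the exact signatures):

// Queried by the vectorizer: does the target natively support masked
// gather/scatter for this vector data type?
bool isLegalMaskedGather(Type *DataType) const;
bool isLegalMaskedScatter(Type *DataType) const;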
Differential Revision: http://reviews.llvm.org/D13850
llvm-svn: 250686
1. Key constant values (version, magic) and data structures related to raw and
indexed profile format are moved into one centralized file: InstrProf.h.
2. Utility functions such as the MD5 hash computation are also moved to the
common header to allow sharing with other components in the future.
3. A header data structure is introduced for the indexed format so that the
reader and writer can always stay in sync (see the sketch after this list).
4. Added some comments to document the places where multiple definitions of the
data structures must be kept in sync (reader/writer, runtime, lowering, etc.).
No functional change is intended.
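To sketch item 3 (a hypothetical layout for illustration; see InstrProf.h for the real fields):

#include <cstdint>

namespace IndexedInstrProf {
// A single shared header definition keeps the on-disk format from
// drifting between the reader and the writer.
struct Header {
  uint64_t Magic;      // identifies an indexed profile file
  uint64_t Version;    // bumped whenever the format changes
  uint64_t HashType;   // hash function used to key the on-disk index
  uint64_t HashOffset; // file offset of the on-disk hash table
};
} // namespace IndexedInstrProf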
Differential Revision: http://reviews.llvm.org/D13758
llvm-svn: 250638