The intrusive nature of the reference counting is not required/used
here, so simplify the ownership model to make the code easier to
understand.
llvm-svn: 291005
This change aims to unify and correct our logic for when we need to allow for
the possibility of the linker adding a TOC restoration instruction after a
call. This comes up in two contexts:
1. When determining tail-call eligibility. If we make a tail call (i.e.
directly branch to a function) then there is no place for the linker to add
a TOC restoration.
2. When determining when we need to add a nop instruction after a call.
Likewise, if there is a possibility that the linker might need to add a
TOC restoration after a call, then we need to put a nop after the call
(the bl instruction).
First problem: We were using similar, but different, logic to decide (1) and
(2). This is just wrong. Both the resideInSameModule function (used when
determining tail-call eligibility) and the isLocalCall function (used when
deciding if the post-call nop is needed) were supposed to be determining the
same underlying fact (i.e. might a TOC restoration be needed after the call).
The same logic should be used in both places.
Second problem: The logic in both places was wrong. We only know that two
functions will share the same TOC when both functions come from the same
section of the same object. Otherwise the linker might cause the functions to
use different TOC base addresses (unless the multi-TOC linker option is
disabled, in which case only shared-library boundaries are relevant). There are
a number of factors that can cause functions to be placed in different sections
or come from different objects (-ffunction-sections, explicitly-specified
section names, COMDAT, weak linkage, etc.). All of these need to be checked.
The existing logic only checked properties of the callee, but the properties of
the caller must also be checked (for example, calling from a function in a
COMDAT section means calling between sections).
There was a conceptual error in the resideInSameModule function in that it
allowed tail calls to functions with weak linkage and protected/hidden
visibility. While protected/hidden visibility does prevent the function
implementation from being replaced at runtime (via interposition), it does not
prevent the linker from using an alternate implementation at link time (i.e.
using some strong definition to replace the provided weak one during linking).
If this happens, then we're still potentially looking at a required TOC
restoration upon return.
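For illustration only, a hedged IR-level sketch (the function names and trivial bodies are made up for the example):
  define weak hidden void @callee() {
    ret void
  }
  define void @caller() {
    ; Even with hidden visibility, the linker may replace the weak @callee with a
    ; strong definition from another object, which may require a TOC restoration
    ; after the call, so emitting an actual tail call (direct branch) is unsafe.
    tail call void @callee()
    ret void
  }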
Otherwise, in general, the post-call nop is needed wherever ELF interposition
needs to be supported. We don't currently support ELF interposition at the IR
level (see http://lists.llvm.org/pipermail/llvm-dev/2016-November/107625.html
for more information), and I don't think we should try to make it appear to
work in the backend in spite of that fact. Unfortunately, because of the way
that the ABI works, we need to generate code as if we supported interposition
whenever the linker might insert stubs for the purpose of supporting it.
Differential Revision: https://reviews.llvm.org/D27231
llvm-svn: 291003
This roughly matches the semantics of std::enable_shared_from_this - it does
not dictate the ownership model of all users, but constrains users who take
advantage of the intrusive nature to do so only when they can guarantee that
this is the ownership model in use for the object being passed.
Reviewers: jlebar
Differential Revision: https://reviews.llvm.org/D28245
llvm-svn: 290987
This test case has been reduced from test/Analysis/RegionInfo/mix_1.ll and
provides us with a minimal example of a test case which caused problems while
working on an improved version of the RegionInfo analysis. We upstream this
test case, as it certainly can be helpful in future debugging and optimization
tests.
Test case reduced by Pratik Bhatu <cs12b1010@iith.ac.in>
llvm-svn: 290974
This reapplies r289828 (reverted in r289833 as it broke the address sanitizer). The
debug location is now only set when the instruction is not a call, as setting it on
a call causes the verifier to assert (the inliner requires an inlinable callsite to
have a debug location if the caller and callee have debug info).
Original commit message:
Simplify CFG will try to sink the last instruction in a series of basic blocks,
creating a "common" instruction in the successor block (sinkLastInstruction).
When it does this, the debug location of the single instruction should be the
merged debug locations of the commoned instructions.
Original review: https://reviews.llvm.org/D27590
llvm-svn: 290973
These tests are missing a target triple and the -m elf_x86_64 gold option,
which makes them fail on non-x86 targets.
Differential revision: https://reviews.llvm.org/D28285
Reviewers: tejohnson
llvm-svn: 290965
Summary:
Change llvm-link to use the FunctionImporter handling, instead of
manually invoking the Linker. We still need to load the module
in llvm-link to do the desired testing for invalid import requests
(weak functions), and to get the GUID (in case the function is local).
Also change the drop-debug-info test to use llvm-link so that importing
is forced (in order to test debug info handling) and independent of
import logic changes.
Reviewers: mehdi_amini
Subscribers: mgorny, llvm-commits, aprantl
Differential Revision: https://reviews.llvm.org/D28277
llvm-svn: 290964
This CPU type was not previously recognized by LLVM which led to emitting
poor (and sometimes incorrect) code in some JIT workloads on such a machine.
llvm-svn: 290961
Summary:
In mergeSPUpdates, debug values need to be ignored when getting the
previous element, otherwise debug data could have an impact on codegen.
In eliminateCallFramePseudoInstr, debug values after the erased element
could have an impact on codegen and should be skipped.
Closes PR31319 (https://llvm.org/bugs/show_bug.cgi?id=31319)
Reviewers: aprantl, MatzeB, mkuper
Subscribers: gbedwell, llvm-commits
Differential Revision: https://reviews.llvm.org/D27688
llvm-svn: 290955
This just removes the usage of llvm::reverse and llvm::seq. That makes
it harder to handle the empty case correctly and so I've also added
a test there.
This is just a shot in the dark at what might be behind the buildbot
failures. I can't reproduce any issues locally including with ASan...
I feel like I'm missing something...
llvm-svn: 290954
This is both convenient and more efficient as we can skip any
intermediate reallocation of the vector.
This usage pattern came up in a subsequent patch on the pass manager,
but it seems generically useful so I factored it out and added unittests
here.
llvm-svn: 290952
Summary:
The InlineSpiller was accessing the DominatorTreeBase directly
through the public data member DT in the MachineDominatorTree.
This is not a good idea as the "cached" information in
SplitCriticalEdges is not applied before the access.
The DominatorTreeBase must be accessed through the member
function getBase() in MachineDominatorTree.
The fault was introduced in r266162.
I think the public data member DT in the MachineDominatorTree
should have been made private in the original code (r215576)
that introduced the concept of lazily updating the
MachineDominatorTree information from
MachineBasicBlock::SplitCriticalEdge().
Patch by Karl-Johan Karlsson <karl-johan.karlsson@ericsson.com>
Reviewers: wmi, qcolombet
Subscribers: llvm-commits, bjope, uabelho
Differential Revision: https://reviews.llvm.org/D27983
llvm-svn: 290950
Replacing the memory operand in the intrinsic versions of the comis/ucomis instructions from f128mem to ssmem/sdmem accordingly.
Differential Revision: https://reviews.llvm.org/D28138
llvm-svn: 290948
In some cases it's more efficient to combine TRUNC( BINOP( X, Y ) ) --> BINOP( TRUNC( X ), TRUNC( Y ) ) if the binop is legal for the truncated types.
This is true for vector integer multiplication (especially vXi64), as well as ADD/AND/XOR/OR in cases where we only need to truncate one of the inputs at runtime (e.g. a duplicated input or a one-use constant we can fold).
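As a hedged illustration at the IR level (the actual combine operates on the SelectionDAG, and the types here are chosen arbitrarily):
  %m = mul <2 x i64> %x, %y
  %t = trunc <2 x i64> %m to <2 x i32>
can become
  %xt = trunc <2 x i64> %x to <2 x i32>
  %yt = trunc <2 x i64> %y to <2 x i32>
  %t  = mul <2 x i32> %xt, %yt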
Further work could be done here - scalar cases (especially i64) could often benefit (if we avoid partial registers etc.), other opcodes, and better analysis of when truncating the inputs reduces costs.
I have considered implementing this for all targets within the DAGCombiner but wasn't sure we could devise a suitable cost model system that would give us the range we need.
Differential Revision: https://reviews.llvm.org/D28219
llvm-svn: 290947
The current layout of a line table row is:
0-7: Address
8-11: Line
12-13: Column
14-15: File
16-19: Isa
20-23: Discriminator
24+: bit fields
The packing is fine until the "Isa" field, which is an 8-bit int that occupies 4 bytes. We can instead move Discriminator into the 16-19 slot, and pack Isa into the 20-23 range along with the bit fields:
0-7: Address
8-11: Line
12-13: Column
14-15: File
16-19: Discriminator
20-23: Isa + bit fields
This layout is only 24 bytes, down from 32. This 25% reduction in size may seem small, but a large binary can have line tables with thousands of rows stored in a vector.
Patch by Simon Que!
Differential Revision: https://reviews.llvm.org/D27961
llvm-svn: 290931
We can perform the following:
(add (zext (add nuw X, C1)), C2) -> (zext (add nuw X, C1+C2))
This is only possible if C2 is negative and C2 is greater than or equal to negative C1.
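A hedged worked example (types and constants chosen only to satisfy the conditions above, with C1 = 5 and C2 = -3):
  %a = add nuw i8 %x, 5
  %z = zext i8 %a to i32
  %r = add i32 %z, -3
can be folded to
  %a = add nuw i8 %x, 2
  %r = zext i8 %a to i32
Here C2 is negative and -3 >= -5, and since the original narrow add does not wrap, neither does the combined one.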
llvm-svn: 290927
This test was testing that we could correctly find the parent of a DIE, but it was actually just testing the special case where a DIE's depth was 1. This corrects that error by adding an extra level into the DWARF to ensure that we correctly get the parent by looking for the parent with a depth that is 1 less than the current depth.
Differential Revision: https://reviews.llvm.org/D28261
llvm-svn: 290918
As per post-commit review for r289993 (D27775), we can only safely
import a type as a decl if it has an Identifier, as the Name alone
is not enough to be unique across modules.
llvm-svn: 290915
I'm not sure what determines the minor version, but it appears
that a fully updated, release version of VS2015 with Update 3
can go (at least) as low as 19.00.24213.1.
Updating the compiler version check to account for this so we
don't generate superfluous warnings.
llvm-svn: 290914
I wrote this patch before seeing the comment in:
https://reviews.llvm.org/D27114
...that suggests we should actually be canonicalizing the other way.
So just in case we decide this is the right way, we might as well
have a cleaner implementation.
llvm-svn: 290912
Use getReturnedArgOperand() instead of rolling our own. Note that it's
equivalent because there can only be one 'returned' operand.
The existing code was also incorrect: there already was awkward logic to
ignore callee/EH blocks, but operands can now also be operand bundles,
in which case we'll look for non-existent parameter attributes.
Unfortunately, this isn't observable in-tree, as it only crashes when
exercising the regular call lowering logic with operand bundles.
Still, this is a nice small cleanup anyway.
llvm-svn: 290905
Provide distinct contents for semBogus and semPPCDoubleDouble in order
to prevent compilers from collapsing them to a single memory address,
since we rely heavily on every semantics object having a distinct address.
This collapse can happen if an unsafe optimization that merges identical
values is enabled. As a result, APFloats of semBogus become
indistinguishable from semPPCDoubleDouble, and whenever the move
constructor is used, the old value ends up being incorrectly recognized
as a semPPCDoubleDouble.
Since the values in semPPCDoubleDouble are not used anywhere,
we can easily solve this issue by altering the value of one of the
fields, thereby ensuring that the collapse cannot occur.
Differential Revision: https://reviews.llvm.org/D28112
llvm-svn: 290896
Summary:
No need to have this per-architecture. While there, unify 32-bit ARM's
behaviour with what changed elsewhere and start function names lowercase
as per the coding standards. Individual entry emission code goes to the
entry's own class.
Fully tested on amd64, cross-builds on both ARMs and PowerPC.
Reviewers: dberris
Subscribers: aemerson, llvm-commits
Differential Revision: https://reviews.llvm.org/D28209
llvm-svn: 290858
Summary:
Regardless of how the loop body weight is distributed, we should preserve
the total loop body weight, i.e. the same weight should reach the body of the loop
or its duplicates in the peeled and unpeeled cases.
Reviewers: mkuper, davidxl, anemet
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28179
llvm-svn: 290833
Apparently my suggestion of using a ternary operator doesn't really work,
as clang complains about incompatible types on the LHS and RHS. Some
GCC versions happen to accept the code, but the clang behaviour is
correct here.
llvm-svn: 290822
Add an explicit LLVM_ENABLE_DIA_SDK option to control building support
for DIA SDK-based debugging. Control its value to match whether DIA SDK
support was found and expose it in LLVMConfig (like LLVM_ENABLE_ZLIB).
Its value is needed for LLDB to determine whether to run tests requiring
DIA support. Currently it is obtained from llvm/Config/config.h;
however, this file is not available for standalone builds. Following
this change, LLDB will be modified to use the value from LLVMConfig.
Differential Revision: https://reviews.llvm.org/D26255
llvm-svn: 290818
GNU as rejects input where .cfi_sections is used after .cfi_startproc,
if the new section differs from the old. Adjust our output to always
emit .cfi_sections before the first .cfi_startproc to minimize necessary
code.
Differential Revision: https://reviews.llvm.org/D28011
llvm-svn: 290817
Summary:
This avoids the very fragile code for null expressions. We could also use a DenseSet that tracks which things have null expressions instead, but that seems fragile and like a premature optimization.
This resolves a number of infinite loop cases, test reductions coming.
Reviewers: davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28193
llvm-svn: 290816
Summary: Previously, we tried to fix up the equivalences during symbolic evaluation. This does not work. Now, we change the equivalences during congruence finding, where it belongs. We also initialize the equivalence table to give a maximal answer.
Reviewers: davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28192
llvm-svn: 290815
The X86 target does not provide any target-specific cost calculation for interleave patterns. It uses the common target-independent calculation, which gives very high numbers. As a result, the scalar version is chosen in many cases. The situation on AVX-512 is even worse, since we have 3-src shuffles that significantly reduce the cost.
In this patch I calculate the cost on AVX-512. It will allow comparing interleave patterns with gather/scatter and choosing a better solution (PR31426).
* Shuffle-broadcast cost will be changed in Simon's upcoming patch.
Differential Revision: https://reviews.llvm.org/D28118
llvm-svn: 290810
Summary:
`PromotedFloats` needs to be checked in
`DAGTypeLegalizer::PerformExpensiveChecks`. This patch fixes a few type
legalization failures with expensive checks for ARM fp16 tests.
Reviewers: baldrick, bogner, arsenm
Subscribers: arsenm, aemerson, llvm-commits
Differential Revision: https://reviews.llvm.org/D28187
llvm-svn: 290796
I'm not sure if this was intentional, but today
isGuaranteedToTransferExecutionToSuccessor returns true for readonly and
argmemonly calls that may throw. This commit changes the function to
not implicitly infer nounwind this way.
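For example, a hedged sketch (@may_throw is a hypothetical declaration):
  declare void @may_throw() readonly
  define void @f() {
    call void @may_throw()
    ; Not guaranteed to be reached: the call is readonly but carries no nounwind,
    ; so it may still unwind.
    ret void
  }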
Even if we eventually specify readonly calls as not throwing,
isGuaranteedToTransferExecutionToSuccessor is not the best place to
infer that. We should instead teach FunctionAttrs or some other such
pass to tag readonly functions / calls as nounwind.
llvm-svn: 290794
I don't think this hole is currently exposed, but I crashed regression tests for
jump-threading and loop-vectorize after I added calls to isKnownNonNullAt() in
InstSimplify as part of trying to solve PR28430:
https://llvm.org/bugs/show_bug.cgi?id=28430
That's because they call into value tracking with a context instruction, but with
no other parts of the query structure filled in.
For more background, see the discussion in:
https://reviews.llvm.org/D27855
llvm-svn: 290786
This was originally motivated by a compile time problem I've since figured out how to solve differently, but the cleanup seemed useful. We had the same logic - which essentially implemented find - in several places. By commoning them out, I can implement find and allow erase to be inlined at the call sites if profitable.
Differential Revision: https://reviews.llvm.org/D28183
llvm-svn: 290779
Summary: Update the Phabricator docs to clarify how changes are merged for contributors without commit access.
Reviewers: delcypher, aaron.ballman
Subscribers: aaron.ballman, anmol, llvm-commits
Differential Revision: https://reviews.llvm.org/D28184
llvm-svn: 290767
Summary:
gep 0, 0 is equivalent to a bitcast. LLVM canonicalizes it
to getelementptr so that SROA can then handle it.
A simple case like
void g(A &a) {
z(a);
if (glob)
a.foo();
}
void testG() {
A a;
g(a);
}
was not devirtualized with -fstrict-vtable-pointers because of the lack of
handling for gep 0 in Memory Dependence Analysis.
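For reference, a hedged illustration of the equivalence (the struct type is arbitrary):
  %struct.A = type { i32, i8 }
  %p = getelementptr inbounds %struct.A, %struct.A* %a, i64 0, i32 0
is equivalent to
  %p = bitcast %struct.A* %a to i32*
since both produce a pointer to the first field at the same address.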
Reviewers: dberlin, nlewycky, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28126
llvm-svn: 290763
CVP doesn't care about the order of blocks visited, but by using a pre-order traversal over the graph we can a) not visit unreachable blocks and b) optimize as we go so that analysis of later blocks produces slightly more precise results.
I noticed this via inspection and don't have a concrete example which points to the issue.
llvm-svn: 290760
The bug was introduced in r289619.
Reviewers: Mehdi Amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28134
llvm-svn: 290749
I removed one extra line, but because llvm-lit annoyingly does not
clean the output directory before running the test, it didn't fail
locally (the file was present from a previous run).
llvm-svn: 290740
This is similar to the allocfn case - if an alloca is not captured, then it's
necessarily thread-local.
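A hedged illustration (@use is a hypothetical function):
  define void @f() {
    %p = alloca i32
    call void @use(i32* nocapture %p)
    ; %p never escapes this function, so no other thread can observe or modify it.
    ret void
  }
  declare void @use(i32* nocapture)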
Differential Revision: https://reviews.llvm.org/D28170
llvm-svn: 290738
Summary:
The current loop complete unroll algorithm checks if complete unrolling will reduce the runtime by a certain percentage. If yes, it will apply a fixed boosting factor to the threshold (by discounting cost). The problem with this approach is that the threshold changes abruptly. This patch makes the boosting factor a function of the runtime reduction percentage, capped by a fixed threshold. In this way, the threshold changes continuously.
The patch also simplifies the code by removing one parameter from UP.
The patch only affects code-gen of two speccpu2006 benchmarks:
445.gobmk binary size decreases 0.08%, no performance change.
464.h264ref binary size increases 0.24%, no performance change.
Reviewers: mzolotukhin, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26989
llvm-svn: 290737
Some incoming changes in ThinLTO will break this test.
Instead of relying on the heuristic to import, we
force the importing to happen with llvm-link.
llvm-svn: 290736
"Changed" doesn't actually change within the loop, so there's
no reason to keep track of it - we always return false during
analysis and true after the transformation is made.
llvm-svn: 290735
We correctly canonicalized (add (sext x), (sext y)) to (sext (add x, y))
where possible. However, we didn't perform the same canonicalization
for zexts or for muls.
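A hedged IR-level sketch of the zext/mul form of this canonicalization (the combine itself runs on the SelectionDAG, and the narrow operation must be known not to overflow):
  %zx = zext i16 %x to i32
  %zy = zext i16 %y to i32
  %r  = mul i32 %zx, %zy
can become
  %m = mul i16 %x, %y
  %r = zext i16 %m to i32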
llvm-svn: 290733
This moves the exit block and insertion point computation to be eager,
instead of after seeing the first scalar we can promote.
The cost is relatively small (the computation happens anyway, see discussion
on D28147), and the code is easier to follow, and can bail out earlier
if there's a catchswitch present.
llvm-svn: 290729
We would check whether we have a preheader *or* dedicated exit blocks,
and go into the promotion loop. Then, for each alias set we'd check
if we have a preheader *and* dedicated exit blocks, and bail if not.
Instead, bail immediately if we don't have both.
llvm-svn: 290728
We want to recompute LCSSA only when we actually promoted a value.
This means we only need to look at changes made by promotion when
deciding whether to recompute it or not, not at regular sinking/hoisting.
(This was what the code was documented as doing, just not what it did)
Hopefully NFC.
llvm-svn: 290726
Edit for voice, and also add examples. In particular, add an
explanation for why you might want to specialize IntrusiveRefCntPtrInfo,
which is not obvious.
llvm-svn: 290720
Summary:
This class is unnecessary.
Its comment indicated that it was a compile error to allocate an
instance of a class that inherits from RefCountedBaseVPTR on the stack.
This may have been true at one point, but it's not today.
Moreover you really do not want to allocate *any* refcounted object on
the stack, vptrs or not, so if we did have a way to prevent these
objects from being stack-allocated, we'd want to apply it to regular
RefCountedBase too, obviating the need for a separate RefCountedBaseVPTR
class.
It seems that the main way RefCountedBaseVPTR provides safety is by
making its subclass's destructor virtual. This may have been helpful at
one point, but these days clang will emit an error if you define a class
with virtual functions that inherits from RefCountedBase but doesn't
have a virtual destructor.
Reviewers: compnerd, dblaikie
Subscribers: cfe-commits, klimek, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D28162
llvm-svn: 290717
Summary: Previously we type-punned through a union, which is not safe.
Reviewers: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28161
llvm-svn: 290715
This reverts commit r290694. It broke sanitizer tests on Win64. I'll
probably bring this back, but the jump tables will just live in .text
like they do for MSVC.
llvm-svn: 290714
This fixes the issue exposed in PR31393, where we weren't trying
sufficiently hard to diagnose bad TBAA metadata.
This does reduce the variety in the error messages we print out, but I
think the tradeoff of verifying more, simply and quickly, overrules the
need for more helpful error messages here.
llvm-svn: 290713
Among other things, this allows using the predefined .option.machine_version_major
/minor/stepping symbols in the directive.
The relevant test is expanded at the same time (the file is also renamed for clarity).
Differential Revision: https://reviews.llvm.org/D28140
llvm-svn: 290710
This change adds a new intrinsic which is intended to provide memcpy functionality
with additional atomicity guarantees. Please refer to the review thread
or language reference for further details.
Differential Revision: https://reviews.llvm.org/D27133
llvm-svn: 290708
We bypassed the intrinsic and returned the passthru operand, but we should also add the intrinsic to the worklist since it's now dead. This can allow DCE to find it sooner and remove it. Something similar was done for InsertElement when the inserted element isn't demanded.
llvm-svn: 290704
The tests accidentally had trivially dead code. I also needed to adjust the rounding mode to something other than CUR_DIRECTION so the intrinsics don't get converted to native operations before going through SimplifyDemandedVectorElts.
llvm-svn: 290702
Apparently GCC targeting Windows breaks bitfield packing when a static data member appears between the bitfields:
struct Foo {
unsigned X : 16;
static const int M = 42;
unsigned Y : 16;
};
static_assert(sizeof(Foo) == 4, "asdf"); // fails
Who knew.
llvm-svn: 290700
Summary:
The optimal iteration order for this problem is RPO order. We want to
process as many preds of a backedge as we can before we process the
backedge.
At the same time, as we add predicate handling, we want to be able to
touch instructions that are dominated by a given block by
ranges (because a change in value numbering a predicate possibly
affects all users we dominate that are using that predicate).
If we don't do it this way, we can't do value inference over
backedges (the paper covers this in depth).
The newgvn branch currently overshoots the last part, and guarantees
that it will touch *at least* the right set of instructions, but it
does touch more. This is because the bitvector instruction ranges are
currently generated in RPO order (so we take the max and the min of
the ranges of dominated blocks, which means there are some in the
middle we didn't have to touch that we did).
We can do better by sorting the dominator tree, and then just using
dominator tree order.
As a preliminary, the dominator tree has some RPO guarantees, but not
enough. It guarantees that for a given node, your idom must come
before you in the RPO ordering. It guarantees no relative RPO ordering
for siblings. We add siblings in whatever order they appear in the module.
So that is what we fix.
We sort the children array of the domtree into RPO order, and then use
the dominator tree for ordering, instead of RPO, since the dominator
tree is now a valid RPO ordering.
Note: This would help any other pass that iterates a forward problem
in dominator tree order. Most of them are single pass. It will still
maximize whatever result they compute. We could also build the
dominator tree in this order, but our incremental updates would still
put it out of sort order, and recomputing the sort order is almost as
hard as general incremental updates of the domtree.
Also note that the sorting does not affect any tests, etc. Nothing
depends on domtree order, including the verifier, the equals
functions for domtree nodes, etc.
How much could this matter, you ask?
Here are the current numbers.
This is generated by running NewGVN over all files in LLVM.
Note that once we propagate equalities, the differences go up by an
order of magnitude or two (i.e. instead of 29, the max ends up in the
thousands, since in the worst case we add a factor of N, where N is the
number of branch predicates). So while it doesn't look that stark for
the default ordering, it gets *much much* worse. There are also
programs in the wild where the difference is already pretty stark
(2 iterations vs hundreds).
RPO ordering:
759040 Number of iterations is 1
112908 Number of iterations is 2
Default dominator tree ordering:
755081 Number of iterations is 1
116234 Number of iterations is 2
603 Number of iterations is 3
27 Number of iterations is 4
2 Number of iterations is 5
1 Number of iterations is 7
Dominator tree sorted:
759040 Number of iterations is 1
112908 Number of iterations is 2
<yay!>
Really bad ordering (sort domtree siblings in postorder. not quite the
worst possible, but yeah):
754008 Number of iterations is 1
96642 Number of iterations is 2
17266 Number of iterations is 3
2598 Number of iterations is 4
798 Number of iterations is 5
273 Number of iterations is 6
186 Number of iterations is 7
80 Number of iterations is 8
42 Number of iterations is 9
21 Number of iterations is 10
8 Number of iterations is 11
6 Number of iterations is 12
5 Number of iterations is 13
2 Number of iterations is 14
2 Number of iterations is 15
3 Number of iterations is 16
1 Number of iterations is 17
2 Number of iterations is 18
1 Number of iterations is 20
2 Number of iterations is 21
1 Number of iterations is 22
1 Number of iterations is 29
Reviewers: chandlerc, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28129
llvm-svn: 290699
I added one for Value back in r262045, and I'm starting to think we
should have these for any class with bitfields whose memory efficiency
really matters.
llvm-svn: 290698
Summary:
Follow-up to r290691, where I introduced HasLLVMReservedName. rnk
pointed out that that patch added an extra word to GlobalValue on MSVC,
because it doesn't pack bitfields with different types.
This patch moves HasLLVMReservedName into the existing bitfield, where
we appear to have plenty of bits to spare.
Reviewers: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28149
llvm-svn: 290696
Summary:
We were already using 32-bit jump table entries, but this was a
consequence of the default PIC model on Win64, and not an intentional
design decision. This patch ensures that we always use 32-bit label
difference jump table entries on Win64 regardless of the PIC model. This
is a good idea because it saves executable size and object file size.
Moving the jump tables to .rdata cleans up the disassembled object code
and reduces the available ROP targets, but it requires adding one more
RIP-relative lea to the code. COFF doesn't have relocations to express
the difference between two arbitrary symbols, so we can't use the jump
table label in the label difference like we do elsewhere.
Fixes PR31488
Reviewers: majnemer, compnerd
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28141
llvm-svn: 290694
The Bitstream reader and writer are limited to handling a "size_t" at
most, which means that we can't backpatch and read back a 64-bit
value on 32-bit platforms.
llvm-svn: 290693
Summary:
Previously isIntrinsic() called getName(). This involves a hashtable
lookup, so is nontrivially expensive. And isIntrinsic() is called
frequently, particularly by dyn_cast<IntrinsicInstr>.
This patch steals a bit of IntID and uses that to store whether or not
getName() starts with "llvm."
Reviewers: bogner, arsenm, joker-eph
Subscribers: sanjoy, llvm-commits
Differential Revision: https://reviews.llvm.org/D22949
llvm-svn: 290691
This index records the position of each metadata record in
the bitcode, so that the reader will be able to lazy-load
each individual record on demand.
We also make sure that every abbrev is emitted upfront so
that the block can be skipped while reading.
I don't plan to commit this before having the reader
counterpart, but I figured this can be reviewed mostly
independently.
Recommit r290684 (was reverted in r290686 because a test
was broken) after adding a threshold to avoid emitting
the index when unnecessary (little amount of metadata).
This optimization "hides" a limitation of the ability
to backpatch in the bitstream: we can only backpatch
safely when the position has been flushed. So if we emit
an index for one metadata, it is possible that (part of)
the offset placeholder hasn't been flushed and the backpatch
will fail.
Differential Revision: https://reviews.llvm.org/D28083
llvm-svn: 290690
emplace_back is not faster if it is equivalent to push_back. In these cases the
emplaced value had the same type as the one stored in the container. It is ugly
and it might be even slower (see Scott Meyers' presentation about emplacement).
llvm-svn: 290685
Summary:
This index records the position of each metadata record in
the bitcode, so that the reader will be able to lazy-load
each individual record on demand.
We also make sure that every abbrev is emitted upfront so
that the block can be skipped while reading.
I don't plan to commit this before having the reader
counterpart, but I figured this can be reviewed mostly
independently.
Reviewers: pcc, tejohnson
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28083
llvm-svn: 290684
Jump table emission can switch to .rdata before
WinException::endFunction gets called. Just remember the appropriate
text section we started in and reset back to it when we end the
function. We were already switching sections back from .xdata anyway.
Fixes the first problem in PR31488, so that now COFF switch tables can
live in .rdata if we want them to.
llvm-svn: 290678
This is an orthogonal and separate layer instead of being embedded
inside the pass manager. While it adds a small amount of complexity, it
is fairly minimal and the composability and control seems worth the
cost.
The logic for this ends up being nicely isolated and targeted. It should
be easy to experiment with different iteration strategies wrapped around
the CGSCC bottom-up walk using this kind of facility.
The mechanism used to track devirtualization is the simplest one I came
up with. I think it handles most of the cases the existing iteration
machinery handles, but I haven't done a *very* in depth analysis. It
does however match the basic intended semantics, and we can tweak or
tune its exact behavior incrementally as necessary. One thing that we
may want to revisit is freshly building the value handle set on each
iteration. While I don't think this will be a significant cost (it is
strictly fewer value handles, but more churn of value handles, than the old
call graph), it is conceivable that we'll want a somewhat more clever
tracking mechanism. My hope is to layer that on as a follow up patch
with data supporting any implementation complexity it adds.
This code also provides for a basic count heuristic: if the number of
indirect calls decreases and the number of direct calls increases for
a given function in the SCC, we assume devirtualization is responsible.
This matches the heuristics currently used in the legacy pass manager.
Differential Revision: https://reviews.llvm.org/D23114
llvm-svn: 290665
analyses when we're about to break apart an SCC.
We can't wait until after breaking apart the SCC to invalidate things:
1) Which SCC do we then invalidate? All of them?
2) Even if we invalidate all of them, a newly created SCC may not have
a proxy that will convey the invalidation to functions!
Previously we only invalidated one of the SCCs and too late. This led to
stale analyses remaining in the cache. And because the caching strategy
actually works, they would get used and chaos would ensue.
Doing invalidation early is somewhat pessimizing though if we *know*
that the SCC structure won't change. So it turns out that the design to
make the mutation API force the caller to know the *kind* of mutation in
advance was indeed 100% correct and we didn't do enough of it. So this
change also splits two cases of switching a call edge to a ref edge into
two separate APIs so that callers can clearly test for this and take the
easy path without invalidating when appropriate. This is particularly
important in this case as we expect most inlines to be between functions
in separate SCCs and so the common case is that we don't have to so
aggressively invalidate analyses.
The LCG API change in turn needed some basic cleanups and better testing
in its unittest. No interesting functionality changed there other than
more coverage of the returned sequence of SCCs.
While this seems like an obvious improvement over the current state, I'd
like to revisit the core concept of invalidating within the CG-update
layer at all. I'm wondering if we would be better served forcing the
callers to handle the invalidation beforehand in the cases that they
can handle it. An interesting example is when we want to teach the
inliner to *update and preserve* analyses. But we can cross that bridge
when we get there.
With this patch, the new pass manager can build all of the LLVM test
suite at -O3 and everything passes. =D I haven't bootstrapped yet and
I'm sure there are still plenty of bugs, but this gives a nice baseline
so I'm going to increasingly focus on fleshing out the missing
functionality, especially the bits that are just turned off right now in
order to let us establish this baseline.
llvm-svn: 290664
There are cases of AVX-512 instructions that have two possible encodings. This is the case with instructions that use vector registers with low indexes of 0 - 15 and do not use the zmm registers or the mask k registers.
The EVEX encoding prefix requires 4 bytes whereas the VEX prefix can take only up to 3 bytes. Consequently, using the VEX encoding for these instructions results in a code size reduction of ~2 bytes even though it is compiled with the AVX-512 features enabled.
Reviewers: Craig Topper, Zvi Rackover, Elena Demikhovsky
Differential Revision: https://reviews.llvm.org/D27901
llvm-svn: 290663
when they are call edges at the leaf but may (transitively) be reached
via ref edges.
It turns out there is a simple rule: insert everything as a ref edge
which is a safe conservative default. Then we let the existing update
logic handle promoting some of those to call edges.
Note that it would be fairly cheap to make these call edges right away
if that is desirable by testing whether there is some existing call path
from the source to the target. It just seemed like slightly more
complexity in this code path that isn't strictly necessary. If anyone
feels strongly about handling this differently I'm happy to change it.
llvm-svn: 290649
due to a call cycle.
This actually crashed the ref removal before.
I've added a unittest that covers this kind of interesting graph
structure and mutation.
llvm-svn: 290645
currently relies on the old PM's dependency system forming LCSSA.
The new PM will require a different design for this, and for now this is
causing most of the issues I'm currently seeing in testing. I'd like to
get to a testable baseline and then work on re-enabling things one at
a time.
llvm-svn: 290644
This adds a combine that canonicalizes a chain of inserts which broadcasts
a value, turning it into a single insert plus a splat shufflevector.
This fixes PR31286.
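A hedged illustration of the canonicalization (vector type chosen arbitrarily):
  %v0 = insertelement <4 x float> undef, float %x, i32 0
  %v1 = insertelement <4 x float> %v0, float %x, i32 1
  %v2 = insertelement <4 x float> %v1, float %x, i32 2
  %v3 = insertelement <4 x float> %v2, float %x, i32 3
becomes
  %i  = insertelement <4 x float> undef, float %x, i32 0
  %v3 = shufflevector <4 x float> %i, <4 x float> undef, <4 x i32> zeroinitializer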
Differential Revision: https://reviews.llvm.org/D27992
llvm-svn: 290641
Fix a warning detected by gcc 6:
warning: cast from type 'const void*' to type 'uint8_t* {aka unsigned char*}' casts away qualifiers [-Wcast-qual]
llvm-svn: 290618
Replace the use of grep with FileCheck. Tidy up some of the tests. A
few of the tests have been left as weak as previously, though some have
been made more stringent.
llvm-svn: 290616
There is no need to do this within an analysis. That method shouldn't
even be reached if this predicate holds as the actual useful
optimization is in the analysis manager itself.
llvm-svn: 290614