In the attached testcase, we believe %tmp1 implies %tmp4, where:
br i1 %tmp1, label %bb2, label %bb7
br i1 %tmp4, label %bb5, label %bb7
because while looking at PredicateInfo stuff we end up calling
isImpliedTrueByMatchingCmp() with the arguments backwards.
Differential Revision: https://reviews.llvm.org/D32718
llvm-svn: 301849
If we have ~(~X & Y), it only makes sense to transform it to (X | ~Y) when we do not need
the intermediate (~X & Y) value; otherwise, we would need an extra instruction to
generate ~Y plus the 'or' (as shown in the test changes).
It's ok if we have multiple uses of ~X or Y, however. In those cases, we may not reduce the
instruction count or critical path, but we might improve throughput because we can generate
~X and ~Y in parallel. Whether that actually makes perf sense or not for a target is something
we can't answer in IR.
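For reference, a minimal standalone C++ sketch of the identity being exploited (the helper name is illustrative; this is not the InstCombine code):
```
#include <cassert>
#include <cstdint>

// De Morgan: ~(~X & Y) == (X | ~Y). The transform is a win only when the
// intermediate (~X & Y) has no other uses; otherwise we would still have to
// materialize it and additionally compute ~Y for the 'or'.
static uint32_t folded(uint32_t X, uint32_t Y) { return X | ~Y; }

int main() {
  assert(folded(0x12345678u, 0x0f0f0f0fu) == uint32_t(~(~0x12345678u & 0x0f0f0f0fu)));
  assert(folded(0u, 0xffffffffu) == uint32_t(~(~0u & 0xffffffffu)));
}
```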
Differential Revision: https://reviews.llvm.org/D32703
llvm-svn: 301848
llvm-dwarfdump gets a new "--verify" option that will verify a single file's DWARF debug info and will print out any errors that it finds. It will return a non-zero exit status if verification fails, and a zero exit status if verification succeeds. Adding the --quiet option will suppress any output to STDOUT or STDERR.
The first part of the verify does the following:
- verifies that all CU relative references (DW_FORM_ref1, DW_FORM_ref2, DW_FORM_ref4, DW_FORM_ref8, DW_FORM_ref_udata) have valid CU offsets
- verifies that all DW_FORM_ref_addr references have valid .debug_info offsets
- verifies that all DW_AT_ranges attributes have valid .debug_ranges offsets
- verifies that all DW_AT_stmt_list attributes have valid .debug_line offsets
- verifies that all DW_FORM_strp attributes have valid .debug_str offsets
Unit tests were added for each of the above cases.
Differential Revision: https://reviews.llvm.org/D32707
llvm-svn: 301844
This is to prepare for an upcoming change which uses pointers instead of
GUIDs to represent references.
Differential Revision: https://reviews.llvm.org/D32469
llvm-svn: 301843
We were using operator=(0), which implicitly calls countLeadingZeros, but only to compare the result with 64 and determine whether we can compare VAL or pVal[0] to a uint64_t. By handling the multiword case with countLeadingZerosSlowCase == BitWidth, we can prevent a load of pVal[0] from being inserted inline at each call site. This saves a little bit of code size.
llvm-svn: 301842
This was an omission in r301813. I had made the supporting changes to
make this happen, but I forgot to actually update the PrevPair
declaration.
llvm-svn: 301817
We may not be able to rewrite an indirect branch target, but we still want to take it into
account when folding: if it and all its successor's predecessors go to the same
destination, we can fold; there is no need to thread.
llvm-svn: 301816
In cases where an instruction (a call site, say) is RAUW'ed with some
other value (this is possible via the `returned` attribute, for
instance), we want the slot in UnknownInsts to point to the original
Instruction we wanted to track, not the value it got replaced by.
Fixes PR32587.
This relands r301426.
llvm-svn: 301814
In preparation for introducing writing capabilities for each of
these classes, I would like to adopt a Foo / FooRef naming
convention, where Foo indicates that the class can manipulate and
serialize Foos, and FooRef indicates that it is an immutable view of
an existing Foo. In other words, Foo is a writer and FooRef is a
reader. This patch renames some existing readers to conform to the
FooRef convention, while offering no functional change.
llvm-svn: 301810
Summary:
This frees up one slot in the HandleBaseKind enum, which I will use
later to add a new kind of value handle. The size of the
HandleBaseKind enum is important because we store a HandleBaseKind in
the low two bits of a (in the worst case) 4 byte aligned pointer.
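As an aside, a minimal standalone sketch of the low-bit tagging trick the summary refers to (names and kind values are illustrative, not the actual ValueHandle implementation):
```
#include <cassert>
#include <cstdint>

// With at least 4-byte alignment the low two bits of a pointer are zero,
// so a 2-bit kind can be packed into them. Freeing a slot in the enum
// matters because only four kinds fit in two bits.
enum class Kind : uintptr_t { K0 = 0, K1 = 1, K2 = 2, K3 = 3 };

struct TaggedPtr {
  uintptr_t Bits;
  TaggedPtr(void *P, Kind K)
      : Bits(reinterpret_cast<uintptr_t>(P) | static_cast<uintptr_t>(K)) {
    assert((reinterpret_cast<uintptr_t>(P) & 0x3) == 0 && "needs 4-byte alignment");
  }
  void *getPtr() const { return reinterpret_cast<void *>(Bits & ~uintptr_t(3)); }
  Kind getKind() const { return static_cast<Kind>(Bits & 3); }
};

int main() {
  alignas(4) static int Value = 42;
  TaggedPtr T(&Value, Kind::K2);
  assert(T.getPtr() == &Value && T.getKind() == Kind::K2);
}
```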
Reviewers: davide, chandlerc
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D32634
llvm-svn: 301809
This is the SelectionDAG version of D32521. If we know where at least one 1 is located in the input to these intrinsics, we can place an upper bound on the number of bits needed to represent the count and thus increase the number of known zeros in the output.
I think we can also refine this further for CTTZ_UNDEF/CTLZ_UNDEF by assuming that the answer will never be BitWidth. I've left this out for now because it caused other test failures across multiple targets. Usually because of turning ADD into OR based on this new information.
I'll fix CTPOP in a future patch.
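A small standalone illustration of that reasoning for CTTZ (a hypothetical helper, not the SelectionDAG code): if some input bit is known to be 1, the trailing-zero count is bounded by that bit's index, which bounds how many result bits can be nonzero.
```
#include <cassert>

// Number of bits needed to represent X.
static unsigned bitsNeeded(unsigned X) {
  unsigned N = 0;
  for (; X; X >>= 1)
    ++N;
  return N;
}

// If bit KnownOneIndex of the input is known set, cttz(Input) <= KnownOneIndex,
// so the result fits in bitsNeeded(KnownOneIndex) bits and the remaining high
// bits of the result are known zero.
static unsigned knownZeroHighBitsOfCttz(unsigned BitWidth, unsigned KnownOneIndex) {
  return BitWidth - bitsNeeded(KnownOneIndex);
}

int main() {
  // 32-bit cttz with bit 7 known set: the result is at most 7, which fits in
  // 3 bits, so the top 29 bits of the result are known zero.
  assert(knownZeroHighBitsOfCttz(32, 7) == 29);
}
```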
Differential Revision: https://reviews.llvm.org/D32692
llvm-svn: 301806
Summary: [JumpThread] Do RAUW in case Cond folds to a constant in the CFG
Reviewers: sanjoy
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32407
llvm-svn: 301804
In this patch, I introduce a new altmacro feature.
This feature adds meaning to '%' when it is used as a prefix to the arguments of a macro call.
In altmacro mode, the percent sign '%' before an absolute expression first converts the expression to a string.
As described at https://sourceware.org/binutils/docs-2.27/as/Altmacro.html:
"Expression results as strings
You can write `%expr' to evaluate the expression expr and use the result as a string."
Expression assumptions:
1. '%' can only evaluate an absolute expression.
2. The altmacro '%' must be the first character of the evaluated expression.
3. If no '%' is located before the expression, a regular modulo operation is expected.
4. The result of an absolute expression can only be an integer.
Differential Revision: https://reviews.llvm.org/D32526
llvm-svn: 301797
We discussed shrinking/widening of selects in IR in D26556, and I'll try to get back to that
patch eventually. But I'm hoping that this transform is less iffy in the DAG where we can check
legality of the select that we want to produce.
A few things to note:
1. We can't wait until after legalization and do this generically because (at least in the x86
tests from PR14657), we'll have PACKSS and bitcasts in the pattern.
2. This might benefit more of the SSE codegen if we lifted the legal-or-custom requirement, but
that requires a closer look to make sure we don't end up worse.
3. There's a 'vblendv' opportunity that we're missing that results in andn/and/or in some cases.
That should be fixed next.
4. I'm assuming that AVX1 offers the worst of all worlds wrt uneven ISA support with multiple
legal vector sizes, but if there are other targets like that, we should add more tests.
5. There's a codegen miracle in the multi-BB tests from PR14657 (the gcc auto-vectorization tests):
despite IR that is terrible for the target, this patch allows us to generate the optimal loop
code because something post-ISEL is hoisting the splat extends above the vector loops.
Differential Revision: https://reviews.llvm.org/D32620
llvm-svn: 301781
Summary:
programUndefinedIfPoison makes more sense, given what the function
does; and I'm about to add a function with a name similar to
isKnownNotFullPoison (so do the rename to avoid confusion).
Reviewers: broune, majnemer, bjarke.roune
Reviewed By: broune
Subscribers: mcrosier, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D30444
llvm-svn: 301776
Summary: As per discussion on how to get better codegen on large int legalization, it became clear that using a glue for the carry was preventing several desirable optimizations. Passing the carry down as a value allows for more flexibility.
Reviewers: jyknight, nemanjai, mkuper, spatel, RKSimon, zvi, bkramer
Subscribers: igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D29872
llvm-svn: 301775
This feature isn't used anywhere in tree. Its existence seems to be preventing selfhost builds from inlining any of the setBits methods, including setLowBits, setHighBits, and setBitsFrom. This is because the code makes the method recursive.
If anyone needs this feature in the future we could consider adding a setBitsWithWrap method. This way only the calls that need it would pay for it.
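A rough standalone sketch, on a plain 64-bit word, of the split being suggested (hypothetical helpers, not the APInt code): a non-wrapping setBits for the common case and a separate setBitsWithWrap for callers that want hi < lo to wrap around.
```
#include <cassert>
#include <cstdint>

// Set bits [lo, hi) of a 64-bit word; requires lo <= hi <= 64.
static uint64_t setBits(unsigned lo, unsigned hi) {
  assert(lo <= hi && hi <= 64);
  uint64_t upTo = hi == 64 ? ~0ull : (1ull << hi) - 1;
  return upTo & ~(lo == 64 ? ~0ull : (1ull << lo) - 1);
}

// Wrapping variant: if hi < lo, set [lo, 64) and [0, hi).
static uint64_t setBitsWithWrap(unsigned lo, unsigned hi) {
  return lo <= hi ? setBits(lo, hi) : setBits(lo, 64) | setBits(0, hi);
}

int main() {
  assert(setBits(4, 8) == 0xF0);
  assert(setBitsWithWrap(60, 4) == 0xF00000000000000Full);
}
```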
llvm-svn: 301769
Summary:
Apply canonicalization rules:
1. Input vectors with no elements selected from can be replaced with undef.
2. If only one input vector is constant it shall be the second one.
This allows constant-folding to cover more ad-hoc simplifications that
were in place and avoid duplication for RHS and LHS checks.
There are more rules we may want to add in the future when we see a
justification. e.g. mask elements that select undef elements can be
replaced with undef.
Reviewers: spatel, RKSimon, andreadb, davide
Reviewed By: spatel, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32338
llvm-svn: 301766
In case of microMIPS mode %gottprel operator should emit microMIPS
relocation R_MICROMIPS_TLS_GOTTPREL, not R_MIPS_TLS_GOTTPREL.
Differential Revision: http://reviews.llvm.org/D32617
llvm-svn: 301763
Summary:
Predicate<> now has a field to indicate how often it must be recomputed.
Currently, there are two frequencies, per-module (RecomputePerFunction==0)
and per-function (RecomputePerFunction==1). Per-function predicates are
currently recomputed more frequently than necessary since the only predicate
in this category is cheap to test. Per-module predicates are now computed in
getSubtargetImpl() while per-function predicates are computed in selectImpl().
Tablegen now manages the PredicateBitset internally. It should only be
necessary to add the required includes.
Also fixed a problem revealed by the test case where
constrainSelectedInstRegOperands() would attempt to tie operands that
BuildMI had already tied.
Reviewers: ab, qcolombet, t.p.northover, rovka, aditya_nandakumar
Reviewed By: rovka
Subscribers: kristof.beyls, igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D32491
llvm-svn: 301750
Summary: This patch adds isNegative, isNonNegative for querying whether the sign bit is known. It also adds makeNegative and makeNonNegative for controlling the sign bit.
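A standalone sketch of what those helpers mean, with a plain uint64_t standing in for APInt (illustrative only, not the actual KnownBits code):
```
#include <cassert>
#include <cstdint>

struct KnownBitsSketch {
  uint64_t Zero = 0, One = 0; // known-0 and known-1 bits
  unsigned BitWidth;

  explicit KnownBitsSketch(unsigned BW) : BitWidth(BW) {}
  uint64_t signMask() const { return 1ull << (BitWidth - 1); }

  // Query whether the sign bit is known.
  bool isNegative() const { return One & signMask(); }
  bool isNonNegative() const { return Zero & signMask(); }
  // Assert knowledge of the sign bit.
  void makeNegative() { One |= signMask(); }
  void makeNonNegative() { Zero |= signMask(); }
};

int main() {
  KnownBitsSketch Known(32);
  assert(!Known.isNegative() && !Known.isNonNegative()); // sign bit unknown
  Known.makeNonNegative();
  assert(Known.isNonNegative());
}
```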
Reviewers: RKSimon, spatel, davide
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32651
llvm-svn: 301747
We were default-constructing the Lower/Upper APInts, then creating the min or max value, then doing a move assignment to Lower and a copy assignment to Upper. The copy assignment operator in particular has an out-of-line function call that has to examine whether or not a previous allocation exists that can be reused, which of course it can't in this case.
The new code creates the min/max value first, move-constructs Lower from it, then copy-constructs Upper from Lower.
This also seems to have convinced a self-host build that this constructor can be inlined more readily into other methods in ConstantRange.
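A simplified illustration of the construct-versus-assign pattern described above, with std::string standing in for APInt's heap-backed storage (not the real ConstantRange code):
```
#include <string>
#include <utility>

struct RangeOld {
  std::string Lower, Upper;          // default-constructed first
  explicit RangeOld(std::string Max) {
    Lower = std::move(Max);          // move assignment
    Upper = Lower;                   // copy assignment: out-of-line check for reusable storage
  }
};

struct RangeNew {
  std::string Lower, Upper;
  explicit RangeNew(std::string Max)
      : Lower(std::move(Max)),       // move construction
        Upper(Lower) {}              // copy construction, no reuse check needed
};

int main() {
  RangeNew R(std::string(128, '9')); // e.g. a "max value" for the range
  (void)R;
}
```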
llvm-svn: 301736
There is a lot of duplicate code for printing line info between
YAML and the raw output printer. This introduces a base class
that can be shared between the two, and makes some minor
cleanups in the process.
llvm-svn: 301728
Do not move a release between a call and a retainAutoreleasedReturnValue that retains the returned value.
This commit fixes a bug in the ARC optimizer where it moves a release
between a call and a retainAutoreleasedReturnValue, causing the returned
object to be released before the retainAutoreleasedReturnValue can
retain it.
This commit accomplishes that by doing a lookahead and checking whether
the call prevents the release from moving upwards. In the long term, we
should treat the region between the retainAutoreleasedReturnValue and
the call as a critical section and disallow moving anything there
(possibly using operand bundles).
rdar://problem/20449878
llvm-svn: 301724
I fixed my miscompile in r301722 and I hope I don't have to take
a look at this code again now that Chandler has a new LoopUnswitch
pass, but maybe this could be of use for somebody else in the
meanwhile.
llvm-svn: 301723
This has been mysteriously failing since r301593, which cleaned up the
types of things like size_t and SIZE_MAX for freestanding targets. Reid
and Kostya suggested marking it as UNSUPPORTED on windows, given that no
one has been able to reproduce locally.
llvm-svn: 301719
The llvm-readobj parsing code currently exists in our CodeView
library, so we use that to parse instead of re-writing the logic
in the tool.
llvm-svn: 301718
This broke the Clang build. (Clang-side patch missing?)
Original commit message:
> [IR] Make add/remove Attributes use AttrBuilder instead of
> AttributeList
>
> This change cleans up call sites and avoids creating temporary
> AttributeList objects.
>
> NFC
llvm-svn: 301712
Fixes the issue highlighted in
http://lists.llvm.org/pipermail/cfe-dev/2014-June/037500.html.
The DW_AT_decl_file and DW_AT_decl_line attributes on namespaces can
prevent LLVM from uniquing types that are in the same namespace. They
also don't carry any meaningful information.
rdar://problem/17484998
Differential Revision: https://reviews.llvm.org/D32648
llvm-svn: 301706
While looking at pure addressing expressions, it's possible
for the value to appear later in Postorder.
I haven't been able to come up with a testcase where this
exhibits an actual issue, but if you insert a dump before
the value map lookup, a few testcases crash.
llvm-svn: 301705
Eliminates some more cases where some subset of the addressing
computation remains flat. Some cases with addrspacecasts
in nested constant expressions are still left behind however.
llvm-svn: 301704
When a PHI operand has a subregister, create a COPY instead of simply
replacing the PHI output with the input.
Differential Revision: https://reviews.llvm.org/D32650
llvm-svn: 301699
I used Null rather than Zero to match the getNullValue method name.
There are some other places outside APInt where isNullValue would be more readable than isMinValue even though they do the same thing. I'll update those in future patches.
llvm-svn: 301695
While debugging a miscompile I realized loopunswitch doesn't
put newlines when printing the instruction being replaced.
Ending up with a single line with many instructions replaced isn't
the best for readability and/or mental sanity.
llvm-svn: 301692
Also, add test for data relocations and fix addend to
be signed.
Subscribers: jfb, dschuff
Differential Revision: https://reviews.llvm.org/D32513
llvm-svn: 301690
The IntrNoMem, IntrReadMem, IntrWriteMem, and IntrArgMemOnly intrinsic
properties differ from their corresponding LLVM IR attributes by specifying
that the intrinsic, in addition to its memory properties, has no other side
effects.
The IntrHasSideEffects flag, used in combination with one of the memory flags
listed above, makes it possible to define an intrinsic such that its
properties at the CodeGen layer match its properties at the IR layer.
Patch by Tom Stellard
llvm-svn: 301685
The method is called "get *Param* Alignment", and is only used for
return values exactly once, so it should take argument indices, not
attribute indices.
Avoids confusing code like:
IsSwiftError = CS->paramHasAttr(ArgIdx, Attribute::SwiftError);
Alignment = CS->getParamAlignment(ArgIdx + 1);
Add getRetAlignment to handle the one case in Value.cpp that wants the
return value alignment.
This is a potentially breaking change for out-of-tree backends that do
their own call lowering.
llvm-svn: 301682
Adds a new method finalizeLowering to TargetLoweringBase. This is in
preparation for an upcoming commit.
This function is meant for target specific adjustments to
MachineFrameInfo or register reservations.
Move the freezeRegisters() and the hasCopyImplyingStackAdjustment()
handling into the new function to prove the concept. As an added bonus,
GlobalISel no longer misses the hasCopyImplyingStackAdjustment()
handling.
Differential Revision: https://reviews.llvm.org/D32621
llvm-svn: 301679
Summary:
When using ThinLTO, the linker performs its own parallelism. This
change limits the number of parallel link jobs that Ninja will issue
to keep the total number of threads reasonable when linking with
ThinLTO.
Reviewers: hans, ruiu
Subscribers: mgorny, mehdi_amini, Prazek
Differential Revision: https://reviews.llvm.org/D31990
llvm-svn: 301676
This eliminates many extra 'Idx' induction variables in loops over
arguments in CodeGen/ and Target/. It also reduces the number of places
where we assume that ReturnIndex is 0 and that we should add one to
argument numbers to get the corresponding attribute list index.
NFC
llvm-svn: 301666
This became no longer necessary after D19462 landed, and will be incompatible
with an upcoming change to the summary data structures that changes how we
represent references.
llvm-svn: 301660
- swap 4-bit register encoding, 16-bit offset and 32-bit imm to support big endian archs
- add a test
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 301653
Summary:
The motivating example is below; it has 13 cases but only 2 distinct targets:
```
lor.lhs.false2: ; preds = %if.then
switch i32 %Status, label %if.then27 [
i32 -7012, label %if.end35
i32 -10008, label %if.end35
i32 -10016, label %if.end35
i32 15000, label %if.end35
i32 14013, label %if.end35
i32 10114, label %if.end35
i32 10107, label %if.end35
i32 10105, label %if.end35
i32 10013, label %if.end35
i32 10011, label %if.end35
i32 7008, label %if.end35
i32 7007, label %if.end35
i32 5002, label %if.end35
]
```
which is compiled into a balanced binary tree like this on AArch64 (similar on X86)
```
.LBB853_9: // %lor.lhs.false2
mov w8, #10012
cmp w19, w8
b.gt .LBB853_14
// BB#10: // %lor.lhs.false2
mov w8, #5001
cmp w19, w8
b.gt .LBB853_18
// BB#11: // %lor.lhs.false2
mov w8, #-10016
cmp w19, w8
b.eq .LBB853_23
// BB#12: // %lor.lhs.false2
mov w8, #-10008
cmp w19, w8
b.eq .LBB853_23
// BB#13: // %lor.lhs.false2
mov w8, #-7012
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_14: // %lor.lhs.false2
mov w8, #14012
cmp w19, w8
b.gt .LBB853_21
// BB#15: // %lor.lhs.false2
mov w8, #-10105
add w8, w19, w8
cmp w8, #9 // =9
b.hi .LBB853_17
// BB#16: // %lor.lhs.false2
orr w9, wzr, #0x1
lsl w8, w9, w8
mov w9, #517
and w8, w8, w9
cbnz w8, .LBB853_23
.LBB853_17: // %lor.lhs.false2
mov w8, #10013
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_18: // %lor.lhs.false2
mov w8, #-7007
add w8, w19, w8
cmp w8, #2 // =2
b.lo .LBB853_23
// BB#19: // %lor.lhs.false2
mov w8, #5002
cmp w19, w8
b.eq .LBB853_23
// BB#20: // %lor.lhs.false2
mov w8, #10011
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_21: // %lor.lhs.false2
mov w8, #14013
cmp w19, w8
b.eq .LBB853_23
// BB#22: // %lor.lhs.false2
mov w8, #15000
cmp w19, w8
b.ne .LBB853_3
```
However, the inline cost model estimates the cost to be linear with the number
of distinct targets and the cost of the above switch is just 2 InstrCosts.
The function containing this switch is then inlined about 900 times.
This change uses the general way of switch lowering for the inline heuristic. It
estimates the number of case clusters with the suitability check for a jump table
or bit test. Considering the binary search tree built for the clusters, this
change modifies the model to be linear with the size of the balanced binary
tree. The model is off by default for now:
-inline-generic-switch-cost=false
This change was originally proposed by Haicheng in D29870.
Reviewers: hans, bmakam, chandlerc, eraman, haicheng, mcrosier
Reviewed By: hans
Subscribers: joerg, aemerson, llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D31085
llvm-svn: 301649
Summary:
Skip memops if the total value profiled count is 0; we can't correctly
scale up the counts, and there is no point anyway.
Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32624
llvm-svn: 301645
Reapplied r299221 after fix for nondeterminism in ThinLTO builder (rL301599), with extra check for implicit truncation of inserted element.
llvm-svn: 301644
Declare the ARMInstructionSelector in an anonymous namespace, to make it
more in line with the other targets which were migrated to this in
r299637 in order to avoid TableGen'erated headers being included in
non-GlobalISel builds.
llvm-svn: 301632
There was a garbage character in output introduced by myself in
r290040 "[DWARF] - Introduce DWARFDebugPubTable class for dumping pub* sections."
llvm-svn: 301631
This is a follow up to the fix in r298360 to improve the handling of debug
values when redundant LEAs are removed. The fix in r298360 effectively
discarded the debug values. This patch now attempts to preserve the debug
values by using the DWARF DW_OP_stack_value operation via prependDIExpr.
Moved functions appendOffset and prependDIExpr from Local.cpp to
DebugInfoMetadata.cpp and made them available as static member functions of
DIExpression.
Differential Revision: https://reviews.llvm.org/D31604
llvm-svn: 301630
EarlyCSE should not just ignore assumes. It should use the fact that an assume's condition is true for all dominated instructions.
Reviewers: sanjoy, reames, apilipenko, anna, skatkov
Reviewed By: reames, sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32482
llvm-svn: 301625
If a condition is calculated only once, and there are multiple guards on this condition, we should be able
to remove all guards dominated by the first of them. This patch allows EarlyCSE to try to find the condition
of a guard among the known values, and if it is true, remove the guard. Otherwise we keep the guard and
mark its condition as 'true' for future consideration.
Reviewers: sanjoy, reames, apilipenko, skatkov, anna, dberlin
Reviewed By: reames, sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32476
llvm-svn: 301623
This patch replaces the separate APInts for KnownZero/KnownOne with a single KnownBits struct. This is similar to what was done to ValueTracking's version recently.
This is largely a mechanical transformation from KnownZero to Known.Zero.
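A simplified view of the interface shape this moves toward, with uint64_t in place of APInt (the toy constant computation is only for illustration):
```
#include <cstdint>

struct KnownBits {
  uint64_t Zero; // bits known to be 0
  uint64_t One;  // bits known to be 1
};

// Before: two parallel out-parameters threaded through every helper.
static void computeKnownBitsOld(uint64_t C, uint64_t &KnownZero, uint64_t &KnownOne) {
  KnownOne = C;   // a constant has every bit known
  KnownZero = ~C;
}

// After: one struct, so callers write Known.Zero / Known.One.
static KnownBits computeKnownBitsNew(uint64_t C) {
  return {~C, C};
}

int main() {
  KnownBits Known = computeKnownBitsNew(0xF0);
  return Known.Zero == ~0xF0ull && Known.One == 0xF0 ? 0 : 1;
}
```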
Differential Revision: https://reviews.llvm.org/D32569
llvm-svn: 301620
This patch uses various APInt methods to reduce the number of temporary APInts. These were all found while working through converting SelectionDAG's computeKnownBits to also use the KnownBits struct recently added to the ValueTracking version.
llvm-svn: 301618
Summary:
In some cases LLVM (especially the SLP vectorizer) will create vectors
that are 256 bytes (or larger). Given that this is intentional[0] and is
likely to get more common, this patch updates the StackMap binary
format to deal with the spill locations for said vectors.
This change also bumps the stack map version from 2 to 3.
[0]: https://reviews.llvm.org/D32533#738350
Reviewers: reames, kavon, skatkov, javed.absar
Subscribers: mcrosier, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D32629
llvm-svn: 301615
COFF Import libraries which use the obsolete CONSTANT export are
supposed to get two symbols, one with the `_imp_` prefix and one
without. Ensure that we expose both for iteration. This is necessary
to fix the librarian with COFF CONSTANT exports.
llvm-svn: 301614
When dumping raw data from a stream, you might know the offset
of a certain record you're interested in, as well as how long
that record is. Previously, you had to dump the entire stream
and wade through the bytes to find the interesting record.
This patch allows you to specify an offset and length on the
command line, and it will only dump the requested range.
llvm-svn: 301607
Reviewers: zturner, hansw, hans
Reviewed By: hans
Subscribers: hans, llvm-commits
Differential Revision: https://reviews.llvm.org/D32611
llvm-svn: 301595
Use a combination of !associated, comdat, @llvm.compiler.used and
custom sections to allow dead stripping of globals and their asan
metadata. Sometimes.
Currently this works on LLD, which supports SHF_LINK_ORDER with
sh_link pointing to the associated section.
This also works on BFD, which seems to treat comdats as
all-or-nothing with respect to linker GC. There is a weird quirk
where the "first" global in each link is never GC-ed because of the
section symbols.
At this moment it does not work on Gold (as in the globals are never
stripped).
This is a second re-land of r298158. This time, this feature is
limited to -fdata-sections builds.
llvm-svn: 301587
When possible, put ASan ctor/dtor in comdat.
The only reason not to is global registration, which can be
TU-specific. This is not the case when there are no instrumented
globals. This is also limited to ELF targets, because MachO does
not have comdat, and COFF linkers may GC comdat constructors.
The benefit of this is a lot fewer __asan_init() calls: one per DSO
instead of one per TU. It's also necessary for the upcoming
gc-sections-for-globals change on Linux, where multiple references to
section start symbols trigger quadratic behaviour in the gold linker.
This is a second re-land of r298756. This time with a flag to disable
the whole thing to avoid a bug in the gold linker:
https://sourceware.org/bugzilla/show_bug.cgi?id=19002
llvm-svn: 301586
This patch dumps the raw bytes of the .rsrc sections that
are present in COFF object and executable files. Subsequent
patches will parse this information and dump in a more human
readable format.
Differential Revision: https://reviews.llvm.org/D32463
Patch By: Eric Beckmann
llvm-svn: 301578
Currently, this pass only focuses on *trivial* loop unswitching. Even on that
reduced problem it remains significantly better than the current loop
unswitch:
- Old pass is worse than cubic complexity. New pass is (I think) linear.
- New pass is much simpler in its design by focusing on full unswitching. (See
below for details on this).
- New pass doesn't carry state for thresholds between pass iterations.
- New pass doesn't carry state for correctness (both miscompile and
infloop) between pass iterations.
- New pass produces substantially better code after unswitching.
- New pass can handle more trivial unswitch cases.
- New pass doesn't recompute the dominator tree for the entire function
and instead incrementally updates it.
I've ported all of the trivial unswitching test cases from the old pass
to the new one to make sure that major functionality isn't lost in the
process. For several of the test cases I've worked to improve the
precision and rigor of the CHECKs, but for many I've just updated them
to handle the new IR produced.
My initial motivation was the fact that the old pass carried state in
very unreliable ways between pass iterations, and these mechanisms were
incompatible with the new pass manager. However, I discovered many more
improvements to make along the way.
This pass makes two very significant assumptions that enable most of these
improvements:
1) Focus on *full* unswitching -- that is, completely removing whatever
control flow construct is being unswitched from the loop. In the case
of trivial unswitching, this means removing the trivial (exiting)
edge. In non-trivial unswitching, this means removing the branch or
switch itself. This is in opposition to *partial* unswitching where
some part of the unswitched control flow remains in the loop. Partial
unswitching only really applies to switches and to folded branches.
These are very similar to full unrolling and partial unrolling. The
full form is an effective canonicalization, the partial form needs
a complex cost model, cannot be iterated, isn't canonicalizing, and
should be a separate pass that runs very late (much like unrolling).
2) Leverage LLVM's Loop machinery to the fullest. The original unswitch
dates from a time when a great deal of LLVM's loop infrastructure was
missing, ineffective, and/or unreliable. As a consequence, a lot of
complexity was added which we no longer need.
With these two overarching principles, I think we can build a fast and
effective unswitcher that fits in well in the new PM and in the
canonicalization pipeline. Some of the remaining functionality around
partial unswitching may not be relevant today (not many test cases or
benchmarks I can find) but if they are I'd like to add support for them
as a separate layer that runs very late in the pipeline.
Purely to make reviewing and introducing this code more manageable, I've
split this into first a trivial-unswitch-only pass and in the next patch
I'll add support for full non-trivial unswitching against a *fixed*
threshold, exactly like full unrolling. I even plan to re-use the
unrolling thresholds, as these are incredibly similar cost tradeoffs:
we're cloning a loop body in order to end up with simplified control
flow. We should only do that when the total growth is reasonably small.
One of the biggest changes with this pass compared to the previous one
is that previously, each individual trivial exiting edge from a switch
was unswitched separately as a branch. Now, we unswitch the entire
switch at once, with cases going to the various destinations. This lets
us unswitch multiple exiting edges in a single operation and also avoids
numerous extremely bad behaviors, where we would introduce 1000s of
branches to test for thousands of possible values, all of which would
take the exact same exit path bypassing the loop. Now we will use
a switch with 1000s of cases that can be efficiently lowered into
a jumptable. This avoids relying on somehow forming a switch out of the
branches or getting horrible code if that fails for any reason.
Another significant change is that this pass actively updates the CFG
based on unswitching. For trivial unswitching, this is actually very
easy because of the definition of loop simplified form. Doing this makes
the code coming out of loop unswitch dramatically more friendly. We
still should run loop-simplifycfg (at the least) after this to clean up,
but it will have to do a lot less work.
Finally, this pass makes far fewer attempts to simplify instructions
based on the unswitch. Something like loop-instsimplify, instcombine, or
GVN can be used to do increasingly powerful simplifications based on the
now dominating predicate. The old simplifications are things that
something like loop-instsimplify should get today or a very, very basic
loop-instcombine could get. Keeping that logic separate is a big
simplifying technique.
Most of the code in this pass that isn't in the old one has to do with
achieving specific goals:
- Updating the dominator tree as we go
- Unswitching all cases in a switch in a single step.
I think it is still shorter than just the trivial unswitching code in
the old pass despite having this functionality.
Differential Revision: https://reviews.llvm.org/D32409
llvm-svn: 301576
Just calling dropAllReferences leaves pointers to the ConstantExpr
behind, so we would eventually crash with a null pointer dereference.
Differential Revision: https://reviews.llvm.org/D32551
llvm-svn: 301575
Summary:
Misc improvements to debug output. Fix a couple typos and also dump the
value profile before we make any profitability checks.
Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32607
llvm-svn: 301574
Generate better include paths: instead of #include <llvm_header.h>, doxygen
now produces #include "llvm/Folder/llvm_header.h".
Patch by Yuka Takahashi (D32342)!
llvm-svn: 301569
Summary:
The type of the target frame index is intptr, not the type of the value we're
going to store into it. Without this change we crash in the attached test case
when trying to type-legalize a TargetFrameIndex.
Patchpoint lowering types the target frame index as intptr as well.
Reviewers: reames, bogner, arsenm
Subscribers: arsenm, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D32256
llvm-svn: 301566
Make DynamicLibrary symbol searching ordered, and ensure libraries are properly unloaded when llvm_shutdown is called.
Summary:
This was mostly affecting usage of the JIT, where storing the library handles in
a set made iteration unordered/undefined. This led to disagreement between the
JIT and native code as to what the address and implementation of a given function
were, particularly on Windows with stdlib functions:
JIT: putenv_s("TEST", "VALUE") // called msvcrt.dll, putenv_s
JIT: getenv("TEST") -> "VALUE" // called msvcrt.dll, getenv
Native: getenv("TEST") -> NULL // called ucrt.dll, getenv
Also fixed is the issue of DynamicLibrary::getPermanentLibrary(0,0) on Windows
not giving priority to the process' symbols as it did on Unix.
Reviewers: chapuni, v.g.vassilev, lhames
Reviewed By: lhames
Subscribers: danalbert, srhines, mgorny, vsk, llvm-commits
Differential Revision: https://reviews.llvm.org/D30107
llvm-svn: 301562
Previously parsing of these were all grouped together into a
single master class that could parse any type of debug info
fragment.
With writing forthcoming, the complexity of each individual
fragment is enough to warrant them having their own classes so
that reading and writing of each fragment type can be grouped
together, but isolated from the code for reading and writing
other fragment types.
In doing so, I found a place where parsing code was duplicated
for the FileChecksums fragment, across llvm-readobj and the
CodeView library, and one of the implementations had a bug.
Now that the codepaths are merged, the bug is resolved.
Differential Revision: https://reviews.llvm.org/D32547
llvm-svn: 301557
We have a lot of very similarly named classes related to
dealing with module debug info. This patch is NFC; it just
renames some classes to be more descriptive (albeit slightly
more to type). The mapping from old to new class names is as
follows:
Old | New
ModInfo | DbiModuleDescriptor
ModuleSubstream | ModuleDebugFragment
ModStream | ModuleDebugStream
With the corresponding Builder classes renamed accordingly.
Differential Revision: https://reviews.llvm.org/D32506
llvm-svn: 301555
Author: milena.vujosevic.janicic
Reviewers: sdardis
The code implements size reduction pass for MicroMIPS.
Load and store instructions are examined and transformed, if possible.
lw32 instruction is transformed into 16-bit instruction lwsp
sw32 instruction is transformed into 16-bit instruction swsp
Arithmetic instructions are examined and transformed, if possible.
addu32 instruction is transformed into 16-bit instruction addu16
subu32 instruction is transformed into 16-bit instruction subu16
Differential Revision: https://reviews.llvm.org/D15144
llvm-svn: 301540
In getCmpSelInstrCost(), CondTy may actually be scalar while ValTy is a
vector when LoopVectorizer is the caller. Therefore the assert that CondTy
must be a vector type if ValTy is was wrong and is now removed.
Review: Ulrich Weigand
llvm-svn: 301533
Fix a crash when trying to extend a value passed as a sign- or
zero-extended stack parameter. The cause of the crash was that we were
setting the size of the loaded value to 32 bits, and then trying to
extend again to 32 bits.
This patch addresses the issue by also introducing a G_TRUNC after the
load. This will leave the unused bits to their original values set by
the caller, while being consistent about the types. For values that are
not extended, we just use a smaller load.
llvm-svn: 301531
It is useful to output the size of ranges when the address ranges
section of .gdb_index is dumped.
It helps to compare outputs produced by different linkers,
for example. In that case address ranges can look very different
when they are in fact the same; the difference comes from a different
low address due to a different address of .text.
Differential revision: https://reviews.llvm.org/D32492
llvm-svn: 301527
'Src' looks like it was borrowed from memcpy, 'Val' makes more sense for
memset and is consistent with naming within the function.
Differential Revision: https://reviews.llvm.org/D32580
llvm-svn: 301521
This changes code that touches ValueHandleBase::V to go through
getValPtr and (newly added) setValPtr. This functionality will be
used later, but also seemed like a generally good cleanup.
I also renamed the field to Val, but that's just to make it obvious
that I fixed all the uses.
llvm-svn: 301518
Previously, an expression such as Saver.save(std::string("foo") + "bar")
didn't compile because there is an ambiguity as to whether the argument
is of const Twine& or StringRef.
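A self-contained illustration of why such a call is ambiguous and how an exact-match overload resolves it; RefLike and TwineLike are stand-ins rather than the real StringRef and Twine, and the actual fix in this patch may differ in detail:
```
#include <string>

// Both parameter types are implicitly constructible from std::string, so a
// call with a std::string argument matches neither overload better than the
// other -- unless an exact-match std::string overload exists.
struct RefLike   { RefLike(const std::string &) {} };
struct TwineLike { TwineLike(const std::string &) {} };

struct Saver {
  const char *save(RefLike) { return "ref"; }
  const char *save(const TwineLike &) { return "twine"; }
  const char *save(const std::string &S) { return save(RefLike(S)); } // disambiguates
};

int main() {
  Saver S;
  // Without the std::string overload this call would not compile (ambiguous).
  S.save(std::string("foo") + "bar");
}
```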
llvm-svn: 301512
There is also a discussion about exactly what we should do prior to re-enabling
it.
The current bug is http://llvm.org/PR32821 and the discussion about this
is in the review thread for r300200.
llvm-svn: 301505
DISubprogram currently has 10 pointer operands, several of which are
often nullptr. This patch reduces the amount of memory allocated by
DISubprogram by rearranging the operands such that containing type,
template params, and thrown types come last, and are only allocated
when they are non-null (or followed by non-null operands).
This patch also eliminates the entirely unused DisplayName operand.
This saves up to 4 pointer operands per DISubprogram. (I tried
measuring the effect on peak memory usage on an LTO link of an X86
llc, but the results were very noisy).
This reapplies r301498 with an attempted workaround for g++.
Differential Revision: https://reviews.llvm.org/D32560
llvm-svn: 301501
DISubprogram currently has 10 pointer operands, several of which are
often nullptr. This patch reduces the amount of memory allocated by
DISubprogram by rearranging the operands such that containing type,
template params, and thrown types come last, and are only allocated
when they are non-null (or followed by non-null operands).
This patch also eliminates the entirely unused DisplayName operand.
This saves up to 4 pointer operands per DISubprogram. (I tried
measuring the effect on peak memory usage on an LTO link of an X86
llc, but the results were very noisy).
llvm-svn: 301498
It was doing the same as the base implementation and was irritating me
when I was searching for backends that have custom behavior for
canRealignStack.
llvm-svn: 301495
For Swift we would like to be able to encode the error types that a
function may throw, so the debugger can display them alongside the
function's return value when finish-ing a function.
DWARF defines DW_TAG_thrown_type (intended to be used for C++ throw()
declarations) that is a perfect fit for this purpose. This patch wires
up support for DW_TAG_thrown_type in LLVM by adding a list of thrown
types to DISubprogram.
To offset the cost of the extra pointer, there is a follow-up patch
that turns DISubprogram into a variable-length node.
rdar://problem/29481673
Differential Revision: https://reviews.llvm.org/D32559
llvm-svn: 301489
The previous algorithm processed one character at a time, which is very
painful on a modern CPU. Replace it with xxHash64, which both already
exists in the codebase and is fairly fast.
Patch from Scott Smith!
Differential Revision: https://reviews.llvm.org/D32509
llvm-svn: 301487