Scudo only supports building for android/linux/fuchsia, so require target_os to
be one of linux/fuchsia to do a stage2_unix scudo build. Android is already
covered by the stage2_android* toolchains below.
Differential Revision: https://reviews.llvm.org/D71131
Summary:
Add a new cl::callback attribute to Option.
This attribute specifies a callback function that is called when
an option is seen, and can be used to set other options, as in
option A implies option B. If the option is a `cl::list`, and
`cl::CommaSeparated` is also specified, the callback will fire
once for each value. This could be used to validate combinations
or selectively set other options.
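A minimal usage sketch (the option names here are made up for illustration):
```cpp
#include "llvm/Support/CommandLine.h"
using namespace llvm;

static cl::opt<bool> OptA("opt-a", cl::desc("Option A"));

// "Option B implies option A": when -opt-b is seen, the callback fires
// and also enables -opt-a.
static cl::opt<bool> OptB("opt-b", cl::desc("Option B (implies A)"),
                          cl::callback([](const bool &V) {
                            if (V)
                              OptA = true;
                          }));
```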
Reviewers: beanz, thomasfinch, MaskRay, thopre, serge-sans-paille
Reviewed By: beanz
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70620
Before this change, the *InstPrinter.cpp files of each target were some
of the slowest objects to compile in all of LLVM. See this snippet produced by
ClangBuildAnalyzer:
https://reviews.llvm.org/P8171$96
Search for "InstPrinter", and see that it shows up in a few places.
Tablegen was emitting a large switch containing a sequence of operand checks,
each of which created many conditions and many BBs. Register allocation and
jump threading both did not scale well with such a large repetitive sequence of
basic blocks.
So, this change essentially turns those control flow structures into
data. The previous structure looked like:
switch (Opc) {
case TGT::ADD:
  // check alias 1
  if (MI->getOperandCount() == N &&  // check num opnds
      MI->getOperand(0).isReg() &&   // check opnd 0
      ...
      MI->getOperand(1).isImm()) {   // check opnd 1
    AsmString = "foo";
    break;
  }
  // check alias 2
  if (...)
    ...
  return false;
}
The new structure looks like:
OpToPatterns: Sorted table of opcodes mapping to pattern indices.
\->
Patterns: List of patterns. Previous table points to subrange of
patterns to match.
\->
Conds: The if conditions above encoded as a kind and 32-bit value.
See MCInstPrinter.cpp for the details of how the new data structures are
interpreted.
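Roughly, the emitted tables have shapes like the following sketch (field names are illustrative; the authoritative definitions live alongside MCInstPrinter):
```cpp
#include <cstdint>

// Sorted by Opcode so the printer can binary-search for the candidate
// alias patterns of an instruction.
struct PatternsForOpcode {
  uint32_t Opcode;
  uint16_t PatternStart; // index of the first pattern in Patterns
  uint16_t NumPatterns;  // number of candidate patterns for this opcode
};

// One alias pattern: where its asm string lives and which conditions
// must all hold for it to match.
struct AliasPattern {
  uint32_t AsmStrOffset;   // offset into a flat string table
  uint32_t AliasCondStart; // index of the first condition in Conds
  uint8_t NumOperands;
  uint8_t NumConds;
};

// One encoded "if" condition: a kind (is-register, is-immediate,
// feature enabled, ...) plus a 32-bit payload interpreted per kind.
struct AliasPatternCond {
  uint8_t Kind;
  uint32_t Value;
};
```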
Here are some before and after metrics:
Time to compile AArch64InstPrinter.cpp: 0m29.062s vs. 0m2.203s
Size of the object file: 3.9M vs. 676K
Size of clang.exe: 97M vs. 96M
I have not benchmarked disassembly performance, but typically
disassemblers are bottlenecked on IO and string processing, not alias
matching, so I'm not sure it's interesting enough to be worth doing.
Reviewers: RKSimon, andreadb, xbolva00, craig.topper
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D70650
The xor'ing behaviour is only used for MSVC/CRT environments; when we're targeting
MachO, the guard load code doesn't know about the xor in the epilogue. Disable xor'ing
when targeting win32-macho to be consistent.
Differential Revision: https://reviews.llvm.org/D71095
D53794 introduced code to perform the FP_TO_UINT expansion via FP_TO_SINT in a way that would never expose floating-point exceptions in the intermediate steps. Unfortunately, I just noticed there is still a way this can happen. As discussed in D53794, the compiler now generates this sequence:
// Sel = Src < 0x8000000000000000
// Val = select Sel, Src, Src - 0x8000000000000000
// Ofs = select Sel, 0, 0x8000000000000000
// Result = fp_to_sint(Val) ^ Ofs
The problem is with the Src - 0x8000000000000000 expression. As I mentioned in the original review, that expression can never overflow or underflow if the original value is in range for FP_TO_UINT. But I missed that we can get an Inexact exception in the case where Src is a very small positive value. (In this case the result of the sub is ignored, but that doesn't help.)
Instead, I'd suggest using the following sequence:
// Sel = Src < 0x8000000000000000
// FltOfs = select Sel, 0, 0x8000000000000000
// IntOfs = select Sel, 0, 0x8000000000000000
// Result = fp_to_sint(Src - FltOfs) ^ IntOfs
In the case where the value is already in range of FP_TO_SINT, we now simply compute Src - 0, which now definitely cannot trap (unless Src is a NaN, in which case we'd want to trap anyway).
In the case where the value is not in range of FP_TO_SINT, but still in range of FP_TO_UINT, the sub can never be inexact, as Src is between 2^(n-1) and (2^n)-1, i.e. always has the 2^(n-1) bit set, and the sub is always simply clearing that bit.
There is a slight complication in the case where Val is a constant, so we know at compile time whether Sel is true or false. In that scenario, the old code would automatically optimize the sub away, while this no longer happens with the new code. Instead, I've added extra code to check for this case and then just fall back to FP_TO_SINT directly. (This seems to catch even slightly more cases.)
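For concreteness, here is the new sequence written out as plain scalar C++ for a double to uint64_t conversion (an illustration of the sequence above, not the actual legalization code):
```cpp
#include <cstdint>

uint64_t fp_to_uint64(double Src) {
  const double Pow63 = 9223372036854775808.0; // 2^63, exactly representable
  bool Sel = Src < Pow63;
  double FltOfs = Sel ? 0.0 : Pow63;
  uint64_t IntOfs = Sel ? 0 : (1ULL << 63);
  // Src - FltOfs is exact in both cases: either we subtract 0, or Src has
  // the 2^63 bit set and the subtraction merely clears it, so no Inexact
  // exception can be raised here.
  return static_cast<uint64_t>(static_cast<int64_t>(Src - FltOfs)) ^ IntOfs;
}
```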
Original version of the patch by Ulrich Weigand. X86 changes added by Craig Topper
Differential Revision: https://reviews.llvm.org/D67105
Summary:
musttail calls should not require allocating extra stack for arguments.
Updates to arguments passed in memory should happen in place before the
epilogue.
This bug was mostly a missed optimization, unless inalloca was used and
store to push conversion fired.
If a reserved call frame was used for an inalloca musttail call, the
call setup and teardown instructions would be deleted, and SP
adjustments would be inserted in the prologue and epilogue. You can see
these are removed from several test cases in this change.
In the case where the stack frame was not reserved, i.e. call frame
optimization fires and turns argument stores into pushes, then the
imbalanced call frame setup instructions created for inalloca calls
become a problem. They remain in the instruction stream, resulting in a
call setup that allocates zero bytes (expected for inalloca), and a call
teardown that deallocates the inalloca pack. This deallocation was
unbalanced, leading to subsequent crashes.
Reviewers: hans
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71097
Summary:
The sample profile loader of AutoFDO tries to replay previous inlining using the context-sensitive profile. The replay only repeats inlining if the call site block is hot. As a result it punts on inlining of small functions, some of which can be beneficial for size, and which will still be inlined by the CGSCC inliner later. The oscillation between the sample profile loader's inlining and regular CGSCC inlining causes unnecessary loss of context-sensitive profile. It doesn't have much impact on the inline decision itself, but it negatively affects post-inline profile quality, as the CGSCC inliner has to scale counts, which is not as accurate as the original context-sensitive profile, and a bad post-inline profile can misguide code layout.
This change adds regular inline cost calculation to the sample profile loader, so we can inline small functions upfront under the switch -sample-profile-inline-size. In addition, -sample-profile-cold-inline-threshold is added so we can tune the separate size threshold; currently the default is chosen to be the same as the regular inliner's cold call-site threshold.
Reviewers: wmi, davidxl
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70750
GCC says:
.../llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp:195:12:
error: ‘InfoType’ is not a class, namespace, or enumeration
case InfoType::EndOfList:
^
Presumably, GCC thinks InfoType is a variable here. Work around it by
using the name IT as is done above.
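A minimal reproduction of the GCC quirk (illustrative only, not the actual GSYM code):
```cpp
enum class InfoType : unsigned { EndOfList = 0 };

unsigned decode(unsigned Raw) {
  // The variable shadows the enum's name.
  InfoType InfoType = static_cast<enum InfoType>(Raw);
  switch (InfoType) {
  // GCC: 'InfoType' is not a class, namespace, or enumeration
  case InfoType::EndOfList:
    return 0;
  }
  return 1;
}
```
Renaming the local variable (e.g. to IT) removes the shadowing, which is the workaround applied here.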
This is a follow-up to D70607 where we made any
extract element on SLM more costly than default. But that is
pessimistic for extract from element 0 because that corresponds
to x86 movd/movq instructions. These generally have >1 cycle
latency, but they are probably implemented as single uop
instructions.
Note that no vectorization tests are affected by this change.
Also, no targets besides SLM are affected because those are
falling through to the default cost of 1 anyway. But this will
become visible/important if we add more specializations via cost
tables.
Differential Revision: https://reviews.llvm.org/D71023
Summary:
Split off of D67120.
Add the profile guided size optimization instrumentation / queries in the code
gen or target passes. This doesn't enable the size optimizations in those passes
yet as they are currently disabled in shouldOptimizeForSize (for non-IR pass
queries).
Reviewers: davidxl
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71072
Current tail duplication integrated in bb layout is designed to increase the fallthrough from a BB's predecessor to its successor, but we have observed cases where duplication doesn't increase fallthrough, or where it brings too much size overhead.
To overcome these two issues, I add two checks in the function canTailDuplicateUnplacedPreds:
* make sure there is at least one duplication in the current work set.
* the number of duplications should not exceed the number of successors.
The modification in hasBetterLayoutPredecessor fixes a bug: a potential predecessor must be at the bottom of a chain.
Differential Revision: https://reviews.llvm.org/D64376
SUMMARY:
If the size of a csect is zero, the csect does not need to write any data into the section.
For example, the TOC csect has zero size, so it does not need to invoke
Asm.writeSectionData(W.OS, Csect.MCCsect, Layout);
Reviewers: daltenty
Subscribers: rupprecht, seiyai, hiraditya
Differential Revision: https://reviews.llvm.org/D71120
Summary:
The immediate forms of the MVE VQSHL instruction have MC names like
`MVE_VSLIimms8` and `MVE_VSLIimmu32`. Those names are confusing,
because VSLI is a completely different shift instruction with no
semantic relation to VQSHL. But it just happens to be defined
immediately before VQSHL in `ARMInstrMVE.td`, so this looks like a
copy-paste error. Renamed the ids to match the instruction name.
Reviewers: ostannard, dmgreen, MarkMurrayARM, miyuki
Reviewed By: miyuki
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71114
SUMMARY:
In the patch https://reviews.llvm.org/D66969 we need a test case to verify whether the text section of the XCOFF object file is correct. At the time we did not have an LLVM disassembly tool that could dump an XCOFF object file; since we committed the patch https://reviews.llvm.org/D70255, we have a tool for it, so we create this test case.
Reviewers: daltenty, hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D70719
Summary:
When trying to calculate the offsets for the jump table entries,
we fail to take into account the block alignment, which could be
greater than 4 bytes. This led to cases where the jump table
offset was too big to fit in a byte.
Reviewers: t.p.northover, sdesmalen, ostannard
Reviewed By: ostannard
Subscribers: ostannard, kristof.beyls, hiraditya, llvm-commits
Committed on behalf of David Sherwood (david-arm)
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70533
This introduces a new helper which is used to parse the SHT_GNU_versym section.
The LLVM and GNU style implementations now use it to share the logic.
Differential revision: https://reviews.llvm.org/D71054
InnerLoopVectorizer's code called during VPlan execution still relies on
the original IR's def-use relations to decide which vector code to generate,
limiting the ability of VPlan transformations to modify def-use relations and still
have ILV generate the vector code.
This commit moves GEP operand queries controlling how GEPs are widened to a
dedicated recipe and extracts GEP widening code to its own ILV method taking
those recorded decisions as arguments. This reduces ingredient def-use usage by
ILV as a step towards full VPlan-based def-use relations.
Differential revision: https://reviews.llvm.org/D69067
One of CodeGenPrepare's optimizations is to duplicate address calculations
into basic blocks, so that as much information as possible can be folded
into memory addressing operands. This is great -- but the dbg.value
variable location intrinsics are not updated in the same way. This can lead
to dbg.values referring to address computations in other blocks that will
never be encoded into the DAG, while duplicate address computations are
performed locally that could be used by the dbg.value. Some of these (such
as non-constant-offset GEPs) can't be salvaged.
Fix this by, whenever we duplicate an address computation into a block,
looking for dbg.value users of the original memory address in the same
block, and redirecting those to the local computation.
Differential Revision: https://reviews.llvm.org/D58403
After my recent commit daee549 the following test case is failing:
CodeGen/X86/vector-constrained-fp-intrinsics.ll
Not sure why I didn't catch this earlier; it seems to be affected
by other changes that came in recently.
Fixed by regenerating the test again. Sorry for the disruption!
Summary:
Adds intrinsics for the following:
* cmphs, cmphi
* cmpge, cmpgt
* cmpeq, cmpne
* cmplt, cmple
* cmplo, cmpls
Includes a minor change to `TLI.getMemValueType` that fixes a crash due to the
scalable flag being dropped.
Reviewers: sdesmalen, efriedma, rengolin, rovka, dancgr, huntergr
Reviewed By: efriedma
Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70889
This patch implements the following changes:
1) SelectionDAGBuilder::visitConstrainedFPIntrinsic currently treats
each constrained intrinsic like a global barrier (e.g. a function call)
and fully serializes all pending chains. This is actually not required;
it is allowed for constrained intrinsics to be reordered w.r.t. one
another or (nonvolatile) memory accesses. The MI-level scheduler already
allows for that flexibility, so it makes sense to allow it at the DAG
level as well.
This patch therefore changes the way chains for constrained intrinsics
are created, and handles them basically like load operations are handled.
This has the effect that constrained intrinsics are no longer serialized
against one another or (nonvolatile) loads. They are still serialized
against stores, but that seems hard to change with the current DAG chain
setup, and it also doesn't seem to be a big problem preventing DAG optimizations.
2) The OPC_CheckFoldableChainNode check requires that each of the
intermediate nodes in a multi-node pattern match only has a single use.
This check tends to fail if those intermediate nodes are strict operations
as those have a chain output that typically indeed has another use.
However, we don't really need to consider chains here at all, since they
will all be rewritten anyway by UpdateChains later. Other parts of the
matcher therefore already ignore chains, but this hasOneUse check doesn't.
This patch replaces hasOneUse by a custom test that verifies there is no
more than one use of any non-chain output value.
In theory, this change could affect code unrelated to strict FP nodes,
but at least on SystemZ I could not find any single instance of that
happening.
3) The SystemZ back-end currently does not allow matching multiply-and-
extend operations (32x32 -> 64bit or 64x64 -> 128bit FP multiply) for
strict FP operations. This was not possible in the past due to the
problems described under 1) and 2) above.
With those issues fixed, it is now possible to fully support those
instructions in strict mode as well, and this patch does so.
Differential Revision: https://reviews.llvm.org/D70913
This refactoring moves NonRelocatableStringpool into the common CodeGen folder,
so that NonRelocatableStringpool can be used not only inside dsymutil.
Differential Revision: https://reviews.llvm.org/D71068
In general, ValueHandleBase::ValueIsRAUWd shouldn't be called when not
all uses of the value are actually replaced; currently, though,
formLCSSAForInstructions calls it when it inserts LCSSA phis.
The calls of ValueHandleBase::ValueIsRAUWd were added to LCSSA specifically
to update/invalidate SCEV. In the best case these calls duplicate some
of the work already done by SE->forgetValue, but in the case when the SCEV
of the value is SCEVUnknown, SCEV replaces the underlying value of the
SCEVUnknown with the new value (i.e. acts as if the LCSSA phi actually fully
replaces the value it is created for), which leads to SCEV being
corrupted, because an LCSSA phi rarely dominates all uses of its inputs.
Fixes bug https://bugs.llvm.org/show_bug.cgi?id=44058.
Reviewers: fhahn, efriedma, reames, sanjoy.google
Reviewed By: fhahn
Subscribers: hiraditya, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70593
This is for the case where -gmlt -gsplit-dwarf -fsplit-dwarf-inlining
are used together in some but not all units during LTO (or, in the
reduced case, even without LTO), ensuring that no split DWARF is used
(because split-dwarf-inlining puts the same data in the .o file, so
there's no need to duplicate it into the .dwo file).
We shouldn't assume that the returned result can be used to get
the other result.
This is prep-work for strict FP where we will also need to pass
the chain result along in more cases.
Summary:
Lookup functions are designed to not fully decode a FunctionInfo, LineTable or InlineInfo; they decode only what is needed into a LookupResult object. This allows lookups to avoid costly memory allocations and to avoid parsing large amounts of information once a suitable match is found.
LookupResult objects contain the address that was looked up, the concrete function address range, the name of the concrete function, and a list of source locations: one for each inline function, plus one for the concrete function. This allows one address to turn into multiple frames and improves the signal you get when symbolicating addresses in GSYM files.
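The result object is roughly shaped as follows (a simplified sketch; see the GSYM LookupResult header for the authoritative definition):
```cpp
#include "llvm/ADT/StringRef.h"
#include <cstdint>
#include <vector>

// One frame: the concrete function, or one inlined function, at the address.
struct SourceLocation {
  llvm::StringRef Name; // function or inlined-function name
  llvm::StringRef Dir;  // directory of the source file
  llvm::StringRef Base; // base name of the source file
  uint32_t Line = 0;
};

struct LookupResult {
  uint64_t LookupAddr = 0;  // the address that was looked up
  uint64_t FuncStart = 0;   // concrete function address range [start, end)
  uint64_t FuncEnd = 0;
  llvm::StringRef FuncName; // name of the concrete function
  std::vector<SourceLocation> Locations; // one entry per frame
};
```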
Reviewers: labath, aprantl
Subscribers: mgorny, hiraditya, llvm-commits, lldb-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70993
Summary:
Add an option to allow the attribute propagation on the index to be
disabled, to allow a workaround for issues (such as that fixed by
D70977).
Also move the setting of the WithAttributePropagation flag on the index
into propagateAttributes(), and remove some old stale code that predated
this flag and cleared the maybe-read/write-only bits when we need to
disable the propagation (previously only when importing is disabled, now
also when the new option disables it).
Reviewers: evgeny777, steven_wu
Subscribers: mehdi_amini, inglorion, hiraditya, dexonsmith, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70984
* Context *
During register coalescing, we use rematerialization when coalescing is not
possible. That means we may rematerialize a super register when only a smaller
register is actually used.
E.g.,
0B v1 = ldimm 0xFF
1B v2 = COPY v1.low8bits
2B = v2
=>
0B v1 = ldimm 0xFF
1B v2 = ldimm 0xFF
2B = v2.low8bits
Where xB are the slot indexes.
Here v2 grew from an 8-bit register to a 16-bit register.
When that happens and subregister liveness is enabled, we create subranges for
the newly created value.
E.g., before remat, the live range of v2 looked like:
main range: [1r, 2r)
(Read: v2 is defined at the register slot of index 1 and used before the
register slot of index 2.)
After remat, it should look like:
main range: [1r, 2r)
low 8 bits: [1r, 2r)
high 8 bits: [1r, 1d) <-- dead def
I.e., the unused lanes of v2 should be marked as dead definitions.
* The Problem *
Prior to this patch, the live ranges from the previous example would have the
full live range for all subranges:
main range: [1r, 2r)
low 8 bits: [1r, 2r)
high 8 bits: [1r, 2r) <-- too long
* The Fix *
Technically, the code that this patch changes is not wrong:
When we create the subranges for the newly rematerialized value, we create only
one subrange for the whole bit mask.
In other words, at this point v2 live-range looks like this:
main range: [1r, 2r)
low & high: [1r, 2r)
Then it goes wrong when we call LiveInterval::refineSubRanges on the low 8 bits:
main range: [1r, 2r)
low 8 bits: [1r, 2r)
high 8 bits: [1r, 2r) <-- too long
Ideally, we would like LiveInterval::refineSubRanges to be able to do the right
thing and mark the dead lanes as such. However, this is not possible, because by
the time we update / refine the live ranges, the IR hasn't been updated yet,
therefore we actually don't have enough information to do the right thing.
Another option to fix the problem would have been to call
LiveIntervals::shrinkToUses after the IR is updated. This is not desirable as
this may have a noticeable impact on compile time.
Instead, what this patch does is, when we create the subranges for the
rematerialized value, explicitly create one subrange for the lanes that were
used before rematerialization and one for the lanes that were not used. The used
one inherits the live range of the main range and the unused one is just created
empty. The existing rematerialization code then detects that the unused one is
not live and correctly sets dead def intervals for it.
https://llvm.org/PR41372
Summary:
AutoFDO's sample profile loader processes functions in arbitrary source code order, so if I change the order of two functions in source code, the inline decision can change. This also prevented the use of context-sensitive profiles to do specialization while inlining. This commit enforces SCC top-down order for the sample profile loader. With this change, we can now do specialization, as illustrated by the added test case:
Say we have the call paths A->B->C and D->B->C; we want to inline C into B when the root inliner is B, but not when the root inliner is A or D. This is not possible without enforcing top-down order: once C is inlined into B, A and D can only choose to inline (B->C) as a whole or nothing, but what we want is to inline only B into A and D, not its recursive callee C. If we process functions in top-down order, this is no longer a problem, which is what this commit does.
This change is guarded with a new switch "-sample-profile-top-down-load" for tuning, and it depends on D70653. Eventually, top-down can be the default order for sample profile loader.
Reviewers: wmi, davidxl
Subscribers: hiraditya, llvm-commits, tejohnson
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70655
Summary:
When the sample profile loader decides not to inline a previously inlined call-site, we adjust the profile of the outlined function simply by scaling up its profile counts by the call-site count. This means the context-sensitive profile of that inlined instance will be thrown away. This commit tries to keep context-sensitive profiles for such cases:
- Instead of scaling the outlined function's profile, we now properly merge the FunctionSamples of the inlined instance into the outlined function, including all recursively inlined profiles.
- Instead of adjusting the profile for a negative inline decision at the end of the sample profile loader pass, we do the profile merge right after processing each function. This change, paired with top-down ordering of annotation/inline-replay (a separate diff), will make sure we recursively merge profiles back before the profile is used for annotation and inline replay.
A new switch -sample-profile-merge-inlinee is added to enable the new profile merge for tuning. It should be the default behavior eventually.
Reviewers: wmi, davidxl
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70653
The loclists_table_base was being overwritten for each CU even though
only one loclists contribution is made, so everything but the last CU
would have a label that was never defined and would fail to assemble.
Summary: Previously we only handled the case where the csect hadn't been set up yet, so we'd hit an assert later on.
Reviewers: jasonliu, DiggerLin, stevewan
Reviewed By: jasonliu
Subscribers: hubert.reinterpretcast, wuzish, nemanjai, hiraditya, kbarton, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71032
Summary:
Emit a value debug intrinsic (with OP_deref) when an alloca address is
passed to a function call after going through a bitcast.
This generates an FP or SP-relative location for the local variable in
the following case:
int x;
use((void *)&x);
Reviewers: aprantl, vsk, pcc
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70752
Summary:
Previously, it was not possible to skip running the localizer pass
conditionally. This patch adds an input function to the pass which
decides if the pass should run on the given MachineFunction or not.
No test case, as no upstream target needs this functionality.
Reviewers: qcolombet
Reviewed By: qcolombet
Subscribers: rovka, hiraditya, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71038
Summary:
This reverts commit c3b06d0c39.
Reason for revert: Caused miscompiles when inserting assume for undef.
Also adds a test to prevent similar breakage in future.
Fixes PR44154.
Reviewers: rnk, jdoerfert, efriedma, xbolva00
Reviewed By: rnk
Subscribers: thakis, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70933
Summary:
D68408 proposes to greatly improve our negation sinking abilities.
But in current canonicalization, we produce `sub A, zext(B)`,
which we will consider non-canonical and try to sink that negation,
undoing the existing canonicalization.
So unless we explicitly stop producing previous canonicalization,
we will have two conflicting folds, and will end up endlessly looping.
This inverts canonicalization, and adds back the obvious fold
that we'd miss:
* `sub [nsw] Op0, sext/zext (bool Y) -> add [nsw] Op0, zext/sext (bool Y)`
https://rise4fun.com/Alive/xx4
* `sext(bool) + C -> bool ? C - 1 : C`
https://rise4fun.com/Alive/fBl
It is obvious that `@ossfuzz_9880()` / `@lshr_out_of_range()` / `@ashr_out_of_range()`
(oss-fuzz 4871) are no longer folded as much, though those aren't really worrying.
Reviewers: spatel, efriedma, t.p.northover, hfinkel
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71064
Summary:
When MUL is the first operand to SUB, we can't use MLS because the accumulator
should be negated. Emit a NEG of the accumulator and an MLA instead, similar to
what we do for FMUL / FSUB fusing.
Reviewers: dmgreen, SjoerdMeijer, fhahn, Gerolf, mstorsjo, asbirlea
Reviewed By: asbirlea
Subscribers: kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71067
These changes enable llvm-ifs to handle the following merge conflicts:
* Weak + Strong symbol merging for the same symbol
* empty vs non-empty triple field
* empty vs non-empty object file format
Differential Revision: https://reviews.llvm.org/D70834
The patch makes sure that the LastThrowing pointer does not point to any instruction deleted by a call to deleteDeadInstruction.
While iterating through the instructions, the pass maintains a pointer to the last throwing instruction. A call to deleteDeadInstruction deletes a dead store and other instructions feeding the original dead instruction, which also become dead. The instruction pointed to by the LastThrowing pointer could also be deleted by the call to deleteDeadInstruction, and thus it becomes a dangling pointer. Because of this, we see an error in the next iteration.
In the patch, we maintain a list of throwing instructions encountered previously and use the last non-deleted throwing instruction from the container, as sketched below.
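A minimal sketch of the bookkeeping, with hypothetical helper names:
```cpp
#include "llvm/ADT/SetVector.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Instead of caching a single raw pointer, remember every throwing
// instruction seen so far and unregister the ones that get deleted.
static SmallSetVector<Instruction *, 8> ThrowingInstrs;

void noteThrowingInst(Instruction *I) { ThrowingInstrs.insert(I); }

// Called for every instruction removed by deleteDeadInstruction, so the
// set never holds a dangling pointer.
void notifyDeleted(Instruction *I) { ThrowingInstrs.remove(I); }

// The last non-deleted throwing instruction, if any.
Instruction *lastThrowing() {
  return ThrowingInstrs.empty() ? nullptr : ThrowingInstrs.back();
}
```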
Patch by Ankit <quic_aankit@quicinc.com>
Reviewers: fhahn, bcahoon, efriedma
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D65326
This makes no difference currently because we don't apply FMF
to FP casts, but that may change.
This could also be a place to add a fold for select with fptrunc,
so it will make that patch easier/smaller.
This patch addresses a performance problem reported in PR43855, and
present in the reapplication in 001574938e5. It turns out that
MachineSink will (often) move instructions to the first block that
post-dominates the current block, and then try to sink further. This
means if we have a lot of conditionals, we can needlessly create large
numbers of DBG_VALUEs, one in each block the sunk instruction passes
through.
To fix this, rather than immediately sinking DBG_VALUEs, record them in
a pass structure. When sinking is complete and instructions won't be
sunk any further, new DBG_VALUEs are added, avoiding lots of
intermediate DBG_VALUE $noregs being created.
Differential revision: https://reviews.llvm.org/D70676
Fix part of PR43855, resolving a problem that comes from the reapplication
in 001574938e5. If we have two DBG_VALUE insts in a block that specify
the location of the same variable, for example:
%0 = someinst
DBG_VALUE %0, !123, !DIExpression()
%1 = anotherinst
DBG_VALUE %1, !123, !DIExpression()
if %0 were to sink, the corresponding DBG_VALUE would sink too, past the
next DBG_VALUE, effectively re-ordering assignments. To fix this, I've
added a SeenDbgVars set recording what variable locations have been seen in
a block already (working bottom up), and now flag DBG_VALUEs that would
pass a later DBG_VALUE for the same variable.
NB: this only works for repeated DBG_VALUEs in the same basic block; the
general case involving control flow is much harder, which I've written
up in PR44117.
Differential revision: https://reviews.llvm.org/D70672
These were:
* D58386 / f5e1b718a6 / reverted in d382a8a768
* D58238 / ee50590e16 / reverted in a8db456b53
Of which the latter has a performance regression tracked in PR43855,
fixed by D70672 / D70676, which will be committed atomically with this
reapplication.
Contains a minor difference to account for a change in the IsCopyInstr
signature.
ARMCodeGenPrepare has already been generalized and renamed to
TypePromotion. We've had it enabled and tested downstream for a
while, so enable it by default.
Differential Revision: https://reviews.llvm.org/D70998
The patch was reverted because of https://bugs.llvm.org/show_bug.cgi?id=44048.
The original patch is modified to set the strictfp IR attribute
explicitly in CodeGen instead of as a side effect of IRBuilder.
In the second attempt to reapply, there was a Windows lit test failure; the
tests were fixed to use wildcard matching.
Differential Revision: https://reviews.llvm.org/D62731
Summary:
Currently these functions return the raw content of the appropriate table
header, which means they are relative to the DW_AT_{loc,rng}list_base,
and one has to relocate them in order to do anything.
This changes the functions to perform the relocation themselves, which
seems clearer, particularly as they are sitting right next to the
find{Rng,Loc}listFromOffset functions, but one *cannot* simply take the
result of these functions and pass it there.
The only effect of this patch is to change what value is dumped for the
DW_AT_ranges attribute, which I think is for the better, as previously
the values appeared to point into thin air.
(The main reason I am looking at this is because I was trying to
implement equivalent functionality in lldb's DWARFUnit, and was stumped
by this behavior.)
Reviewers: dblaikie, JDevlieghere, aprantl
Subscribers: hiraditya, llvm-commits, SouraVX
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71006
Summary:
If a call is bundled then the code that looks for instructions that
produce parameter values would break when reaching the call's bundle
header, due to the `isCall(/*AnyInBundle*/)` invocation returning true.
It is not enough to simply ignore bundle headers in the `isCall()`
invocation, as the bundle header may have defines of parameter registers
due to the call, meaning that such registers would incorrectly be
removed from the worklist. Therefore, do not look at bundle headers at
all.
Reviewers: djtodoro, NikolaPrica, aprantl, vsk
Reviewed By: aprantl, vsk
Subscribers: hiraditya, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D71024
This patch removes the magic "main" JITDylib from ExecutionEngine. The main
JITDylib was created automatically at ExecutionSession construction time, and
all subsequently created JITDylibs were added to the main JITDylib's
links-against list by default. This saves a couple of lines of boilerplate for
simple JIT setups, but this isn't worth introducing magical behavior for.
ORCv2 clients should now construct their own main JITDylib using
ExecutionSession::createJITDylib and set up its linkages manually using
JITDylib::setSearchOrder (or related methods in JITDylib).
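A sketch of the explicit setup under the new scheme (method spellings taken from this description; treat the exact signatures as approximate for the ORCv2 of this era):
```cpp
#include "llvm/ExecutionEngine/Orc/Core.h"
using namespace llvm::orc;

void setUpJITDylibs(ExecutionSession &ES) {
  // No implicit "main" JITDylib anymore: create it explicitly.
  JITDylib &MainJD = ES.createJITDylib("main");
  JITDylib &UtilsJD = ES.createJITDylib("utils");
  // Linkages are now set up manually; the bool says whether non-exported
  // symbols in "utils" should be visible to the search.
  MainJD.setSearchOrder({{&UtilsJD, false}});
}
```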
This patch adds forward iterators mc_difflist_iterator,
mc_subreg_iterator and mc_superreg_iterator, based on the existing
DiffListIterator. Those are used to provide iterator ranges over
sub- and super-registers from TRI, which are slightly more convenient
than the existing MCSubRegIterator/MCSuperRegIterator. Unfortunately,
it duplicates a bit of functionality, but the new iterators are a bit
more convenient (and can be used with various existing iterator
utilities) and should probably replace the old iterators in the future.
This patch updates some existing users.
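Illustrative usage of the new ranges (helper names assumed from this summary, not authoritative):
```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Collect all sub- and super-registers of Reg without writing the classic
// MCSubRegIterator/MCSuperRegIterator loops by hand.
void collectAliases(const TargetRegisterInfo &TRI, MCPhysReg Reg,
                    SmallVectorImpl<MCPhysReg> &Out) {
  for (MCPhysReg S : TRI.subregs(Reg)) // sub-registers of Reg
    Out.push_back(S);
  for (MCPhysReg S : TRI.superregs(Reg)) // super-registers of Reg
    Out.push_back(S);
}
```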
Reviewers: evandro, qcolombet, paquette, MatzeB, arsenm
Reviewed By: qcolombet
Differential Revision: https://reviews.llvm.org/D70565
This patch turns MachineOperandIteratorBase into a regular forward
iterator, which can be used with iterator_range.
It also adds mi_bundle_ops and const_mi_bundle_ops that return iterator
ranges over all operands in a bundle and updates a use of the old
iterator.
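For example, counting the register operands of a whole bundle becomes a plain range-for (spelling from this summary):
```cpp
#include "llvm/CodeGen/MachineInstrBundle.h"
using namespace llvm;

// Counts register operands of MI, including all bundled instructions
// when MI is a bundle header.
static unsigned countBundleRegOps(const MachineInstr &MI) {
  unsigned NumRegOps = 0;
  for (const MachineOperand &MO : const_mi_bundle_ops(MI))
    if (MO.isReg())
      ++NumRegOps;
  return NumRegOps;
}
```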
Reviewers: evandro, t.p.northover, paquette, MatzeB, arsenm
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D70561
Fix assertion error
```
bool llvm::MachineOperand::isRenamable() const: Assertion `Register::isPhysicalRegister(getReg()) && "isRenamable should only be checked on physical registers"' failed.
```
by checking if the register is 0 before invoking `isRenamable`.
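A minimal sketch of the guard (illustrative; assumes a context where register operands are either physical or the null register):
```cpp
#include "llvm/CodeGen/MachineOperand.h"
using namespace llvm;

// Returns true if MO holds a real register that is marked renamable.
// Checking getReg() first keeps the null register away from the assert.
static bool isRenamableRegOp(const MachineOperand &MO) {
  return MO.isReg() && MO.getReg() && MO.isRenamable();
}
```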