We do not need the SmallPtrSet to avoid adding duplicates to
OpsToRename, because we already keep a ValueInfo mapping. If we see an
op for the first time, Infos will be empty and we can also add it to
OpsToRename.
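A minimal sketch of that deduplication pattern, using standard containers and illustrative names rather than the actual PredicateInfo types: membership in the info map is what tells us whether the op was already queued.
```
#include <unordered_map>
#include <vector>

struct OpInfo { /* per-operand renaming info */ };

// Illustrative only: the map doubles as the "already seen" set, so no
// separate SmallPtrSet is needed to keep OpsToRename duplicate-free.
void recordOp(void *Op, std::unordered_map<void *, OpInfo> &Infos,
              std::vector<void *> &OpsToRename) {
  if (Infos.find(Op) == Infos.end()) {
    Infos[Op] = OpInfo{};      // first time we see this op
    OpsToRename.push_back(Op); // queue it exactly once
  }
}
```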
We process operands by visiting BBs depth-first and then iterating over
all instructions & users, so the order should be deterministic.
Therefore we can skip one round of sorting, which was only needed to
guarantee a deterministic order when iterating over the SmallPtrSet.
Reviewers: efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D64816
llvm-svn: 367028
Summary:
- As LCSSA is turned on just before isel, it may create PHIs of the flow,
which are consumed by pseudo structurized CFG instructions. When those
PHIs are eliminated at O0, COPYs may be placed incorrectly, as these
pseudo structurized CFG instructions are treated as part of the MBB prologue.
- Run an extra `unreachable-mbb-elimination` at the end of isel to clean up
PHIs.
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64353
llvm-svn: 367023
... for the vector forms of `{SQ,UQ,}{INC,DEC}P` instructions. Also continue
supporting the existing behaviour of not requiring an explicit size
specifier. The preferred disassembly is *with* the specifier.
This is implemented by redefining instruction forms to require vector predicates
with explicit size and adding aliases, which allow a predicate with no size.
Differential Revision: https://reviews.llvm.org/D65145
llvm-svn: 367019
trunc (load X) --> load (bitcast X to narrow type)
We have this transform in DAGCombiner::ReduceLoadWidth(), but the truncated
load pattern can interfere with other instcombine transforms, so I'd like to
allow the fold sooner.
Example:
https://bugs.llvm.org/show_bug.cgi?id=16739
...in that report, we have bitcasts bracketing these ops, so those could get
eliminated too.
We've generally ruled out widening of loads early in IR ( LoadCombine -
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105291.html ), but
that reasoning may not apply to narrowing if we can preserve information
such as the dereferenceable range.
Differential Revision: https://reviews.llvm.org/D64432
llvm-svn: 367011
We should only zap returns in functions where all live users have a
replaceable value (i.e. are not overdefined). Unused return values should be
undefined.
This should make it easier to detect bugs like in PR42738.
Alternatively we could bail out of zapping the function returns, but I
think it would be better to address those divergences between function
and call-site values where they are actually caused.
Reviewers: davide, efriedma
Reviewed By: davide, efriedma
Differential Revision: https://reviews.llvm.org/D65222
llvm-svn: 366998
This refactors the boolean 'OptForSize' that was passed around in a lot of
places. It controlled folding of the tail loop (the scalar epilogue) into the
main loop, but code size may not be the only reason to do this. Thus, this is a
first step towards generalising the concept of tail-loop folding: OptForSize has
been renamed and now uses an enum ScalarEpilogueStatus that records how the
epilogue should be lowered.
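A rough sketch of what such an enum might look like; the value names are illustrative, not taken from the patch:
```
// Illustrative only: the point is that "fold the tail" is no longer tied to
// a single code-size boolean; each value records why/how the scalar epilogue
// is handled.
enum ScalarEpilogueStatus {
  SE_Allowed,               // emit a scalar remainder loop as usual
  SE_NotAllowedOptSize,     // omit it to save code size
  SE_NotNeededUsePredicate  // fold the tail into the main loop via predication
};
```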
This will be followed up by D65197, that picks up the predicate loop hint and
performs the tail-loop folding.
Differential Revision: https://reviews.llvm.org/D64916
llvm-svn: 366993
Summary:
In the post-RA phase, we often have to find the most recent definition
of a register. This patch adds getDefMIPostRA so that other methods
can use it rather than reimplementing the search repeatedly.
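A hedged sketch of what such a helper does; the actual name, signature and location in the patch may differ:
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Walk backwards from MI and return the closest preceding instruction in the
// same block that modifies Reg, or nullptr if there is none.
static MachineInstr *findRecentDef(unsigned Reg, MachineInstr &MI,
                                   const TargetRegisterInfo *TRI) {
  bool PassedMI = false;
  for (MachineInstr &Cur : llvm::reverse(*MI.getParent())) {
    if (!PassedMI) {            // skip MI itself and everything after it
      PassedMI = (&Cur == &MI);
      continue;
    }
    if (Cur.modifiesRegister(Reg, TRI))
      return &Cur;
  }
  return nullptr;
}
```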
Differential Revision: https://reviews.llvm.org/D65131
llvm-svn: 366990
Summary:
Add a new method which tries to compute the target address referenced by an operand.
This patch supports x86_64 RIP-relative addressing for now.
This is needed to print referenced symbol names in llvm-objdump.
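As a hedged illustration of the x86_64 RIP-relative case (the numbers are made up): the referenced address is the address of the next instruction plus the signed 32-bit displacement encoded in the memory operand.
```
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t InstAddr = 0x401000; // address of the instruction
  uint64_t InstSize = 7;        // e.g. a RIP-relative lea is 7 bytes
  int32_t Disp = 0x2004;        // displacement from the memory operand
  uint64_t Target = InstAddr + InstSize + (int64_t)Disp;
  std::printf("0x%llx\n", (unsigned long long)Target); // 0x40300b
}
```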
Reviewers: andreadb, MaskRay, grosbach, jgalenson, craig.topper
Reviewed By: MaskRay, craig.topper
Subscribers: bcain, rupprecht, jhenderson, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63847
llvm-svn: 366987
Change MAXSECTALIGN to a public MaxSectionAlignment in MachOUniversal.
Will be used in a follow-up.
Patch by Anusha Basana <anusha.basana@gmail.com>
Differential Revision: https://reviews.llvm.org/D65117
llvm-svn: 366969
This patch changes the coding style of the FileCollector from the LLDB
to the LLVM coding style. Alex recently lifted it into LLVM and I
volunteered to do the conversion.
llvm-svn: 366966
Before, we weren't able to select things like this for G_GEP:
add x0, x8, #8
And instead we'd materialize the 8.
This teaches GISel to do that. It gives considerable code size savings
on 252.eon -- about 4%!
Differential Revision: https://reviews.llvm.org/D65248
llvm-svn: 366959
Throughout the LegalizerInfo we currently assume that the target has NEON and
FP features available. Fixing this will require a refactor of the whole thing,
so until then make sure we fall back.
Works around PR42734
Differential Revision: https://reviews.llvm.org/D65244
llvm-svn: 366957
The file collector class is useful for creating reproducers,
not just for LLDB, but for other tools as well in LLVM/Clang.
Differential Revision: https://reviews.llvm.org/D65237
llvm-svn: 366956
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH
InstCombine does the opposite fold, in the hope that the `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as can be seen from the diffs here, if it didn't get hoisted,
the produced assembly is almost universally worse.
Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant:
there is almost always an "and/test by imm" instruction,
but a "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by a constant but by something else,
the live range of that something else may shrink.
Special care needs to be taken not to disturb the x86 `BT` / hexagon `tstbit`
instruction patterns, and not to get into an endless combine loop.
Reviewers: RKSimon, efriedma, t.p.northover, craig.topper, spatel, arsenm
Reviewed By: spatel
Subscribers: hiraditya, MaskRay, wuzish, xbolva00, nikic, nemanjai, jvesely, wdng, nhaehnle, javed.absar, tpr, kristof.beyls, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62871
llvm-svn: 366955
If we have a G_MUL, and either the LHS or the RHS of that mul is the legal
shift value for a load addressing mode, we can fold it into the load.
This gives some code size savings on some SPEC tests. The best are around 2%
on 300.twolf and 3% on 254.gap.
Differential Revision: https://reviews.llvm.org/D65173
llvm-svn: 366954
For aliases, any expression that lowers at the MC level to global_object or
global_object+constant is valid at the object file level. getBaseObject()
should return a result if the aliasee ends up being of that form even if
the IR used to produce it is somewhat unconventional.
Note that this is different from what stripInBoundsOffsets() and that family
of functions do. Those functions are concerned with semantic properties
of the IR, whereas here we only care about the lowering result.
Therefore reimplement getBaseObject() in a way that matches the lowering
result. This fixes a crash when producing a summary for aliases such as
that in the included test case.
Differential Revision: https://reviews.llvm.org/D65115
llvm-svn: 366952
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.
The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.
Differential Revision: https://reviews.llvm.org/D65167
llvm-svn: 366951
We can treat icmp eq X, MIN_UINT as icmp ule X, MIN_UINT and allow
it to merge with icmp ugt X, C. Similarly for the other constants.
We can do similar for icmp ne X, (U)INT_MIN/MAX in foldAndOfICmps. We already handled UINT_MIN there.
Fixes PR42691.
Differential Revision: https://reviews.llvm.org/D65017
llvm-svn: 366945
r366317 added a legalization for s128 G_ICMP narrow scalar which tried to
hard-code the result type of the new legalized G_SELECT. Change this to instead
use the type of the original G_ICMP result and allow the target to legalize it
later if necessary.
llvm-svn: 366943
To support prefetch mode 3 we need to pad the current
cache line and fill 3 cache lines after it. The current padding
is only sufficient for mode 2.
Differential Revision: https://reviews.llvm.org/D65236
llvm-svn: 366938
This removes the VCEQ/VCNE/VCGE/VCEQZ/etc nodes, using just two, VCMP and
VCMPZ, with an extra operand for the condition code. I believe this will make
some combines simpler, allowing us to just look at these codes and not the
operands. It also helps fill in a missing VCGTUZ MVE selection without adding
extra nodes for it.
Differential Revision: https://reviews.llvm.org/D65072
llvm-svn: 366934
This patch adds support for recognizing cases where a larger vector type is being used to reduce just the elements in the lower subvector:
e.g. <8 x i32> reduction pattern in a <16 x i32> vector:
<4,5,6,7,u,u,u,u,u,u,u,u,u,u,u,u>
<2,3,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
<1,u,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
matchBinOpReduction returns the lower extracted subvector in such cases, assuming isExtractSubvectorCheap accepts the extraction.
I've only enabled it for X86 reduction sums so far. I intend to enable it for the bitop/minmax cases in future patches, and eventually I think it's worth turning it on all the time. This is mainly just a case of ensuring calls to matchBinOpReduction don't make assumptions about the vector width based on the original vector extraction.
Fixes the x86 partial reduction sum cases in PR33758 and PR42023.
Differential Revision: https://reviews.llvm.org/D65047
llvm-svn: 366933
This prevents us from trying to convert an i1 predicate vector to a float, or
vice-versa. Better patterns are possible, which will follow in a subsequent
commit. For now we just expand them.
Differential Revision: https://reviews.llvm.org/D65066
llvm-svn: 366931
Fix an off-by-one error which made us not look at the last element of the
zero vector. This caused a miscompile in 188.ammp.
Differential Revision: https://reviews.llvm.org/D65168
llvm-svn: 366930
MVE VCMP instructions can use a general purpose register as the second operand.
This adds the combines for it, selecting from a compare of a vdup.
Differential Revision: https://reviews.llvm.org/D65061
llvm-svn: 366924
If we are already using the same chain for the old/new memory ops then just return.
Fixes PR42727 which had getLoad() reusing an existing node.
llvm-svn: 366922
This adds a DeMorgan combine for OR's of compares to turn them into AND's,
helping prevent them from going into and out of gpr registers. It also fills in
the VCLE and VCLT nodes that MVE can select, allowing it to invert more
compares.
Differential Revision: https://reviews.llvm.org/D65059
llvm-svn: 366920
Summary:
Move the `-ftime-trace-granularity` option to the frontend options. Without
this patch the option shows up in the help for any tool that links libSupport.
Reviewers: sammccall
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65202
llvm-svn: 366911
Add a number of folds to convert and(vcmp, vcmp) into a single VPT block, where
the second vcmp becomes predicated on the first.
The VCMP; VPST; VCMP will eventually be converted to VPT; VCMP in the
VPTBlockPass.
Differential Revision: https://reviews.llvm.org/D65058
llvm-svn: 366910
Much like integers, this adds MVE floating point compares and select. It
requires a lot more buildvector/shuffle code because we may need to expand the
compares without mve.fp, and requires support for and/or because of the way we
lower llvm condition codes.
Some original code by David Sherwood
Differential Revision: https://reviews.llvm.org/D65054
llvm-svn: 366909
This adds some basic, "worst case" handling for MVE predicate Or/And/Xor. It
does this by going into and out of GPRs, doing the operation on scalars.
Code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65053
llvm-svn: 366907
This change makes sure that llvm does not emit an invalid IT block
by putting the constant pool in the middle of an IT block.
We have code to try to avoid putting a constant island in the middle of an
IT block, but it only works if we see an IT between the one currently
referencing CPE and possible insertion point. If the first instruction
we look at is the VLDRD after the IT, we never see the IT and do not
realize that the instruction doing the load could be in an IT block itself.
Differential Revision: https://reviews.llvm.org/D64621
Change-Id: I24cecb37cded75e8992870bd997f6226853bd920
llvm-svn: 366905
Summary:
SimplifyFPBinOp is a variant of SimplifyBinOp that lets you specify
fast math flags, but the name is misleading because both functions
can simplify both FP and non-FP ops. Instead, overload SimplifyBinOp
so that you can optionally specify fast math flags.
Likewise for SimplifyFPUnOp.
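A sketch of a call site after the change (signatures approximate, not copied from the patch): FP callers pass their fast-math flags to the same SimplifyBinOp entry point instead of a separate SimplifyFPBinOp.
```
#include "llvm/Analysis/InstructionSimplify.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

static Value *trySimplify(Instruction &I, const SimplifyQuery &Q) {
  Value *A = I.getOperand(0), *B = I.getOperand(1);
  if (isa<FPMathOperator>(&I))
    // Overload taking fast-math flags; usable for FP (and non-FP) binops.
    return SimplifyBinOp(I.getOpcode(), A, B, I.getFastMathFlags(), Q);
  return SimplifyBinOp(I.getOpcode(), A, B, Q);
}
```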
Reviewers: spatel
Reviewed By: spatel
Subscribers: xbolva00, cameron.mcinally, eraman, hiraditya, haicheng, zzheng, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64902
llvm-svn: 366902
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch lifts the restriction that a
numeric expression be either a variable definition or a numeric
expression to try to match.
This commit allows a numeric variable to be set to the result of the
evaluation of a numeric expression after it has been matched
successfully. When that happens, the variable can be used on the same
line since its value is known at match time.
It also makes use of this possibility to reuse the parsing code to
parse a command-line definition by crafting a mirror string of the
-D option with the equal sign replaced by a colon sign, e.g. for option
'-D#NUMVAL=10' it creates the string
'-D#NUMVAL=10 (parsed as [[#NUMVAL:10]])' where the numeric expression
is parsed to define NUMVAL. This results in a few tests needing updates
for the location diagnostics on top of the tests for the new feature.
It also enables empty numeric expressions, which match any number without
defining a variable. This is done here rather than in commit #5 of the
patch series because it requires dissociating automatic regex insertion
in RegExStr from variable definition, which would make commit #5 even
bigger than it already is.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60388
> llvm-svn: 366860
llvm-svn: 366897
This adds support code for building and shuffling i1 predicate registers. It
generally uses two basic principles: either converting the predicate into a
scalar (through a PREDICATE_CAST) and doing scalar operations on it there, or
converting the register to a full vector register and back.
Some of the code here is not super efficient, but will hopefully cover most
cases of moving i1 vectors around and can be improved in subsequent patches.
Some code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65052
llvm-svn: 366890
This adds the very basics for MVE vector predication, adding integer VCMP and
VSEL instruction support. This is done through predicate registers (MVT::v16i1,
MVT::v8i1, MVT::v4i1), but otherwise using the same mechanics as NEON to custom
lower setcc's through ARMISD::VCXX nodes (VCEQ, VCGT, VCEQZ, etc).
An extra VCNE was added, as this can be handled sensibly by MVE's expanded
number of VCMP condition codes. (There are also VCLE and VCLT which are added
later).
VPSEL is also added here, simply selecting on the vselect.
Original code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65051
llvm-svn: 366885
While combining two loads into a single load, we often need to
reorder the pointer operands for the new load. This reordering was
broken in the cases where there was a chain of values that built up
the pointer.
Differential Revision: https://reviews.llvm.org/D65193
llvm-svn: 366881
This is a follow up to D64971. While we need to insert the deref after
the offset, it needs to come before the remaining elements in the
original expression since the deref needs to happen before the LLVM
fragment if present.
Differential Revision: https://reviews.llvm.org/D65172
llvm-svn: 366865
Copying them is expensive. This allows the tables to be moved around at
lower cost, and allows a remarks::StringTable to be constructed from
a remarks::ParsedStringTable.
llvm-svn: 366864
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch lifts the restriction that a
numeric expression be either a variable definition or a numeric
expression to try to match.
This commit allows a numeric variable to be set to the result of the
evaluation of a numeric expression after it has been matched
successfully. When that happens, the variable can be used on the same
line since its value is known at match time.
It also makes use of this possibility to reuse the parsing code to
parse a command-line definition by crafting a mirror string of the
-D option with the equal sign replaced by a colon sign, e.g. for option
'-D#NUMVAL=10' it creates the string
'-D#NUMVAL=10 (parsed as [[#NUMVAL:10]])' where the numeric expression
is parsed to define NUMVAL. This results in a few tests needing updates
for the location diagnostics on top of the tests for the new feature.
It also enables empty numeric expressions, which match any number without
defining a variable. This is done here rather than in commit #5 of the
patch series because it requires dissociating automatic regex insertion
in RegExStr from variable definition, which would make commit #5 even
bigger than it already is.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60388
llvm-svn: 366860
We need to be able to load and store s128 for memcpy inlining, where we want to
generate Q register mem ops. Making these legal also requires that we add some
support in other instructions. Regbankselect should also know about these since
they have no GPR register class that can hold them, so they need special
handling to live on the FPR bank.
Differential Revision: https://reviews.llvm.org/D65166
llvm-svn: 366857
This exposes better support for using a string table with a format, through
an actual new remarks::Format called yaml-strtab.
This can now be used with -fsave-optimization-record=yaml-strtab.
llvm-svn: 366849
Currently the PowerPC backend emits code like this:
r3 = li 0
std r3, 264(r1)
r3 = li 0
std r3, 272(r1)
This patch fixes that and other cases where a register already contains the value that is being loaded, so we will get:
r3 = li 0
std r3, 264(r1)
std r3, 272(r1)
Differential Revision: https://reviews.llvm.org/D64220
llvm-svn: 366840
LegalizeDAG tries to legalize the DAG by legalizing nodes before
their operands.
If we create a new node, we end up legalizing it after its operands.
This prevents some of the optimizations that can be done when the
operand is a build_vector since the build_vector will have been
legalized to something else.
Differential Revision: https://reviews.llvm.org/D65132
llvm-svn: 366835
The original code failed to account for the fact that one exit can have a pointer exit count without all of them having pointer exit counts. This could cause two separate bugs:
1) We might exit the loop early, and leave optimizations undone. This is what triggered the assertion failure in the reported test case.
2) We might optimize one exit, then exit without indicating a change. This could result in an analysis invalidation bug if no other transform is done by the rest of indvars.
Note that the pointer exit counts are a really fragile concept. They show up only when we have a pointer IV w/o a datalayout to provide their size. It's really questionable to me whether the complexity implied is worth it.
llvm-svn: 366829
Summary:
Allow IntToPtrInst to carry !dereferenceable metadata tag.
This is valid since !dereferenceable can only be applied to
pointer-typed values.
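A hedged sketch of attaching the metadata in C++; the byte count and helper name are made up for illustration:
```
#include <cstdint>
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Type.h"
using namespace llvm;

// Mark an inttoptr result as dereferenceable for NBytes bytes, mirroring what
// !dereferenceable on a load means for the loaded pointer.
static void addDereferenceable(IntToPtrInst *ITP, uint64_t NBytes) {
  LLVMContext &Ctx = ITP->getContext();
  Metadata *Bytes = ConstantAsMetadata::get(
      ConstantInt::get(Type::getInt64Ty(Ctx), NBytes));
  ITP->setMetadata(LLVMContext::MD_dereferenceable, MDNode::get(Ctx, Bytes));
}
```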
Change-Id: If8a6e3c616f073d51eaff52ab74535c29ed497b4
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64954
llvm-svn: 366826
When we select the XRO variants of loads, we can pull in very specific shifts
(of the size of an element). E.g.
```
ldr x1, [x2, x3, lsl #3]
```
This teaches GISel to handle these when they're coming from shifts
specifically.
This adds a new addressing mode function, `selectAddrModeShiftedExtendXReg`
which recognizes this pattern.
This also packs this up with `selectAddrModeRegisterOffset` into
`selectAddrModeXRO`. This is intended to be equivalent to `selectAddrModeXRO`
in AArch64ISelDAGtoDAG.
Also update load-addressing-modes to show that all of the cases here work.
Differential Revision: https://reviews.llvm.org/D65119
llvm-svn: 366819
If all the demanded elts are from one operand and are inline, then we can use the operand directly.
The changes are mainly from SSE41 targets which have blendvpd but not cmpgtq, allowing the v2i64 comparison to be simplified as we only need the signbit from alternate v4i32 elements.
llvm-svn: 366817
llvm-ar outputs a strange error message when handling archives with
members larger than 4GB due to not checking file size when passing the
value as an unsigned 32-bit integer. This overflow issue caused
malformed archives to be created:
https://bugs.llvm.org/show_bug.cgi?id=38058
This change allows for members above 4GB and will error on any member that
is over the format's size limit, a 10-digit decimal integer.
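A small, hedged sketch of the limit check implied here (the helper name is made up): the ar header stores the member size in a 10-character decimal field.
```
#include <cstdint>
#include <string>

// Returns true if Size can be printed into the 10-character decimal size
// field of an ar member header, i.e. Size <= 9999999999.
static bool fitsInArSizeField(uint64_t Size) {
  return std::to_string(Size).size() <= 10;
}
```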
Differential Revision: https://reviews.llvm.org/D65093
llvm-svn: 366813
While lowering test.set.loop.iterations, it wasn't checked how the
brcond was using the result and so the wls could branch to the loop
preheader instead of not entering it. The same was true for
loop.decrement.reg.
So brcond and br_cc are now lowered manually when using the hwloop
intrinsics. During this we now check whether the result has been
negated and whether we're using SETEQ or SETNE and 0 or 1. We can
then figure out which basic block the WLS and LE should be targeting.
Differential Revision: https://reviews.llvm.org/D64616
llvm-svn: 366809
This patch introduces the DAG version of SimplifyMultipleUseDemandedBits, which attempts to peek through ops (mainly and/or/xor so far) that don't contribute to the demanded bits/elts of a node - which means we can do this even in cases where we have multiple uses of an op, which normally requires us to demand all bits/elts. The intention is to remove a similar function - SelectionDAG::GetDemandedBits - once SimplifyMultipleUseDemandedBits has matured.
The InstCombine version of SimplifyMultipleUseDemandedBits can constant fold, which I haven't added here yet, and so far I've only wired this up to some basic binops (and/or/xor/add/sub/mul) to demonstrate its use.
We do see a couple of regressions that need to be addressed:
AMDGPU unsigned dot product codegen retains an AND mask (for ZERO_EXTEND) that it previously removed (but otherwise the dotproduct codegen is a lot better).
X86/AVX2 has poor handling of vector ANY_EXTEND/ANY_EXTEND_VECTOR_INREG - it prematurely gets converted to ZERO_EXTEND_VECTOR_INREG.
The code owners have confirmed it's ok for these cases to be fixed up in future patches.
Differential Revision: https://reviews.llvm.org/D63281
llvm-svn: 366799
cast<CallInst> shouldn't return null, and we dereference the pointer in a lot of other places, causing both MSVC and cppcheck to warn about dereferenced null pointers.
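For context, a small sketch of the LLVM casting idiom this relies on (the function is illustrative, not from the patch): cast<> asserts on failure and never yields null, so null-checking its result only confuses static analysis; dyn_cast<> is the form that can return null.
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

void example(Value *V) {
  // We already know V is a call here, so no null check is needed.
  CallInst *CI = cast<CallInst>(V);
  (void)CI->getNumOperands();

  // If the type were genuinely uncertain, dyn_cast<> would be the right tool.
  if (auto *MaybeCI = dyn_cast<CallInst>(V))
    (void)MaybeCI->getNumOperands();
}
```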
llvm-svn: 366793
Summary:
Deduce the dereferenceable attribute in the Attributor.
The following will be added in a later patch:
* dereferenceable(_or_null)_globally (D61652)
* Deduction based on load instruction (similar to D64258)
Reviewers: jdoerfert, sstefan1
Reviewed By: jdoerfert
Subscribers: hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64876
llvm-svn: 366788
The function was calling getNode() on an SDValue it was about to return, and
the caller turned the result back into an SDValue. So just return the
original SDValue to avoid this.
llvm-svn: 366779
Summary: Adds a binding to the internalize pass that allows the caller to pass a function pointer that acts as the visibility-preservation predicate. Previously, one could only pass an unsigned value (not LLVMBool?) that directed the pass to consider "main" or not.
Reviewers: whitequark, deadalnix, harlanhaskins
Reviewed By: whitequark, harlanhaskins
Subscribers: kren1, hiraditya, llvm-commits, harlanhaskins
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62456
llvm-svn: 366777
Replace a float load/store pair with an integer load/store pair when the value
is only used by the load/store, because float load/store instructions cost more
cycles than integer load/store.
A typical scenario is a call passing more than 13 float arguments: the extra
arguments must be passed on the stack, so we need a load/store pair to do that
memory operation if the variable is a global variable.
Differential Revision: https://reviews.llvm.org/D64195
llvm-svn: 366775
[Attributor] Liveness analysis.
An abstract attribute for liveness analysis, used to indicate which BasicBlocks are dead and can therefore be ignored.
Right now we are only looking at noreturn calls.
Reviewers: jdoerfert, uenoku
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D64162
llvm-svn: 366769
We were silently using the ABI alignment for all of the stores generated for deopt and gc values. We'd gotten the alignment of the stack slot itself properly reduced (via MachineFrameInfo's clamping), but having the MMO on the store incorrect was enough for us to generate an aligned store to an unaligned location.
The simplest fix would have been to just pass the alignment to the helper function, but once we do that, the helper function doesn't really help. So, inline it and directly call the MMO version of DAG.getStore with a properly constructed MMO.
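A hedged sketch of that direct-MMO form of the store; the signatures are approximate, and FI, Val, Chain, etc. stand in for values that are in scope in the real code:
```
#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

static SDValue storeToStackSlot(SelectionDAG &DAG, const SDLoc &DL,
                                SDValue Chain, SDValue Val, SDValue SlotPtr,
                                int FI) {
  MachineFunction &MF = DAG.getMachineFunction();
  MachineFrameInfo &MFI = MF.getFrameInfo();
  // Build the MMO with the slot's actual (possibly reduced) alignment rather
  // than the value type's ABI alignment.
  MachineMemOperand *MMO = MF.getMachineMemOperand(
      MachinePointerInfo::getFixedStack(MF, FI), MachineMemOperand::MOStore,
      MFI.getObjectSize(FI), MFI.getObjectAlignment(FI));
  return DAG.getStore(Chain, DL, Val, SlotPtr, MMO);
}
```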
Note that there's a separate performance possibility here. Even if we *can* realign stacks, we probably don't *want to* if all of the stores are in slowpaths. But that's a later patch, if at all. :)
llvm-svn: 366765
This patch extends the error handling in the debug line parser to get
rid of the existing MD5 assertion. I want to reuse the debug line parser
from LLVM in LLDB where we cannot crash on invalid input.
Differential revision: https://reviews.llvm.org/D64544
llvm-svn: 366762