[Recommitting now that an unrelated assertion in SROA is sorted out]
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
if (a)
return *b += 3;
else
return *b += 4;
=>
%z = load i32, i32* %y
%.sink = select i1 %a, i32 3, i32 4
%b = add i32 %z, %.sink
store i32 %b, i32* %y
ret i32 %b
When this works for switches it'll be even more powerful.
Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.
This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
llvm-svn: 279460
__guard_local is defined as long on OpenBSD. If the source file contains
a definition of __guard_local, its type mismatches the i8 pointer type
used in LLVM. In that case, Module::getOrInsertGlobal() returns a
cast operation instead of a GlobalVariable. Trying to set the
visibility on the cast operation leads to random segfaults (seen when
compiling the OpenBSD kernel, which also runs with stack protection).
In the kernel, the hidden attribute does not matter. For userspace code,
__guard_local is defined as hidden in the startup code. If a program
re-defines __guard_local, the definition from the startup code will
either win or the linker complains about multiple definitions
(depending on whether the re-defined __guard_local is placed in the
common segment or not).
It also matches what gcc on OpenBSD does.
Thanks Stefan Kempf <sisnkemp@gmail.com> for the patch!
Differential Revision: http://reviews.llvm.org/D23674
llvm-svn: 279449
Summary: We can allow sinking if the single user block has only one unique predecessor, regardless of the number of edges. Note that a switch statement with multiple cases can have the same destination.
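As a rough illustration (a hand-written, hypothetical example, not a test from this patch): %v has its single use in %merge, and %merge is reached by two switch edges but has only one unique predecessor, so %v can still be sunk into it.
define i32 @example(i32 %x, i32 %sel) {
entry:
  ; %v's only use is in %merge, so it is a sinking candidate
  %v = add i32 %x, 1
  ; two switch edges lead into %merge, but it still has one unique predecessor
  switch i32 %sel, label %other [
    i32 0, label %merge
    i32 1, label %merge
  ]
merge:
  ret i32 %v
other:
  ret i32 0
}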
Reviewers: mcrosier, majnemer, spatel, reames
Subscribers: reames, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D23722
llvm-svn: 279448
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
if (a)
return *b += 3;
else
return *b += 4;
=>
%z = load i32, i32* %y
%.sink = select i1 %a, i32 3, i32 4
%b = add i32 %z, %.sink
store i32 %b, i32* %y
ret i32 %b
When this works for switches it'll be even more powerful.
Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.
This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
llvm-svn: 279443
Assembler directives .dtprelword, .dtpreldword, .tprelword, and
.tpreldword generate the relocations R_MIPS_TLS_DTPREL32, R_MIPS_TLS_DTPREL64,
R_MIPS_TLS_TPREL32, and R_MIPS_TLS_TPREL64, respectively.
The main motivation for this patch is to be able to write test cases
for checking correctness of the LLD linker's behaviour.
Differential Revision: https://reviews.llvm.org/D23669
llvm-svn: 279439
It used to be non-const for the sole purpose of custom handling of
common symbols. That handling has now moved into the regular LTO path,
so we can constify the callback.
llvm-svn: 279438
This change caused a performance regression on MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some other benchmarks.
See https://reviews.llvm.org/D18777 for details.
llvm-svn: 279433
This change needs to be reverted in order to revert r278267, which caused a performance regression on MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some other benchmarks.
See comments on https://reviews.llvm.org/D18777 for details.
llvm-svn: 279432
As discussed on PR26491, we are missing the opportunity to make use of the smaller MOVHLPS instruction because we set both arguments of a SHUFPD when using it to lower a single input shuffle.
This patch sets the lowered argument to UNDEF if that shuffle element is undefined. This in turn makes it easier for target shuffle combining to decode UNDEF shuffle elements, allowing combines to MOVHLPS to occur.
A fix to match against MOVHPD stores was necessary as well.
This builds on the improved MOVLHPS/MOVHLPS lowering and memory folding support added in D16956.
Adding similar support for SHUFPS will have to wait until we have better support for target combining of binary shuffles.
Differential Revision: https://reviews.llvm.org/D23027
llvm-svn: 279430
This tries to keep all the ModRM memory and register forms in their own regions of the encodings, hoping to simplify some of the switch statements that operate on these encodings.
llvm-svn: 279422
The gold-plugin was doing this internally, now the API is handling
commons correctly based on the given resolution.
Differential Revision: https://reviews.llvm.org/D23739
llvm-svn: 279417
Summary: r279379 introduced a crash on the ARM 32-bit bot. I suspect this is an alignment issue.
Reviewers: eugenis
Subscribers: llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D23762
llvm-svn: 279413
The callers still have ConstantInt guards, so there is no functional change
intended from this change. But relaxing the callers will allow more folds
for vector types.
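As a hypothetical illustration of what the relaxed callers would eventually allow (not a test from this commit), the vector analogue of a scalar compare-with-constant uses a splat vector constant rather than a ConstantInt, so it is currently rejected by the guards:
%cmp = icmp sgt <2 x i32> %x, <i32 3, i32 3>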
llvm-svn: 279396
In some cases, FastISel was emitting a TEST instruction with a K-register input, which is illegal.
Changed to use KORTEST when dealing with K registers.
Differential Revision: https://reviews.llvm.org/D23163
llvm-svn: 279393
This fixes the crash from PR29072, where the MachineBasicBlock::iterator
wasn't being properly checked against MachineBasicBlock::end() before
iterating. This was another bug exposed by the new
ilist::iterator::operator*() assertion from r279314.
This testcase is poor quality. bugpoint couldn't reduce any further,
and I haven't had time to dig into what's going on so I can't invent a
better one. I didn't even get good CHECK lines in: this is just a
crasher.
I'm committing anyway since this is a real crash with an obvious fix,
but I'll leave PR29072 open and ask an ARM maintainer to help improve
the testcase.
llvm-svn: 279391
Summary:
We can insert a function call instead of multiple store operations.
The current default is blocks larger than 64 bytes.
Changes are hidden behind -asan-experimental-poisoning flag.
PR27453
Differential Revision: https://reviews.llvm.org/D23711
llvm-svn: 279383
Summary:
Callbacks are not being used yet.
PR27453
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23634
llvm-svn: 279380
Summary: Reduce store size to avoid leading and trailing zeros.
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23648
llvm-svn: 279379
Summary:
We are going to combine poisoning of red zones and scope poisoning.
PR27453
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23623
llvm-svn: 279373
The test case included in r279125 exposed existing undefined behavior in the
SLP vectorizer that it did not introduce. This patch reapplies the original
patch, but modifies the test case to avoid hitting the undefined behavior. This
allows us to close PR28330 while keeping the UBSan bot happy. The undefined
behavior the original test uncovered will be addressed in a follow-on patch.
Reference: https://llvm.org/bugs/show_bug.cgi?id=28330
llvm-svn: 279370
unit for use in the PreservedAnalyses set.
This doesn't have any important functional change yet but it cleans
things up and makes the analysis substantially more efficient by
avoiding querying through the type erasure for every analysis.
I also think it makes it much easier to reason about how analyses are
preserved when walking across pass managers and across IR unit
abstractions.
Thanks to Sean and Mehdi both for the comments and suggestions.
Differential Revision: https://reviews.llvm.org/D23691
llvm-svn: 279360
This is a partial enablement (move the ConstantInt guard down) because there are many
different folds here and one of the later ones will require reworking 'isSignBitCheck'.
llvm-svn: 279339
- Always compile print() regardless of LLVM_ENABLE_DUMP. (We usually
only guard dump() functions with that).
- Only show the set properties to reduce output clutter.
- Remove the unused variant that even shows the unset properties.
- Fix comments
llvm-svn: 279338
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which is NodeRef later.
Differential Revision: https://reviews.llvm.org/D23704
Differential Revision: https://reviews.llvm.org/D23705
llvm-svn: 279326
Summary:
This switches us to use a different, more powerful algorithm for address
space inference. I've tested this locally and it seems to work great.
Once we're more confident in it, we can remove the old pass altogether.
Reviewers: jingyue
Subscribers: llvm-commits, tra, jholewinski
Differential Revision: https://reviews.llvm.org/D23694
llvm-svn: 279317
This adds a G_INSERT instruction, which technically makes G_SEQUENCE redundant
(it's equivalent to a G_INSERT into an IMPLICIT_DEF). We'll leave G_SEQUENCE
for now though: it's likely to be far more common as it's a fundamental part of
legalization, so avoiding the mess and bloat of the extra IMPLICIT_DEFs is
probably worthwhile.
llvm-svn: 279306
Summary: This way they can be re-used by target-specific schedulers.
Reviewers: atrick, MatzeB, kparzysz
Subscribers: kparzysz, llvm-commits, MatzeB
Differential Revision: https://reviews.llvm.org/D23678
llvm-svn: 279305
Specifically, this is done near the end of "SimplifyICmpInst" using
computeKnownBits() as the broader solution. There are even vector
tests (yay!) for this in test/Transforms/InstSimplify/compare.ll.
I considered putting an assert here instead of just deleting, but
then we could assert every possible fold in InstSimplify in
InstCombine, so...less is more?
llvm-svn: 279300
They can be deleted or replicated, so the cache may become outdated.
They only need to be visited once during frame lowering, so just scan
the function instead.
llvm-svn: 279297
was done to hopefully appease MSVC.
As an upside, this also implements the suggestion Sanjoy made in code
review, so two for one! =]
I'll be watching the bots to see if there are still issues.
llvm-svn: 279295
First, make sure all types involved are represented, rather than being implicit
from the register width.
Second, canonicalize all types to scalar. These operations just act in bits and
don't care about vectors.
Also standardize spelling of Indices in the MachineIRBuilder (NFC here).
llvm-svn: 279294
Unsigned addition and subtraction can reuse the instructions created to
legalize large width operations (i.e. both produce and consume a carry flag).
Signed operations and multiplies get a dedicated op-with-overflow instruction.
Once this is produced the two values are combined into a struct register (which
will almost always be merged with a corresponding G_EXTRACT as part of
legalization).
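For reference, the IR-level pattern being translated (a hypothetical snippet; the change itself operates while building generic MachineIR): the with-overflow intrinsics already produce a two-element struct, which is what the combined register models.
%res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
%sum = extractvalue { i32, i1 } %res, 0
%ovf = extractvalue { i32, i1 } %res, 1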
llvm-svn: 279278
Repeated inserts into AliasSetTracker have quadratic behavior - inserting a
pointer into AST is linear, since it requires walking over all "may" alias
sets and running an alias check vs. every pointer in the set.
We can avoid this by tracking the total number of pointers in "may" sets,
and when that number exceeds a threshold, declare the tracker "saturated".
This lumps all pointers into a single "may" set that aliases every other
pointer.
(This is a stop-gap solution until we migrate to MemorySSA)
This fixes PR28832.
Differential Revision: https://reviews.llvm.org/D23432
llvm-svn: 279274
This doesn't change test codegen, as we already combine to blend+zero, which is what we lower VZEXT_MOVL to on SSE41+ targets, but it does put us in a better position when we improve shuffling for optsize.
llvm-svn: 279273
The intended transform is:
// Simplify icmp eq (or (ptrtoint P), (ptrtoint Q)), 0
// -> and (icmp eq P, null), (icmp eq Q, null).
P and Q are both pointers, but may have different types. We need
two calls to getNullValue() to make the icmps.
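A hypothetical instance of the pattern, with P and Q having different pointee types:
%pi = ptrtoint i32* %P to i64
%qi = ptrtoint i8* %Q to i64
%or = or i64 %pi, %qi
%c = icmp eq i64 %or, 0
=>
%cp = icmp eq i32* %P, null
%cq = icmp eq i8* %Q, null
%c = and i1 %cp, %cq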
llvm-svn: 279271
CGSCC passes use a WeakVH to track call sites. RAUWing a call within a function
can result in that WeakVH getting confused about whether or not the call
site is still around.
llvm-svn: 279268
Of course, we really need to refactor and fix all of the cmp predicates,
but this one is interesting because without it, we later perform an
information-losing transform of icmp (shl 1, Y), C, and we can't recover
the better fold.
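For reference, the pattern in question has this shape (hypothetical predicate and constants):
%shl = shl i32 1, %Y
%cmp = icmp ugt i32 %shl, 8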
llvm-svn: 279263
In addition, the branch instructions will have proper BB destinations, not offsets, like before.
Patch by Vadzim Dambrouski!
Differential Revision: https://reviews.llvm.org/D20162
llvm-svn: 279242
Improved handling of fma, floating point min/max, additional load/store
instructions for floating point types.
Patch by Jyotsna Verma.
llvm-svn: 279239
solve completely opaque MSVC build errors. It complains about lots of
stuff with this change without giving nearly enough information to even
try to fix it.
llvm-svn: 279231
INSERTPS doesn't fit well with our shuffle mask canonicalization, so we need to attempt matching both the original mask and the commuted mask to make a match more likely.
llvm-svn: 279230
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
if (a)
return *b += 3;
else
return *b += 4;
=>
%z = load i32, i32* %y
%.sink = select i1 %a, i32 3, i32 4
%b = add i32 %z, %.sink
store i32 %b, i32* %y
ret i32 %b
When this works for switches it'll be even more powerful.
llvm-svn: 279229
to run methods, both for transform passes and analysis passes.
This also allows the analysis manager to use a different set of extra
arguments from the pass manager where useful. Consider passes over
analysis produced units of IR like SCCs of the call graph or loops.
Passes of this nature will often want to refer to the analysis result
that was used to compute their IR units (the call graph or LoopInfo).
And for transformations, they may want to communicate special update
information to the outer pass manager. With this change, it becomes
possible to have a run method for a loop pass that looks more like:
PreservedAnalyses run(Loop &L, AnalysisManager<Loop, LoopInfo> &AM,
LoopInfo &LI, LoopUpdateRecord &UR);
And to query the analysis manager like:
AM.getResult<MyLoopAnalysis>(L, LI);
This makes accessing the known-available analyses convenient and clear,
and it makes passing customized data structures around easy.
My initial use case is going to be in updating the pass manager layers
when the analysis units of IR change. But there are more use cases here
such as having a layer that lets inner passes signal whether certain
additional passes should be run because of particular simplifications
made. Two desires for this have come up in the past: triggering
additional optimization after successfully unrolling loops, and
triggering additional inlining after collapsing indirect calls to direct
calls.
Despite adding this layer of generic extensibility, the *only* change to
existing, simple usage is for places where we forward declare the
AnalysisManager template. We really shouldn't be doing this because of
the fragility exposed here, but currently it makes coping with the
legacy PM code easier.
Differential Revision: http://reviews.llvm.org/D21462
llvm-svn: 279227
The heuristic above this code is incredibly suspect, but, disregarding that, it mutates the cast opcode, so we need to check the *mutated* opcode later to see whether we need to emit an AssertSext or AssertZext node.
Fixes PR29041.
llvm-svn: 279223
directly produce the index as the value type result.
This requires making the index movable which is straightforward. It
greatly simplifies things by allowing us to completely avoid the builder
API and the layers of abstraction inherent there. Instead both pass
managers can directly construct these when run by value. They still
won't be constructed truly eagerly thanks to the optional in the legacy
PM. The code that directly builds the index can also just share a direct
function.
A notable change here is that the result type of the analysis for the
new PM is no longer a reference type. This was really problematic when
making changes to how we handle result types to make our interface
requirements *much* more strict and precise. But I think this is an
overall improvement.
Differential Revision: https://reviews.llvm.org/D23701
llvm-svn: 279216
Without the synthesized reference to a symbol in the xray_instr_map,
linker section garbage collection will helpfully remove the whole
xray_instr_map section from the final executable (or archive). This will
cause the runtime to not be able to identify the sleds and hot-patch the
calls/jumps into the runtime trampolines.
This change adds a reference from the text section at the end of the
function to keep around the associated xray_instr_map section as well.
We also make sure that we catch this reference in the test.
Reviewers: chandlerc, echristo, majnemer, mehdi_amini
Subscribers: mehdi_amini, llvm-commits, dberris
Differential Revision: https://reviews.llvm.org/D23398
llvm-svn: 279204
The ppc64 multistage bot fails on this.
This reverts commit r279124.
Also Revert "CodeGen: Add/Factor out LiveRegUnits class; NFCI" because it depends on the previous change
This reverts commit r279171.
llvm-svn: 279199
Patch by William Dillon. Thanks William!
This patch adds support for the R_ARM_REL32 and R_ARM_GOT_PREL ELF ARM
relocations to RuntimeDyld, which should allow JITing of code that
produces these relocations.
No test case: Unfortunately RuntimeDyldELF's GOT building mechanism (which
uses a separate section for GOT entries) isn't compatible with
RuntimeDyldChecker. The correct fix for this is to fix RuntimeDyldELF's GOT
support (it's fundamentally broken at the moment: separate sections aren't
guaranteed to be in range of a GOT entry load), but that's a non-trivial job.
llvm-svn: 279182
Summary: Reduce store size to avoid leading and trailing zeros.
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23648
llvm-svn: 279178
The structs BarrierOp, PrefetchOp, PSBHintOp are in AArch64AsmParser.cpp
(inside anonymous namespace). This diff changes the order of fields and
removes the excessive padding (8 bytes).
Patch by Alexander Shaposhnikov!
llvm-svn: 279173
This is a set of register units intended to track register liveness; it
is similar in spirit to LivePhysRegs.
You can also think of this as the liveness-tracking parts of the
RegisterScavenger factored out into their own class.
This was proposed in http://llvm.org/PR27609
Differential Revision: http://reviews.llvm.org/D21916
llvm-svn: 279171
The following function currently relies on tail-merging for if
conversion to succeed. The common tail of cond_true and cond_false is
extracted, and this then forms a diamond pattern that can be
successfully if converted.
If this block does not get extracted, either because tail-merging is
disabled or the threshold is higher, we should still recognize this
pattern and if-convert it.
Fixed a regression in the original commit. Need to un-reverse branches after
reversing them, or other conversions go awry.
Regression on self-hosting bots with no obvious explanation. Tidied up range
handling to be more obviously correct, but there was no smoking gun.
define i32 @t2(i32 %a, i32 %b) nounwind {
entry:
%tmp1434 = icmp eq i32 %a, %b ; <i1> [#uses=1]
br i1 %tmp1434, label %bb17, label %bb.outer
bb.outer: ; preds = %cond_false, %entry
%b_addr.021.0.ph = phi i32 [ %b, %entry ], [ %tmp10, %cond_false ]
%a_addr.026.0.ph = phi i32 [ %a, %entry ], [ %a_addr.026.0, %cond_false ]
br label %bb
bb: ; preds = %cond_true, %bb.outer
%indvar = phi i32 [ 0, %bb.outer ], [ %indvar.next, %cond_true ]
%tmp. = sub i32 0, %b_addr.021.0.ph
%tmp.40 = mul i32 %indvar, %tmp.
%a_addr.026.0 = add i32 %tmp.40, %a_addr.026.0.ph
%tmp3 = icmp sgt i32 %a_addr.026.0, %b_addr.021.0.ph
br i1 %tmp3, label %cond_true, label %cond_false
cond_true: ; preds = %bb
%tmp7 = sub i32 %a_addr.026.0, %b_addr.021.0.ph
%tmp1437 = icmp eq i32 %tmp7, %b_addr.021.0.ph
%indvar.next = add i32 %indvar, 1
br i1 %tmp1437, label %bb17, label %bb
cond_false: ; preds = %bb
%tmp10 = sub i32 %b_addr.021.0.ph, %a_addr.026.0
%tmp14 = icmp eq i32 %a_addr.026.0, %tmp10
br i1 %tmp14, label %bb17, label %bb.outer
bb17: ; preds = %cond_false, %cond_true, %entry
%a_addr.026.1 = phi i32 [ %a, %entry ], [ %tmp7, %cond_true ], [ %a_addr.026.0, %cond_false ]
ret i32 %a_addr.026.1
}
Without tail-merging or diamond-tail if conversion:
LBB1_1: @ %bb
@ =>This Inner Loop Header: Depth=1
cmp r0, r1
ble LBB1_3
@ BB#2: @ %cond_true
@ in Loop: Header=BB1_1 Depth=1
subs r0, r0, r1
cmp r1, r0
it ne
cmpne r0, r1
bgt LBB1_4
LBB1_3: @ %cond_false
@ in Loop: Header=BB1_1 Depth=1
subs r1, r1, r0
cmp r1, r0
bne LBB1_1
LBB1_4: @ %bb17
bx lr
With diamond-tail if conversion, but without tail-merging:
@ BB#0: @ %entry
cmp r0, r1
it eq
bxeq lr
LBB1_1: @ %bb
@ =>This Inner Loop Header: Depth=1
cmp r0, r1
ite le
suble r1, r1, r0
subgt r0, r0, r1
cmp r1, r0
bne LBB1_1
@ BB#2: @ %bb17
bx lr
llvm-svn: 279168
The cost of predicating a diamond is only the instructions that are not shared
between the two branches. Additionally, if a predicate-clobbering instruction
occurs in the shared portion of the branches (e.g. a conditional move), it may still
be possible to if-convert the sub-CFG. This change handles these two facts by
rescanning the non-shared portion of a diamond sub-cfg to recalculate both the
predication cost and whether both blocks are pred-clobbering.
llvm-svn: 279167
This may affect calculations for thresholds, but is not a significant change
in behavior.
The problem was that an inclusive range must have an additional flag to show
that it is empty, because otherwise begin == end implies that the range has one
element, and it may not be possible to move past the bounds on either side.
llvm-svn: 279166
Summary:
Inline asm memory constraints can have the base or index register be assigned
to %r0 right now. Make sure that we assign only ADDR64 registers to the base
and index.
Reviewers: uweigand
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23367
llvm-svn: 279157
Clean up the existing code by:
1. Renaming variables
2. Adding local variables
3. Making it vector-safe
This is still guarded by a ConstantInt check, so no functional change is intended.
But this should be ready to go: if we move the ConstantInt check down, all of
these folds should do the right thing for vector types.
llvm-svn: 279150
The names of the tablegen defs now match the names of the ISD nodes.
This makes the world a slightly saner place, as previously "fround" matched
ISD::FP_ROUND and not ISD::FROUND.
Differential Revision: https://reviews.llvm.org/D23597
llvm-svn: 279129
We abort building vectorizable trees in some cases (e.g., if the maximum
recursion depth is reached, if the region size is too large, etc.). If this
happens for a reduction, we can be left with a root entry that needs to be
gathered. For these cases, we need to make sure we actually set VectorizedValue to
the resulting vector.
This patch ensures we properly set VectorizedValue, and it also ensures the
insertelement sequence generated for the gathers is inserted at the correct
location.
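For context, a gathered root entry is emitted as an insertelement chain like the following (a generic, hypothetical snippet, not taken from the test case); the fix makes sure this chain becomes the VectorizedValue and is emitted at the correct insertion point:
%g0 = insertelement <2 x i32> undef, i32 %a, i32 0
%g1 = insertelement <2 x i32> %g0, i32 %b, i32 1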
Reference: https://llvm.org/bugs/show_bug.cgi?id=28330
Differential Revision: https://reviews.llvm.org/D23410
llvm-svn: 279125
Re-apply r276044 with off-by-1 instruction fix for the reload placement.
This is a variant of scavengeRegister() that works for
enterBasicBlockEnd()/backward(). The benefit of the backward mode is
that it is not affected by incomplete kill flags.
This patch also changes
PrologEpilogInserter::doScavengeFrameVirtualRegs() to use the register
scavenger in backwards mode.
Differential Revision: http://reviews.llvm.org/D21885
llvm-svn: 279124
This is prep work for allowing the threshold to be different during layout,
and to enforce a single threshold between merging and duplicating during
layout. No observable change intended.
llvm-svn: 279117
When running 'opt -O2 verify-uselistorder-nodbg.lto.bc', there are 33m allocations. 8.2m
come from std::string allocations in Intrinsic::getName(). It turns out this method only
returns a std::string because it needs to handle overloads, but that is not the common case.
This adds an overload of getName which just returns a StringRef when there are no overloads
and so saves on the allocations.
llvm-svn: 279113
Normally, when an AND with a constant is lowered to NILL, the constant value is truncated to 16 bits. However, since r274066, ANDs whose results are used in a shift are caught by a different pattern that does not truncate. The instruction printer expects a 16-bit unsigned immediate operand for NILL, so this results in an abort.
This patch adds code to manually truncate the constant in this situation. The rest of the bits are then set, so we will detect a case for NILL "naturally" rather than using peephole optimizations.
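A hypothetical IR shape that reaches the problematic pattern (an AND with a constant that differs from all-ones only in the low 16 bits, whose result feeds a shift):
%and = and i64 %x, -8
%shl = shl i64 %and, 3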
Differential Revision: http://reviews.llvm.org/D21854
llvm-svn: 279105
Remove an unnecessary round-trip:
iterator => operator->() => getIterator()
In some cases, the iterator is end(), so the dereference of operator->
is invalid (UB).
The testcase only crashes with r278974 (currently reverted to
investigate this), which adds an assertion for invalid dereferences of
ilist nodes.
Fixes PR29035.
llvm-svn: 279104
The WebAssembly spec removed the return value from store instructions, so
remove the associated optimization from LLVM.
This patch leaves the store instruction operands in place for now, so stores
now always write to "$drop"; these will be removed in a separate patch.
llvm-svn: 279100