Fix Load Control Dependence in MemCpy Generation
In memcpy lowering we had missed a dependence from the operation's load to successor operations. This let us construct an initial DAG whose memory dependence was not fully represented in the chain sub-DAG, so that discovering it required looking at the entire DAG; that broke alias analysis by allowing incorrect repositioning of memory operations.

To work around this, r200033 changed DAGCombiner::GatherAllAliases to be conservative whenever such an issue could possibly arise. Unfortunately this check also forbade many non-problematic situations. For example, it is common for incoming-argument lowering to add a non-aliasing load hanging off of EntryNode. If GatherAllAliases then visited EntryNode, it would find that other (unvisited) use of the EntryNode chain and give up entirely.

Furthermore, the check was incomplete: it did not actually detect all such potentially problematic DAG constructions, because GatherAllAliases did not guarantee to visit every chain node up to the root EntryNode. In general that is fine -- giving up early only misses a potential optimization; it does not generate incorrect results. But for this non-chain dependency detection, a load could be attached to a chain node higher up than any that were visited. If that load aliases your store, but the only dependency runs through the value operand of a non-aliasing store, it would have been missed by this code and potentially reordered.

With the dependence added, the check can be removed and alias analysis can be much more aggressive. This fixes a code quality regression from the Consecutive Store Merge cleanup (D14834).

Test change: ppc64-align-long-double.ll may now see its stores serialized in multiple orders.

Differential Revision: http://reviews.llvm.org/D18062

llvm-svn: 265836
commit 66f485f4e2
parent bb2c3e199e
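For illustration only (this snippet is not part of the commit): a hypothetical source-level analogue of the reordering hazard described above. Each copy lowers to load/store pairs, and the loads from the middle buffer must stay ordered before the later stores into it; the missing chain dependence made that ordering invisible to a chain-only alias walk.

// Hypothetical example; the buffer names mirror the %51/%52/%53 pseudo-operands
// used in the DAGCombiner comment that this patch deletes below.
#include <cstring>

void copy_chain(char *dst51, char *mid52, const char *src53) {
  std::memcpy(dst51, mid52, 16); // loads from mid52, stores to dst51
  std::memcpy(mid52, src53, 16); // loads from src53, stores to mid52
  // The second copy's stores to mid52 must not be scheduled ahead of the first
  // copy's loads from mid52. If those loads are reachable only through the
  // value operands of the first copy's stores (not through the chain), a
  // chain-only alias walk can miss the dependence and reorder the copies.
}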
@@ -14710,63 +14710,6 @@ void DAGCombiner::GatherAllAliases(SDNode *N, SDValue OriginalChain,
       break;
     }
   }
-
-  // We need to be careful here to also search for aliases through the
-  // value operand of a store, etc. Consider the following situation:
-  //   Token1 = ...
-  //   L1 = load Token1, %52
-  //   S1 = store Token1, L1, %51
-  //   L2 = load Token1, %52+8
-  //   S2 = store Token1, L2, %51+8
-  //   Token2 = Token(S1, S2)
-  //   L3 = load Token2, %53
-  //   S3 = store Token2, L3, %52
-  //   L4 = load Token2, %53+8
-  //   S4 = store Token2, L4, %52+8
-  // If we search for aliases of S3 (which loads address %52), and we look
-  // only through the chain, then we'll miss the trivial dependence on L1
-  // (which also loads from %52). We then might change all loads and
-  // stores to use Token1 as their chain operand, which could result in
-  // copying %53 into %52 before copying %52 into %51 (which should
-  // happen first).
-  //
-  // The problem is, however, that searching for such data dependencies
-  // can become expensive, and the cost is not directly related to the
-  // chain depth. Instead, we'll rule out such configurations here by
-  // insisting that we've visited all chain users (except for users
-  // of the original chain, which is not necessary). When doing this,
-  // we need to look through nodes we don't care about (otherwise, things
-  // like register copies will interfere with trivial cases).
-
-  SmallVector<const SDNode *, 16> Worklist;
-  for (const SDNode *N : Visited)
-    if (N != OriginalChain.getNode())
-      Worklist.push_back(N);
-
-  while (!Worklist.empty()) {
-    const SDNode *M = Worklist.pop_back_val();
-
-    // We have already visited M, and want to make sure we've visited any uses
-    // of M that we care about. For uses that we've not visited, and don't
-    // care about, queue them to the worklist.
-
-    for (SDNode::use_iterator UI = M->use_begin(),
-         UIE = M->use_end(); UI != UIE; ++UI)
-      if (UI.getUse().getValueType() == MVT::Other &&
-          Visited.insert(*UI).second) {
-        if (isa<MemSDNode>(*UI)) {
-          // We've not visited this use, and we care about it (it could have an
-          // ordering dependency with the original node).
-          Aliases.clear();
-          Aliases.push_back(OriginalChain);
-          return;
-        }
-
-        // We've not visited this use, but we don't care about it. Mark it as
-        // visited and enqueue it to the worklist.
-        Worklist.push_back(*UI);
-      }
-  }
 }

 /// Walk up chain skipping non-aliasing memory nodes, looking for a better chain
@@ -4394,6 +4394,7 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, SDLoc dl,
                              DAG.getMemBasePlusOffset(Src, SrcOff, dl),
                              SrcPtrInfo.getWithOffset(SrcOff), VT, isVol, false,
                              false, MinAlign(SrcAlign, SrcOff));
+      OutChains.push_back(Value.getValue(1));
       Store = DAG.getTruncStore(Chain, dl, Value,
                                 DAG.getMemBasePlusOffset(Dst, DstOff, dl),
                                 DstPtrInfo.getWithOffset(DstOff), VT, isVol,
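A minimal sketch of the pattern this one-line change restores, with abbreviated, illustrative call arguments rather than the exact SelectionDAG overloads: each load in the memcpy expansion produces both a value (result 0) and an output chain (result 1), and recording that chain in OutChains makes the TokenFactor built from OutChains order later memory operations after the loads as well as the stores.

// Sketch only; variable names follow getMemcpyLoadsAndStores, but the argument
// lists are abbreviated and the surrounding context (DAG, Chain, pointers) is
// assumed from the enclosing function.
SmallVector<SDValue, 8> OutChains;
for (unsigned i = 0; i != NumMemOps; ++i) {
  // Result 0 of the load is the loaded value, result 1 is its output chain.
  SDValue Value = DAG.getLoad(VT, dl, Chain, SrcPtr, SrcPtrInfo /*...*/);
  OutChains.push_back(Value.getValue(1)); // the previously missing dependence
  SDValue Store = DAG.getStore(Chain, dl, Value, DstPtr, DstPtrInfo /*...*/);
  OutChains.push_back(Store);
}
// The combined token becomes the memcpy's output chain, so anything chained
// after it is ordered after every load and store emitted above.
SDValue Token = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, OutChains);

With the loads' output chains folded into that token, a purely chain-based alias walk already sees the dependence, which is what allows the conservative check deleted above to go away.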
@@ -18,17 +18,17 @@ entry:
   ret ppc_fp128 %0
 }

-; CHECK: std 6, 72(1)
-; CHECK: std 5, 64(1)
-; CHECK: std 4, 56(1)
-; CHECK: std 3, 48(1)
+; CHECK-DAG: std 6, 72(1)
+; CHECK-DAG: std 5, 64(1)
+; CHECK-DAG: std 4, 56(1)
+; CHECK-DAG: std 3, 48(1)
 ; CHECK: lfd 1, 64(1)
 ; CHECK: lfd 2, 72(1)

-; CHECK-VSX: std 6, 72(1)
-; CHECK-VSX: std 5, 64(1)
-; CHECK-VSX: std 4, 56(1)
-; CHECK-VSX: std 3, 48(1)
+; CHECK-VSX-DAG: std 6, 72(1)
+; CHECK-VSX-DAG: std 5, 64(1)
+; CHECK-VSX-DAG: std 4, 56(1)
+; CHECK-VSX-DAG: std 3, 48(1)
 ; CHECK-VSX: li 3, 16
 ; CHECK-VSX: addi 4, 1, 48
 ; CHECK-VSX: lxsdx 1, 4, 3