; RUN: llc < %s -mtriple=armv7-apple-ios -O0 | FileCheck %s -check-prefix=NO-REALIGN
; RUN: llc < %s -mtriple=armv7-apple-ios -O0 | FileCheck %s -check-prefix=REALIGN
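; Both RUN lines deliberately use the same llc invocation: whether the stack
; gets realigned is decided per function, by the "no-realign-stack" attribute
; on test1 (test2 below carries no such attribute and so may realign).
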
; rdar://12713765
; When realign-stack is set to false, make sure we are not creating stack
; objects that are assumed to be 64-byte aligned.
@T3_retval = common global <16 x float> zeroinitializer, align 16
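; The vld1.32/vst1.32 checks below expect writeback ("!") forms: the VLD1/VST1
; base-update combine folds the first load/store-plus-add pair of the v16f32
; copy into a single post-incrementing instruction, while the remaining
; accesses keep explicit "add/orr base, #offset" addressing (see r229932).
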
define void @test1(<16 x float>* noalias sret %agg.result) nounwind ssp "no-realign-stack" {
entry:
; NO-REALIGN-LABEL: test1
; NO-REALIGN: mov r[[R2:[0-9]+]], r[[R1:[0-9]+]]
; NO-REALIGN: vld1.32 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]!
; NO-REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R1]], #32
; NO-REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R1]], #48
; NO-REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R1:[0-9]+]], #48
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R1]], #32
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: vst1.32 {{{d[0-9]+, d[0-9]+}}}, [r[[R1]]:128]!
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R1]]:128]
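; The second copy is written out through the sret pointer %agg.result, which
; is passed in r0; the r[[R0:0]] pattern pins the FileCheck variable to r0.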
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R0:0]], #48
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: add r[[R2:[0-9]+]], r[[R0]], #32
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; NO-REALIGN: vst1.32 {{{d[0-9]+, d[0-9]+}}}, [r[[R0]]:128]!
; NO-REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R0]]:128]
%retval = alloca <16 x float>, align 16
%0 = load <16 x float>, <16 x float>* @T3_retval, align 16
store <16 x float> %0, <16 x float>* %retval
%1 = load <16 x float>, <16 x float>* %retval
store <16 x float> %1, <16 x float>* %agg.result, align 16
ret void
}
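; test2 is test1 without the "no-realign-stack" attribute: the compiler may
; now realign sp ("bfc sp, #0, #6" clears the low six bits, i.e. 64-byte
; alignment), and addresses into the realigned object can be formed with
; "orr" rather than "add", since the base's low bits are known to be zero.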
define void @test2(<16 x float>* noalias sret %agg.result) nounwind ssp {
entry:
; REALIGN-LABEL: test2
; REALIGN: bfc sp, #0, #6
; REALIGN: mov r[[R2:[0-9]+]], r[[R1:[0-9]+]]
; REALIGN: vld1.32 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]!
; REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: add r[[R2:[0-9]+]], r[[R1]], #32
; REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: add r[[R2:[0-9]+]], r[[R1]], #48
; REALIGN: vld1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: orr r[[R2:[0-9]+]], r[[R1:[0-9]+]], #48
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: orr r[[R2:[0-9]+]], r[[R1]], #32
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: orr r[[R2:[0-9]+]], r[[R1]], #16
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R2]]:128]
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R1]]:128]
; REALIGN: add r[[R1:[0-9]+]], r[[R0:0]], #48
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R1]]:128]
; REALIGN: add r[[R1:[0-9]+]], r[[R0]], #32
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R1]]:128]
; REALIGN: vst1.32 {{{d[0-9]+, d[0-9]+}}}, [r[[R0]]:128]!
; REALIGN: vst1.64 {{{d[0-9]+, d[0-9]+}}}, [r[[R0]]:128]
%retval = alloca <16 x float>, align 16
%0 = load <16 x float>, <16 x float>* @T3_retval, align 16
store <16 x float> %0, <16 x float>* %retval
%1 = load <16 x float>, <16 x float>* %retval
store <16 x float> %1, <16 x float>* %agg.result, align 16
ret void
}