Commit b74c6d2c9d
In D74183 clang started emitting alignment for sret parameters unconditionally. This caused a 1.5% compile-time regression on tramp3d-v4. The reason is that we now generate many instances of IR like

    %ptrint = ptrtoint %class.GuardLayers* %guards_m to i64
    %maskedptr = and i64 %ptrint, 3
    %maskcond = icmp eq i64 %maskedptr, 0
    tail call void @llvm.assume(i1 %maskcond)

to preserve the alignment information during inlining.

Based on IR analysis, these assumptions also regress optimization. The attached phase-ordering test case illustrates two issues: one is instruction-count-based optimization heuristics, which are skewed by the four additional instructions of the assumption; the other is the blocking of SROA due to ptrtoint casts (PR45763).

We already encountered the same problem in Rust, where we (unlike Clang) generally prefer to emit alignment information absolutely everywhere it is available. We were only able to do this after hardcoding -preserve-alignment-assumptions-during-inlining=false, because we were seeing significant optimization and compile-time regressions otherwise.

This patch disables -preserve-alignment-assumptions-during-inlining by default, because we should not be punishing people for adding more alignment annotations. Once the assume-bundle work shakes out and we can represent (and use) alignment assumptions using assume bundles, it should be possible to re-enable this with reduced overhead.

Differential Revision: https://reviews.llvm.org/D76886
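For context, here is a minimal sketch (not taken from this patch; the function and values are hypothetical) of the operand-bundle form that the last paragraph anticipates. The alignment fact rides on the assume call directly, so no ptrtoint/and/icmp chain is materialized to inflate instruction counts or block SROA:

    ; Hypothetical example: an alignment assumption expressed as an assume
    ; operand bundle instead of a four-instruction compare-and-assume sequence.
    define void @use(i8* %p) {
      ; Asserts that %p is 4-byte aligned without performing any integer
      ; arithmetic on the pointer itself.
      call void @llvm.assume(i1 true) [ "align"(i8* %p, i64 4) ]
      ret void
    }
    declare void @llvm.assume(i1)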
Directory listing:

- X86/
- 2010-03-22-empty-baseclass.ll
- PR6627.ll
- basic.ll
- bitfield-bittests.ll
- gdce.ll
- globalaa-retained.ll
- inlining-alignment-assumptions.ll
- lifetime-sanitizer.ll
- min-max-abs-cse.ll
- minmax.ll
- reassociate-after-unroll.ll
- rotate.ll
- scev-custom-dl.ll
- scev.ll
- simplifycfg-options.ll
- two-shifts-by-sext.ll
- unsigned-multiply-overflow-check.ll
- vector-trunc.ll