forked from OSchip/llvm-project
commit 0828699488
This is causing compilation timeouts on code with long sequences of local values and calls (i.e. foo(1); foo(2); foo(3); ...). It turns out that code coverage instrumentation is a great way to create sequences like this, which is how our users ran into the issue in practice.

Intel has a tool that detects these kinds of non-linear compile-time issues, and Andy Kaylor reported it as PR37010.

The current sinking code scans the whole basic block once per local value sink, which happens before emitting each call. In theory, local values should only be introduced to be used by instructions between the current flush point and the last flush point, so we should only need to scan those instructions.

llvm-svn: 329822
- InlinedFnLocalVar.ll
- delay-slot.ll
- dsr-fixed-objects.ll
- dsr-non-fixed-objects.ll
- dwarfdump-tls.ll
- fn-call-line.ll
- lit.local.cfg
- processes-relocations.ll
- prologue_end.ll
- tls.ll