This is an alternative to the ImmutableMapRef interface: the factory should
still canonicalize by default, but in certain cases an improvement can be
made by delaying the canonicalization.
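A minimal sketch of the pattern, with invented class and method names (this is
not the actual LLVM ADT API): mutations go through the factory without
canonicalizing, and the caller canonicalizes once when it is finished.

  #include <iostream>
  #include <map>
  #include <memory>
  #include <string>

  // Hypothetical factory whose maps are only uniqued on demand, instead of
  // after every single mutation.
  class DelayedMapFactory {
  public:
    using MapTy = std::map<std::string, int>;

    // Mutations operate on a plain, non-canonical copy; cheap to repeat.
    MapTy add(MapTy M, const std::string &K, int V) {
      M[K] = V;
      return M;
    }

    // Canonicalization happens once, at the end: identical contents collapse
    // to a single shared representation.
    std::shared_ptr<const MapTy> canonicalize(const MapTy &M) {
      std::string Key;
      for (const auto &KV : M)
        Key += KV.first + "=" + std::to_string(KV.second) + ";";
      auto &Slot = Canonical[Key];
      if (!Slot)
        Slot = std::make_shared<const MapTy>(M);
      return Slot;
    }

  private:
    std::map<std::string, std::shared_ptr<const MapTy>> Canonical;
  };

  int main() {
    DelayedMapFactory F;
    DelayedMapFactory::MapTy M;
    M = F.add(M, "x", 1);
    M = F.add(M, "y", 2);        // many cheap updates...
    auto C = F.canonicalize(M);  // ...one canonicalization at the end
    std::cout << C->size() << "\n";
  }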
llvm-svn: 169532
Previously we made three passes over the set of dead symbols, and removed
them from the state /twice/. Now we combine the autorelease pass and the
symbol death pass, and only have to remove the bindings for the symbols
that leaked.
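Roughly, the combined pass has the shape sketched below; the symbol and state
types are invented stand-ins, not the analyzer's real classes.

  #include <iostream>
  #include <map>
  #include <set>
  #include <vector>

  using Symbol = int;
  struct State {
    std::map<Symbol, int> RefBindings;  // per-symbol reference-count bindings
    std::set<Symbol> Autoreleased;      // symbols with pending autoreleases
  };

  // One combined pass over the dead symbols: handle autoreleases and symbol
  // death together, and only strip bindings for the symbols that leaked.
  std::vector<Symbol> processDeadSymbols(State &S, const std::set<Symbol> &Dead) {
    std::vector<Symbol> Leaked;
    for (Symbol Sym : Dead) {
      auto It = S.RefBindings.find(Sym);
      if (It == S.RefBindings.end())
        continue;
      int Count = It->second;
      if (S.Autoreleased.erase(Sym))  // fold in the old "autorelease pass"
        --Count;
      if (Count > 0) {                // still retained at death => leaked
        Leaked.push_back(Sym);
        S.RefBindings.erase(It);      // only leaked symbols need removal here
      }
    }
    return Leaked;
  }

  int main() {
    State S;
    S.RefBindings = {{1, 1}, {2, 1}};
    S.Autoreleased = {2};
    std::cout << "leaked: " << processDeadSymbols(S, {1, 2}).size() << "\n";
  }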
llvm-svn: 169527
Previously we would search for the last statement, then back up to the
entrance of the block that contained that statement. Now, while we're
scanning for the statement, we just keep track of which blocks are being
exited (in reverse order).
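A loose sketch of the single-scan idea (the element and block types here are
simplified stand-ins for the real CFG classes): while walking backwards
looking for the last statement, record every block boundary crossed, so the
exited blocks fall out in reverse order without a second search.

  #include <iostream>
  #include <string>
  #include <vector>

  // A linear trace of CFG elements: each one is either a statement or a
  // marker that some block's scope is being left.
  struct Element {
    enum Kind { Stmt, BlockExit } K;
    std::string Name;  // statement text, or the label of the exited block
  };

  // Single backwards scan: stop at the last statement, collecting every
  // block exit crossed on the way (already in reverse order).
  std::string findLastStmt(const std::vector<Element> &Elts,
                           std::vector<std::string> &ExitedBlocks) {
    for (auto It = Elts.rbegin(); It != Elts.rend(); ++It) {
      if (It->K == Element::Stmt)
        return It->Name;
      ExitedBlocks.push_back(It->Name);
    }
    return "<none>";
  }

  int main() {
    std::vector<Element> Trace = {
        {Element::Stmt, "x = 1"},
        {Element::BlockExit, "inner"},
        {Element::BlockExit, "outer"},
    };
    std::vector<std::string> Exited;
    std::cout << findLastStmt(Trace, Exited) << ", " << Exited.size()
              << " blocks exited\n";
  }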
llvm-svn: 169526
This doesn't seem to make much of a difference in practice, but it does
have the potential to avoid a trip through the constraint manager.
llvm-svn: 169524
Whenever we touch a single bindings cluster multiple times, we can delay
canonicalizing it until the final access. This has some interesting
implications, in particular that we shouldn't remove an /empty/ cluster
from the top-level map until canonicalization.
This is good for roughly a 2% speedup on the test case in
<rdar://problem/12810842>.
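A rough sketch of the batching pattern with invented store/cluster types (the
real structures are more involved): repeated bindings to the same cluster
mutate a working copy, an emptied cluster deliberately stays in the top-level
map, and everything is canonicalized exactly once at the end.

  #include <iostream>
  #include <map>
  #include <string>

  using Cluster = std::map<int, int>;            // offset -> bound value
  using Store = std::map<std::string, Cluster>;  // region -> cluster

  class StoreBuilder {
    Store S;
  public:
    explicit StoreBuilder(Store Initial) : S(std::move(Initial)) {}

    // Repeated touches of the same cluster just mutate the working copy; no
    // per-operation canonicalization.
    void bind(const std::string &Region, int Offset, int Value) {
      S[Region][Offset] = Value;
    }

    void kill(const std::string &Region, int Offset) {
      // Even if the cluster becomes empty we keep it in the top-level map;
      // it is only dropped during the final canonicalization.
      S[Region].erase(Offset);
    }

    // Canonicalize once, at the final access: empty clusters go away here.
    Store finish() {
      for (auto It = S.begin(); It != S.end();) {
        if (It->second.empty())
          It = S.erase(It);
        else
          ++It;
      }
      return S;
    }
  };

  int main() {
    StoreBuilder B(Store{});
    B.bind("x", 0, 1);
    B.bind("x", 4, 2);  // second touch of the same cluster
    B.kill("x", 0);
    B.kill("x", 4);     // cluster now empty, but still present
    std::cout << B.finish().count("x") << "\n";  // 0: removed at the end
  }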
llvm-svn: 169523
This feature was probably intended to improve diagnostics, but it is currently
only used when dumping the Environment. It shows what location a given value
was loaded from, e.g. when evaluating an LValueToRValue cast.
llvm-svn: 169522
ToolChains.cpp
This is in anticipation of forthcoming library path changes.
Also ...
- Fixes some inconsistencies in how the arch is passed to tools.
- Adds test cases for various forms of arch flags.
llvm-svn: 169505
paths
- Inherit from Linux rather than ToolChain
- Override AddClangSystemIncludeArgs and AddClangCXXStdlibIncludeArgs
  to properly set include paths (a rough sketch of the pattern follows below).
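The sketch below uses a hypothetical toolchain class and deliberately
simplified hook signatures (the real driver hooks take argument lists rather
than plain vectors, and the paths are made up); it only shows the shape of
inheriting from Linux and overriding the two include-path hooks.

  #include <iostream>
  #include <string>
  #include <vector>

  // Simplified stand-in for the Linux toolchain's default behaviour.
  struct Linux {
    virtual ~Linux() = default;
    virtual void AddClangSystemIncludeArgs(std::vector<std::string> &CC1Args) const {
      CC1Args.push_back("-internal-isystem");
      CC1Args.push_back("/usr/include");
    }
    virtual void AddClangCXXStdlibIncludeArgs(std::vector<std::string> &CC1Args) const {
      CC1Args.push_back("-internal-isystem");
      CC1Args.push_back("/usr/include/c++");
    }
  };

  // Inherit from Linux rather than the base ToolChain and override only the
  // include-path hooks so the SDK's own headers win.
  struct MyToolChain : Linux {
    std::string SDKRoot = "/opt/sdk";  // hypothetical install location
    void AddClangSystemIncludeArgs(std::vector<std::string> &CC1Args) const override {
      CC1Args.push_back("-internal-isystem");
      CC1Args.push_back(SDKRoot + "/include");
    }
    void AddClangCXXStdlibIncludeArgs(std::vector<std::string> &CC1Args) const override {
      CC1Args.push_back("-internal-isystem");
      CC1Args.push_back(SDKRoot + "/include/c++/v1");
    }
  };

  int main() {
    MyToolChain TC;
    std::vector<std::string> Args;
    TC.AddClangSystemIncludeArgs(Args);
    TC.AddClangCXXStdlibIncludeArgs(Args);
    for (const auto &A : Args)
      std::cout << A << "\n";
  }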
llvm-svn: 169495
Instead of unconditionally storing origin with every application store,
only do this when the shadow of the stored value is != 0.
This change also delays instrumentation of stores until after the walk over
the function's instructions, because adding new basic blocks confuses
InstVisitor.
We only keep one origin value per 4 bytes of application memory. This change
fixes a bug where a store of a single clean byte wiped the origin for the
whole 4-byte area.
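Conceptually, the instrumented code around each application store now looks
something like the sketch below; this is illustrative C++, not the IR the
pass emits, and the shadow/origin arrays are placeholders for the real shadow
memory layout.

  #include <cstdint>
  #include <iostream>

  static uint8_t  Shadow[64];  // one shadow byte per application byte
  static uint32_t Origin[16];  // one origin value per 4 application bytes

  void storeWithOrigin(unsigned Addr, uint8_t Value, uint8_t ValueShadow,
                       uint32_t ValueOrigin, uint8_t *AppMem) {
    AppMem[Addr] = Value;        // the application store itself
    Shadow[Addr] = ValueShadow;  // shadow is always propagated
    // The origin is written only when the stored value's shadow is != 0, so a
    // clean store no longer wipes the origin of its 4-byte neighbourhood.
    if (ValueShadow != 0)
      Origin[Addr / 4] = ValueOrigin;
  }

  int main() {
    uint8_t App[64] = {};
    Origin[0] = 0xAAAA;  // origin of earlier poisoned data in bytes 0..3
    storeWithOrigin(1, 7, /*ValueShadow=*/0, /*ValueOrigin=*/0xBBBB, App);
    std::cout << std::hex << Origin[0] << "\n";  // still aaaa: not wiped
  }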
Since stores of uninitialized values are relatively uncommon, this change
improves the performance of track-origins mode by a median of 5%, and by up
to 47% on SPEC benchmarks.
llvm-svn: 169490
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.
The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
This simplifies the code significantly, but relies on LLVM to lower these
integer accesses intelligently.
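As an illustration of the access pattern (hand-written C++ with a made-up
field layout, rather than the IR Clang actually emits): the whole storage
unit behind the bitfield is loaded as one integer, and shifts and masks
recover the individual field.

  #include <cstdint>
  #include <iostream>

  // Hypothetical layout: field 'b' occupies bits [3, 9) of a 32-bit storage
  // unit shared with its neighbouring bitfields.
  uint32_t loadB(const uint32_t *StorageUnit) {
    uint32_t Bits = *StorageUnit;          // one "lump" load of all the bits
    return (Bits >> 3) & ((1u << 6) - 1);  // math to extract just this field
  }

  void storeB(uint32_t *StorageUnit, uint32_t V) {
    uint32_t Mask = ((1u << 6) - 1) << 3;
    uint32_t Bits = *StorageUnit;               // read-modify-write of the
    Bits = (Bits & ~Mask) | ((V << 3) & Mask);  // whole storage unit
    *StorageUnit = Bits;
  }

  int main() {
    uint32_t Unit = 0;
    storeB(&Unit, 42);
    std::cout << loadB(&Unit) << "\n";  // 42
  }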
I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.
llvm-svn: 169489
Some languages, e.g. Ada and Pascal, allow you to specify that the array bounds
are different from the default (1 in these cases). If we have a lower bound
that's non-default, then we emit the lower bound. We also calculate the correct
upper bound in those cases.
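For instance, with a hypothetical Ada-style declaration over the range
5 .. 10 (the arithmetic spelled out here in C++ for illustration): the lower
bound of 5 is non-default and must be emitted, and the element count follows
from both bounds.

  #include <cstdint>
  #include <iostream>

  int main() {
    // Hypothetical "array (5 .. 10)": the default lower bound for Ada/Pascal
    // would be 1, so the bound of 5 has to be described explicitly.
    int64_t LowerBound = 5;
    int64_t UpperBound = 10;
    int64_t Count = UpperBound - LowerBound + 1;  // 6 elements
    std::cout << "lower=" << LowerBound << " upper=" << UpperBound
              << " count=" << Count << "\n";
  }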
llvm-svn: 169484