Currently, a load from an alloca that is used in a single block and is not preceded
by a store is replaced by undef. This is not always correct if the single block is
inside a loop.
Fix the logic so that:
1) If there are no stores in the block, replace the load with an undef, as before.
2) If there is a store (regardless of where it is in the block w.r.t. the load), bail
out, and let the rest of mem2reg handle this alloca.
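For illustration, a hypothetical source-level reduction of case 2 (not the original reproducer):

    // 'x' lowers to an alloca whose loads and stores all sit in a single
    // block, the loop body. The load is not preceded by a store *within
    // that block*, yet on every iteration after the first it observes the
    // store from the previous iteration, so folding it to undef would be
    // wrong.
    int f(int n) {
      int x; // uninitialized; the first-iteration load is indeterminate
      int v = 0;
      for (int i = 0; i < n; ++i) {
        v = x; // the load: no earlier store in this block...
        x = i; // ...but this store reaches the load on the next iteration
      }
      return v;
    }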
Patch by: gil.rapaport@intel.com
Differential Revision: http://reviews.llvm.org/D11355
llvm-svn: 242884
In r242510, non-instrumented allocas are now moved into the first basic block. This patch limits that to only move allocas that are present *after* the first instrumented one (i.e. it only moves allocas up). A testcase was updated to show the behavior in these two cases. Without the patch, an alloca could be moved down, which could result in invalid IR.
Differential Revision: http://reviews.llvm.org/D11339
llvm-svn: 242883
through APIs that are no longer necessary now that the update API has
been removed.
This will make changes to the AA interfaces significantly less
disruptive (I hope). Either way, it seems like a really nice cleanup.
llvm-svn: 242882
part of simplifying its interface and usage in preparation for porting
to work with the new pass manager.
Note that this will likely expose that we have dead arguments, members,
and maybe even pass requirements for AA. I'll be cleaning those up in
separate patches. This just zaps the actual update API.
Differential Revision: http://reviews.llvm.org/D11325
llvm-svn: 242881
change because the diff is *useless*. I assure you, I just switched to
early-return in this function.
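For reference, the shape of the change (a generic illustration, not the actual function; 'Foo' and 'isUsable' are placeholders):

    // Before: the body nests inside the condition.
    void runImpl(Foo *F) {
      if (F && F->isUsable()) {
        // ... long body, indented one level deeper than needed ...
      }
    }

    // After: early return, body at the top level.
    void runImpl(Foo *F) {
      if (!F || !F->isUsable())
        return;
      // ... long body ...
    }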
Cleanup in preparation for my next commit, as requested in code review!
llvm-svn: 242880
GlobalsModRef) with CallbackVHs that trigger the same behavior.
This is technically more expensive, but in benchmarking some LTO runs,
it seems unlikely to even be above the noise floor. The only way I was
able to measure the performance of GMR at all was to run nothing else
but this one analysis on a linked clang bitcode file. The call graph
analysis still took 5x more time than GMR, and this change at most made
GMR 2% slower (this is well within the noise, so it's hard for me to be
sure that this is an actual change). However, in a real LTO run over the
same bitcode, the GMR run takes so little time that the pass timers
don't measure it.
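For readers unfamiliar with the mechanism, a minimal sketch of the CallbackVH pattern (the cache type and names are hypothetical; this is not the actual GlobalsModRef code):

    #include "llvm/IR/ValueHandle.h"
    #include <map>

    // Hypothetical cache that used to hold raw Value pointers, which
    // could dangle once a Value was deleted.
    struct ResultCache {
      std::map<llvm::Value *, int> Entries;
      void erase(llvm::Value *V) { Entries.erase(V); }
    };

    // A handle that evicts the cache entry the moment its Value is
    // deleted, so the cache never holds a dangling pointer.
    class CacheEvictingVH final : public llvm::CallbackVH {
      ResultCache &Cache;
      void deleted() override {
        Cache.erase(getValPtr());
        setValPtr(nullptr);
      }
    public:
      CacheEvictingVH(llvm::Value *V, ResultCache &C)
          : llvm::CallbackVH(V), Cache(C) {}
    };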
With this, I can remove the last update API from the AliasAnalysis
interface, but I'll actually remove the interface hook point in
a follow-up commit.
Differential Revision: http://reviews.llvm.org/D11324
llvm-svn: 242878
Summary:
This enables us to avoid casts to "void *" in some cases and avoids a couple of "casts off const
qualifiers" warnings.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11388
llvm-svn: 242874
Summary: The current code in LoopUnswitch::processCurrentLoop() mixes trivial and non-trivial loop unswitching together. It goes over all basic blocks in the loop and checks whether a condition is a trivial or non-trivial unswitch condition. However, a trivial unswitch condition can only occur in the loop header basic block (where it controls whether or not the loop does anything at all). This refactoring separates trivial and non-trivial loop unswitching. Before going over all basic blocks in the loop, it checks whether the loop header contains a trivial unswitch condition. If so, it unswitches it. Otherwise, it goes over all blocks as before, but no longer checks for trivial conditions, since they cannot occur in any other block. This change has no functional effect.
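A sketch of the two kinds of condition in C++ (illustrative, not taken from the patch):

    // Trivial unswitch: a loop-invariant condition at the very top of the
    // loop whose taken side leaves the loop before any other work runs.
    // The test can be hoisted out without cloning the loop.
    void trivial(int N, bool Cond, void f(int)) {
      for (int i = 0; i < N; ++i) {
        if (Cond) // loop-invariant; exiting here skips all side effects
          break;
        f(i);
      }
    }

    // Non-trivial unswitch: a loop-invariant condition inside the body
    // with work on both sides. Unswitching must clone the loop, one copy
    // per side of the branch.
    void nonTrivial(int N, bool Cond, void f(int), void g(int)) {
      for (int i = 0; i < N; ++i) {
        if (Cond)
          f(i);
        else
          g(i);
      }
    }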
Reviewers: meheff, reames, broune
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11276
llvm-svn: 242873
Summary:
MCRegAliasIterator only works for physical registers. So, do not run it
on virtual registers.
With this issue fixed, we can resurrect the BranchFolding pass in the NVPTX
backend.
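A hedged sketch of the guard, using LLVM's register-info APIs of that era ('Uses' stands in for whatever set the pass maintains):

    #include "llvm/ADT/SmallSet.h"
    #include "llvm/Target/TargetRegisterInfo.h"

    void addRegAndAliases(unsigned Reg, const llvm::TargetRegisterInfo *TRI,
                          llvm::SmallSet<unsigned, 4> &Uses) {
      if (llvm::TargetRegisterInfo::isPhysicalRegister(Reg)) {
        // Physical register: record it along with every register that
        // aliases it.
        for (llvm::MCRegAliasIterator AI(Reg, TRI, /*IncludeSelf=*/true);
             AI.isValid(); ++AI)
          Uses.insert(*AI);
      } else {
        // Virtual register: it has no alias set, so record it directly.
        Uses.insert(Reg);
      }
    }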
Reviewers: jholewinski, bkramer
Subscribers: henryhu, meheff, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D11174
llvm-svn: 242871
<__functional_03> provides the C++03 definitions for std::mem_fn and
std::function. However, the interaction between <functional> and <__functional_03>
is ugly and duplicates code needlessly. This patch cleans up how the two
headers work together.
The major changes are:
- Provide placeholders, is_bind_expression and is_placeholder in <functional>
for both C++03 and C++11.
- Provide bad_function_call, function fwd decl,
__maybe_derive_from_unary_function and __maybe_derive_from_binary_function
in <functional> for both C++03 and C++11.
- Move the <__functional_03> include to the bottom of <functional>. This makes
it easier to see how <__functional_03> interacts with <functional>.
- Remove a commented out implementation of bind in C++03. It's never going
to get implemented.
- Mark almost all std::bind tests as unsupported in C++03. std::is_placeholder
works in C++03 and C++11. std::is_bind_expression is provided in C++03 but
always returns false.
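To make the last point concrete, a small C++11-mode illustration (using only standard <functional>):

    #include <cassert>
    #include <functional>

    int sub(int a, int b) { return a - b; }

    int main() {
      // is_placeholder reports the placeholder's index in C++03 and C++11.
      assert(std::is_placeholder<decltype(std::placeholders::_1)>::value == 1);
      // In C++11 this is true for the result of std::bind; in C++03 the
      // trait exists but always reports false.
      assert(std::is_bind_expression<
                 decltype(std::bind(sub, std::placeholders::_1, 1))>::value);
      return 0;
    }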
llvm-svn: 242870
types and loads, loads or stores widened past the size of an alloca,
etc.
This started off with a bug report about big-endian behavior with
bitfields and loads and stores to a { i32, i24 } struct. An initial
attempt to fix this was sent for review in D10357, but that didn't
really get to the root of the problem.
The core issue was that canConvertValue and convertValue in SROA were
handling different bitwidth integers by doing a zext of the integer. It
wouldn't do a trunc though, only a zext! This would in turn lead SROA to
form an i24 load from an i24 alloca, zext it to i32, and then use it.
This would at least produce the wrong value for big-endian systems.
One of my many false starts here was to correct the computation for
big-endian systems by shifting. But this doesn't actually work because
the original code has a 64-bit store to the entire 8 bytes, and a 32-bit
load of the last 4 bytes, and because the alloc size is 8 bytes, we
can't lose that last (least significant if big-endian) byte! The real
problem here is that we're forming an i24 load in SROA which is actually
not sufficiently wide to load all of the necessary bits here. The source
has an i32 load, and SROA needs to form that as well.
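To make the layout concrete, a small standalone demonstration (hypothetical values; this mirrors the 64-bit store / 32-bit load shape, not the actual test case):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
      // One 64-bit store covering the whole 8-byte slot...
      unsigned char slot[8];
      uint64_t stored = 0x0102030405060708ULL;
      std::memcpy(slot, &stored, 8);

      // ...then a 32-bit load of the last 4 bytes. On a big-endian target
      // these are the *least* significant bytes of the stored value,
      // including the final byte that a narrower i24 load of the slot's
      // tail could never supply.
      uint32_t loaded;
      std::memcpy(&loaded, slot + 4, 4);
      std::printf("%08x\n", loaded); // 01020304 on LE, 05060708 on BE
      return 0;
    }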
The straightforward way to do this is to disable the zext logic in
canConvertValue and convertValue, forcing us to actually load all
32-bits. This seems like a really good change, but it in turn breaks
several other parts of SROA.
First in the chain of knock-on failures, we had places where we were
doing integer-widening promotion even though some of the integer loads
or stores extended *past the end* of the alloca's memory! There was even
a comment about preventing this, but it only prevented the case where
the type had a different bit size from its store size. So I added checks
to handle the cases where we actually have a widened load or store and
to avoid attempting the special integer-widening promotion in those cases.
Second, we actually rely on the ability to promote in the face of loads
past the end of an alloca! This is important so that we can (for
example) speculate loads around PHI nodes to do more promotion. The bits
loaded are garbage, but as long as they aren't used and the alignment is
suitably high (which it wasn't in the test case!), this is "fine". And we
can't stop promoting here, lots of things stop working well if we do. So
we need to add specific logic to handle the extension (and truncation)
case, but *only* where that extension or truncation are over bytes that
*are outside the alloca's allocated storage* and thus totally bogus to
load or store.
And of course, once we add back this correct handling of extension or
truncation, we need to correctly handle big-endian systems to avoid
re-introducing the exact bug that started us off on this chain of misery
in the first place, but this time even more subtle as it only happens
along speculated loads atop a PHI node.
I've ported an existing test for PHI speculation to the big-endian test
file and checked that we get that part correct, and I've added several
more interesting big-endian test cases that should help check that we're
getting this correct.
Fun times.
llvm-svn: 242869
more modules are added: visit modules depth-first rather than breadth-first.
The visitation is still (approximately) oldest-to-newest, and still guarantees
that a module is visited before anything it imports, so modules that are
imported by others sometimes need to jump to a later position in the visitation
order when more modules are loaded, but independent module trees don't
interfere with each other any more.
llvm-svn: 242863
Originally, the source for the hidden lib_d and the regular lib_d were the same
file, so we always got the "correct" source for each. Splitting them up in
D11367 exposed a bug where the incorrect source file was shown for the hidden lib_d.
llvm-svn: 242862
Summary: This patch adds special configuration logic to find the compiler_rt libraries required by sanitizers on OS X. The supported sanitizers are Address and Undefined.
Reviewers: mclow.lists
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11381
llvm-svn: 242858
Previously we skipped that for virtual functions that were not fully qualified (r81507).
This commit essentially reverts to the older behaviour, which seems
more consistent. We now also correctly diagnose ill-formed calls to deleted
member functions, which were silently accepted before in some cases.
The review contains the whole discussion.
PR: 20268
Differential Revision: http://reviews.llvm.org/D11334
llvm-svn: 242857
the identifier table. This is redundant, since the TU-scope lookups are also
serialized as part of the TU DeclContext, and wasteful in a number of ways. We
still emit the decls for PCH / preamble builds, since for those we want
identical results, not merely semantically equivalent ones.
llvm-svn: 242855
include_if_exists=/path/to/sanitizer/options reads flags from the
file if it is present. "%b" in the include file path (for both
variants of the flag) is replaced with the basename of the main
executable.
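For example (hypothetical path), running an executable named myapp with
ASAN_OPTIONS=include_if_exists=/etc/sanitizer/%b.options would read extra
flags from /etc/sanitizer/myapp.options when that file exists, and silently
do nothing otherwise.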
llvm-svn: 242853
This optimization allows the DWARF linker to reuse definition of
types it has emitted in previous CUs rather than reemitting them
in each CU that references them. The size and link time gains are
huge. For example when linking the DWARF for a debug build of
clang, this generates a ~150M dwarf file instead of a ~700M one
(the numbers date back a bit and may not be totally accurate
these days).
As with all the other parts of the llvm-dsymutil codebase, the
goal is to keep bit-for-bit compatibility with dsymutil-classic.
The code is littered with a lot of FIXMEs that should be
addressed once we can get rid of the compatibility goal.
llvm-svn: 242847
This commit begins serialization of the CFI index machine operands by
serializing one kind of CFI instruction - the .cfi_def_cfa_offset instruction.
Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242845
Target and breakpoint options were added:
breakpoint set --language lang --name func
settings set target.language pascal
These specify the Language to use when interpreting the breakpoint's
expression (note: currently only implemented for breakpoints on
identifiers). If the breakpoint language is not set, the target.language
setting is used.
This support is required by Pascal, for example, to set a breakpoint at 'ns.foo'
for function 'foo' in namespace 'ns'.
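For example, after "settings set target.language pascal", a plain
"breakpoint set --name ns.foo" resolves the name using Pascal rules, since
the breakpoint's own language is unset; "breakpoint set --language pascal
--name ns.foo" requests the same for just that one breakpoint.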
Tests on the language were also added to Module::PrepareForFunctionNameLookup
for efficiency.
Reviewed by: clayborg
Subscribers: jingham, lldb-commits
Differential Revision: http://reviews.llvm.org/D11119
llvm-svn: 242844
Summary:
In the benchmark (https://github.com/vetter/shoc) we are researching,
the duplicated load is not eliminated because MemoryDependenceAnalysis
hits the BlockScanLimit. This patch changes it into a command-line option
instead of a hardcoded value.
Patched by Xuetian Weng.
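A sketch of the mechanical change (flag name matching the test below; the default value and description text are assumptions):

    #include "llvm/Support/CommandLine.h"

    // Previously a hardcoded constant inside MemoryDependenceAnalysis;
    // now a hidden command-line knob, e.g.
    //   opt -memdep-block-scan-limit=500 ...
    static llvm::cl::opt<unsigned> BlockScanLimit(
        "memdep-block-scan-limit", llvm::cl::Hidden, llvm::cl::init(100),
        llvm::cl::desc("The number of instructions to scan in a block in "
                       "memory dependency analysis"));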
Test Plan: test/Analysis/MemoryDependenceAnalysis/memdep-block-scan-limit.ll
Reviewers: jingyue, reames
Subscribers: reames, llvm-commits
Differential Revision: http://reviews.llvm.org/D11366
llvm-svn: 242842
We ended up with the wrong predefine after the recent TargetParser shuffle, and
I accidentally solidified it with a test. This should fix it.
llvm-svn: 242841
This makes one substantive change and a few stylistic changes to the
VSX swap optimization pass.
The substantive change is to permit LXSDX and LXSSPX instructions to
participate in swap optimization computations. The previous change to
insert a swap following a SUBREG_TO_REG widening operation makes this
almost trivial.
I experimented with also permitting STXSDX and STXSSPX instructions.
This can be done using similar techniques: we could insert a swap
prior to a narrowing COPY operation, and then permit these stores to
participate. I prototyped this, but discovered that the pattern of a
narrowing COPY followed by an STXSDX does not occur in any of our
test-suite code. So instead, I added commentary indicating that this
could be done.
Other TLC:
- I changed SH_COPYSCALAR to SH_COPYWIDEN to more clearly indicate
the direction of the copy.
- I factored the insertion of swap instructions into a separate
function.
Finally, I added a new test case to check that the scalar-to-vector
loads are working properly with swap optimization.
llvm-svn: 242838
This commit refactors the function 'maybeLexGlobalValue' so that it now reuses
the function 'lexName' when lexing a named global value token.
llvm-svn: 242837