During the core analysis, ExplodedNodes are added to the
ExplodedGraph, and those nodes are cached for deduplication purposes.
After core analysis, reports are generated. Here, trimmed copies of
the ExplodedGraph are made. Since the ExplodedGraph has already been
deduplicated, there is no need to deduplicate again.
This change makes it possible to add ExplodedNodes to an
ExplodedGraph without the overhead of deduplication. "Uncached" nodes
also cannot be iterated over, but none of the report generation code
attempts to iterate over all nodes. This change reduces the analysis
time of a large .C file from 3m43.941s to 3m40.256s (~1.6% speedup).
It should slightly reduce memory consumption. Gains should be roughly
proportional to the number (and path length) of static analysis
warnings.
This patch enables future work that should remove the need for an
InterExplodedGraphMap inverse map. I plan on using the (now unused)
ExplodedNode link to connect new nodes to the original nodes.
http://reviews.llvm.org/D21229
llvm-svn: 273572
The patch adds one more partition to the MIPS GOT. This time it is for
TLS-related GOT entries. Such entries are located after the 'local' and
'global' ones. We cannot get a final offset for these entries at the time of
creation because we do not know the size of the 'local' and 'global'
partitions. So we have to adjust the offset later using the
`getMipsTlsOffset()` method.
All MIPS TLS relocations which need GOT entries operate on a MIPS-style GOT
offset: the 'offset from the GOT's beginning' minus the MipsGPOffset constant.
That is why I add new types of relocation expressions.
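A simplified sketch of the offset arithmetic described above (the constant and
entry size are illustrative assumptions, not LLD's actual code):

#include <cstdint>

constexpr uint64_t EntrySize = 8;        // one GOT slot on a 64-bit target
constexpr int64_t MipsGPOffset = 0x7ff0; // GP points 0x7ff0 bytes past the GOT

uint64_t NumLocalEntries = 0;  // known only after all GOT entries are added,
uint64_t NumGlobalEntries = 0; // hence the late adjustment

// Byte offset of the start of the TLS partition within the GOT.
uint64_t getMipsTlsOffset() {
  return (NumLocalEntries + NumGlobalEntries) * EntrySize;
}

// MIPS-style GOT offset used by the TLS relocations: the distance from the
// beginning of the GOT minus MipsGPOffset, i.e. relative to GP.
int64_t getTlsEntryOffset(uint64_t TlsEntryIndex) {
  return int64_t(getMipsTlsOffset() + TlsEntryIndex * EntrySize) - MipsGPOffset;
}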
One more difference from other ABIs is that the MIPS ABI does not support
any TLS relocation relaxations. I decided to make a separate function,
`handleMipsTlsRelocation`, and put the MIPS TLS relocation handling code
there. It is similar to the `handleTlsRelocation` routine and duplicates some
of its code, but it keeps the code cleaner and prevents polluting
`handleTlsRelocation` with MIPS-specific 'if' statements.
Differential Revision: http://reviews.llvm.org/D21606
llvm-svn: 273569
The PIC and PIE levels are not independent. In fact, if PIE is defined
it is always the same as PIC.
This is clear in the driver where ParsePICArgs returns a PIC level and
an IsPIE boolean. Unfortunately that is currently lost and we pass two
redundant levels down the pipeline.
This patch keeps a bool and a PIC level all the way down to codegen.
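Roughly, the state that now flows down to codegen looks like this (names are
illustrative, not the actual option types):

struct PICState {
  unsigned PICLevel = 0; // 0 = not PIC, 1 = -fpic/-fpie, 2 = -fPIC/-fPIE
  bool IsPIE = false;    // when set, the PIE level is the same as PICLevel
};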
llvm-svn: 273566
No extra tests are required, as this is already covered by existing testing, and it is basically impossible to write Unicode-specific tests for this.
llvm-svn: 273563
When simplifying a load we need to make sure that the type of the
simplified value matches the type of the instruction we're processing.
In theory, we can handle casts here as we deal with constant data, but
since it's not implemented at the moment, we at least need to bail out.
This fixes PR28262.
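A minimal sketch of the guard being described, assuming an LLVM-style
Value/LoadInst API (this is illustrative, not the actual pass code):

#include "llvm/IR/Instructions.h"

// V is a candidate replacement for the loaded value (e.g. a known constant
// stored to the same address). In theory a cast could reconcile mismatched
// types, but that is not implemented, so bail out instead of miscompiling.
static llvm::Value *simplifyLoadedValue(llvm::LoadInst &LI, llvm::Value *V) {
  if (!V || V->getType() != LI.getType())
    return nullptr;
  return V;
}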
llvm-svn: 273562
DeadStoreElimination can currently remove a small store rendered unnecessary by
a later larger one, but cannot remove a larger store rendered unnecessary by
a series of later smaller ones. This adds that capability.
It works by keeping a map, which is used as an effective interval map, for each
store later overwritten only partially, and filling in that interval map as
more such stores are discovered. No additional walking or aliasing queries are
used. If the map forms an interval covering the entire earlier store, then
it is dead and can be removed. The map is used as an interval map by storing a
mapping between the ending offset and the beginning offset of each interval.
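A simplified, self-contained sketch of that bookkeeping (the real code lives in
DeadStoreElimination and tracks byte offsets within the earlier store):

#include <algorithm>
#include <cstdint>
#include <map>

class OverwriteTracker {
  // Each disjoint interval of later-overwritten bytes, keyed by its ending
  // offset and mapped to its beginning offset.
  std::map<int64_t, int64_t> IntervalMap;

public:
  // Record a later store covering [Begin, End), merging any intervals it
  // overlaps or touches.
  void addLaterStore(int64_t Begin, int64_t End) {
    auto It = IntervalMap.lower_bound(Begin); // first interval ending >= Begin
    while (It != IntervalMap.end() && It->second <= End) {
      Begin = std::min(Begin, It->second);
      End = std::max(End, It->first);
      It = IntervalMap.erase(It);
    }
    IntervalMap[End] = Begin;
  }

  // The earlier store [Begin, End) is dead once a single interval covers it.
  bool isCompletelyOverwritten(int64_t Begin, int64_t End) const {
    auto It = IntervalMap.lower_bound(End); // first interval ending >= End
    return It != IntervalMap.end() && It->second <= Begin;
  }
};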
I discovered this problem when investigating a performance issue with code like
this on PowerPC:
#include <complex>
using namespace std;
complex<float> bar(complex<float> C);
complex<float> foo(complex<float> C) {
  return bar(C)*C;
}
which produces this:
define void @_Z4testSt7complexIfE(%"struct.std::complex"* noalias nocapture sret %agg.result, i64 %c.coerce) {
entry:
%ref.tmp = alloca i64, align 8
%tmpcast = bitcast i64* %ref.tmp to %"struct.std::complex"*
%c.sroa.0.0.extract.shift = lshr i64 %c.coerce, 32
%c.sroa.0.0.extract.trunc = trunc i64 %c.sroa.0.0.extract.shift to i32
%0 = bitcast i32 %c.sroa.0.0.extract.trunc to float
%c.sroa.2.0.extract.trunc = trunc i64 %c.coerce to i32
%1 = bitcast i32 %c.sroa.2.0.extract.trunc to float
call void @_Z3barSt7complexIfE(%"struct.std::complex"* nonnull sret %tmpcast, i64 %c.coerce)
%2 = bitcast %"struct.std::complex"* %agg.result to i64*
%3 = load i64, i64* %ref.tmp, align 8
store i64 %3, i64* %2, align 4 ; <--- ***** THIS SHOULD NOT BE HERE ****
%_M_value.realp.i.i = getelementptr inbounds %"struct.std::complex", %"struct.std::complex"* %agg.result, i64 0, i32 0, i32 0
%4 = lshr i64 %3, 32
%5 = trunc i64 %4 to i32
%6 = bitcast i32 %5 to float
%_M_value.imagp.i.i = getelementptr inbounds %"struct.std::complex", %"struct.std::complex"* %agg.result, i64 0, i32 0, i32 1
%7 = trunc i64 %3 to i32
%8 = bitcast i32 %7 to float
%mul_ad.i.i = fmul fast float %6, %1
%mul_bc.i.i = fmul fast float %8, %0
%mul_i.i.i = fadd fast float %mul_ad.i.i, %mul_bc.i.i
%mul_ac.i.i = fmul fast float %6, %0
%mul_bd.i.i = fmul fast float %8, %1
%mul_r.i.i = fsub fast float %mul_ac.i.i, %mul_bd.i.i
store float %mul_r.i.i, float* %_M_value.realp.i.i, align 4
store float %mul_i.i.i, float* %_M_value.imagp.i.i, align 4
ret void
}
The problem here is not just that the i64 store is unnecessary, but also that
it blocks further optimizations of the other uses of that i64 value in the
backend.
In the future, we might want to add a special case for handling smaller
accesses (e.g. using a bit vector) if the map mechanism turns out to be
noticeably inefficient. A sorted vector is also a possible replacement for the
map for small numbers of tracked intervals.
Differential Revision: http://reviews.llvm.org/D18586
llvm-svn: 273559
We would incorrectly emit the directive sections due to the missing overridden
methods. We now emit the expected "/DEFAULTLIB" rather than "-l" options for
the requested linkage.
llvm-svn: 273558
Summary:
The backend has no reason to behave like a driver and should generally do
as it's told (and error out if it can't) instead of trying to figure out
what the API user meant. The default ABI is still derived from the arch
component as a concession to backwards compatibility.
API-users that previously passed an explicit CPU and a triple that was
inconsistent with the CPU (e.g. mips-linux-gnu and mips64r2) may get a
different ABI from what they got before. However, it's expected that there
are no such users on the basis that CodeGen has been asserting that the
triple is consistent with the selected ABI for several releases. API-users
that were consistent or passed '' or 'generic' as the CPU will see no
difference.
Reviewers: sdardis, rafael
Subscribers: rafael, dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D21466
llvm-svn: 273557
Move most of the initializations in ARMSubtarget::initializeEnvironment to
member initializers.
Change suggested by Matthias Braun (see http://reviews.llvm.org/D21432).
llvm-svn: 273556
Summary:
When parseAnyRegister() encounters a symbol alias, it parses integers and adds
a corresponding expression to the operand list. This is clearly wrong since the
only operands that parseAnyRegister() should be accepting are registers.
It's not clear why this code was added and there are no test cases that cover
it. I think it might be leftover from when searchSymbolAlias() was more widely
used.
Reviewers: sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D21377
llvm-svn: 273555
This flag was introduced in r269655 with the new diagnostic handler for llc. Its
purpose was to keep the old behavior for some of the tests that didn't recover
well after an error. Those tests have been fixed, so now it's safe to remove the
flag entirely.
Fixes PR27759.
Differential Revision: http://reviews.llvm.org/D21368
llvm-svn: 273554
Currently isComplete = 1 requires that every instruction be described,
declared unsupported, or marked as having no scheduling information for a
processor.
For some backends such as MIPS, this requirement entails
long regex lists of instructions that are unsupported.
This patch teaches Tablegen to skip over instructions that are associated
with an unsupported feature when checking whether the scheduling model is
complete.
Patch by: Daniel Sanders
Contributions by: Simon Dardis
Reviewers: MatzeB
Differential Revision: http://reviews.llvm.org/D20522
llvm-svn: 273551
The exit-on-error flag was necessary in order to avoid an assertion when
handling DYNAMIC_STACKALLOC nodes in SelectionDAGLegalize.
We can avoid the assertion by creating some dummy nodes. This enables us to
remove the exit-on-error flag on the first 2 run lines (SI), but on the third
run line (R600) we would run into another assertion when trying to reserve
indirect registers. This patch also replaces that assertion with an early exit
from the function.
Fixes PR27761.
Differential Revision: http://reviews.llvm.org/D20852
llvm-svn: 273550
dext and dins, along with their 'm' and 'u' variants, are defined in mips64r2,
not mips64.
Reviewers: dsanders, vkalintiris
Differential Revision: http://reviews.llvm.org/D21608
llvm-svn: 273549
Semantic checks on the condition of an if, while, or switch statement are now
performed before the other substatements of the construct are parsed,
rather than deferring them until the end. This allows better error recovery
from semantic errors in the condition, improves diagnostic order, and is a
prerequisite for C++17 constexpr if.
llvm-svn: 273548
Summary:
This adds new SB API calls and classes to allow a user of the SB API to obtain a full list of the memory regions accessible within the process. Adding this to the API makes it possible to use the API for tasks like scanning memory for blocks allocated with a header and footer in order to track down memory leaks; otherwise, inspecting every address is impractical, especially for 64-bit processes.
These changes only add the API itself and a base implementation of GetMemoryRegions() in lldb_private::Process.
I will submit separate patches to fill in lldb_private::Process::GetMemoryRegionInfoList and GetMemoryRegionInfo for individual platforms.
The original discussion about this is here:
http://lists.llvm.org/pipermail/lldb-dev/2016-May/010203.html
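A rough C++ usage sketch of the new calls (method names as described in the
review; treat this as illustrative rather than authoritative documentation):

#include "lldb/API/SBMemoryRegionInfo.h"
#include "lldb/API/SBMemoryRegionInfoList.h"
#include "lldb/API/SBProcess.h"
#include <cstdint>
#include <cstdio>

// Walk every memory region of an already-attached process and print the
// writable ones, e.g. as a starting point for a heap-scanning tool.
void dumpWritableRegions(lldb::SBProcess &process) {
  lldb::SBMemoryRegionInfoList regions = process.GetMemoryRegions();
  for (uint32_t i = 0; i < regions.GetSize(); ++i) {
    lldb::SBMemoryRegionInfo region;
    if (!regions.GetMemoryRegionAtIndex(i, region))
      continue;
    if (region.IsWritable())
      std::printf("[0x%llx, 0x%llx) writable\n",
                  (unsigned long long)region.GetRegionBase(),
                  (unsigned long long)region.GetRegionEnd());
  }
}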
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D20565
llvm-svn: 273547
IfConversion used to always add the undef flag when adding a use operand
on a newly predicated instruction. This would be an operand for the register
being conditionally redefined. Due to the undef flag, the liveness of this
register prior to the predicated instruction would get lost.
This patch changes this so that such use operands are added only when the
register is live, without the undef flag.
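A sketch of the new behavior (illustrative only, not the actual IfConversion
code), assuming a LivePhysRegs-style liveness query:

#include "llvm/CodeGen/LivePhysRegs.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"

using namespace llvm;

// When predicating an instruction that conditionally redefines Reg, add Reg as
// an implicit use so its old value is kept live across the instruction.
// Previously this operand was always added with RegState::Undef, which dropped
// the register's liveness before the predicated instruction.
static void addRedefinedUse(MachineInstrBuilder &MIB, unsigned Reg,
                            const LivePhysRegs &LiveRegs) {
  if (LiveRegs.contains(Reg))
    MIB.addReg(Reg, RegState::Implicit);
}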
Reviewed by Quentin Colombet.
http://reviews.llvm.org/D209077
llvm-svn: 273545
This is a cleanup commit similar to r271555, but for ARM.
The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.
Since the ARM backend seems to have quite a lot of calls to these methods, I
intend to submit 5-6 subtarget features at a time, instead of one big lump.
Differential Revision: http://reviews.llvm.org/D21432
llvm-svn: 273544
Patch by Shridhar Joshi.
This option provides the names of all the link-time modules which define and
reference the symbols requested by the user. This helps to speed up application
development by detecting the references that cause undefined symbols.
It also helps in detecting symbols that are resolved to wrong (unintended)
definitions in the case of applications containing multiple definitions of the
same symbols with different types and bindings.
Implements PR28226.
llvm-svn: 273536
Patch by Nitesh Jain.
Summary: On some targets, like MIPS32, we need to explicitly link the atomic library for 64-bit atomic operations. This module can then be used in LLDB (http://reviews.llvm.org/D20464) or Libcxx (http://reviews.llvm.org/D16613) to explicitly link against the atomic library.
Reviewers: chandlerc, beanz
Differential: reviews.llvm.org/D20896
llvm-svn: 273534
Peter Smith found while trying to support thunk creation for ARM that
LLD sometimes creates broken thunks for MIPS. The cause of the bug is
that we assign file offsets to input sections too early. We need to
create all sections and then assign section offsets because appending
thunks changes file offsets for all following sections.
This patch separates the pass that assigns file offsets from the thunk
creation pass. This effectively reverts r265673.
Differential Revision: http://reviews.llvm.org/D21598
llvm-svn: 273532
Reverting due to a test failure in TestNamespaceLookup.py; didn't see anything
obviously wrong, so I'll need to look at this more closely before re-committing.
(passed OK on macOS ;)
llvm-svn: 273531