Summary: This is a simple question we should be able to answer without creating a temporary to hold the AND result. We can also get an early out as soon as we find a word that intersects.
Reviewers: RKSimon, hans, spatel, davide
Reviewed By: hans, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32253
llvm-svn: 300812
The compiled code already needs to check single/multi word for the countLeadingZeros call inside of getActiveBits, but it isn't able to optimize out the leadingZeros call in the single word case that can't produce a value larger than 64.
This shrank the opt binary by about 5-6k on my local x86-64 build.
llvm-svn: 300798
Summary:
In the current implementation, finding the inline stack for an address incurs an expensive linear search in 2 places:
* a linear search for the top-level DIE
* a recursive linear traversal of the DIE tree to find the path to the leaf DIE
In this patch, a map is built from an address to its corresponding leaf DIE. The inline stack is built by traversing from the leaf DIE up to the root DIE. This speeds up batch symbolization by ~10X without noticeable memory overhead.
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32177
llvm-svn: 300742
Patch by Ettore Speziale
Allow TableGen to generate static functions to perform GCC/MS builtin name to
target specific intrinsic ID mapping.
https://reviews.llvm.org/D31150
llvm-svn: 300735
This should simplify the call sites, which typically want to tweak one
attribute at a time. It should also avoid creating ephemeral
AttributeLists that live forever.
llvm-svn: 300718
Frequently you want a bitmask consisting of a specified
number of 1s, either at the beginning or end of a word.
The naive way to do this is to write
  template <typename T>
  T leadingBitMask(unsigned N) {
    return (T(1) << N) - 1;
  }
but using this function you cannot produce a word with every
bit set to 1 (i.e. leadingBitMask<uint8_t>(8)) because left
shift is undefined when N is greater than or equal to the
number of bits in the word.
This patch provides an efficient, branch-free implementation
that works for all values of N in [0, CHAR_BIT*sizeof(T)].
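One branch-free possibility (a sketch only, not necessarily the exact
implementation this patch adds) is to split the shift in two so that
neither shift count can ever reach the full bit width:

  #include <climits>
  // T is assumed to be an unsigned integer type.
  template <typename T>
  T leadingBitMask(unsigned N) {
    const unsigned Bits = CHAR_BIT * sizeof(T);
    const unsigned Shift = Bits - N;
    // Each sub-shift is at most Bits - 1, even when N == 0 or N == Bits,
    // so both shifts are always well defined.
    return (T(-1) >> (Shift / 2)) >> (Shift - Shift / 2);
  }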
Differential Revision: https://reviews.llvm.org/D32212
llvm-svn: 300710
Summary:
In the current implementation, finding the inline stack for an address incurs an expensive linear search in 2 places:
* a linear search for the top-level DIE
* a recursive linear traversal of the DIE tree to find the path to the leaf DIE
In this patch, a map is built from an address to its corresponding leaf DIE. The inline stack is built by traversing from the leaf DIE up to the root DIE. This speeds up batch symbolization by ~10X without noticeable memory overhead.
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32177
llvm-svn: 300697
This fixes PR32471.
As comment 10 on that bug report highlights
(https://bugs.llvm.org//show_bug.cgi?id=32471#c10), there are quite a
few different defensible design tradeoffs that could be made, including
not representing pointers at all in LLT.
I decided to go for representing vector-of-pointer as a concept in LLT,
while keeping the size of the LLT type 64 bits (this is an increase from
48 bits before). My rationale for keeping pointers explicit is that on
some targets it's probably very handy to have the distinction between
pointer and non-pointer (e.g. 68K has a different register bank for
pointers IIRC). If we keep a scalar pointer, it is probably easiest to
also have a vector-of-pointers to keep LLT relatively conceptually clean
and orthogonal, while we don't have a very strong reason to break that
orthogonality. Once we gain more experience on the use of LLT, we can
of course reconsider this direction.
Rejecting vector-of-pointer types in the IRTranslator is also an option
to avoid the crash reported in PR32471, but that is only a very
short-term solution; it also needs quite a few code tweaks in places,
and is probably fragile. Therefore I didn't consider this the best
option.
llvm-svn: 300664
Use children<> and nodes<> in appropriate places to cleanup the code.
Also, as part of the cleanup,
change the signature of DominatorTreeBase's Split.
It is a protected non-virtual member function called only twice,
both from within the class,
and the removed passed argument in both cases is '*this'.
The reason for the existence of that argument seems to be that
back before r43115 Split was a free function,
so an argument to get '*this' was needed - but now that is no longer the
case.
Patch by Yoav Ben-Shalom!
Differential Revision: https://reviews.llvm.org/D32118
llvm-svn: 300656
The 'addAttributes(unsigned, AttrBuilder)' overload delegated to 'get'
instead of 'addAttributes'.
Since we can implicitly construct an AttrBuilder from an AttributeSet,
just standardize on AttrBuilder.
llvm-svn: 300651
Summary:
VersionPrinter by default outputs information about the Host CPU
and Default target. Printing this information requires linking in
a large amount of data, such as supported target triples as C
strings, which in turn bloats the binary size.
Enable a new CMake option LLVM_VERSION_PRINTER_SHOW_HOST_TARGET_INFO
which controls printing of the host and target info. This allows
the target triple names to be dead-code stripped. This is a nice
win for LLVM clients that wish to minimize their binary size, such
as graphics drivers.
By default this is ON, so there is no change in the default behavior.
Clients who wish to suppress this printing can do so by setting this
option to off via CMake.
A test app on Linux that uses ParseCommandLineOptions() shows a binary
size reduction of 23KB (from 149K to 126K) for a Release build, and 24KB
(from 135K to 111K) in a MinSizeRel build.
Reviewers: klimek, beanz, bogner, chandlerc, compnerd
Reviewed By: compnerd
Patch by pammon (Peter Ammon) !
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D30904
llvm-svn: 300630
Summary:
This allows us to, if the symbol names are available in the binary, be
able to provide the function name in the YAML output.
Reviewers: dblaikie, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32153
llvm-svn: 300624
BasicAA wants to know if a function is either a malloc or calloc like function. Currently we have to check both separately. This means both calls check if it's an intrinsic, query TLI, check the nobuiltin attribute, scan the AllocationFnData, etc.
This patch adds an isMallocOrCallocLikeFn so we can go through all of the checks once per call.
This also changes the one other location I saw that called both together.
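A usage sketch (assuming the new helper mirrors the signatures of the
existing isMallocLikeFn/isCallocLikeFn helpers in MemoryBuiltins.h):

  #include "llvm/Analysis/MemoryBuiltins.h"
  #include "llvm/Analysis/TargetLibraryInfo.h"
  using namespace llvm;

  // One combined query instead of isMallocLikeFn(V, TLI) || isCallocLikeFn(V, TLI),
  // so the intrinsic/TLI/nobuiltin/AllocationFnData checks run only once.
  bool mayBeMallocOrCalloc(const Value *V, const TargetLibraryInfo *TLI) {
    return isMallocOrCallocLikeFn(V, TLI);
  }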
Differential Revision: https://reviews.llvm.org/D32188
llvm-svn: 300608
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result.
This adds an lshrInPlace(const APInt &) version as well.
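A before/after sketch of the call-site pattern being replaced:

  APInt X = ...;
  X = X.lshr(ShiftAmt);    // before: lshr builds a temporary APInt that is
                           // immediately assigned over the original
  X.lshrInPlace(ShiftAmt); // after: the shift happens in place, no temporary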
Differential Revision: https://reviews.llvm.org/D32155
llvm-svn: 300566
In the assembler, we should emit build attributes based on the target
selected with command-line options. This matches the GNU assembler's
behaviour. We only do this for build attributes which describe the
hardware that is expected to be available, not the ones that describe
ABI compatibility.
This is done by moving some of the attribute emission code to
ARMTargetStreamer, so that it can be shared between the assembly and
code-generation code paths. Since the assembler only creates a
MCSubtargetInfo, not an ARMSubtarget, the code had to be changed to
check raw features, and not use the convenience functions in
ARMSubtarget.
If different attributes are later specified using the .eabi_attribute
directive, then they will take precedence, as happens when the same
.eabi_attribute is specified twice.
This must be enabled by an option, because we don't want to do this when
parsing inline assembly. The attributes would match the ones emitted at
the start of the file, so wouldn't actually change the emitted object
file, but the extra directives would be added to every inline assembly
block when emitting assembly, which we'd like to avoid.
The majority of the changes in the build-attributes.ll test are just
re-ordering the directives, because the hardware attributes are now
emitted before the ABI ones. However, I did fix one bug which I spotted:
Tag_CPU_arch_profile was not being emitted for v6M.
Differential revision: https://reviews.llvm.org/D31812
llvm-svn: 300547
This reverts r300535 and r300537.
The newly added tests in test/CodeGen/AArch64/GlobalISel/arm64-fallback.ll
produce slightly different code depending on the compiler LLVM was built
with. E.g., either one of the following remarks can be produced:
remark: <unknown>:0:0: unable to legalize instruction: %vreg0<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg2; (in function: vector_of_pointers_extractelement)
remark: <unknown>:0:0: unable to legalize instruction: %vreg2<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg0; (in function: vector_of_pointers_extractelement)
Non-determinism like this is clearly a bad thing, so reverting this until
I can find and fix the root cause of the non-determinism.
llvm-svn: 300538
This fixes PR32471.
As comment 10 on that bug report highlights
(https://bugs.llvm.org//show_bug.cgi?id=32471#c10), there are quite a
few different defensible design tradeoffs that could be made, including
not representing pointers at all in LLT.
I decided to go for representing vector-of-pointer as a concept in LLT,
while keeping the size of the LLT type 64 bits (this is an increase from
48 bits before). My rationale for keeping pointers explicit is that on
some targets it's probably very handy to have the distinction between
pointer and non-pointer (e.g. 68K has a different register bank for
pointers IIRC). If we keep a scalar pointer, it is probably easiest to
also have a vector-of-pointers to keep LLT relatively conceptually clean
and orthogonal, while we don't have a very strong reason to break that
orthogonality. Once we gain more experience on the use of LLT, we can
of course reconsider this direction.
Rejecting vector-of-pointer types in the IRTranslator is also an option
to avoid the crash reported in PR32471, but that is only a very
short-term solution; it also needs quite a few code tweaks in places,
and is probably fragile. Therefore I didn't consider this the best
option.
llvm-svn: 300535
The DWARF specification knows 3 kinds of non-empty simple location
descriptions:
1. Register location descriptions
- describe a variable in a register
- consist of only a DW_OP_reg
2. Memory location descriptions
- describe the address of a variable
3. Implicit location descriptions
- describe the value of a variable
- end with DW_OP_stack_value & friends
The existing DwarfExpression code is pretty much ignorant of these
restrictions. This used to not matter because we only emitted very
short expressions that we happened to get right by accident. This
patch makes DwarfExpression aware of the rules defined by the DWARF
standard and now chooses the right kind of location description for
each expression being emitted.
This would have been an NFC commit (for the existing testsuite) if not
for the way that clang describes captured block variables. Based on
how the previous code in LLVM emitted locations, DW_OP_deref
operations that should have come at the end of the expression are put
at its beginning. Fixing this means changing the semantics of
DIExpression, so this patch bumps the version number of DIExpression
and implements a bitcode upgrade.
There are two major changes in this patch:
I had to fix the semantics of dbg.declare for describing function
arguments. After this patch a dbg.declare always takes the *address*
of a variable as the first argument, even if the argument is not an
alloca.
When lowering a DBG_VALUE, the decision of whether to emit a register
location description or a memory location description depends on the
MachineLocation — register machine locations may get promoted to
memory locations based on their DIExpression. (Future) optimization
passes that want to salvage implicit debug location for variables may
do so by appending a DW_OP_stack_value. For example:
DBG_VALUE, [RBP-8] --> DW_OP_fbreg -8
DBG_VALUE, RAX --> DW_OP_reg0 +0
DBG_VALUE, RAX, DIExpression(DW_OP_deref) --> DW_OP_reg0 +0
All testcases that were modified were regenerated from clang. I also
added source-based testcases for each of these to the debuginfo-tests
repository over the last week to make sure that no synchronized bugs
slip in. The debuginfo-tests compile from source and run the debugger.
https://bugs.llvm.org/show_bug.cgi?id=32382
<rdar://problem/31205000>
Differential Revision: https://reviews.llvm.org/D31439
llvm-svn: 300522
Instead of storing an UncommonIndex on the Symbol, use a flag bit to store
whether the Symbol has an Uncommon. This shrinks Chromium's .bc files (after
D32061) by about 1%.
Differential Revision: https://reviews.llvm.org/D32070
llvm-svn: 300514
This merges the two different multiword shift right implementations into a single version located in tcShiftRight. lshrInPlace now calls tcShiftRight for the multiword case.
I retained the memmove fast path from lshrInPlace and used a memset for the zeroing. The for loop is basically tcShiftRight's implementation with the zeroing and the intra-shift of 0 removed.
Differential Revision: https://reviews.llvm.org/D32114
llvm-svn: 300503
Add a local cache for getZeroExtendExpr and getSignExtendExpr to prevent
the exponential behavior.
The patch is to fix PR32043. Functions getZeroExtendExpr and getSignExtendExpr
may call themselves recursively more than once. This is potentially a 2^N
complexity behavior. The exponential behavior was not commonly exposed before
because of existing global cache mechanisms like UniqueSCEVs or some early-return
mechanisms when the flags FlagNSW or FlagNUW are seen. However, we still have cases
which can expose the exponential behavior, like the case in PR32043, so we add
a local cache in getZeroExtendExpr and getSignExtendExpr. If the input of the
functions -- a SCEV and type pair -- has been seen before, we can find the extended
expression directly in the local cache.
Differential Revision: https://reviews.llvm.org/D30350
llvm-svn: 300494
This was added to work around a bug in MSVC 2013's implementation of stable_sort. That bug has been fixed as of MSVC 2015 so we shouldn't need this anymore.
Technically the current implementation has undefined behavior, because we only protect the deletion of the pVal array with the self-move check. There is still a memcpy of that.VAL to VAL that isn't protected. In the case of self move those are the same location, and memcpy is undefined when src and dst overlap.
This reduces the size of the opt binary on my local x86-64 build by about 4k.
Differential Revision: https://reviews.llvm.org/D32116
llvm-svn: 300477
The documentation for the waymarking algorithm says that we use the lower 2 bits of Use::Prev to store the waymarking bits. But because we use a PointerIntPair with the default PointerLikeTypeTraits, we're using bits 2:1 on 64-bit targets.
There's also a trick employed for distinguishing Users that have Uses stored with them and Users that have Uses stored in a separate array. The documentation says we use the LSB of the first byte of the real User object or the User* that occurs at the end of the Use array. But again, due to the PointerLikeTypeTraits we're really using bit 2 (64-bit) or bit 1 (32-bit) and not the LSB. This is a little worrying because the first byte of the User object is the vtable ptr, so we're assuming the vtable has 8-byte or 4-byte alignment where what is documented would only require 2-byte alignment.
This patch provides a custom traits override for these two cases to put the bits where the documentation says they are. It also has the side effect of removing some shifts from the waymarking traversal implementation.
Differential Revision: https://reviews.llvm.org/D31733
llvm-svn: 300471
Add a top-level STRTAB block containing a string table blob, and start storing
strings for module codes FUNCTION, GLOBALVAR, ALIAS, IFUNC and COMDAT in
the string table.
This change allows us to share names between globals and comdats as well
as between modules, and improves the efficiency of loading bitcode files by
no longer using a bit encoding for symbol names. Once we start writing the
irsymtab to the bitcode file we will also be able to share strings between
it and the module.
On my machine, link time for Chromium for Linux with ThinLTO decreases by
about 7% for no-op incremental builds or about 1% for full builds. Total
bitcode file size decreases by about 3%.
As discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2017-April/111732.html
Differential Revision: https://reviews.llvm.org/D31838
llvm-svn: 300464
Summary:
This seems like an uncontroversial first step toward providing access to the metadata hierarchy that now exists in LLVM. This should allow for good debug info support from C.
Future plans are to deprecate APIs that take mixed bags of values and metadata (mainly the LLVMMDNode family of functions) and migrate the rest toward the use of LLVMMetadataRef.
Once this is in place, mapping of DIBuilder will be able to start.
Reviewers: mehdi_amini, echristo, whitequark, jketema, Wallbraker
Reviewed By: Wallbraker
Subscribers: Eugene.Zelenko, axw, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D19448
llvm-svn: 300447
This is a version of D32090 that unifies all of the
`getInstrProf*SectionName` helper functions. (Note: the build failures
which D32090 would have addressed were fixed with r300352.)
We should unify these helper functions because they are hard to use in
their current form. E.g we recently introduced more helpers to fix
section naming for COFF files. This scheme doesn't totally succeed at
hiding low-level details about section naming, so we should switch to an
API that is easier to maintain.
This is not an NFC commit because it fixes llvm-cov's testing support
for COFF files (this falls out of the API change naturally). This is an
area where we lack tests -- I will see about adding one as a follow up.
Testing: check-clang, check-profile, check-llvm.
Differential Revision: https://reviews.llvm.org/D32097
llvm-svn: 300381
This avoids the confusing 'CS.paramHasAttr(ArgNo + 1, Foo)' pattern.
Previously we were testing return value attributes with index 0, so I
introduced hasReturnAttr() for that use case.
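A before/after sketch of a call site (the attribute kind and exact
signatures here are assumed from the description above):

  // Old: argument attributes used biased indices, and index 0 meant
  // the return value.
  bool ArgNonNull = CS.paramHasAttr(ArgNo + 1, Attribute::NonNull);
  // New: paramHasAttr takes the 0-based argument number directly, and
  // return-value queries go through hasReturnAttr().
  bool ArgNonNullNew = CS.paramHasAttr(ArgNo, Attribute::NonNull);
  bool RetNonNull = CS.hasReturnAttr(Attribute::NonNull);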
llvm-svn: 300367
Now that the libObject support for wasm is better, we can
have readobj and nm produce more useful output too.
Differential Revision: https://reviews.llvm.org/D31514
llvm-svn: 300365
This change really saves just one foldingset lookup, but makes
SCEVRewriteVisitor "feature compatible" with the handwritten logic in
ScalarEvolutionNormalization, so that I can change
ScalarEvolutionNormalization to use SCEVRewriteVisitor in a next step.
This is a non-functional change that _may_ improve performance in some
pathological cases, but that's unlikely.
llvm-svn: 300348
Looks like earlier I was relying on #include ordering in files that
used ScalarEvolutionNormalization.h.
Found thanks to the selfhost modules buildbot!
llvm-svn: 300336
It is cleaner to have a callback based system where the logic of
whether an add recurrence is normalized or not lives on IVUsers.
This is one step in a multi-step cleanup.
llvm-svn: 300330
MOVNTDQA non-temporal aligned vector loads can be correctly represented using generic builtin loads, allowing us to remove the existing x86 intrinsics.
Clang companion patch: D31766.
Differential Revision: https://reviews.llvm.org/D31767
llvm-svn: 300325
Start using it in LLD to avoid needing to read bitcode again just to get the
target triple, and in llvm-lto2 to avoid printing symbol table information
that is inappropriate for the target.
Differential Revision: https://reviews.llvm.org/D32038
llvm-svn: 300300
The tests were failing due to an occasional deadlock in SerializationTraits
for Error: Both serializers and deserializers were protected by a single
mutex and in the unit test (where both ends of the RPC are in the same
process) one side might obtain the mutex, then block waiting for input,
leaving the other side of the connection unable to obtain the mutex to
write the data the first side was waiting for. Splitting the mutex into
two (one for serialization, one for deserialization) appears to have fixed the
issue.
llvm-svn: 300286
Now that we have a type that can represent the attributes on a single
return, function, or parameter, we can pass it around directly rather
than passing around AttributeList and Idx. Removes some more one-based
argument attribute index counting.
NFC
llvm-svn: 300285
Add hasParamAttribute() and use it instead of hasAttribute(ArgNo+1,
Kind) everywhere.
The fact that the AttributeList index for an argument is ArgNo+1 should
be a hidden implementation detail.
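A sketch of a typical call site before and after (the attribute kind is
illustrative):

  // Old: callers had to know about the +1 bias of attribute indices.
  bool OldWay = F.getAttributes().hasAttribute(ArgNo + 1, Attribute::SExt);
  // New: the helper takes the 0-based argument number directly.
  bool NewWay = F.hasParamAttribute(ArgNo, Attribute::SExt);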
NFC
llvm-svn: 300272
Switch from Euclid's algorithm to Stein's algorithm for computing GCD. This
avoids the (expensive) APInt division operation in favour of bit operations.
Remove all memory allocation from within the GCD loop by tweaking our `lshr`
implementation so it can operate in-place.
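For reference, a minimal sketch of Stein's (binary) GCD on plain integers;
the patch applies the same idea to APInt using its trailing-zero count and
the in-place lshr mentioned above (GCC/Clang __builtin_ctzll is used here
for brevity):

  #include <algorithm>
  #include <cstdint>

  uint64_t binaryGCD(uint64_t A, uint64_t B) {
    if (A == 0) return B;
    if (B == 0) return A;
    unsigned Shift = __builtin_ctzll(A | B); // common power of two
    A >>= __builtin_ctzll(A);                // make A odd
    do {
      B >>= __builtin_ctzll(B);              // make B odd
      if (A > B) std::swap(A, B);            // keep A <= B
      B -= A;                                // odd - odd == even; no division
    } while (B != 0);
    return A << Shift;                       // restore the common power of two
  }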
Differential Revision: https://reviews.llvm.org/D31968
llvm-svn: 300252
Summary:
For iterative SamplePGO, an indirect call can be speculatively promoted to multiple direct calls and get inlined. All these promoted direct calls will share the same callsite location (offset+discriminator). With the current implementation, we cannot distinguish between different promotion candidates and their inlined instances. This patch adds callee_name to the key of the callsite sample map, and adds helper functions to get all inlined callee samples for a given callsite location. This helps the profile annotator promote the correct targets and inline them before annotation, and ensures all indirect call targets are annotated correctly.
Reviewers: davidxl, dnovillo
Reviewed By: davidxl
Subscribers: andreadb, llvm-commits
Differential Revision: https://reviews.llvm.org/D31950
llvm-svn: 300240
Summary:
The linker needs to be able to determine whether a symbol is text or data to
handle the case of a common being overridden by a strong definition in an
archive. If the archive contains a text member of the same name as the common,
that function is discarded. However, if the archive contains a data member of
the same name, that strong definition overrides the common. This is a behavior
of ld.bfd, which the Qualcomm linker also supports in LTO.
Here's a test case to illustrate:
####
cat > 1.c << \!
int blah;
!
cat > 2.c << \!
int blah() {
  return 0;
}
!
cat > 3.c << \!
int blah = 20;
!
clang -c 1.c
clang -c 2.c
clang -c 3.c
ar cr lib.a 2.o 3.o
ld 1.o lib.a -t
####
The correct output is:
1.o
(lib.a)3.o
Thanks to Shankar Easwaran and Hemant Kulkarni for the test case!
Reviewers: mehdi_amini, rafael, pcc, davide
Reviewed By: pcc
Subscribers: davide, llvm-commits, inglorion
Differential Revision: https://reviews.llvm.org/D31901
llvm-svn: 300205
Instructions CALLSEQ_START..CALLSEQ_END and their target-dependent
counterparts keep data like frame size, stack adjustment, etc. These
data are accessed by getOperand using hard-coded indices, which is
an error-prone approach. This change implements the access via special methods,
which improve readability and allow changing the data representation without
massive changes of index values.
Differential Revision: https://reviews.llvm.org/D31953
llvm-svn: 300196
The bool type may be larger than the char type, so assuming we can cast from
bool to char and write a byte out to the stream is unsafe.
Hopefully this will get RPCUtilsTest.ReturnExpectedFailure passing on the bots.
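A sketch of the safe pattern (the stream type here is chosen only for
illustration):

  #include <cstdint>
  #include <ostream>

  // Serialize a bool as an explicit one-byte value instead of assuming
  // that bool is exactly one char wide.
  void writeBool(std::ostream &OS, bool B) {
    uint8_t Byte = B ? 1 : 0;
    OS.write(reinterpret_cast<const char *>(&Byte), 1);
  }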
llvm-svn: 300174
Summary:
APInt is currently implemented with an unsigned BitWidth field first and then a uint64_t/pointer union. Due to the 64-bit size of the union, there is a hole after the bit width.
Putting the union first allows the class to be packed, making it 12 bytes instead of 16 bytes. An APSInt goes from 20 bytes to 16 bytes.
This shows a 4k reduction in the size of the opt binary on my local x86-64 build, so this enables some other improvements to the code as well.
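A sketch of why the member order matters on a typical 64-bit target (field
names are illustrative; LLVM uses a packed attribute macro rather than the
raw GCC/Clang spelling shown here):

  #include <cstdint>

  struct BitWidthFirst {
    unsigned BitWidth;                          // 4 bytes + 4 bytes of padding
    union { uint64_t VAL; uint64_t *pVal; } U;  // 8 bytes, 8-byte aligned
  };                                            // sizeof == 16

  struct __attribute__((packed)) UnionFirst {
    union { uint64_t VAL; uint64_t *pVal; } U;  // 8 bytes
    unsigned BitWidth;                          // 4 bytes, no trailing padding
  };                                            // sizeof == 12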
Reviewers: dblaikie, RKSimon, hans, davide
Reviewed By: davide
Subscribers: davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D32001
llvm-svn: 300171
This patch allows Error and Expected types to be passed to and returned from
RPC functions.
Serializers and deserializers for custom error types (types deriving from the
ErrorInfo class template) can be registered with the SerializationTraits for
a given channel type (see registerStringError in RPCSerialization.h for an
example), allowing a given custom type to be sent/received. Unregistered types
will be serialized/deserialized as StringErrors using the custom type's log
message as the error string.
llvm-svn: 300167
This is a magic header file supported by the build system that provides a
single definition, LLVM_REVISION, containing an LLVM revision identifier,
if available. This functionality previously lived in the LTO library, but
I am moving it out to lib/Support because I want to also start using it in
lib/Object to create the IR symbol table.
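A sketch of how a client might consume the header (the
llvm/Support/VCSRevision.h path is an assumption here; LLVM_REVISION may be
left undefined when no VCS information is available):

  #include "llvm/Support/VCSRevision.h"
  #include <cstdio>

  void printRevision() {
  #ifdef LLVM_REVISION
    std::printf("LLVM revision: %s\n", LLVM_REVISION);
  #else
    std::printf("LLVM revision: unknown\n");
  #endif
  }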
This change also fixes a bug where LLVM_REVISION was never actually being
used in lib/LTO because the macro HAS_LLVM_REVISION was never defined (it
was misspelled as HAVE_SVN_VERSION_INC in lib/LTO/CMakeLists.txt, and was
only being defined in a non-existent file Version.cpp).
I also changed the code to use "git rev-parse --git-dir" to locate the .git
directory, instead of looking for it in the LLVM source root directory,
which makes this compatible with monorepos as well as git worktrees.
Differential Revision: https://reviews.llvm.org/D31985
llvm-svn: 300160
This seems like a much more natural API, based on Derek Schuff's
comments on r300015. It further hides the implementation detail of
AttributeList that function attributes come last and appear at index
~0U, which is easy for the user to screw up. git diff says it saves code
as well: 97 insertions(+), 137 deletions(-)
This also makes it easier to change the implementation, which I want to
do next.
llvm-svn: 300153
This typedef used to be conditional based on whether rvalue references were supported. Looks like it got left behind when we switched to always having rvalue references with C++11. I don't think it provides any value now.
llvm-svn: 300146
Improve performance of argument list parsing with large numbers of IDs and
large numbers of arguments, by tracking a conservative range of indexes within
the argument list that might contain an argument with each ID. In the worst
case (when the first and last argument with a given ID are at the opposite ends
of the argument list), this still results in a linear-time walk of the list,
but it helps substantially in the common case where each ID occurs only once,
or a few times close together in the list.
This gives a ~10x speedup to clang's `test/Driver/response-file.c`, which
constructs a very large set of command line arguments and feeds them to the
clang driver.
Differential Revision: https://reviews.llvm.org/D30130
llvm-svn: 300135
In a followup patch I intend to introduce an additional dumping
mode which dumps a graphical representation of a class's layout.
In preparation for this, the text-based layout printer needs to
be split out from the graphical layout printer, and both need
to be able to use the same code for printing the intro and outro
of a class's definition (e.g. base class list, etc).
This patch does so, and in the process introduces a skeleton
definition for the graphical printer, while currently making
the graphical printer just print nothing.
NFC
llvm-svn: 300134
Previously the dumping of class definitions was very primitive,
and it made it hard to do more than the most trivial of output
formats when dumping. As such, we would only dump one line for
each field, and then dump non-layout items like nested types
and enums.
With this patch, we do a complete analysis of the object
hierarchy including aggregate types, bases, virtual bases,
vftable analysis, etc. The only immediately visible effects
of this are that a) we can now dump a line for the vfptr where
before we would treat that as padding, and b) we now don't
treat virtual bases that come at the end of a class as padding
since we have a more detailed analysis of the class's storage
usage.
In subsequent patches, we should be able to use this analysis
to display a complete graphical view of a class's layout including
recursing arbitrarily deep into an object's base class / aggregate
member hierarchy.
llvm-svn: 300133
Summary:
A readnone attribute would allow CSE of two barriers with
the same argument, which is invalid, as the following example shows:
  struct Base {
    virtual int foo() { return 42; }
  };
  struct Derived1 : Base {
    int foo() override { return 50; }
  };
  struct Derived2 : Base {
    int foo() override { return 100; }
  };
  void foo() {
    Base *x = new Base{};
    new (x) Derived1{};
    int a = std::launder(x)->foo();
    new (x) Derived2{};
    int b = std::launder(x)->foo();
  }
Here the 2 calls of std::launder will produce @llvm.invariant.group.barrier,
which would be merged into one call, causing devirtualization
to devirtualize the second call into Derived1::foo() instead of
Derived2::foo().
Reviewers: chandlerc, dberlin, hfinkel
Subscribers: llvm-commits, rsmith, amharc
Differential Revision: https://reviews.llvm.org/D31531
llvm-svn: 300101
Often you have a unique_ptr<T> where T supports LLVM's
casting methods, and you wish to cast it to a unique_ptr<U>.
Prior to this patch, this requires doing hacky things like:
  unique_ptr<U> Casted;
  if (isa<U>(Orig.get()))
    Casted.reset(cast<U>(Orig.release()));
This is overly verbose, and it would be nice to just be able
to use unique_ptr directly with cast and dyn_cast. To this end,
this patch updates cast<> to work directly with unique_ptr<T>,
so you can now write:
auto Casted = cast<U>(std::move(Orig));
Since it's possible for dyn_cast<> to fail, however, we choose
to use a slightly different API here, because it's awkward to
write
if (auto Casted = dyn_cast<U>(std::move(Orig))) {}
when Orig may end up not having been moved at all. So the
interface for dyn_cast is
if (auto Casted = unique_dyn_cast<U>(Orig)) {}
Where the inclusion of `unique` in the name of the cast operator
re-affirms that, regardless of the success or failure of the cast,
exactly one of the input value and the return value will contain
a non-null result.
Differential Revision: https://reviews.llvm.org/D31890
llvm-svn: 300098
On FreeBSD backtrace is not part of libc and depends on libexecinfo
being available. Instead of using manual checks we can use the builtin
CMake module FindBacktrace.cmake to detect availability of backtrace()
in a portable way.
Patch By: Alex Richardson
Differential Revision: https://reviews.llvm.org/D27143
llvm-svn: 300062
Since SystemZ supports vector element load/store instructions, there is no
need for extracts/inserts if a vector load/store gets scalarized.
This patch lets Target specify that it supports such instructions by means of
a new TTI hook that defaults to false.
The use for this is in the LoopVectorizer getScalarizationOverhead() method,
which with this patch will produce a smaller sum for a vector load/store on
SystemZ.
New test: test/Transforms/LoopVectorize/SystemZ/load-store-scalarization-cost.ll
Review: Adam Nemet
https://reviews.llvm.org/D30680
llvm-svn: 300056
getArithmeticInstrCost(), getShuffleCost(), getCastInstrCost(),
getCmpSelInstrCost(), getVectorInstrCost(), getMemoryOpCost(), and
getInterleavedMemoryOpCost() are now implemented.
Interleaved access vectorization enabled.
BasicTTIImpl::getCastInstrCost() improved to check for legal extending loads,
in which case the cost of the z/sext instruction becomes 0.
Review: Ulrich Weigand, Renato Golin.
https://reviews.llvm.org/D29631
llvm-svn: 300052
Summary:
As far as instruction selection is concerned, all three appear to be the same thing.
Support for these operands is experimental since AArch64 doesn't make use
of them and the in-tree targets that do use them (AMDGPU for
OperandWithDefaultOps, AMDGPU/ARM/Hexagon/Lanai for PredicateOperand, and ARM
for OptionalDefOperand) are not using tablegen-erated GlobalISel yet.
Reviewers: rovka, aditya_nandakumar, t.p.northover, qcolombet, ab
Reviewed By: rovka
Subscribers: inglorion, aemerson, rengolin, mehdi_amini, dberris, kristof.beyls, igorb, tpr, llvm-commits
Differential Revision: https://reviews.llvm.org/D31135
llvm-svn: 300037
This redesigns the case iterator in SwitchInst to actually be an iterator
and to expose a handle to represent the actual case rather than having
the iterator return a reference to itself.
All of this allows the iterator to be used with common STL facilities,
standard algorithms, etc.
Doing this exposed some missing facilities in the iterator facade that
I've fixed and required some work to the actual iterator to fully
support the necessary API.
Differential Revision: https://reviews.llvm.org/D31548
llvm-svn: 300032
The collection of PostDominatedByUnreachable and PostDominatedByColdCall has been
split out of the heuristics themselves. The update of the data now happens for each basic
block (before, the update for PostDominatedByColdCall might be skipped if the
unreachable or metadata heuristic handled this basic block).
This separation allows re-ordering of the heuristics without losing
the post-domination information.
Reviewers: sanjoy, junbuml, vsk, chandlerc, reames
Reviewed By: chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31701
llvm-svn: 300029
Summary:
For now, it just wraps AttributeSetNode*. Eventually, it will hold
AvailableAttrs as an inline bitset, and adding and removing enum
attributes will be super cheap.
This sinks AttributeSetNode back down to lib/IR/AttributeImpl.h.
Reviewers: pete, chandlerc
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D31940
llvm-svn: 300014
MemorySSA is moved into Analysis: it has Analysis passes, and once NewGVN is
made an Analysis, this removes the cross dependency from Analysis to
Transform/Utils.
NFC.
llvm-svn: 299980
Summary:
This lets PDB readers look up type record data by type index in O(log n)
time. It also makes `cvdump -t` work on PDBs produced by LLD.
cvdump will not dump a PDB that doesn't have an index-to-offset table.
The table is sorted by type index, and has an entry every 8KB. Looking
up a type record by index is a binary search of this table, followed by
a scan of at most 8KB.
Reviewers: ruiu, zturner, inglorion
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31636
llvm-svn: 299958
From a user perspective, it forces the use of an annoying nullptr to mark the end of the vararg, and there's no type checking on the arguments.
The variadic template is an obvious solution to both issues.
Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299949
Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.
From a user perspective, it forces the use of an annoying nullptr
to mark the end of the vararg, and there's no type checking on the
arguments. The variadic template is an obvious solution to both
issues.
llvm-svn: 299925
The getter was equivalent to AttributeList::getAttributes(unsigned),
which seems like a better way to express getting the AttributeSet for a
given index. This static helper was only used in one place anyway.
The constructor doesn't benefit from inlining and doesn't need to be in
a header.
llvm-svn: 299900
This re-lands r299875.
I introduced a bug in Clang code responsible for replacing K&R, no
prototype declarations with a real function definition with a prototype.
The bug was here:
// Collect any return attributes from the call.
- if (oldAttrs.hasAttributes(llvm::AttributeList::ReturnIndex))
- newAttrs.push_back(llvm::AttributeList::get(newFn->getContext(),
- oldAttrs.getRetAttributes()));
+ newAttrs.push_back(oldAttrs.getRetAttributes());
Previously getRetAttributes() carried AttributeList::ReturnIndex in its
AttributeList. Now that we return the AttributeSetNode* directly, it no
longer carries that index, and we call this overload with a single node:
AttributeList::get(LLVMContext&, ArrayRef<AttributeSetNode*>)
That aborted with an assertion on x86_32 targets. I added an explicit
triple to the test and added CHECKs to help find issues like this in the
future sooner.
llvm-svn: 299899
LLVM makes several assumptions about address space 0. However,
alloca is presently constrained to always return this address space.
There's no real way to avoid using alloca, so without this
there is no way to opt out of these assumptions.
The problematic assumptions include:
- That the pointer size used for the stack is the same size as
the code size pointer, which is also the maximum sized pointer.
- That 0 is an invalid, non-dereferenceable pointer value.
These are problems for AMDGPU because alloca is used to
implement the private address space, which uses a 32-bit
index as the pointer value. Other pointers are 64-bit
and behave more like LLVM's notion of generic address
space. By changing the address space used for allocas,
we can change our generic pointer type to be LLVM's generic
pointer type which does have similar properties.
llvm-svn: 299888
Summary:
AttributeList::get(Fn|Ret|Param)Attributes no longer creates a temporary
AttributeList just to hide the AttributeSetNode type.
I've also added a factory method to create AttributeLists from a
parallel array of AttributeSetNodes. I think this simplifies
construction of AttributeLists when rewriting function prototypes.
Previously we would test if a particular index had attributes, and
conditionally add a temporary attribute list to a vector. Now the
attribute set vector is parallel to the argument vector that
these passes already construct.
My long term vision is to wrap AttributeSetNode* inside an AttributeSet
type that holds the enum attributes, but that will come in a follow up
change.
I haven't done any performance measurements for this change because
profiling hasn't shown that any of the affected code is hot.
Reviewers: pete, chandlerc, sanjoy, hfinkel
Reviewed By: pete
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D31198
llvm-svn: 299875
BitVector had methods for searching for the first and next
set bits, but it did not have analogous methods for finding
the first and next unset bits. This is useful when your ones
and zeros are grouped together and you want to iterate over
ranges of ones and zeros.
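A usage sketch of the new queries, paired with the existing
find_first/find_next for set bits:

  #include "llvm/ADT/BitVector.h"

  // Visit every clear bit; find_first_unset/find_next_unset return -1
  // when no further unset bit exists.
  void visitClearBits(const llvm::BitVector &BV) {
    for (int I = BV.find_first_unset(); I != -1; I = BV.find_next_unset(I)) {
      // I is the index of an unset bit.
    }
  }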
Differential Revision: https://reviews.llvm.org/D31802
llvm-svn: 299857
* Adds support for pointers to arrays, which was missing
* Adds some tests
* Improves consistency of const and volatile qualifiers
* Eliminates non-composable special case code for arrays and functions by using
a more general recursive approach
* Has a hack for getting the calling convention into the right spot for
pointer-to-functions
Given the rapid changes happening in llvm-pdbdump, this may be difficult to
merge.
Differential Revision: https://reviews.llvm.org/D31832
llvm-svn: 299848
1. Added some asserts to make sure concrete symbol types don't
get constructed with RawSymbols that have an incompatible
SymTag enum value.
2. Added new forwarding macros that auto-define an Id/Sym method
pair whenever there is a method that returns a SymIndexId.
Previously we would just provide one method that returned only
the SymIndexId and it was up to the caller to use the Session
object to get a pointer to the symbol. Now we automatically
get both the method that returns the Id, as well as a method
that returns the pointer directly with just one macro.
3. Added some methods for dumping straight to stdout that can
be used from inside the debugger for diagnostics during a
debug session.
4. Added a clone() method and a cast<T>() method to PDBSymbol
that can shorten some usage patterns.
llvm-svn: 299831
The original instruction might get legalized and erased and expanded
into intermediate instructions and the intermediate instructions might
fail legalization. This ends up reporting a GISelFailure on the erased
instruction.
Instead report GISelFailure on the intermediate instruction which failed
legalization.
Reviewed by: ab
llvm-svn: 299802
This reverts commit r299766. This change appears to have broken the MIPS
buildbots. Reverting while I investigate.
Revert "[mips] Remove usage of debug only variable (NFC)"
This reverts commit r299769. Follow up commit.
llvm-svn: 299788
By turning getRegisterType, getNumRegisters, and getVectorBreakdown into
target hooks, backends can request that LLVM scalarize vector types for calls
and returns.
The MIPS vector ABI requires that vector arguments and returns are passed in
integer registers. With SelectionDAG's new hooks, the MIPS backend can now
handle LLVM-IR with vector types in calls and returns. E.g.
'call @foo(<4 x i32> %4)'.
Previously these cases would be scalarized for the MIPS O32/N32/N64 ABI for
calls and returns if vector types were not legal. If vector types were legal,
a single 128-bit vector argument would be assigned to a single 32-bit / 64-bit
integer register.
By teaching the MIPS backend to inspect the original types, it can now
implement the MIPS vector ABI which requires a particular method of
scalarizing vectors.
Previously, the MIPS backend relied on clang to scalarize types such as "call
@foo(<4 x float> %a)" into "call @foo(i32 inreg %1, i32 inreg %2, i32 inreg %3,
i32 inreg %4)".
This patch enables the MIPS backend to take either form for vector types.
Reviewers: zoran.jovanovic, jaydeep, vkalintiris, slthakur
Differential Revision: https://reviews.llvm.org/D27845
llvm-svn: 299766
Summary:
getModRefInfo is meant to answer the question "what impact does this
instruction have on a given memory location" (a memory location, that is,
not even another instruction).
Long debate on this on IRC comes to the conclusion the answer should be "nothing special".
That is, a noalias volatile store does not affect a memory location
just by being volatile. Note: DSE and GVN and memdep currently
believe this, because memdep just goes behind AA's back after it says
"modref" right now.
See line 635 of memdep. Prior to this patch we would get modref there, then check aliasing,
and if it said noalias, we would continue.
getModRefInfo *already* has this same AA check; it just wasn't being used because volatile was
lumped in with ordering.
(I am separately testing whether this code in memdep is now dead except for the invariant load case)
Reviewers: jyknight, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31726
llvm-svn: 299741
This adjusts header file includes for headers and source files
in Core. In doing so, one dependency cycle is eliminated
because all the includes from Core to that project were dead
includes anyway. In places where some files in other projects
were only compiling due to a transitive include from another
header, fixups have been made so that those files also include
the header they need. Tested on Windows and Linux, and plan
to address failures on OSX and FreeBSD after watching the
bots.
llvm-svn: 299714
Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.
From a user prospective, it forces the use of an annoying nullptr
to mark the end of the vararg, and there's not type checking on the
arguments. The variadic template is an obvious solution to both
issues.
Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>
Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299699
Use a combination of !associated, comdat, @llvm.compiler.used and
custom sections to allow dead stripping of globals and their asan
metadata. Sometimes.
Currently this works on LLD, which supports SHF_LINK_ORDER with
sh_link pointing to the associated section.
This also works on BFD, which seems to treat comdats as
all-or-nothing with respect to linker GC. There is a weird quirk
where the "first" global in each link is never GC-ed because of the
section symbols.
At this moment it does not work on Gold (as in the globals are never
stripped).
This is a re-land of r298158 rebased on D31358. This time,
asan.module_ctor is put in a comdat as well to avoid quadratic
behavior in Gold.
llvm-svn: 299697
Create the constructor in the module pass.
This is needed for the GC-friendly globals change, where the constructor can be
put in a comdat in some cases, but we don't know about that in the function
pass.
This is a rebase of r298731 which was reverted due to a false alarm.
llvm-svn: 299695
memorydefs, not just stores. Along the way, we audit and fixup issues
about how we were tracking memory leaders, and improve the verifier
to notice more memory congruency issues.
llvm-svn: 299682
Summary:
Host CPU detection now supports Kryo, so we need to recognize it in the ARM
target.
Reviewers: mcrosier, t.p.northover, rengolin, echristo, srhines
Reviewed By: t.p.northover, echristo
Subscribers: aemerson
Differential Revision: https://reviews.llvm.org/D31775
llvm-svn: 299674
Summary:
Use an explicit work queue instead, to avoid accidentally
causing stack overflows for input with very large CFGs.
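A generic sketch of the recursion-to-worklist pattern (types and names are
illustrative, not taken from the patch):

  #include <set>
  #include <vector>

  struct Node { std::vector<Node *> Succs; };

  // An explicit stack keeps memory use in the worklist rather than the
  // call stack, so very large graphs cannot overflow the stack.
  void visitAll(Node *Root) {
    if (!Root) return;
    std::set<Node *> Visited;
    std::vector<Node *> Worklist{Root};
    while (!Worklist.empty()) {
      Node *N = Worklist.back();
      Worklist.pop_back();
      if (!Visited.insert(N).second)
        continue; // already visited
      for (Node *Succ : N->Succs)
        Worklist.push_back(Succ);
    }
  }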
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D31681
llvm-svn: 299569
This is a generic combine enabled via target hook to reduce icmp logic as discussed in:
https://bugs.llvm.org/show_bug.cgi?id=32401
It's likely that other targets will want to enable this hook for scalar transforms,
and there are probably other patterns that can use bitwise logic to reduce comparisons.
Note that we are missing an IR canonicalization for these patterns, and we will probably
prefer the pair-of-compares form in IR (shorter, more likely to fold).
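The flavor of reduction meant here, written as C++ for illustration (the
actual combine operates on DAG nodes):

  // Two compares joined by a logical op can fold to one compare of the
  // combined inputs: (X == 0 && Y == 0) is exactly ((X | Y) == 0).
  bool beforeCombine(unsigned X, unsigned Y) { return X == 0 && Y == 0; }
  bool afterCombine(unsigned X, unsigned Y) { return (X | Y) == 0; }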
Differential Revision: https://reviews.llvm.org/D31483
llvm-svn: 299542
A number of backends (AArch64, MIPS, ARM) have been using
MCContext::reportError to report issues such as out-of-range fixup values in
their TgtAsmBackend. This is great, but because MCContext couldn't easily be
threaded through to the adjustFixupValue helper function from its usual
callsite (applyFixup), these backends ended up adding an MCContext* argument
and adding another call to applyFixup to processFixupValue. Adding an
MCContext parameter to applyFixup makes this unnecessary, and even better -
applyFixup can take a reference to MCContext rather than a potentially null
pointer.
Differential Revision: https://reviews.llvm.org/D30264
llvm-svn: 299529
Decouple this setting from EnableIRPA.
To support function calls on AMDGPU, it is necessary to
report the global register usage throughout the kernel's
call graph, so callees need to be handled first.
llvm-svn: 299487
stores with some fixes.
Summary:
This enables us to cache the clobbering access for stores, despite the
fact that we can't rewrite the use-def chains themselves.
Early testing shows that, after this change, for larger testcases, it
will be a significant net positive (memory and time) to remove the
walker caching.
Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299486
If an instruction has a true dependency, it makes sense to use that
register for any undef read operands in the same instruction (we'll have
to wait for that register to become available anyway). This logic
was already implemented. However, the code would then still try to
revisit that instruction and break the dependency (and always fail,
since by definition a true dependency has to be live before the
instruction). Avoid revisiting such instructions as a performance
optimization. No functional change.
Differential Revision: https://reviews.llvm.org/D30173
llvm-svn: 299467
This patch optimizes two memory intrinsic operations, memset and memcpy, based
on the profiled size of the operation. The high-level transformation is:
  mem_op(..., size)
==>
  switch (size) {
  case s1:
    mem_op(..., s1);
    goto merge_bb;
  case s2:
    mem_op(..., s2);
    goto merge_bb;
  ...
  default:
    mem_op(..., size);
    goto merge_bb;
  }
  merge_bb:
Differential Revision: http://reviews.llvm.org/D28966
llvm-svn: 299446
This patch is part one of two reviews, one for clang and the other for LLVM.
The patch deletes the back-end intrinsics and adds support for them in the auto upgrade.
Differential Revision: https://reviews.llvm.org/D31393
llvm-svn: 299432
Summary:
Lift the restrictions that prevented the tree walking introduced in the
previous change and add support for patterns like:
(G_ADD (G_MUL (G_SEXT $src1), (G_SEXT $src2)), $src3) -> SMADDWrrr $dst, $src1, $src2, $src3
Also adds support for G_SEXT and G_ZEXT to support these cases.
One particular aspect of this that I should draw attention to is that I've
tried to be overly conservative in determining the safety of matches that
involve non-adjacent instructions and multiple basic blocks. This is intended
to be used as a cheap initial check and we may add a more expensive check in
the future. The current rules are:
* Reject if any instruction may load/store (we'd need to check for intervening
memory operations).
* Reject if any instruction has implicit operands.
* Reject if any instruction has unmodelled side-effects.
See isObviouslySafeToFold().
Reviewers: t.p.northover, javed.absar, qcolombet, aditya_nandakumar, ab, rovka
Reviewed By: ab
Subscribers: igorb, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D30539
llvm-svn: 299430
Otherwise, yamlize in YAMLTraits.h might be wrongly defined.
This makes some AMDGPU tests fail when LLVM_LINK_LLVM_DYLIB is set.
Differential Revision: https://reviews.llvm.org/D30508
llvm-svn: 299415
Summary:
The TypeTableBuilder provides stable storage for type records. We don't
need to copy all of the bytes into a flat vector before adding it to the
TpiStreamBuilder.
This makes addTypeRecord take an ArrayRef<uint8_t> and a hash code to go
with it, which seems like a simplification.
Reviewers: ruiu, zturner, inglorion
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31634
llvm-svn: 299406
Summary:
MASM can produce type streams that are not topologically sorted. It can
even produce type streams with circular references, but those are not
common in practice.
Reviewers: inglorion, ruiu
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31629
llvm-svn: 299403
Summary:
Add a hook for simplification of shufflevector's with the following rules:
- Constant folding - NFC, as it was already being done by the default handler.
- If only one of the operands is constant, constant fold the shuffle if the
mask does not select elements from the variable operand - enough to show the hook is firing and affecting the test cases.
Reviewers: RKSimon, craig.topper, spatel, sanjoy, nlopes, majnemer
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31525
llvm-svn: 299393
Don't emit mapping symbols for sections that contain only data.
Summary:
Don't emit mapping symbols for sections that contain only data.
Reviewers: rengolin, weimingz, kparzysz, t.p.northover, peter.smith
Reviewed By: t.p.northover
Patched by Shankar Easwaran <shankare@codeaurora.org>
Subscribers: alekseyshl, t.p.northover, llvm-commits
Differential Revision: https://reviews.llvm.org/D30724
llvm-svn: 299392
Summary:
Move the aarch64-type-promotion pass within the existing type promotion framework in CGP.
This change also supports forking sexts when a new sext is required for promotion.
Note that this change is based on D27853, and I am submitting it early to provide a better idea of D27853.
Reviewers: jmolloy, mcrosier, javed.absar, qcolombet
Reviewed By: qcolombet
Subscribers: llvm-commits, aemerson, rengolin, mcrosier
Differential Revision: https://reviews.llvm.org/D28680
llvm-svn: 299379
Summary:
This changes the static method TimerGroup::printAllJSONValues from private to
public, to match the static method TimerGroup::printAll. When trying to drive
the reporting machinery by hand, the existing API is _almost_ flexible enough,
but this entrypoint is required to intermix printing timers with other
non-timer output.
The underlying motive here is a Swift change to consolidate the collection of
timers, LLVM statistics and other (non-assert-dependent) counters into JSON
files, which requires a bit of manual intervention in LLVM's stat and timer
output routines. See https://github.com/apple/swift/pull/8477 for details.
Reviewers: MatzeB
Reviewed By: MatzeB
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31566
llvm-svn: 299371
Support for writing this module code was removed in r73220, which was well
before the LLVM 3.0 release, so we do not need to be able to understand it
for backwards compatibility.
Differential Revision: https://reviews.llvm.org/D31563
llvm-svn: 299370
This moves the isMask and isShiftedMask functions to be class methods. They now use the MathExtras.h function for the single-word case, and leading/trailing zeros/ones or countPopulation for the multiword case. The previous implementation made multiple temporary memory allocations to do the bitwise arithmetic operations to match the MathExtras.h implementation.
Differential Revision: https://reviews.llvm.org/D31565
llvm-svn: 299362
This patch is one step to attempt to unify the main APInt interface and the tc functions used by APFloat.
This patch adds a WordType to APInt and uses that in all the tc functions. I've added temporary typedefs to APFloat to alias it to integerPart to keep the patch size down. I'll work on removing that in a future patch.
In future patches I hope to reuse the tc functions to implement some of the main APInt functionality.
I may remove APINT_ from BITS_PER_WORD and WORD_SIZE constants so that we don't have the repetitive APInt::APINT_ externally.
Differential Revision: https://reviews.llvm.org/D31523
llvm-svn: 299341
Summary:
This enables us to cache the clobbering access for stores, despite the
fact that we can't rewrite the use-def chains themselves.
Early testing shows that, after this change, for larger testcases, it will be a significant net positive (memory and time) to remove the walker caching.
Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299322
Summary:
GreatestCommonDivisor currently makes a copy of both its inputs. Then in the loop we do one move and two copies, plus any allocation the urem call does.
This patch changes it to take its inputs by value so that we can move any rvalue inputs instead of copying. Then in the loop we do 3 move assignments and no copies. This way the only possible allocation in the loop is from the urem call.
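A simplified sketch of the signature change and the loop (not the exact
patch contents):

  #include <utility>

  APInt GreatestCommonDivisor(APInt A, APInt B) { // by value: rvalues move in
    while (B != 0) {
      APInt R = A.urem(B); // the only possible allocation in the loop
      A = std::move(B);    // move assignment, no copy
      B = std::move(R);    // move assignment, no copy
    }
    return A;
  }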
Reviewers: dblaikie, RKSimon, hans
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31572
llvm-svn: 299314
This patch refactors the code used in llc such that all the users of the
addPassesToEmitFile API have access to a homogeneous way of handling
start/stop-after/before options right out of the box.
Previously each user would have needed to duplicate this logic and set
up its own options.
NFC
llvm-svn: 299282
This removes a parameter from the routine that was responsible for a lot of the issue. It was a bit count that had to be set to the BitWidth of the APInt and would get passed to getLowBitsSet. This guaranteed the call to getLowBitsSet would create an all ones value. This was then compared to (V | (V-1)). So the only shifted masks we detected had to have the MSB set.
The one in-tree user is a transform in InstCombine that never fires due to earlier transforms covering the case better. I've submitted a patch to remove it completely, but for now I've just adapted it to the new interface for isShiftedMask.
llvm-svn: 299273
Currently ComputeNumSignBits returns the minimum number of sign bits for all elements of vector data, when we may only be interested in one/some of the elements.
This patch adds a DemandedElts argument that allows us to specify the elements we actually care about. The original ComputeNumSignBits entry point calls the new implementation with a DemandedElts mask demanding all elements, to match current behaviour. Scalar types set this to 1.
I've only added support for BUILD_VECTOR and EXTRACT_VECTOR_ELT so far, all others will default to demanding all elements but can be updated in due course.
Followup to D25691.
Differential Revision: https://reviews.llvm.org/D31311
llvm-svn: 299219