Modify the ProfileSummary class so that it is not specific to instrumented profiles.
Add a new InstrumentedProfileSummary class that inherits from ProfileSummary.
Differential Revision: http://reviews.llvm.org/D17310
llvm-svn: 261119
The dynamic table is also an array of a fixed structure, so it can be
represented with a DynRegionInfo.
No major functionality change. The extra error checking is covered by
existing tests with a broken dynamic program header.
Idea extracted from r260488. I did the extra cleanups.
llvm-svn: 261107
We used to keep both a section and a pointer to the first symbol.
The oddity of keeping a section for dynamic symbols is because there is
a DT_SYMTAB but no DT_SYMTABSZ, so to print the table we have to find the
size via a section table.
The reason for still keeping a pointer to the first symbol is because we
want to be able to print relocation tables even if the section table is
missing (it is mandatory only for files used in linking).
With this patch we keep just a DynRegionInfo. This then requires
changing a few places that were asking for a Elf_Shdr but actually just
needed the first symbol.
The test change is to delete the program header pointer.
Now that we use the information of both DT_SYMTAB and .dynsym, we don't
depend on the sh_entsize of .dynsym if we see DT_SYMTAB.
Note: It is questionable whether it is worth the effort to report a
broken sh_entsize, given that in files with no section table we have to
assume it is sizeof(Elf_Sym); but that is for another change.
Extracted from r260488.
llvm-svn: 261099
This patch detects vector reductions before instruction selection. Vector
reductions are vectorized reduction operations, and for such operations we have
freedom to reorganize the elements of the result as long as the reduction of them
stays unchanged. This will enable some reduction pattern recognition during
instruction combine such as SAD/dot-product on X86. A flag is added to
SDNodeFlags to mark those vector reduction nodes to be checked during instruction
combine.
To detect those vector reductions, we search def-use chains starting from the
given instruction, and check if all uses fall into two categories:
1. Reduction with another vector.
2. Reduction on all elements.
Case 2 is detected by recognizing the pattern that the loop vectorizer
generates to reduce all elements in the vector outside of the loop, which
consists of several ShuffleVector instructions and one ExtractElement instruction.
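A minimal, illustrative source loop of the kind whose vectorized reduction this detection targets (the function itself is hypothetical, not part of the patch; it corresponds to the dot-product case mentioned above):

int dot_product(const int *a, const int *b, int n) {
  int sum = 0;                  // reduction variable
  for (int i = 0; i < n; ++i)
    sum += a[i] * b[i];         // after vectorization, the per-lane partial sums
                                // are combined by the shuffle/extract epilogue
  return sum;
}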
Differential revision: http://reviews.llvm.org/D15250
llvm-svn: 261070
This reverts commit r261030 and r261036.
(The revision was marked "approved" on phabricator, but some concerns
were raised on the mailing list. Thanks D. Blaikie for notifying me.)
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261055
reference-edge SCCs.
This essentially builds a more normal call graph as a subgraph of the
"reference graph" that was the old model. This allows both to exist and
the different use cases to use the aspect which addresses their needs.
Specifically, the pass manager and other *ordering* constrained logic
can use the reference graph to achieve conservative order of visit,
while analyses reasoning about attributes and other properties derived
from reachability can reason about the direct call graph.
Note that this isn't necessarily complete: it doesn't model edges to
declarations or indirect calls. Those can be found by scanning the
instructions of the function if desirable, and in fact every user
currently does this in order to handle things like calls to intrinsics.
If useful, we could consider caching this information in the call graph
to save the instruction scans, but currently that doesn't seem to be
important.
An important realization for why the representation chosen here works is
that the call graph is a formal subset of the reference graph and thus
both can live within the same data structure. All SCCs of the call graph
are necessarily contained within an SCC of the reference graph, etc.
The design is to build 'RefSCC's to model SCCs of the reference graph,
and then within them more literal SCCs for the call graph.
The formation of actual call edge SCCs is not done lazily, unlike
reference edge 'RefSCC's. Instead, once a reference SCC is formed, it
directly builds the call SCCs within it and stores them in a post-order
sequence. This is used to provide a consistent platform for mutation and
update of the graph. The post-order also allows for very efficient
updates in common cases by bounding the number of nodes (and thus edges)
considered.
There is considerable common code that I'm still looking for the best
way to factor out between the various DFS implementations here. So far,
my attempts have made the code harder to read and understand despite
reducing the duplication, which seems a poor tradeoff. I've not given up
on figuring out the right way to do this, but I wanted to wait until
I at least had the system working and tested to continue attempting to
factor it differently.
This also requires introducing several new algorithms in order to handle
all of the incremental update scenarios for the more complex structure
involving two edge colorings. I've tried to comment the algorithms
sufficiently to make it clear how this is expected to work, but they may
still need more extensive documentation.
I know that there are some changes which are not strictly
coupled here. The process of developing this started out with a very
focused set of changes for the new structure of the graph and
algorithms, but subsequent changes to bring the APIs and code into
consistent and understandable patterns also ended up touching on other
aspects. There was no good way to separate these out without causing
*massive* merge conflicts. Ultimately, to a large degree this is
a rewrite of most of the core algorithms in the LCG class and so I don't
think it really matters much.
Many thanks to the careful review by Sanjoy Das!
Differential Revision: http://reviews.llvm.org/D16802
llvm-svn: 261040
Summary: When loading IR with debug info, this improves MDString::get() from 19ms to 10ms.
Reviewers: dexonsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16597
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261030
Summary:
Contrary to Full LTO, ThinLTO can afford to shift compile time
from the frontend to the linker: both phases are parallel (even if
it is not totally "free": projects like clang reuse the products
of the "compile phase" for multiple links; think of
libLLVMSupport being reused for opt, llc, etc.).
This pipeline is based on the proposal in D13443 for full LTO. We
didn't move forward on this proposal because the LTO link was far too
long after that. We believe that we can afford it with ThinLTO.
The ThinLTO pipeline integrates in the regular O2/O3 flow:
- The compile phase runs the inliner with a somewhat lighter
function simplification. (TODO: tune the inliner thresholds here.)
This is intended to simplify the IR and get rid of obvious things
like linkonce_odr functions that will be inlined.
- The link phase will run the pipeline from the start, extended with
some specific passes that leverage the augmented knowledge we have
during LTO. Especially after the inliner is done, a sequence of
globalDCE/globalOpt is performed, followed by another run of the
"function simplification" passes. It is not clear if this part
of the pipeline will stay as is, as the split model of ThinLTO
does not allow the same benefit as FullLTO without added tricks.
The measurements on the public test suite as well as on our internal
suite show an overall net improvement. The binary size for the clang
executable is reduced by 5%. We're still tuning it with the bringup
of ThinLTO and it will evolve, but this should provide a good starting
point.
Reviewers: tejohnson
Differential Revision: http://reviews.llvm.org/D17115
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261029
It is intended to contain the passes run over a function after the
inliner is done with a function and before it moves to its callers.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261028
The usual way to get a 32-bit relocation is to use a constant extender, which doubles the size of the instruction from 4 bytes to 8 bytes.
Another way is to put a .word32 and mix code and data within a function. The disadvantage is that it's not a valid instruction encoding, and jumping over it causes prefetch stalls inside the hardware.
This relocation packs a 23-bit value into an "r0 = add(rX, #a)" instruction by overwriting the source register bits. Since r0 is the return value register, if this instruction is placed after a function call which returns void, r0 will be filled with an undefined value, the prefetch won't be confused, and the callee can access the constant value by way of the link register.
llvm-svn: 261006
Original message:
Get rid of the ifdefs in TargetLowering.
Introduce a new API used only by GlobalISel: CallLowering.
This API will contain target hooks dedicated to call lowering.
llvm-svn: 260998
Original messages:
Revert "[readobj] Handle ELF files with no section table or with no program headers."
Revert "[readobj] Dump DT_JMPREL relocations when outputting dynamic relocations."
r260489 depends on r260488 and among other issues r260488 deleted error
handling code.
llvm-svn: 260962
Summary:
Extend findExistingExpansion to use existing values in ExprValueMap.
This patch gives 0.3~0.5% performance improvements on
benchmarks (test-suite, spec2000, spec2006, a commercial benchmark).
Reviewers: mzolotukhin, sanjoy, zzheng
Differential Revision: http://reviews.llvm.org/D15559
llvm-svn: 260938
r260925 introduced a version of the *trim methods which is preferable
when trimming a single kind of character. Update all users in llvm.
llvm-svn: 260926
Add support for trimming a single kind of character from a StringRef.
This makes the common case of trimming null bytes much neater. It's also
probably a bit speedier too, since it avoids creating a std::bitset in
find_{first,last}_not_of.
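A small usage sketch of the new overload (the wrapper function below is illustrative, not part of the patch):

#include "llvm/ADT/StringRef.h"

llvm::StringRef stripNuls(llvm::StringRef Buffer) {
  // The single-character overload; previously this meant spelling the null
  // byte as a one-character StringRef and going through find_*_not_of.
  return Buffer.trim('\0');
}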
llvm-svn: 260925
Summary: The name is confusing as it matches another method on the module.
Reviewers: joker.eph, Wallbraker, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17283
llvm-svn: 260920
This patch avoids the initial memset at the cost of making iterators
slightly more complex. This should be beneficial as most SmallPtrSets
hold no or only a few elements, while iterating over them is less
common.
Differential Revision: http://reviews.llvm.org/D16672
llvm-svn: 260913
This ensures that all of the various pieces are working. The next patch
will wire up commandline-driven alias analysis chain building and allow
BasicAA to work with the AAManager.
llvm-svn: 260838
into the new pass manager and fix the latent bugs there.
This lets everything live together nicely, but it isn't really useful
yet. I never finished wiring the AA layer up for the new pass manager,
and so subsequent patches will change this to do that wiring and get AA
stuff more fully integrated into the new pass manager. Turns out this is
necessary even to get functionattrs ported over. =]
llvm-svn: 260836
r180893 added an indirect include of llvm/Config/Targets.def to
llvm/Support/CodeGen.h, which in turn is included by things like
llvm/IR/Module.h. After a full build of LLVM and Clang, ninja had to
rebuild 1274 files after reconfiguring.
This commit strips CodeGen.h back down to just a pile of enums and moves
the expensive includes over to CodeGenCWrappers.h (which is only
included in two places). This gets ninja down to 88 files if you
reconfigure with, e.g., -DLLVM_TARGETS_TO_BUILD=X86.
llvm-svn: 260835
Summary:
Export the CloneDebugInfoMetadata utility, which clones all debug info
associated with a function into the first module. Also use this function
in CloneModule on each function we clone (the CloneFunction entrypoint
already does this).
Without this, cloning a module will lead to DI quality regressions,
especially since r252219 reversed the Function <-> DISubprogram edge
(before we could get lucky and have this edge preserved if the
DISubprogram itself was, e.g. due to location metadata).
This was verified to fix missing debug information in julia, and
a unit test verifying the new behavior is included.
Patch by Yichao Yu! Thanks!
Reviewers: loladiro, pcc
Differential Revision: http://reviews.llvm.org/D17165
llvm-svn: 260791
As support expands to more runtimes, we'll need to
distinguish between more than just HSA and unknown.
This also lets us stop using unknown everywhere.
llvm-svn: 260790
Add another interface to the function annotateValueSite() which directly uses the
ValueData array.
Differential Revision: http://reviews.llvm.org/D17108
llvm-svn: 260741
MSan adds a constructor to each translation unit that calls
__msan_init, and does nothing else. The idea is to run __msan_init
before any instrumented code. This results in multiple constructors
and multiple .init_array entries in the final binary, one per
translation unit. This is absolutely unnecessary; one would be
enough.
This change moves the constructors to a comdat group in order to drop
the extra ones.
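A hedged sketch, in IR-building terms, of the comdat idea described above (the helper name and ctor priority are illustrative; the actual MSan instrumentation code may differ):

#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"

static void registerMsanCtor(llvm::Module &M, llvm::Function *MsanCtor) {
  // Register the constructor in llvm.global_ctors as before...
  llvm::appendToGlobalCtors(M, MsanCtor, /*Priority=*/0);
  // ...but place it in a comdat keyed on its name, so the linker keeps only
  // one copy across all translation units.
  MsanCtor->setComdat(M.getOrInsertComdat(MsanCtor->getName()));
}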
llvm-svn: 260632
This reverts commit r260458.
It was backported on an internal branch and broke the stage2 build. Since
this can lead to weird random crashes, I'm reverting upstream as well
while investigating.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260605
Summary:
Contrary to Full LTO, ThinLTO can afford to shift compile time
from the frontend to the linker: both phases are parallel.
This pipeline is based on the proposal in D13443 for full LTO. We
didn't move forward on this proposal because the link was far too long
after that.
This patch refactors the "function simplification" passes that are part
of the inliner loop into a helper function (this part is NFC and can be
committed separately to simplify the diff). The ThinLTO pipeline
integrates in the regular O2/O3 flow:
- The compile phase runs the inliner with a somewhat lighter
function simplification. (TODO: tune the inliner thresholds here.)
This is intended to simplify the IR and get rid of obvious things
like linkonce_odr functions that will be inlined.
- The link phase will run the pipeline from the start, extended with
some specific passes that leverage the augmented knowledge we have
during LTO. Especially after the inliner is done, a sequence of
globalDCE/globalOpt is performed, followed by another run of the
"function simplification" passes.
The measurements on the public test suite as well as on our internal
suite show an overall net improvement. The binary size for the clang
executable is reduced by 5%. We're still tuning it with the bringup
of ThinLTO but this should provide a good starting point.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits, dexonsmith
Differential Revision: http://reviews.llvm.org/D17115
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260604
It is intended to contain the passes run over a function after the
inliner is done with a function and before it moves to its callers.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260603
Summary: This required adding a binding to Instruction::removeFromParent so that an instruction can be forward declared and then moved to the right place.
Reviewers: bogner, chandlerc, echristo, dblaikie, joker.eph, Wallbraker
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17057
llvm-svn: 260597
Rather than storing type units in a vector and emitting them at the end
of code generation, emit them immediately and destroy them, reclaiming the
memory we were using for their DIEs.
In one benchmark carried out against Chromium's 50 largest (by bitcode
file size) translation units, total peak memory consumption with type units
decreased by median 17%, or by 7% when compared against disabling type units.
Tested using check-{llvm,clang}, the GDB 7.5 test suite (with
'-fdebug-types-section') and by eyeballing llvm-dwarfdump output on those
Chromium translation units with split DWARF both disabled and enabled, and
verifying that the only changes were to addresses and abbreviation ordering.
Differential Revision: http://reviews.llvm.org/D17118
llvm-svn: 260578
We actually need that information only for generic instructions; therefore it
would be nice not to have to pay the extra memory consumption for all
instructions, especially because a typed non-generic instruction does not make
sense.
The question is then, is it possible to have that information in a union or
something?
My initial thought was that we could have a derived class GenericMachineInstr
with additional information, but in practice it makes little to no sense since
generic MachineInstrs are likely turned into non-generic ones by just switching
the opcode. In other words, we don't want to go through the process of creating
a new, non-generic MachineInstr object each time we do this switch. The memory
benefit probably is not worth the extra compile time.
Another option would be to keep the type of the MachineInstr in a side table.
This would induce an extra indirection though.
Anyway, I will file a PR to discuss it and to remind us that we need to come back
to it at some point.
llvm-svn: 260558
This is a part of the refactoring to unify isSafeToLoadUnconditionally and isDereferenceablePointer functions. In the subsequent change isSafeToSpeculativelyExecute will be modified to use isSafeToLoadUnconditionally instead of isDereferenceableAndAlignedPointer.
Reviewed By: reames
Differential Revision: http://reviews.llvm.org/D16227
llvm-svn: 260520
This adds support for finding the dynamic table and dynamic symbol table via
the section table or the program header table. If there's no section table an
attempt is made to figure out the length of the dynamic symbol table.
llvm-svn: 260488
For now, generic virtual registers will not have a register class. We may want
to change that. For instance, if we want to use all the methods from
TargetRegisterInfo with generic virtual registers, we need to either have some
sort of generic register classes that do what we want, or teach those methods
how to deal with a nullptr register class.
Although the latter seems easy enough to do, we may still want to differentiate
generic register classes from nullptr to catch cases where nullptr gets
introduced by a bug of some sort.
Anyway, I will file a PR to keep track of that.
llvm-svn: 260474
There is no reason to pass an array of "char *" to rebuild a set if
the client already has one.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260462
Summary:
Just like the existing find_as() method, the new insert_as() accepts
an extra parameter which is used as a key to find the bucket in the
map.
When creating a Constant, we want to check the map before actually
creating the object. In this case we have to perform two queries to
the map, and this extra parameter can save recomputing the hash value
for the second query.
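A hedged sketch of the intended find_as/insert_as pattern (types simplified; the motivating case above is Constant uniquing, and the exact insert_as signature should be checked against DenseMap.h):

#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/StringRef.h"
#include <string>
#include <utility>

int lookupOrCreate(llvm::DenseMap<llvm::StringRef, int> &Map,
                   const std::string &LookupKey) {
  auto It = Map.find_as(LookupKey);   // first query: may miss
  if (It != Map.end())
    return It->second;
  int NewValue = 42;                  // the expensive "create the object" step
  // Second query: passing LookupKey again lets the map find the bucket without
  // recomputing the hash from the stored key type. Note that the map stores a
  // StringRef into caller-owned storage.
  Map.insert_as(std::make_pair(llvm::StringRef(LookupKey), NewValue), LookupKey);
  return NewValue;
}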
Reviewers: dexonsmith, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16268
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260458
Summary: Measured to be more performant when reading bitcode.
Reviewers: rafael, dexonsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16285
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260455
The Use argument was used to compute the operand number for a fast
path when replacing only one operand. However, we always have to go
through all the operands, so the operand number can be recomputed
locally anyway.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260454
The patch adds a parameter in annotateValueSite() to control the max number
of records written to the value profile meta data for each value site. The
default is kept as the current value of 3.
Differential Revision: http://reviews.llvm.org/D17084
llvm-svn: 260450
This restores commit r260408, along with a fix for a bot failure.
The bot failure was caused by dereferencing a unique_ptr in the same
call instruction parameter list where it was passed via std::move.
Apparently due to luck this was not exposed when I built the compiler
with clang, only with gcc.
llvm-svn: 260442
Summary:
Refactor common value, scope, and label tracking logic out of DwarfDebug
into a common base class called DebugHandlerBase.
Update an old LLVM IR test case to avoid an assertion in LexicalScopes.
Reviewers: dblaikie, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16931
llvm-svn: 260432
Patch by Alexander Riccio
This patch enables `constexpr` on Visual Studio 2015 by adding `||
LLVM_MSC_PREREQ(1900)` to the preprocessor `#if` statement. Since VS2013
doesn't support `constexpr`, that's purposely excluded. The
`LLVM_CONSTEXPR` macro is used in ~25 places.
I also added the MSVC/SAL equivalent of:
- `__attribute__((__warn_unused_result__))` as an
`LLVM_ATTRIBUTE_UNUSED_RESULT` definition
- `__attribute__((returns_nonnull))` as an
`LLVM_ATTRIBUTE_RETURNS_NONNULL` definition
...in case anybody ever decides to run `/analyze` on LLVM (probably
myself, if anybody)
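A hedged sketch of the resulting LLVM_CONSTEXPR definition (abbreviated; the surrounding conditions in llvm/Support/Compiler.h may differ slightly):

#ifndef __has_feature
# define __has_feature(x) 0                 // non-Clang compilers
#endif
#ifdef _MSC_VER
# define LLVM_MSC_PREREQ(version) (_MSC_VER >= (version))
#else
# define LLVM_MSC_PREREQ(version) 0
#endif

// VS2015 (cl 19.00) understands constexpr; VS2013 does not, so it stays excluded.
#if __has_feature(cxx_constexpr) || defined(__GXX_EXPERIMENTAL_CXX0X__) || \
    LLVM_MSC_PREREQ(1900)
# define LLVM_CONSTEXPR constexpr
#else
# define LLVM_CONSTEXPR
#endif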
Reviewers: rnk, aaron.ballman
Differential Revision: http://reviews.llvm.org/D16751
llvm-svn: 260410
Summary:
This patch uses the lower 64-bits of the MD5 hash of a function name as
a GUID in the function index, instead of storing function names. Any
local functions are first given a global name by prepending the original
source file name. This is the same naming scheme and GUID used by PGO in
the indexed profile format.
This change has a couple of benefits. The primary benefit is size
reduction in the combined index file, for example 483.xalancbmk's
combined index file was reduced by around 70%. It should also result in
memory savings for the index file in memory, as the in-memory map is
also indexed by the hash instead of the string.
Second, this enables integration with indirect call promotion, since the
indirect call profile targets are recorded using the same global naming
convention and hash. This will enable the function importer to easily
locate function summaries for indirect call profile targets to enable
their import and subsequent promotion.
The original source file name is recorded in the bitcode in a new
module-level record for use in the ThinLTO backend pipeline.
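A minimal sketch of the naming/GUID scheme described above, assuming the MD5Hash helper from the MD5 header; the wrapper function and the ":" separator are illustrative:

#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Support/MD5.h"
#include <cstdint>
#include <string>

uint64_t getSummaryGUID(llvm::StringRef Name, llvm::StringRef SourceFileName,
                        bool IsLocal) {
  // Local functions are first given a global name by prepending the original
  // source file name; the GUID is the lower 64 bits of the MD5 of that name.
  std::string GlobalName =
      IsLocal ? (SourceFileName + ":" + Name).str() : Name.str();
  return llvm::MD5Hash(GlobalName);
}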
Reviewers: davidxl, joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D17028
llvm-svn: 260408
Currently you can't specify node properties like commutativity on
a PatFrag. If you want to create a PatFrag on a commutative node
with a hasOneUse predicate, this enables you to specify that the
PatFrag is also commutable.
llvm-svn: 260404
Summary:
As discussed on IRC, move the ThinLTOGlobalProcessing code out of
the linker, and into TransformUtils. The name of the class is changed
to FunctionImportGlobalProcessing.
Reviewers: joker.eph, rafael
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D17081
llvm-svn: 260395
This patch uses one bit in profile version to differentiate Clang
instrumentation and IR level instrumentation profiles.
PGOInstrumentation generates a COMDAT variable __llvm_profile_raw_version so
that the compiler runtime can set the right profile kind.
For the Mach-O platform, we generate the variable with linkonce_odr linkage, as
COMDAT is not supported.
PGOInstrumentation now checks this bit to make sure it's an IR level
instrumentation profile.
The patch was submitted as r260164 but reverted due to a Darwin test breakage.
Original Differential Revision: http://reviews.llvm.org/D15540
Differential Revision: http://reviews.llvm.org/D17020
llvm-svn: 260385
Patch by Rong Xu
The problem is exposed by intra-module indirect call promotion where
the prof symtab is created from a module which does not contain all symbols
from the program. With a partial symtab, the result needs to be checked
more strictly.
llvm-svn: 260361
This patch adds a new class, OrcI386, which contains the hooks needed to
support lazy-JITing on i386 (currently only for Pentium 2 or above, as the JIT
re-entry code uses the FXSAVE/FXRSTOR instructions).
Support for i386 is enabled in the LLI lazy JIT and the Orc C API, and
regression and unit tests are enabled for this architecture.
llvm-svn: 260338
Summary:
Tests for this will be added once the AMDGPU backend enables this
option.
Reviewers: arsenm
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16602
llvm-svn: 260336
Summary: As per title. This also includes extra support for insertvalue and extractvalue.
Reviewers: bogner, chandlerc, echristo, dblaikie, joker.eph, Wallbraker
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17055
llvm-svn: 260335
Summary: As per title. This removes the need to rely on internal knowledge of the call and invoke instructions to find the called value and argument count.
Reviewers: bogner, chandlerc, echristo, dblaikie, joker.eph, Wallbraker
Differential Revision: http://reviews.llvm.org/D17054
llvm-svn: 260332
Summary:
Remove the convergent attribute on any functions which provably do not
contain or invoke any convergent functions.
After this change, we'll be able to modify clang to conservatively add
'convergent' to all functions when compiling CUDA.
Reviewers: jingyue, joker.eph
Subscribers: llvm-commits, tra, jhen, hfinkel, resistor, chandlerc, arsenm
Differential Revision: http://reviews.llvm.org/D17013
llvm-svn: 260319
This pass implements whole program optimization of virtual calls in cases
where we know (via bitset information) that the list of callees is fixed. This
includes the following:
- Single implementation devirtualization: if a virtual call has a single
possible callee, replace all calls with a direct call to that callee (see
the source-level sketch after this list).
- Virtual constant propagation: if the virtual function's return type is an
integer <=64 bits and all possible callees are readnone, for each class and
each list of constant arguments: evaluate the function, store the return
value alongside the virtual table, and rewrite each virtual call as a load
from the virtual table.
- Uniform return value optimization: if the conditions for virtual constant
propagation hold and each function returns the same constant value, replace
each virtual call with that constant.
- Unique return value optimization for i1 return values: if the conditions
for virtual constant propagation hold and a single vtable's function
returns 0, or a single vtable's function returns 1, replace each virtual
call with a comparison of the vptr against that vtable's address.
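A source-level sketch of the first case (single implementation devirtualization), assuming whole-program (bitset) knowledge that OnlyImpl is the only override of f; the class names are illustrative:

struct Base { virtual int f() = 0; };
struct OnlyImpl : Base { int f() override { return 42; } };

int call(Base *B) {
  return B->f();   // with that knowledge, this may become a direct call to OnlyImpl::f
}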
Differential Revision: http://reviews.llvm.org/D16795
llvm-svn: 260312
Summary: As per title. Also add a convenience method to get the name of a basic block from the C API.
Reviewers: bogner, chandlerc, echristo, dblaikie, joker.eph, Wallbraker
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16912
llvm-svn: 260309
I reinvented this functionality in http://reviews.llvm.org/D16828 because it was
hidden away as a static function. The changes in x86 are not based on a complete
audit. I suspect there are other possible uses there, and there are almost certainly
more potential users in other targets.
llvm-svn: 260295
On Windows, the DLL containing the registry will get its own global head
and tail variables, so the entries registered in the DLL will be
invisible to the consumer.
In order to solve this, we need to export a getter function from the
plugin DLL per registry and copy over the data inside it. This patch
adds support for this. This will be used to support clang plugins on
Windows.
llvm-svn: 260261
Summary:
Move the function renaming logic into the Function class, and the
MD5Hash routine into the MD5 header.
This will enable these routines to be shared with ThinLTO, which
will be changed to store the MD5 hash instead of full function name
in the combined index for significant size reductions. And using the same
function naming for locals in the function index facilitates future
integration with indirect call value profiles.
Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17006
llvm-svn: 260197
compiler-specific issues. Instead, repeat an 'operator delete' definition in
each derived class that is actually deleted, and give up on the static type
safety of an error when sized delete is accidentally used on a type derived
from TrailingObjects.
llvm-svn: 260190
Summary:
Passes that call `getAnalysisIfAvailable<T>` also need to call
`addUsedIfAvailable<T>` in `getAnalysisUsage` to indicate to the
legacy pass manager that it uses `T`. This contract was being
violated by passes that used `createLegacyPMAAResults`. This change
fixes this by exposing a helper in AliasAnalysis.h,
`addUsedAAAnalyses`, that is complementary to createLegacyPMAAResults
and does the right thing when called from `getAnalysisUsage`.
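A hedged sketch of the intended usage in a legacy pass (the pass itself is illustrative; only the addUsedAAAnalyses call comes from this change):

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Pass.h"
using namespace llvm;

struct ExamplePass : FunctionPass {
  static char ID;
  ExamplePass() : FunctionPass(ID) {}
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    // Declares the AA passes as used-if-available, complementing a later
    // createLegacyPMAAResults call in the pass body.
    addUsedAAAnalyses(AU);
    AU.setPreservesAll();
  }
  bool runOnFunction(Function &) override { return false; }
};
char ExamplePass::ID = 0;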
Reviewers: chandlerc
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D17010
llvm-svn: 260183
This fixes undefined behavior in C++14 due to the size of the object being
deleted being different from sizeof(dynamic type) when it is allocated with
trailing objects.
MSVC seems to have several bugs around using-declarations changing the access
of a member inherited from a base class, so use forwarding functions instead of
using-declarations to make TrailingObjects::operator delete accessible where
desired.
llvm-svn: 260180
Summary:
Unrolling Analyzer is already pretty complicated, and it becomes harder and harder to exercise it with the usual IR tests, as with them we can only check the final decision: whether the loop is unrolled or not. This change factors this framework out of LoopUnrollPass into analyses, which makes it possible to use unit tests.
The change itself is supposed to be NFC, except adding a couple of tests.
I plan to add more tests as I add new functionality and find/fix bugs.
Reviewers: chandlerc, hfinkel, sanjoy
Subscribers: zzheng, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D16623
llvm-svn: 260169
This patch uses one bit in profile version to differentiate Clang
instrumentation and IR level instrumentation profiles.
PGOInstrumentation generates a COMDAT variable __llvm_profile_raw_version so
that the compiler runtime can set the right profile kind.
PGOInstrumentation now checks this bit to make sure it's an IR level
instrumentation profile.
Differential Revision: http://reviews.llvm.org/D15540
llvm-svn: 260146
This matches GCC and MSVC's behaviour, and saves on code size.
We were already not extending i1 return values on x86_64 after r127766. This
takes that patch further by applying it to the x86 target as well, and also to i8
and i16.
The ABI docs have been unclear about the required behaviour here. The new i386
psABI [1] clearly states (Table 2.4, page 14) that i1, i8, and i16 return
values do not need to be extended beyond 8 bits. The x86_64 ABI doc is being
updated to say the same [2].
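An illustrative consequence of the ABI point above (hypothetical functions): the callee no longer promises extended upper bits, so a caller that needs a wider value performs the extension itself.

char f();                 // only the low 8 bits of the return register are defined
int g() { return f(); }   // the sign extension (e.g. movsbl on x86) now happens here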
Differential Revision: http://reviews.llvm.org/D16907
[1]. https://01.org/sites/default/files/file_attach/intel386-psabi-1.0.pdf
[2]. https://groups.google.com/d/msg/x86-64-abi/E8O33onbnGQ/_RFWw_ixDQAJ
llvm-svn: 260133
This reduces sizes of instrumented object files, final binaries,
process images, and raw profile data.
The format of the indexed profile data remains the same.
Differential Revision: http://reviews.llvm.org/D16388
llvm-svn: 260117
sanitizer issue. The PredicatedScalarEvolution's copy constructor
wasn't copying the Generation value, and was leaving it uninitialized.
Original commit message:
[SCEV][LAA] Add no wrap SCEV predicates and use use them to improve strided pointer detection
Summary:
This change adds no wrap SCEV predicates with:
- support for runtime checking
- support for expression rewriting:
sext({x,+,y}) -> {sext(x),+,sext(y)}
zext({x,+,y}) -> {zext(x),+,sext(y)}
Note that we are sign extending the increment of the SCEV, even for
the zext case. This is needed to cover the fairly common case where y would
be a (small) negative integer. In order to do this, this change adds two new
flags: nusw and nssw that are applicable to AddRecExprs and permit the
transformations above.
We also change isStridedPtr in LAA to be able to make use of
these predicates. With this feature we should now always be able to
work around overflow issues in the dependence analysis.
Reviewers: mzolotukhin, sanjoy, anemet
Subscribers: mzolotukhin, sanjoy, llvm-commits, rengolin, jmolloy, hfinkel
Differential Revision: http://reviews.llvm.org/D15412
llvm-svn: 260112
Summary:
This change adds no wrap SCEV predicates with:
- support for runtime checking
- support for expression rewriting:
sext({x,+,y}) -> {sext(x),+,sext(y)}
zext({x,+,y}) -> {zext(x),+,sext(y)}
Note that we are sign extending the increment of the SCEV, even for
the zext case. This is needed to cover the fairly common case where y would
be a (small) negative integer. In order to do this, this change adds two new
flags: nusw and nssw that are applicable to AddRecExprs and permit the
transformations above.
We also change isStridedPtr in LAA to be able to make use of
these predicates. With this feature we should now always be able to
work around overflow issues in the dependence analysis.
Reviewers: mzolotukhin, sanjoy, anemet
Subscribers: mzolotukhin, sanjoy, llvm-commits, rengolin, jmolloy, hfinkel
Differential Revision: http://reviews.llvm.org/D15412
llvm-svn: 260085
The Windows bots have been failing for the last two days, with:
FAILED: C:\PROGRA~2\MICROS~1.0\VC\bin\amd64\cl.exe -c LLVMContextImpl.cpp
D:\buildslave\clang-x64-ninja-win7\llvm\lib\IR\LLVMContextImpl.cpp(137) :
error C2248: 'llvm::TrailingObjects<llvm::AttributeSetImpl,
llvm::IndexAttrPair>::operator delete' :
cannot access private member declared in class 'llvm::AttributeSetImpl'
TrailingObjects.h(298) : see declaration of
'llvm::TrailingObjects<llvm::AttributeSetImpl,
llvm::IndexAttrPair>::operator delete'
AttributeImpl.h(213) : see declaration of 'llvm::AttributeSetImpl'
llvm-svn: 260053
Summary:
This is the attribute purpose-made for e.g. __syncthreads. It appears
that NoDuplicate may not be sufficient to prevent Sink from touching a
call to __syncthreads.
Reviewers: jingyue, hfinkel
Subscribers: llvm-commits, jholewinski, jhen, rnk, tra, majnemer
Differential Revision: http://reviews.llvm.org/D16941
llvm-svn: 260005
Summary:
Adds the linkage type to both the per-module and combined function
summaries, which subsumes the current islocal bit. This will eventually
be used to optimized linkage types based on global summary-based
analysis.
Reviewers: joker.eph
Subscribers: joker.eph, davidxl, llvm-commits
Differential Revision: http://reviews.llvm.org/D16943
llvm-svn: 259993
Summary:
When alias analysis is uncertain about the aliasing between any two accesses,
it will return MayAlias. This uncertainty from alias analysis restricts LICM
from proceeding further. In cases where alias analysis is uncertain we might
use loop versioning as an alternative.
Loop Versioning will create a version of the loop with aggressive aliasing
assumptions in addition to the original with conservative (default) aliasing
assumptions. The version of the loop making aggressive aliasing assumptions
will have all the memory accesses marked as no-alias. These two versions of
loop will be preceded by a memory runtime check. This runtime check consists
of bound checks for all unique memory accessed in loop, and it ensures the
lack of memory aliasing. The result of the runtime check determines which of
the loop versions is executed: If the runtime check detects any memory
aliasing, then the original loop is executed. Otherwise, the version with
aggressive aliasing assumptions is used.
The pass is off by default and can be enabled with command line option
-enable-loop-versioning-licm.
Reviewers: hfinkel, anemet, chatur01, reames
Subscribers: MatzeB, grosser, joker.eph, sanjoy, javed.absar, sbaranga,
llvm-commits
Differential Revision: http://reviews.llvm.org/D9151
llvm-svn: 259986
check wrong when inheriting a member through two levels of private inheritance,
where the middle one is a class template specialization.
llvm-svn: 259943
-fsized-deallocation. Disable sized deallocation for all objects derived from
TrailingObjects, as we expect the storage allocated for these objects to be
larger than the size of their dynamic type.
llvm-svn: 259942
Summary:
This makes it possible to specify some operands as optional to the AsmMatcher.
Setting this field to true will prevent the AsmMatcher from emitting
'too few operands' errors when there are missing optional operands.
Reviewers: olista01, ab
Subscribers: nhaustov, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D15755
llvm-svn: 259913
CodeView, like most other debug formats, represents the live ranges of
variables so that debuggers can print them out.
They use a variety of records to represent how a particular variable
might be available (in a register, in a frame pointer, etc.) along with
a set of ranges where this debug information is relevant.
However, the format only allows us to use ranges which are limited to a
maximum of 0xF000 in size. This means that we need to split our debug
information into chunks of 0xF000.
Because the layout of code is not known until *very* late, we must use a
new fragment to record the information we need until we can know
*exactly* what the range is.
llvm-svn: 259868
Summary computation is not just for instrumented profiling, so I have moved
the ProfileSummary class to ProfileCommon.h (named so as to allow code unrelated
to summaries but common to instrumented and sampled profiling to be placed there).
Differential Revision: http://reviews.llvm.org/D16661
llvm-svn: 259846
This patch implements softening of long double type (ppcf128) on ppc32
architecture and enables operations for this type for soft float.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D15811
llvm-svn: 259791
Current SCEV expansion will expand a SCEV as a sequence of operations
and doesn't utilize values that already exist. This introduces
redundant computation which may not be cleaned up thoroughly by
the optimizations that follow.
This patch introduces an ExprValueMap which is a map from SCEV to the
set of equal values with the same SCEV. When a SCEV is expanded, the
set of values is checked and reused whenever possible before generating
a sequence of operations.
The original commit triggered regressions in Polly tests. The regressions
exposed two problems which have been fixed in the current version.
1. Polly will generate a new function based on the old one. To generate an
instruction for the new function, it builds a SCEV for the old instruction,
applies some transformation to the generated SCEV, then expands the transformed
SCEV and inserts the expanded value into the new function. Because SCEV expansion
may reuse a value cached in ExprValueMap, a value from the old function may be
inserted into the new function, which is wrong.
In SCEVExpander::expand, there is logic to check that the cached value to
be used dominates the insertion point. However, for the above
case, the check always passes. That is because the insertion point is
in a new function, which is unreachable from the old function. However,
for an unreachable node, DominatorTreeBase::dominates thinks it is
dominated by any other node.
The fix is to simply add a check that the cached value to be used in
expansion must be in the same function as the insertion point instruction.
2. When the SCEV is of scConstant type, expanding it directly is cheaper than
reusing a cached non-constant value. Although the cached value set in ExprValueMap
contains a Constant value, it is not easy to find it -- the cached
Value set is not sorted according to potential cost. The existing reuse logic
in SCEVExpander::expand simply chooses the first legal element from the cached
value set.
The fix is that when the SCEV is of scConstant type, we don't try the reuse
logic; we simply expand it directly.
Differential Revision: http://reviews.llvm.org/D12090
llvm-svn: 259736
Unfortunately, ProgramInfo::ProcessId is signed on Unix and unsigned on
Windows, breaking the standard fix of using '0U' in the gtest
expectation.
llvm-svn: 259704
Bail out if we have a PHI on an EHPad that gets a value from a
CatchSwitchInst. Because the CatchSwitchInst cannot be split, there is
no good place to stick any instructions.
This fixes PR26373.
llvm-svn: 259702
The IR/Value class had a linkage issue present when LLVM was built
as a library, and the LLVM library build time had different settings
for NDEBUG than the client of the LLVM library. Clients could get
into a state where the LLVM lib expected
Value::assertModuleIsMaterialized() to be inline-defined in the header
but clients expected that method to be defined in the LLVM library.
See this llvm-commits thread for more details:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160201/329667.html
llvm-svn: 259695
Summary:
This patch adds a reserve call to an expensive function
(`llvm::LoadIntrinsics`), and may fix a few other low hanging
performance fruit (I've put them in comments for now, so we can
discuss).
**Motivation:**
As I'm sure other developers do, when I build LLVM, I build the entire
project with the same config (`Debug`, `MinSizeRel`, `Release`, or
`RelWithDebInfo`). However, the `Debug` config also builds llvm-tblgen
in `Debug` mode. Later build steps that run llvm-tblgen then can
actually be the slowest steps in the entire build. Nobody likes slow
builds.
Reviewers: rnk, dblaikie
Differential Revision: http://reviews.llvm.org/D16832
Patch by Alexander G. Riccio
llvm-svn: 259683
Recommitted, after fixing some test cases.
Updated test cases:
test/CodeGen/AArch64/arm64-misched-memdep-bug.ll
test/CodeGen/AArch64/tailcall_misched_graph.ll
Temporarily disabled test cases:
test/CodeGen/AMDGPU/split-vector-memoperand-offsets.ll
test/CodeGen/PowerPC/ppc64-fastcc.ll (partially updated)
test/CodeGen/PowerPC/vsx-fma-m.ll
test/CodeGen/PowerPC/vsx-fma-sp.ll
http://reviews.llvm.org/D8705
Reviewers: Hal Finkel, Andy Trick.
llvm-svn: 259673
Current SCEV expansion will expand a SCEV as a sequence of operations
and doesn't utilize values that already exist. This introduces
redundant computation which may not be cleaned up thoroughly by
the optimizations that follow.
This patch introduces an ExprValueMap which is a map from SCEV to the
set of equal values with the same SCEV. When a SCEV is expanded, the
set of values is checked and reused whenever possible before generating
a sequence of operations.
Differential Revision: http://reviews.llvm.org/D12090
llvm-svn: 259662
Summary:
This adds a new attribute which targets can set in TableGen that causes a function to be generated that matches register alternative names. This is very similar to `ShouldEmitMatchRegisterName`, except it works on alt names.
This patch is currently used by the out-of-tree part of the AVR backend. It reduces code duplication greatly, and has the effect that you do not need to hardcode alt-name-to-register mappings in C++.
It will not work on targets which have registers which share the same aliases.
Reviewers: stoklund, arsenm, dsanders, hfinkel, vkalintiris
Subscribers: hfinkel, dylanmckay, llvm-commits
Differential Revision: http://reviews.llvm.org/D16312
llvm-svn: 259636
With this patch, the profile summary data will be available in the indexed
profile data file so that the profile reader and compiler optimizer can start
to make use of it.
Differential Revision: http://reviews.llvm.org/D16258
llvm-svn: 259626
Summary:
LoopVersioning is a transform utility that transform passes can use to
disambiguate may-aliasing accesses at run time. I'd like to also expose it as a
pass to allow it to be unit-tested.
I am planning to add support for non-aliasing annotation in
LoopVersioning and I'd like to be able to write tests directly using
this pass.
(After that feature is done, the pass could also be used to look for
optimization opportunities that are hidden behind incomplete alias
information at compile time.)
The pass drives LoopVersioning in its default way which is to fully
disambiguate may-aliasing accesses no matter how many checks are
required.
Reviewers: hfinkel, ashutosh.nema, sbaranga
Subscribers: zzheng, mssimpso, llvm-commits, sanjoy
Differential Revision: http://reviews.llvm.org/D16612
llvm-svn: 259610
Please see include/llvm/Transforms/Utils/MemorySSA.h for a description
of MemorySSA, and what it does.
Differential Revision: http://reviews.llvm.org/D7864
llvm-svn: 259595
This didn't affect X86_64, which is the only client of this code at the moment,
as stubs and pointers are both 8-bytes there. It will affect other platforms
though.
llvm-svn: 259575
CodeView requires us to accurately describe the extent of the inlined
code. We did this by grabbing the next debug location in source order
and using *that* to denote where we stopped inlining. However, this is
not sufficient or correct in instances where there is no next debug
location or the next debug location belongs to the start of another
function.
To get this correct, use the end symbol of the function to denote the
last possible place the inlining could have stopped at.
llvm-svn: 259548
This directive emits the binary annotations that describe line and code
deltas in inlined call sites. Single-stepping through inlined frames in
windbg now works.
llvm-svn: 259535
Re-commit of r258951 after fixing layering violation.
The BPF and WebAssembly backends had identical code for emitting errors
for unsupported features, and AMDGPU had very similar code. This merges
them all into one DiagnosticInfo subclass, that can be used by any
backend.
There should be minimal functional changes here, but some AMDGPU tests
have been updated for the new format of errors (it used a slightly
different format to BPF and WebAssembly). The AMDGPU error messages will
now benefit from having precise source locations when debug info is
available.
llvm-svn: 259498
differentiate between indirect references to functions and direct calls.
This doesn't do a whole lot yet other than change the print out produced
by the analysis, but it lays the groundwork for a very major change I'm
working on next: teaching the call graph to actually be a call graph,
modeling *both* the indirect reference graph and the call graph
simultaneously. More details on that in the next patch though.
The rest of this is essentially a bunch of over-engineering that won't
be interesting until the next patch. But this also isolates essentially
all of the churn necessary to introduce the edge abstraction from the
very important behavior change necessary in order to separately model
the two graphs. So it should make review of the subsequent patch a bit
easier at the cost of making this patch seem poorly motivated. ;]
Differential Revision: http://reviews.llvm.org/D16038
llvm-svn: 259463
These sets do linear searching in small mode; it is not a good idea to
use huge numbers as the small value here, so save people from themselves by
adding a static_assert.
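A minimal sketch of the guard (the container name and the exact threshold are illustrative):

template <typename T, unsigned SmallSize>
class ExampleSmallSet {
  static_assert(SmallSize <= 32, "SmallSize should be small");
  // small-mode storage and the linear-search lookup are elided here
};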
Differential Revision: http://reviews.llvm.org/D16706
llvm-svn: 259419
Changed emitting the offset of a macinfo entry into the compile unit DIE to use the "addSectionLabel" method rather than explicitly calculating the size/offset of the macro entry.
Differential Revision: http://reviews.llvm.org/D16292
llvm-svn: 259358
Patch adds a DWARF language vendor extension for RenderScript.
We are already using this identifier in LLDB with a hard-coded value, so it's preferable to use an LLVM-generated enum instead.
The language is intended to be added to the next version of the standard.
See http://www.dwarfstd.org/ShowIssue.php?issue=150331.1
Reviewers: dexonsmith, echristo
Subscribers: probinson, domipheus, srhines, llvm-commits
Differential Revision: http://reviews.llvm.org/D16409
llvm-svn: 259348
Add an option to llvm-profdata merge for writing out sparse indexed
profiles. These profiles omit InstrProfRecords for functions which are
never executed.
Differential Revision: http://reviews.llvm.org/D16727
llvm-svn: 259258
Loop transformations can sometimes fail because the loop, while in
valid rotated LCSSA form, is not in a canonical CFG form. This is
an extremely simple pass that just merges obviously redundant
blocks, which can be used to fix some known failure cases. In the
future, it may be enhanced with more cases (and have code shared with
SimplifyCFG).
This allows us to run LoopSimplifyCFG -> LoopRotate -> LoopUnroll,
so that SimplifyCFG cleans up the loop before Rotate tries to run.
Not currently used in the pass manager, since this pass doesn't do
anything unless you can hook it up in an LPM with other loop passes.
It'll be added once Chandler cleans up things to allow this.
Tested in a custom pipeline out of tree to confirm it works in
practice (in addition to the included trivial test).
llvm-svn: 259256
The majority of attribute queries check for the existence of an enum
attribute in the FunctionIndex slot. We only have 48 of those and can
therefore summarize them in a uint64_t bitset, which measurably improves
compile time.
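A minimal sketch of the caching idea (member and method names are illustrative, not the actual Function implementation):

#include <cstdint>

struct FnAttrCache {
  uint64_t AvailableFunctionAttrs = 0;        // one bit per enum attribute kind

  void markAvailable(unsigned AttrKind) {     // assumes AttrKind < 64
    AvailableFunctionAttrs |= 1ULL << AttrKind;
  }
  bool hasFnAttribute(unsigned AttrKind) const {
    return (AvailableFunctionAttrs & (1ULL << AttrKind)) != 0;
  }
};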
Differential Revision: http://reviews.llvm.org/D16618
llvm-svn: 259252
This support is _very_ rudimentary, just enough to get some basic data
into the CodeView debug section.
Left to do is:
- Use the combined opcodes to save space.
- Do something about code offsets.
llvm-svn: 259230
Summary:
There are three parts to inlined call frames:
1. The inlinee line subsection
2. The inline site symbol record
3. The function ids referenced by both
This change starts by emitting function ids (3) for all subprograms and
emitting the base inline site symbol record (2). The actual line numbers
in (2) use an encoded format that will come next, along with the inlinee
line subsection.
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16333
llvm-svn: 259217
buildSchedGraph() was in need of reworking, as the AA features had been
added on top of earlier code. It was very difficult to understand, and buggy.
Cases had been found where scheduling dependencies were actually
missed (see r228686).
AliasChain, RejectMemNodes, adjustChainDeps() and iterateChainSucc() have
been removed. There are instead now just the four maps from Value to SUs, which
have been renamed to Stores, Loads, NonAliasStores and NonAliasLoads.
An unknown store used to become the AliasChain, but now becomes a store mapped
to 'unknownValue' (in Stores). What used to be PendingLoads is instead the
list of SUs mapped to 'unknownValue' in Loads.
RejectMemNodes and adjustChainDeps() used to be a safety-net for everything.
The SU maps were sometimes cleared and SUs were put in RejectMemNodes, where
adjustChainDeps() would look. Instead of this, a more straightforward approach
is used in maintaining the SU maps without clearing them and simply letting
them grow over time. Instead of the cut-off in the adjustChainDeps() search, a
reduction of the maps will be done if needed (see below).
Each SUnit either becomes the BarrierChain, or is put into one of the maps. For
each SUnit encountered, all the information about previous ones are still
available until a new BarrierChain is set, at which point the maps are cleared.
For huge regions, the algorithm becomes slow; therefore the maps will get
reduced at a threshold (the current default is 1000 nodes), by a fraction (default 1/2).
These values can be tuned by use of CL options in case some test case shows that
they need to be changed (-dag-maps-huge-region and -dag-maps-reduction-size).
There has not been any considerable change observed in output quality or compile
time. There may now be more DAG edges inserted than before (i.e. if A->B->C,
then A->C is not needed). However, in a comparison run there were fewer total
calls to AA and a somewhat improved compile time, so this does not seem to
be a problem.
http://reviews.llvm.org/D8705
Reviewers: Hal Finkel, Andy Trick.
llvm-svn: 259201
Also removed a few redundant `else`s.
The bug was found by a test I wrote for MemorySSA (in review at
http://reviews.llvm.org/D7864; shiny update coming soon). So, assuming
that lands at some point, this should be covered by that. If anyone
feels this deserves its own explicit test case, please let me know.
I'll write one.
llvm-svn: 259179
This patch enables llvm-bcanalyzer to print the bitcode wrapper header
if the file has one, which is needed to test the changes made in
r258627 (bitcode-wrapper-header-armv7m.ll is the test case for r258627).
Differential Revision: http://reviews.llvm.org/D16642
llvm-svn: 259162
This reverts commit r259117.
The LineInfo constructor is defined in the codeview library and we have
to link against it now. Doing that isn't trivial, so reverting for now.
llvm-svn: 259126
Adds a new family of .cv_* directives to LLVM's variant of GAS syntax:
- .cv_file: Similar to DWARF .file directives
- .cv_loc: Similar to the DWARF .loc directive, but starts with a
function id. CodeView line tables are emitted by function instead of
by compilation unit, so we needed an extra field to communicate this.
Rather than overloading the .loc directive further, we decided it was
better to have our own directive.
- .cv_stringtable: Emits the codeview string table at the current
position. Currently this just contains the filenames as
null-terminated strings.
- .cv_filechecksums: Emits the file checksum table for all files used
with .cv_file so far. There is currently no support for emitting
actual checksums, just filenames.
This moves the line table emission code down into the assembler. This
is in preparation for implementing the inlined call site line table
format. The inline line table format encoding algorithm requires knowing
the absolute code offsets, so it must run after the assembler has laid
out the code.
David Majnemer collaborated on this patch.
llvm-svn: 259117
Re-commit of r258951 after fixing layering violation.
The related LLVM patch adds a backend diagnostic type for reporting
unsupported features; this adds a printer for them to clang.
In the case where debug location information is not available, I've
changed the printer to report the location as the first line of the
function, rather than the closing brace, as the latter does not give the
user any information. This also affects optimisation remarks.
Differential Revision: http://reviews.llvm.org/D16590
llvm-svn: 259035
Move the ptestm{q|d} intrinsics from pattern form (in the td file) to the intrinsics table.
Differential Revision: http://reviews.llvm.org/D16633
llvm-svn: 259029
This patch revamps the RegStackifier pass with a new tree traversal mechanism,
enabling three major new features:
- Stackification of values with multiple uses, using the result value of set_local
- More aggressive stackification of instructions with side effects
- Reordering operands in commutative instructions to enable more stackification.
llvm-svn: 259009
This patch is part of the work to make PPCLoopDataPrefetch
target-independent
(http://thread.gmane.org/gmane.comp.compilers.llvm.devel/92758).
As it was discussed in the above thread, getPrefetchDistance is
currently using instruction count which may change in the future.
llvm-svn: 258995
Various bits we want to use the new ABI actually compile with "-arch armv7k
-miphoneos-version-min=9.0". Not ideal, but also not ridiculous given how
slices work.
llvm-svn: 258975
ObjC ARC Optimizer.
The main implication of this is:
1. Ensuring that we treat it conservatively in terms of optimization.
2. We put the ASM marker on it so that the runtime can recognize
objc_unsafeClaimAutoreleasedReturnValue from releaseRV.
<rdar://problem/21567064>
Patch by Michael Gottesman!
llvm-svn: 258970
The BPF and WebAssembly backends had identical code for emitting errors
for unsupported features, and AMDGPU had very similar code. This merges
them all into one DiagnosticInfo subclass, that can be used by any
backend.
There should be minimal functional changes here, but some AMDGPU tests
have been updated for the new format of errors (it used a slightly
different format to BPF and WebAssembly). The AMDGPU error messages will
now benefit from having precise source locations when debug info is
available.
The implementation of DiagnosticInfoUnsupported::print must be in
lib/CodeGen rather than in the existing file in lib/IR/ to avoid
introducing a dependency from IR to CodeGen.
Differential Revision: http://reviews.llvm.org/D16590
llvm-svn: 258951
Most of the time we only hit the small case, so it is beneficial to pull
it out of the insert_imp() implementation. This improves compile time
at least for non-LTO builds.
Differential Revision: http://reviews.llvm.org/D16619
llvm-svn: 258908
This brings the compile time of Function.cpp from ~40s down to ~4s for
me locally. It also shaves off about 400KB of object file size in a
release+asserts build.
I also realized that the AMDGPU backend does not have any GCC builtin
names to match, so the extra lookup was a no-op. I removed it to silence
a zero-length string table array warning. There should be no functional
change here.
This change really ends the story of PR11951.
llvm-svn: 258897
This is a recommit of r258620, which caused PR26293.
The original message:
Now LIR can turn the following code into memset:
typedef struct foo {
  int a;
  int b;
} foo_t;
void bar(foo_t *f, unsigned n) {
  for (unsigned i = 0; i < n; ++i) {
    f[i].a = 0;
    f[i].b = 0;
  }
}
void test(int *f, unsigned n) {
  for (unsigned i = 0; i < n; i += 2) {
    f[i] = 0;
    f[i+1] = 0;
  }
}
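For reference, roughly what the first loop above becomes once the idiom is recognized (an illustrative source-level equivalent; at the IR level LIR actually emits the llvm.memset intrinsic):
#include <cstring>
struct foo2_t { int a; int b; };  // same layout as the foo_t above
void bar_memset(foo2_t *f, unsigned n) {
  // The whole per-field zeroing loop collapses into one memset of the array.
  std::memset(f, 0, n * sizeof(foo2_t));
}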
llvm-svn: 258777
These two functions are hard to reason about. This commit makes the code
more comprehensible:
- Use four distinct variables (OldIdxIn, OldIdxOut, NewIdxIn, NewIdxOut)
with a fixed value instead of a changing iterator I that points to
different things during the function.
- Remove the early explanation before the function in favor of more
detailed comments inside the function. There should now be more and clearer
comments stating which conditions are tested and which invariants hold at
different points in the functions.
The behaviour of the code was not changed.
I hope that this will make it easier to review the changes in
http://reviews.llvm.org/D9067 which I will adapt next.
Differential Revision: http://reviews.llvm.org/D16379
llvm-svn: 258756
This patch was originally committed as r257885, but was reverted due to Windows
failures. The cause of these failures has been fixed in r258677, hence
re-committing the original patch.
llvm-svn: 258683
VPMADD52LUQ - Packed Multiply of Unsigned 52-bit Integers and Add the Low 52-bit Products to Qword Accumulators
VPMADD52HUQ - Packed Multiply of Unsigned 52-bit Integers and Add the High 52-bit Products to 64-bit Accumulators
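A hedged usage sketch at the C intrinsics level, assuming the usual AVX-512 IFMA intrinsic names (_mm512_madd52lo_epu64 / _mm512_madd52hi_epu64) exposed through immintrin.h and a compiler invoked with AVX512IFMA support enabled:
#include <immintrin.h>
// Acc[i] += low 52 bits of (A[i] * B[i]), per 64-bit lane.
__m512i madd52lo(__m512i Acc, __m512i A, __m512i B) {
  return _mm512_madd52lo_epu64(Acc, A, B);
}
// Acc[i] += high 52 bits of (A[i] * B[i]), per 64-bit lane.
__m512i madd52hi(__m512i Acc, __m512i A, __m512i B) {
  return _mm512_madd52hi_epu64(Acc, A, B);
}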
Differential Revision: http://reviews.llvm.org/D16407
llvm-svn: 258680
SCCP has code identical to changeToUnreachable's behavior; switch it
over to just call changeToUnreachable.
No functionality change intended.
llvm-svn: 258654
InstCombine and SCCP both want to remove dead code in a very particular
way, using identical means to do so. Share the code between the two.
No functionality change is intended.
llvm-svn: 258653
Summary: Helper so we don't have to enumerate nvptx && nvptx64 everywhere.
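A small sketch of the intended usage, assuming the helper added here is along the lines of Triple::isNVPTX() (the header path reflects the tree layout at the time):
#include "llvm/ADT/Triple.h"
// Before: TT.getArch() == Triple::nvptx || TT.getArch() == Triple::nvptx64
static bool targetsNVPTX(const llvm::Triple &TT) {
  return TT.isNVPTX();
}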
Reviewers: echristo
Subscribers: llvm-commits, jhen, tra
Differential Revision: http://reviews.llvm.org/D16494
llvm-svn: 258639
Summary:
Update ObjectTransformLayer::addObjectSet to take the object set by
value rather than reference and pass it to the base layer with move
semantics rather than copy, to match r258185's changes to
ObjectLinkingLayer.
Update the unit test to verify that ObjectTransformLayer's signature stays
in sync with ObjectLinkingLayer's.
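A generic sketch of the signature shape being described (names and template parameters are illustrative, not the exact ORC API): the object set is taken by value and handed to the base layer with a move rather than a copy.
#include <utility>
template <typename BaseLayerT, typename TransformFtor>
class TransformLayerSketch {
  BaseLayerT &Base;
  TransformFtor Transform;
public:
  TransformLayerSketch(BaseLayerT &Base, TransformFtor Transform)
      : Base(Base), Transform(std::move(Transform)) {}
  template <typename ObjSetT, typename MemMgrPtrT, typename ResolverPtrT>
  auto addObjectSet(ObjSetT Objects,               // taken by value now
                    MemMgrPtrT MemMgr, ResolverPtrT Resolver) {
    for (auto &Obj : Objects)
      Obj = Transform(std::move(Obj));             // transform in place
    return Base.addObjectSet(std::move(Objects),   // moved, not copied
                             std::move(MemMgr), std::move(Resolver));
  }
};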
Reviewers: lhames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16414
llvm-svn: 258630
If the intrinsic is overloaded and works on multiple types,
it cannot resolve to a single corresponding builtin and requires
handling in clang. This just causes crashes now.
llvm-svn: 258559
The intrinsic target prefix should match the target name
as it appears in the triple.
This is not yet complete, but gets most of the important ones.
llvm.AMDGPU.* intrinsics used by mesa and libclc are still handled
for compatibility for now.
llvm-svn: 258557
\src\llvm-rw\include\llvm/Support/AlignOf.h(254) :
error C2872: 'detail' : ambiguous symbol
could be 'llvm::detail'
or 'llvm::support::detail'
llvm-svn: 258553
Make the variable a member of the writer trait object, which is now
owned by the writer. Also use a different generator interface to pass
the infoObject from the writer.
llvm-svn: 258544
Using an array instead of ArrayRef would allow type inference, but
(short of using C99) one would still need to write
typedef uint16_t VT[];
LE.write(VT{0x1234, 0x5678});
llvm-svn: 258535
This reapplies r258296 and r258366, and also fixes an existing bug in
SelectionDAG.cpp's isMemSrcFromString, neglecting to account for the
offset in a GlobalAddressSDNode, which is uncovered by those patches.
llvm-svn: 258482
This reverts r258296 and the follow-up r258366. With this change, we
miscompiled the following program on Windows:
#include <string>
#include <iostream>
static const char kData[] = "asdf jkl;";
int main() {
  std::string s(kData + 3, sizeof(kData) - 3);
  std::cout << s << '\n';
}
llvm-svn: 258465
The X86 musttail implementation finds register parameters to forward by
running the calling convention algorithm until a non-register location
is returned. However, assigning a vector memory location has the side
effect of increasing the function's stack alignment. We shouldn't
increase the stack alignment when we are only looking for register
parameters, so this change conditionalizes it.
llvm-svn: 258442
Include the needed header file to fix the buildbot failure due to r258420 ([PGO] PassManagerBuilder change that enables IR-level PGO instrumentation).
llvm-svn: 258423
This patch includes the PassManagerBuilder change that enables IR-level PGO instrumentation. It adds two PassManagerBuilder options: -profile-generate=<profile_filename> and -profile-use=<profile_filename>. The new options are primarily for debugging purposes.
Reviewers: davidxl, silvas
Differential Revision: http://reviews.llvm.org/D15828
llvm-svn: 258420
Summary:
And use it in PPCLoopDataPrefetch.cpp.
@hfinkel, please let me know if your preference would be to preserve the
ppc-loop-prefetch-cache-line option in order to be able to override the
value of TTI::getCacheLineSize for PPC.
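For illustration, the target-independent query looks roughly like this (TargetTransformInfo does expose getCacheLineSize(); the wrapper function itself is just a sketch):
#include "llvm/Analysis/TargetTransformInfo.h"
// Query the cache line size through TTI instead of a PPC-specific
// command-line option; each target reports its own value.
static unsigned cacheLineBytes(const llvm::TargetTransformInfo &TTI) {
  return TTI.getCacheLineSize();
}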
Reviewers: hfinkel
Subscribers: hulx2000, mcrosier, mssimpso, hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D16306
llvm-svn: 258419
Summary:
This is now the same as the behaviour of the GNU assembler. This was done
as it is required in order to build the Linux kernel with the integrated
assembler enabled.
Reviewers: dsanders, vkalintiris
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D13594
llvm-svn: 258400
Summary:
The previous form, taking opcode and type, is moved to an internal
helper and the new form, taking an instruction, is a wrapper around this
helper.
Although this is a slight cleanup on its own, the main motivation is to
refactor the constant folding API to ease migration to opaque pointers.
This will be follow-up work.
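A minimal sketch of the shape of that change (the names below are illustrative, not the actual constant-folding entry points): the opcode/type-based logic becomes a file-local helper, and the public function takes the instruction and derives both from it.
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
namespace {
// Internal helper: keeps the original opcode + destination type interface.
llvm::Constant *foldCastImpl(unsigned Opcode, llvm::Constant *Op,
                             llvm::Type *DestTy) {
  (void)Opcode; (void)Op; (void)DestTy;
  return nullptr; // ... the original folding logic would live here ...
}
} // end anonymous namespace
// Public wrapper: callers pass the instruction itself.
llvm::Constant *foldCast(const llvm::CastInst &I, llvm::Constant *Op) {
  return foldCastImpl(I.getOpcode(), Op, I.getType());
}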
Reviewers: eddyb
Subscribers: dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D16383
llvm-svn: 258391
Summary:
Although this is a slight cleanup on its own, the main motivation is to
refactor the constant folding API to ease migration to opaque pointers.
This will be follow-up work.
Reviewers: eddyb
Subscribers: zzheng, dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D16380
llvm-svn: 258390
Summary:
Although this is a slight cleanup on its own, the main motivation is to
refactor the constant folding API to ease migration to opaque pointers.
This will be follow-up work.
Reviewers: eddyb
Subscribers: dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D16378
llvm-svn: 258389
This patch adds the necessary plumbing to cmake to build the sources related to
GlobalISel.
To build the sources related to GlobalISel, we need to add -DBUILD_GLOBAL_ISEL=ON.
By default, this is OFF, thus GlobalISel sources will not impact people that do
not explicitly opt-in.
Differential Revision: http://reviews.llvm.org/D15983
llvm-svn: 258344
Summary:
This adds a new kind of operand bundle to LLVM denoted by the
`"gc-transition"` tag. Inputs to `"gc-transition"` operand bundle are
lowered into the "transition args" section of `gc.statepoint` by
`RewriteStatepointsForGC`.
This removes the last bit of functionality that was unsupported in the
deopt bundle based code path in `RewriteStatepointsForGC`.
Reviewers: pgavlin, JosephTremoulet, reames
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D16342
llvm-svn: 258338
Because the selection process is split into separate passes, we need generic
opcodes to translate the LLVM IR to target-independent code.
This patch adds an opcode for addition: G_ADD.
Differential Revision: http://reviews.llvm.org/D15472
llvm-svn: 258333
SelectionDAG previously missed opportunities to fold constants into
GlobalAddresses in several areas. For example, given `(add (add GA, c1), y)`, it
would often reassociate to `(add (add GA, y), c1)`, missing the opportunity to
create `(add GA+c, y)`. This isn't often visible on targets such as X86 which
effectively reassociate adds in their complex address-mode folding logic,
however it is currently visible on WebAssembly since it currently has very
simple address mode folding code that doesn't reassociate anything.
This patch fixes this by making SelectionDAG fold offsets into GlobalAddresses
at the same times that it folds constants together, so that it doesn't miss any
opportunities to perform such folding.
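As a source-level illustration of the pattern (an assumed example, not taken from the patch), an access like the one below produces an (add (add GA, constant), index) chain, and the fold lets the constant be absorbed into the global address as GA+offset:
extern int Table[64];
int loadWithOffset(long i) {
  // The address is conceptually (add (add &Table, 8), (shl i, 2));
  // after the fold it becomes (add &Table+8, (shl i, 2)).
  return Table[i + 2];
}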
Differential Revision: http://reviews.llvm.org/D16090
llvm-svn: 258296
Note that this is disabled by default and still requires a patch to
handleMove() which is not upstreamed yet.
If the TrackLaneMasks policy/strategy is enabled the MachineScheduler
will build a schedule graph where definitions of independent
subregisters are no longer serialised.
Implementation comments:
- Without lane mask tracking a sub register def also counts as a use
(except for the first one with the read-undef flag set); with lane
mask tracking enabled this is no longer the case.
- Pressure Diffs were previously maintained per definition of a
vreg with the help of the SSA information contained in the
LiveIntervals. With lanemask tracking enabled we cannot do this
anymore and instead change the pressure diffs for all uses of the vreg
as it becomes live/dead. For this changed style to work correctly we
ignore uses of instructions that define the same register again: They
won't affect register pressure.
- With lanemask tracking we remove all read-undef flags from
sub register defs when building the graph and re-add them later when
all vreg lanes have become dead.
Differential Revision: http://reviews.llvm.org/D14969
llvm-svn: 258259
This renaming is necessary to avoid a subregister aware scheduler
accidentally creating liveness "holes" which are rejected by the
MachineVerifier.
Explanation as found in this patch:
Helper class that can divide MachineOperands of a virtual register into
equivalence classes of connected components.
MachineOperands belong to the same equivalence class when they are part of
the same SubRange segment or adjacent segments (adjacent in control
flow); different subranges affected by the same MachineOperand belong to
the same equivalence class.
Example:
vreg0:sub0 = ...
vreg0:sub1 = ...
vreg0:sub2 = ...
...
xxx = op vreg0:sub1
vreg0:sub1 = ...
store vreg0:sub0_sub1
The example contains 3 different equivalence classes:
- One for the (dead) vreg0:sub2 definition
- One containing the first vreg0:sub1 definition and its use,
but not the second definition!
- The remaining class contains all other operands involving vreg0.
We provide a utility function here to rename disjoint classes to different
virtual registers.
Differential Revision: http://reviews.llvm.org/D16126
llvm-svn: 258257
they're needed.
Prior to this patch objects were loaded (via RuntimeDyld::loadObject) when they
were added to the ObjectLinkingLayer, but were not relocated and finalized until
a symbol address was requested. In the interim, another object could be loaded
and finalized with the same memory manager, causing relocation/finalization of
the first object to fail (as the first finalization call may have marked the
allocated memory for the first object read-only).
By deferring the loadObject call (and subsequent memory allocations) until an
object file is needed we can avoid prematurely finalizing memory.
llvm-svn: 258185
In some cases, the max backedge taken count can be more conservative
than the exact backedge taken count (for instance, because
ScalarEvolution::getRange is not control-flow sensitive whereas
computeExitLimitFromICmp can be). In these cases,
computeExitLimitFromCond (specifically the bit that deals with `and` and
`or` instructions) can create an ExitLimit instance with a
`SCEVCouldNotCompute` max backedge count expression, but a computable
exact backedge count expression. This violates an implicit SCEV
assumption: a computable exact BE count should imply a computable max BE
count.
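A hedged sketch of the invariant being asserted (the struct and names are illustrative, not the exact ExitLimit code):
#include "llvm/Analysis/ScalarEvolution.h"
#include <cassert>
struct ExitLimitSketch {
  const llvm::SCEV *Exact;
  const llvm::SCEV *Max;
  ExitLimitSketch(const llvm::SCEV *E, const llvm::SCEV *M) : Exact(E), Max(M) {
    // A computable exact BE count must come with a computable max BE count.
    assert((llvm::isa<llvm::SCEVCouldNotCompute>(E) ||
            !llvm::isa<llvm::SCEVCouldNotCompute>(M)) &&
           "Computable exact BE count implies computable max BE count");
  }
};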
This change
- Makes the above implicit invariant explicit by adding an assert to
ExitLimit's constructor
- Changes `computeExitLimitFromCond` to be more robust around
conservative max backedge counts
llvm-svn: 258184
According to the build bots, clang is using the Registry class somewhere as well. Will reapply with appropriate clang changes at a later point.
llvm-svn: 258159
The Registry class constructs a linked list of nodes whose storage is inside static variables and nodes are added via static initializers. The trick is that those static initializers are in both the LLVM code base, and some random plugin that might get loaded in at runtime. The existing code tries to use C++ templates and their ODR rules to get a single definition of the registry for each type, but, experimentally, this doesn't quite work as designed. (Well, the entire structure doesn't. It might not actually be an ODR problem.)
Previously, when I tried moving the GCStrategy class (along with its registry) from CodeGen to IR, I ran into a problem where asking the GCStrategyRegistry a question would return inconsistent results depending on whether you asked from CodeGen (where the static initializers still were) or Transforms. My best guess is that this is a result of either a) an order of initialization error, or b) we ended up with two copies of the registry being created. I remember at the time having convinced myself it was probably (b), but I don't have any of my notes around from that investigation any more.
See http://reviews.llvm.org/rL226311 for the original patch in question.
This patch tries to remove the possibility of (b) above. (a) was already fixed in change 258109.
Differential Revision: http://reviews.llvm.org/D16170
llvm-svn: 258157
Our loop construct is not a way to identify cycles in the CFG. This wasn't immediately obvious from the header, so clarify that fact.
The motivation for this was that I just fixed an out-of-tree bug due to a mistaken assumption (on my part) about what a Loop actually is. While it was fresh in my mind, I wanted to document the key point.
llvm-svn: 258154
Summary:
GEPOperator: provide getResultElementType alongside getSourceElementType.
This is made possible by adding a result element type field to GetElementPtrConstantExpr, which GetElementPtrInst already has.
GEP: replace get(Pointer)ElementType uses with get{Source,Result}ElementType.
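A small usage sketch of the accessor pair (the surrounding function is illustrative): the source element type is what the pointer operand points to, the result element type is what the produced pointer points to, and both are now available whether the GEP is an instruction or a constant expression.
#include "llvm/IR/Operator.h"
void inspectGEP(const llvm::User &U) {
  if (auto *GEP = llvm::dyn_cast<llvm::GEPOperator>(&U)) {
    llvm::Type *SrcTy = GEP->getSourceElementType();
    llvm::Type *ResTy = GEP->getResultElementType();
    (void)SrcTy;
    (void)ResTy;
  }
}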
Reviewers: mjacob, dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16275
llvm-svn: 258145
The value size was always 1 or 0, so we don't need to store it.
In a no asserts build this takes the testcase of pr26208 from 11 to 10
seconds.
llvm-svn: 258141
Summary:
This is a companion patch for http://reviews.llvm.org/D16124.
Internalized symbols increase the size of strongly-connected components in
SCC-based module splitting and thus reduce the amount of parallelism. This
patch records the original linkage of non-local symbols prior to
internalization and then restores it just before splitting/CodeGen. This is
also useful for cases where the linker requires symbols to remain external, for
instance, so they can be placed according to linker script rules.
It's currently under its own flag (-restore-globals) but should eventually
share a common flag with D16124.
Reviewers: joker.eph, pcc
Subscribers: slarin, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D16229
llvm-svn: 258100
Although glibc defines it, this is currently of no use for my primary
use-case (dumping DT_* keys correctly). Its semantics are not described
anywhere I can find, so it is better to leave it out for now.
Thanks to Rafael for pointing this out in his post-commit review!
llvm-svn: 258089
Summary:
Currently, llvm::SplitModule first globalizes all local objects, which might not be desirable in some scenarios.
This change adds a new flag to llvm::SplitModule that uses an SCC approach to search for a balanced partition without the need to externalize symbols.
Such a partition might not be possible or fully balanced for a given number of partitions; it is a function of the module's properties (global/local dependencies within the module).
Joint development by Tobias Edler von Koch (tobias@codeaurora.org) and Sergei Larin (slarin@codeaurora.org).
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D16124
llvm-svn: 258083
Summary:
When SimplifySetCC sees a setcc node that compares the result of a
value extension operation with a constant, it tries to simplify the
setcc node by eliminating the extension and shrinking the constant.
If shrinking the inputs to setcc is deemed not desirable by the target
(e.g. the target does not want a setcc comparing i1 values), then it
is still possible to optimize this sequence in some cases.
This patch adds the following combines to SimplifySetCC when shrinking setcc
inputs is not desirable:
(setcc ([sz]ext (setcc x, y, cc)), 0, setne) -> (setcc x, y, cc)
(setcc ([sz]ext (setcc x, y, cc)), 0, seteq) -> (setcc x, y, !cc)
There are no tests for this yet, but once AMDGPU correctly implements
TargetLowering::isTypeDesirableForOp(), this new combine will be
exercised by the existing CodeGen/AMDGPU/setcc-opt.ll test.
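For intuition, an assumed source-level shape that would produce this DAG pattern (illustrative only): the boolean result of a compare is extended and then compared against zero, and the combine folds the whole thing back to the original compare.
bool cmpViaExt(int x, int y) {
  int Ext = (x < y) ? 1 : 0;  // [sz]ext (setcc x, y, setlt)
  return Ext != 0;            // setcc ..., 0, setne  ->  setcc x, y, setlt
}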
Reviewers: resistor, arsenm
Subscribers: jroelofs, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D15034
llvm-svn: 258067
The symtab is logically referenced beyond the call to the create
method. This change makes sure its lifetime matches that of
the reader.
llvm-svn: 258036