Doesn't make sense to restrict this to BumpPtrAllocator. While there,
replace an explicit loop with std::equal. Some standard libraries know
how to compile this down to a ::memcmp call if possible.
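For illustration, the kind of rewrite involved looks roughly like this
(a minimal sketch; the container and names are made up, not the actual
code touched here):

#include <algorithm>
#include <vector>

bool sameContents(const std::vector<char> &LHS, const std::vector<char> &RHS) {
  if (LHS.size() != RHS.size())
    return false;
  // The explicit element-by-element loop becomes a single algorithm call,
  // which libstdc++/libc++ can lower to ::memcmp for trivial element types.
  return std::equal(LHS.begin(), LHS.end(), RHS.begin());
}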
llvm-svn: 206615
Reality is that we're never going to copy one of these. Supporting this
was becoming a nightmare because nothing even causes it to compile most
of the time. Lots of subtle errors built up that wouldn't have been
caught by any "normal" testing.
Also, make the move assignment actually work rather than the bogus swap
implementation that would just infloop if used. As part of that, factor
out the graph pointer updates into a helper to share between move
construction and move assignment.
llvm-svn: 206583
implementation of the SpecificBumpPtrAllocator -- we have to actually
move the subobject. =] Noticed when using this code more directly.
llvm-svn: 206582
LazyCallGraph. This is the start of the whole point of this different
abstraction, but it is just the initial bits. Here is a run-down of
what's going on here. I'm planning to incorporate some (or all) of this
into comments going forward, hopefully with better editing and wording.
=]
The crux of the problem with the traditional way of building SCCs is
that they are ephemeral. The new pass manager however really needs the
ability to associate analysis passes and results of analysis passes with
SCCs in order to expose these analysis passes to the SCC passes. Making
this work is kind-of the whole point of the new pass manager. =]
So, when we're building SCCs for the call graph, we actually want to
build persistent nodes that stick around and can be reasoned about
later. We'd also like the ability to walk the SCC graph in more complex
ways than just the traditional postorder traversal of the current CGSCC
walk. That means that in addition to being persistent, the SCCs need to
be connected into a useful graph structure.
However, we still want the SCCs to be formed lazily where possible.
These constraints are quite hard to satisfy with the SCC iterator. Also,
using that would bypass our ability to actually add data to the nodes of
the call graph to facilitate implementing the Tarjan walk. So I've
re-implemented things in a more direct and embedded way. This
immediately makes it easy to get the persistence and connectivity
correct, and it also allows leveraging the existing nodes to simplify
the algorithm. I've worked somewhat to make this implementation more
closely follow the traditional paper's nomenclature and strategy,
although it is still a bit obtuse because it isn't recursive, using
an explicit stack and a tail call instead, and it is interruptible,
resuming each time we need another SCC.
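For readers unfamiliar with the non-recursive formulation, here is
a minimal sketch of Tarjan's algorithm in that shape (an explicit DFS
stack plus a separate SCC stack) on a plain adjacency-list graph. This
is not the LazyCallGraph code, which additionally suspends and resumes
the walk per SCC; it only illustrates the control structure:

#include <algorithm>
#include <utility>
#include <vector>

typedef std::vector<std::vector<int>> Graph; // adjacency lists

std::vector<std::vector<int>> findSCCs(const Graph &G) {
  int N = (int)G.size(), NextIndex = 0;
  std::vector<int> Index(N, -1), LowLink(N, 0), SCCStack;
  std::vector<bool> OnStack(N, false);
  std::vector<std::vector<int>> SCCs;
  std::vector<std::pair<int, size_t>> DFSStack; // (node, next successor)
  for (int Root = 0; Root < N; ++Root) {
    if (Index[Root] != -1)
      continue;
    Index[Root] = LowLink[Root] = NextIndex++;
    SCCStack.push_back(Root);
    OnStack[Root] = true;
    DFSStack.push_back(std::make_pair(Root, (size_t)0));
    while (!DFSStack.empty()) {
      int V = DFSStack.back().first;
      if (DFSStack.back().second < G[V].size()) {
        int W = G[V][DFSStack.back().second++];
        if (Index[W] == -1) { // unvisited: "recurse" by pushing a frame
          Index[W] = LowLink[W] = NextIndex++;
          SCCStack.push_back(W);
          OnStack[W] = true;
          DFSStack.push_back(std::make_pair(W, (size_t)0));
        } else if (OnStack[W]) {
          LowLink[V] = std::min(LowLink[V], Index[W]);
        }
      } else { // all successors visited: "return" to the parent frame
        DFSStack.pop_back();
        if (!DFSStack.empty()) {
          int P = DFSStack.back().first;
          LowLink[P] = std::min(LowLink[P], LowLink[V]);
        }
        if (LowLink[V] == Index[V]) { // V roots an SCC; pop it off
          SCCs.push_back(std::vector<int>());
          int W;
          do {
            W = SCCStack.back();
            SCCStack.pop_back();
            OnStack[W] = false;
            SCCs.back().push_back(W);
          } while (W != V);
        }
      }
    }
  }
  return SCCs; // emitted in postorder: callees before their callers
}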
The other tricky bit here, and what actually took almost all the time
and trials and errors I spent building this, is exactly *what* graph
structure to build for the SCCs. The naive thing to build is the call
graph in its newly acyclic form. I wrote about 4 versions which
did precisely this. Inevitably, when I experimented with them across
various use cases, they became incredibly awkward. It was all
implementable, but it felt like a completely wrong fit. Square peg, round
hole. There were two overriding aspects that pushed me in a different
direction:
1) We want to discover the SCC graph in a postorder fashion. That means
the root node will be the *last* node we find. Using the call-SCC DAG
as the graph structure of the SCCs results in an orphaned graph until
we discover a root.
2) We will eventually want to walk the SCC graph in parallel, exploring
distinct sub-graphs independently, and synchronizing at merge points.
This again is not helped by the call-SCC DAG structure.
The structure which, quite surprisingly, ended up being completely
natural to use is the *inverse* of the call-SCC DAG. We add the leaf
SCCs to the graph as "roots", and have edges to the caller SCCs. Once
I switched to building this structure, everything just fell into place
elegantly.
Aside from general cleanups (there are FIXMEs and too few comments
overall) that are still needed, the other missing piece of this is
support for iterating across levels of the SCC graph. These will become
useful for implementing #2, but they aren't an immediate priority.
Once SCCs are in good shape, I'll be working on adding mutation support
for incremental updates and adding the pass manager that this analysis
enables.
llvm-svn: 206581
Previously module verification was always enabled, with no way to turn it off.
As of this commit, module verification is on by default in Debug builds, and off
by default in release builds. The default behaviour can be overridden by calling
setVerifyModules(bool) on the JIT instance (this works for both the old JIT, and
MCJIT).
<rdar://problem/16150008>
llvm-svn: 206561
Rewrite the shared implementation of BlockFrequencyInfo and
MachineBlockFrequencyInfo entirely.
The old implementation had a fundamental flaw: precision losses from
nested loops (or very wide branches) compounded past loop exits (and
convergence points).
The @nested_loops testcase at the end of
test/Analysis/BlockFrequencyAnalysis/basic.ll is motivating. This
function has three nested loops, with branch weights in the loop headers
of 1:4000 (exit:continue). The old analysis gives nonsensical results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
---- Block Freqs ----
entry = 1.0
for.cond1.preheader = 1.00103
for.cond4.preheader = 5.5222
for.body6 = 18095.19995
for.inc8 = 4.52264
for.inc11 = 0.00109
for.end13 = 0.0
The new analysis gives correct results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
block-frequency-info: nested_loops
- entry: float = 1.0, int = 8
- for.cond1.preheader: float = 4001.0, int = 32007
- for.cond4.preheader: float = 16008001.0, int = 128064007
- for.body6: float = 64048012001.0, int = 512384096007
- for.inc8: float = 16008001.0, int = 128064007
- for.inc11: float = 4001.0, int = 32007
- for.end13: float = 1.0, int = 8
Most importantly, the frequency leaving each loop matches the frequency
entering it.
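As a cross-check of those numbers: with exit:continue weights of 1:4000,
the probability of leaving a loop on any given header visit is 1/4001,
so each loop's scale (its expected trip count) is 4001. Nesting
multiplies the scales: 4001 at the outer header, 4001^2 = 16008001 at
the middle, and 4001^3 = 64048012001 at the innermost body, which
matches the frequencies reported above exactly.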
The new algorithm leverages BlockMass and PositiveFloat to maintain
precision, separates "probability mass distribution" from "loop
scaling", and uses dithering to eliminate probability mass loss. I have
unit tests for these types out of tree, but it was decided in the review
to make the classes private to BlockFrequencyInfoImpl, and try to shrink
them (or remove them entirely) in follow-up commits.
The new algorithm should generally have a complexity advantage over the
old. The previous algorithm was quadratic in the worst case. The new
algorithm is still worst-case quadratic in the presence of irreducible
control flow, but it's linear without it.
The key difference between the old algorithm and the new is that control
flow within a loop is evaluated separately from control flow outside,
limiting propagation of precision problems and allowing loop scale to be
calculated independently of mass distribution. Loops are visited
bottom-up, their loop scales are calculated, and they are replaced by
pseudo-nodes. Mass is then distributed through the function, which is
now a DAG. Finally, loops are revisited top-down to multiply through
the loop scales and the masses distributed to pseudo nodes.
There are some remaining flaws.
- Irreducible control flow isn't modelled correctly. LoopInfo and
MachineLoopInfo ignore irreducible edges, so this algorithm will
fail to scale accordingly. There's a note in the class
documentation about how to get closer. See also the comments in
test/Analysis/BlockFrequencyInfo/irreducible.ll.
- Loop scale is limited to 4096 per loop (2^12) to avoid exhausting
the 64-bit integer precision used downstream.
- The "bias" calculation proposed on llvmdev is *not* incorporated
here. This will be added in a follow-up commit, once comments from
this review have been handled.
llvm-svn: 206548
Summary:
This prevents the discriminator generation pass from triggering if
the DWARF version being used in the module is prior to 4.
Reviewers: echristo, dblaikie
CC: llvm-commits
Differential Revision: http://reviews.llvm.org/D3413
llvm-svn: 206507
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.
At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).
llvm-svn: 206485
this code ages ago and lost track of it. Seems worth doing though --
this thing can get called from places that would benefit from knowing
that std::distance is O(1). Also add a very fledgling unittest for
Users and make sure various aspects of this seem to work reasonably.
llvm-svn: 206453
graph. This simplifies the custom move constructor operation to one of
walking the graph and updating the 'up' pointers to point to the new
location of the graph. Switch the nodes from a reference to a pointer
for the 'up' edge to facilitate this.
llvm-svn: 206450
This introduces clang's Basic/OnDiskHashTable.h into llvm as
Support/OnDiskHashTable.h. I've taken the opportunity to add doxygen
comments and run the file through clang-format, but other than the
namespace changing from clang:: to llvm:: the API is identical.
llvm-svn: 206438
This is so that EF_MIPS_NAN2008 is set if we are using IEEE 754-2008
NaN encoding (-mnan=2008). This patch also adds support for parsing
'.nan legacy' and '.nan 2008' assembly directives. The handling of
these directives should match GAS' behaviour, i.e., the last directive
in use sets the ELF header bit (EF_MIPS_NAN2008).
Differential Revision: http://reviews.llvm.org/D3346
llvm-svn: 206396
It doesn't work. I'm still cleaning up all the places where I blindly
followed this pattern. There are more to come in this code too.
As a benefit, this lets the default copy and move operations Just Work.
llvm-svn: 206375
because there is another (size_t, size_t) overload of Allocator, and the
only distinguishing factor is that one is a template and the other
isn't. There was only one usage of this and that one was easily
converted to carry the alignment constraint in the type itself.
llvm-svn: 206325
Implement DebugInfoVerifier, which steals verification relying on
DebugInfoFinder from Verifier.
- Adds LegacyDebugInfoVerifierPassPass, a ModulePass which wraps
DebugInfoVerifier. Uses -verify-di command-line flag.
- Change verifyModule() to invoke DebugInfoVerifier as well as
Verifier.
- Add a call to createDebugInfoVerifierPass() wherever there was a
call to createVerifierPass().
This implementation as a module pass should sidestep efficiency issues,
allowing us to turn debug info verification back on.
<rdar://problem/15500563>
llvm-svn: 206300
ARM64 suffered multiple -verify-machineinstr failures (principally over the
xsp/xzr issue) because FastISel was completely ignoring which subset of the
general-purpose registers each instruction required.
More fixes are coming in ARM64 specific FastISel, but this should cover the
generic problems.
llvm-svn: 206283
by removing the MallocSlabAllocator entirely and just using
MallocAllocator directly. This makes all of these allocators expose and
utilize the same core interface.
The only ugly part of this is that it exposes the fact that the JIT
allocator has no real handling of alignment, any more than the malloc
allocator does. =/ It would be nice to fix both of these to support
alignments, and then to leverage that in the BumpPtrAllocator to do less
over-allocation in order to manually align pointers. But that's another
patch for another day. This patch has no functional impact, it just
removes the somewhat meaningless wrapper around MallocAllocator.
llvm-svn: 206267
allocation libraries, may allow more efficient allocation and
deallocation. It at least makes the interface implementable by the JIT
memory manager.
However, this highlights problematic overloading between the void* and
the T* deallocation functions. I'm looking into a better way to do this,
but as it happens, it comes up rarely in the codebase.
llvm-svn: 206265
overloads. This doesn't matter *that* much yet, but it will in
a subsequent patch. I had tested the original pattern, but not my
attempt to pacify MSVC. This at least appears to work. Still fixing the
rest of the fallout in the final patch that uses these overloads, but it
will follow shortly.
llvm-svn: 206259
'sizeof(T)' for T == void and produces a hard error. I cannot fathom why
this is OK. Oh well. Switch to an explicit test for being the
(potentially qualified) void type, which is the only specific case I was
worried about. Hopefully this survives the libstdc++ build bots which
have limited type traits implementations...
llvm-svn: 206256
to types which we can compute the size of. The comparison with zero
isn't actually interesting here, it's mostly about putting sizeof into
a sfinae context.
This is particularly important for Deallocate, as otherwise the void*
overload can quickly become ambiguous.
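A minimal sketch of the trick (hypothetical signatures, not the exact
LLVM overload set):

#include <cstdlib>
#include <type_traits>

// sizeof(T) is evaluated during template argument substitution, so
// T == void (or any incomplete type) quietly removes this overload from
// the candidate set instead of producing a hard error.
template <typename T>
typename std::enable_if<sizeof(T) != 0, void>::type Deallocate(T *Ptr) {
  std::free(Ptr);
}

// With the template constrained away from void, this overload is picked
// unambiguously for Deallocate(static_cast<void *>(P)).
void Deallocate(void *Ptr) { std::free(Ptr); }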
llvm-svn: 206251
MCModule's ctor had to be moved out of line so the definition of
MCFunction was available. (ctor requires the dtor of members (in case
the ctor throws) which required access to the dtor of MCFunction)
llvm-svn: 206244
This patch re-introduces the MCContext member that was removed from
MCDisassembler in r206063, and requires that an MCContext be passed in at
MCDisassembler construction time. (Previously the MCContext member had been
initialized in an ad-hoc fashion after construction). The MCContext member
can be used by MCDisassembler sub-classes to construct constant or
target-specific MCExprs.
This patch updates disassemblers for in-tree targets, and provides the
MCRegisterInfo instance that some disassemblers were using through the
MCContext (previously those backends were constructing their own
MCRegisterInfo instances).
llvm-svn: 206241
along with templated overloads much like we have for Allocate. These
will facilitate switching the Deallocate interface of all the Allocator
classes to accept the size by pre-filling it from the type size where we
can do so. I plan to convert several uses to the template variants in
subsequent patches prior to adding the Size parameter.
No functionality changed, WIP.
llvm-svn: 206230
rather than defining them (differently!) in both allocators. This also
serves as a basis for documenting and even enforcing some of the
LLVM-style "allocator" concept methods which must exist with various
signatures.
I plan on extending and changing the signatures of these to further
simplify our allocator model in subsequent commits, so I wanted to
factor things as best I could first. Notably, I'm working to add the
'Size' to the deallocation method of all allocators. This has several
implications not the least of which are faster deallocation times on
certain allocation libraries (tcmalloc). It also will allow the JIT
allocator to fully model the existing allocation interfaces and allow
sanitizer poisoning of deallocated regions. The list of advantages goes
on. =] But by factoring things first I'll be able to make this easier by
first introducing template helpers for the deallocation path.
llvm-svn: 206225
small formatting inconsistencies with the rest of LLVM and even this
file. I looked at all the changes and they seemed like just better
formatting.
llvm-svn: 206209
declaration. GCC 4.7 appears to get hopelessly confused by declaring
this function within a member function of a class template. Go figure.
llvm-svn: 206152
abstract interface. The only user of this functionality is the JIT
memory manager and it is quite happy to have a custom type here. This
removes a virtual function call and a lot of unnecessary abstraction
from the common case where this is just a *very* thin veneer around
a call to malloc.
Hopefully still no functionality changed here. =]
llvm-svn: 206149
slabs rather than embedding a singly linked list in the slabs
themselves. This has a few advantages:
- Better utilization of the slab's memory by not wasting 16 bytes at the
front.
- Simpler allocation strategy by not having a struct packed at the
front.
- Avoids paging every allocated slab in just to traverse them for
deallocating or dumping stats.
The latter is the really nice part. Folks have complained bitterly from
time to time that tearing down a BumpPtrAllocator, even if it doesn't
run any destructors, pages in all of the memory allocated. Now it won't.
=]
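Roughly the shape of the change (a hedged sketch, not the actual
BumpPtrAllocator code):

#include <cstdlib>
#include <vector>

class AllocatorSketch {
  // Before: each slab started with an intrusive header such as
  //   struct Slab { Slab *Next; size_t Size; }; // data follows
  // and teardown had to walk the Next chain, touching every slab.
  std::vector<void *> Slabs; // after: bookkeeping lives out-of-line

public:
  void deallocateSlabs() {
    // The slab memory itself is never read here, so nothing is paged in.
    for (size_t I = 0, E = Slabs.size(); I != E; ++I)
      std::free(Slabs[I]); // stand-in for the underlying slab allocator
    Slabs.clear();
  }
};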
Also resolves a FIXME with the scaling of the slab sizes. The scaling
now disregards specially sized slabs for allocations larger than the
threshold.
llvm-svn: 206147
Add support for file auxiliary symbol entries in COFF symbol tables. A COFF
symbol table with a FILE entry is followed by sizeof(__FILE__) / 18 auxiliary
symbol records which contain the filename. Read them and form the original
filename that the record contains. Then display the name in the output.
llvm-svn: 206126
Moves redundant template parameters into an implementation detail of
BlockFrequencyInfoImpl.
No functionality change.
<rdar://problem/14292693>
llvm-svn: 206084
This is a shared implementation class for BlockFrequencyInfo and
MachineBlockFrequencyInfo, not for BlockFrequency, a related (but
distinct) class.
No functionality change.
<rdar://problem/14292693>
llvm-svn: 206083
MCDisassembler has an MCSymbolizer member that is meant to take care of
symbolizing during disassembly, but it also has several methods that enable the
disassembler to do symbolization internally (i.e. without an attached symbolizer
object). There is no need for this duplication, but ARM64 had been making use of
it. This patch moves the ARM64 symbolization logic out of ARM64Disassembler and
into an ARM64ExternalSymbolizer class, and removes the duplicated MCSymbolizer
functionality from the MCDisassembler interface. Symbolization will now be
done exclusively through MCSymbolizers.
There should be no impact on disassembly for any platform, but this allows us to
tidy up the MCDisassembler interface and simplify the process of (and invariants
related to) disassembler setup.
llvm-svn: 206063
This code has been moved to a new function in the TargetLowering
class called expandMUL(). The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.
No functional change intended.
llvm-svn: 206036
The patch implements support for both relocation record formats: Elf_Rel
and Elf_Rela. It is possible to define relocation against symbol only.
Relocations against sections will be implemented later. Now yaml2obj
recognizes X86_64, MIPS and Hexagon relocation types.
Example of relocation section specification:
Sections:
- Name: .text
Type: SHT_PROGBITS
Content: "0000000000000000"
AddressAlign: 16
Flags: [SHF_ALLOC]
- Name: .rel.text
Type: SHT_REL
Info: .text
AddressAlign: 4
Relocations:
- Offset: 0x1
Symbol: glob1
Type: R_MIPS_32
- Offset: 0x2
Symbol: glob2
Type: R_MIPS_CALL16
The patch reviewed by Michael Spencer, Sean Silva, Shankar Easwaran.
llvm-svn: 206017
Also updated as many loops as I could find using df_begin/idf_begin -
strangely I found no uses of idf_begin. Is that just used out of tree?
Also a few places couldn't use df_begin because either they used the
member functions of the depth first iterators or had specific ordering
constraints (I added a comment in the latter case).
Based on a patch by Jim Grosbach. (Jim - you just had iterator_range<T>
where you needed iterator_range<idf_iterator<T>>)
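In the new style such loops look roughly like this (a sketch using the
range adaptors in llvm/ADT/DepthFirstIterator.h; the visit body is
a placeholder):

#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/IR/CFG.h" // GraphTraits for Function
#include "llvm/IR/Function.h"

void visitInDepthFirstOrder(llvm::Function &F) {
  // Was: for (auto I = df_begin(&F), E = df_end(&F); I != E; ++I) ...
  for (llvm::BasicBlock *BB : llvm::depth_first(&F)) {
    (void)BB; // visit BB here
  }
  // inverse_depth_first(Start) is the analogous range for idf_begin.
}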
llvm-svn: 206016
This seems to have been a cargo-culted habit from the very first such
cache which didn't have any specific justification (but might've been a
layering constraint at the time).
llvm-svn: 206003
This removes the -segmented-stacks command line flag in favor of a
per-function "split-stack" attribute.
Patch by Luqman Aden and Alex Crichton!
llvm-svn: 205997
Move the iterators into the range the same way the range's ctor moves
them into the members.
Also remove some redundant top level parens in the return statement.
llvm-svn: 205993
To support compressing the debug_line section that contains multiple
fragments (due, I believe, to variation in choices of line table
encoding depending on the size of instruction ranges in the actual
program code) we needed to support compressing multiple MCFragments in a
single pass.
This patch implements that behavior by mutating the post-relaxed and
relocated section to be the compressed form of its former self,
including renaming the section.
This is a more flexible (and less invasive, to a degree) implementation
that will allow for other features such as "use compression only if it's
smaller than the uncompressed data".
Compressing debug_frame would be a possible further extension to this
work, but I've left it for now. The hurdle there is alignment sections -
which might require going as far as to refactor
MCAssembler.cpp:writeFragment to handle writing to a byte buffer or an
MCObjectWriter (there's already a virtual call there, so it shouldn't
add substantial compile-time cost) which could in turn involve
refactoring MCAsmBackend::writeNopData to use that same abstraction...
which involves touching all the backends. This would remove the limited
handling of fragment writing seen in
ELFObjectWriter.cpp:getUncompressedData which would be nice - but it's
more invasive.
I did discover that I (perhaps obviously) don't need to handle
relocations when I rewrite the fragments - since the relocations have
already been applied and computed (and stored into
ELFObjectWriter::Relocations) by this stage (necessarily, because we
need to have written any immediate values or assembly-time relocations
into the data already before we compress it, which we have). The test
case doesn't necessarily cover that in detail - I can add more test
coverage if that's preferred.
llvm-svn: 205990
To support compression for debug_line and debug_frame a different
approach is required. To simplify review, revert the old implementation
and XFAIL the test case. New implementation to follow shortly.
Reverts r205059 and r204958.
llvm-svn: 205989
Convenience wrapper to make dealing with sub-ranges easier. Like the
iterator_range<> itself, if/when this sort of thing gets standards
blessing, it will be replaced by the official version.
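Presumably something along these lines (a hedged sketch of such
a wrapper):

// Deduce the iterator type so callers don't have to spell it out.
template <typename T>
iterator_range<T> make_range(T Begin, T End) {
  return iterator_range<T>(std::move(Begin), std::move(End));
}

// e.g. iterating a sub-range that skips the first element:
//   for (auto &V : make_range(std::next(Vec.begin()), Vec.end())) ...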
llvm-svn: 205987
This reverts commit r205974, it turns out that this wasn't such a great idea
after all. Using DIVariable as return value is self-documenting and marginally
more type safe.
llvm-svn: 205979
This commit adds intrinsics and codegen support for the surface
read/write and texture read instructions that take an explicit sampler
parameter. Codegen operates on image handles at the PTX level, but
falls back to direct replacement of handles with kernel arguments if
image handles are not enabled. Note that image handles are explicitly
disabled for all target architectures in this change (to be enabled
later).
llvm-svn: 205907
Summary:
This patch adds backend support for -Rpass=, which indicates the name
of the optimization pass that should emit remarks stating when it
made a transformation to the code.
Pass names are taken from their DEBUG_NAME definitions.
When emitting an optimization report diagnostic, the lack of debug
information causes the diagnostic to use "<unknown>:0:0" as the
location string.
This is the back end counterpart for
http://llvm-reviews.chandlerc.com/D3226
Reviewers: qcolombet
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3227
llvm-svn: 205774
The IO normalizer would essentially lump I386 and AMD64 relocations
together. Relocation types with the same numeric value would then get
mapped in appropriately.
For example:
IMAGE_REL_AMD64_ADDR64 and IMAGE_REL_I386_DIR16 both have a numeric
value of one. We would see IMAGE_REL_I386_DIR16 in obj2yaml conversions
of object files with a machine type of IMAGE_FILE_MACHINE_AMD64.
llvm-svn: 205746
Using this file would result in an odr violation: it defines an llvm::Interval
class that conflicts with the one in Analysis/Interval.h.
llvm-svn: 205726
It affected the callee's stack pop on x86. It is one of the divergences between cygwin and mingw since mingw-gcc-4.6.
Added testcases to llvm/test/CodeGen/X86/win32_sret.ll for cygwin.
llvm-svn: 205688
I really should read the spec more often (and test GCC more often too).
I just assumed that namespace aliases would be the same as using
directives, except with a name. But apparently that's not how the DWARF
standards suggests they be implemented. DWARF4 provides an example and
other non-normative text suggesting that namespace aliases be
implemented by named imported declarations instead of named imported
modules.
So be it.
llvm-svn: 205685
Also update a few null pointers in this function to be consistent with
new null pointers being added.
Patch by Robert Matusewicz!
Differential Revision: http://reviews.llvm.org/D3123
llvm-svn: 205682
Member functions defined within a class definition are implicitly
'inline' for linkage purposes. Compilers might slightly favor inlining
functions explicitly marked 'inline', but LLVM doesn't make a stylistic
habit of doing this generally.
llvm-svn: 205679
This avoids an extra copy during decompression and avoids the use of
MemoryBuffer which is a weirdly esoteric device that includes unrelated
concepts like "file name" (its rather generic name is a bit misleading).
Similar refactoring of zlib::compress coming up.
llvm-svn: 205676
This has the following advantages:
* Less code.
* The old ELF implementation was wrong for non-relocatable objects.
* The old ELF implementation (and I think MachO) was wrong for thumb.
No current testcase since this is only used from MCJIT and it only uses
relocatable objects and I don't think it supports thumb yet.
llvm-svn: 205508
This reverts commit r205479.
It turns out that nm does use addresses, it is just that every reasonable
relocatable ELF object has sections with address 0. I have no idea if those
exist in reality, but at least it shows that llvm-nm should use the
address.
The added test includes an unusual .o file with non-zero section addresses. I
created it by hacking ELFObjectWriter.cpp.
Really sorry for the churn.
llvm-svn: 205493
What llvm-nm prints depends on the file format. On ELF for example, if the
file is relocatable, it prints offsets. If it is not, it prints addresses.
Since it doesn't really need to care what it is that it is printing, use the
generic term value.
Fix or implement getSymbolValue to keep llvm-nm working.
llvm-svn: 205479
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.
llvm-svn: 205453
In preparation for an upcoming commit implementing unrolling preferences for
x86, this adds additional fields to the UnrollingPreferences structure:
- PartialThreshold and PartialOptSizeThreshold - Like Threshold and
OptSizeThreshold, but used when not fully unrolling. These are necessary
because we need different thresholds for full unrolling from those used when
partially unrolling (the full unrolling thresholds are generally going to be
larger).
- MaxCount - A cap on the unrolling factor when partially unrolling. This can
be used by a target to prevent the unrolled loop from exceeding some
resource limit independent of the loop size (such as number of branches).
There should be no functionality change for any in-tree targets.
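Sketched out, the extended structure looks something like this (field
types assumed to match the existing unsigned thresholds):

struct UnrollingPreferences {
  unsigned Threshold;               // existing: size limit for full unrolling
  unsigned OptSizeThreshold;        // existing: full-unroll limit under -Os
  unsigned PartialThreshold;        // new: size limit when partially unrolling
  unsigned PartialOptSizeThreshold; // new: partial-unroll limit under -Os
  unsigned MaxCount;                // new: cap on the partial unroll factor
};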
llvm-svn: 205347
This moves one case of raw text checking down into the MCStreamer
interfaces in the form of a virtual function, even if we ultimately end
up consolidating on the one-or-many line tables issue one day, this is
nicer in the interim. This just generally streamlines a bunch of use
cases into a common code path.
llvm-svn: 205287
I don't think this is reachable by any frontend (why would you transform
asm to asm+debug info?) but it helps tidy up some of this code, avoid
the weird special case of "emit the first CU, store the label, then emit
the rest" in MCDwarfLineTable::Emit by instead having the
DWARF-for-assembly case use the same codepath as DwarfDebug.cpp, by
registering the label of the debug_line section, thus causing it to be
emitted. (with a special case in asm output to just emit the label since
asm output uses the .loc directives, etc, rather than the debug_loc
directly)
llvm-svn: 205286
No other functionality changes, DIBuilder testcase is included in a paired
CFE commit.
This relaxes the assertion in isScopeRef to also accept subclasses of
DIScope.
llvm-svn: 205279
The generic (concatenation) loop unroller is currently placed early in the
standard optimization pipeline. This is a good place to perform full unrolling,
but not the right place to perform partial/runtime unrolling. However, most
targets don't enable partial/runtime unrolling, so this never mattered.
However, even some x86 cores benefit from partial/runtime unrolling of very
small loops, and follow-up commits will enable this. First, we need to move
partial/runtime unrolling late in the optimization pipeline (importantly, this
is after SLP and loop vectorization, as vectorization can drastically change
the size of a loop), while keeping the full unrolling where it is now. This
change does just that.
llvm-svn: 205264
This commit updates the stackmap format to version 1 to indicate the
reorganization of several fields. This was done in order to align stackmap
entries to their natural alignment and to minimize padding.
Fixes <rdar://problem/16005902>
llvm-svn: 205254
This patch fixes the following warning when compiling with 64-bit MSVC.
warning C4334: '<<' : result of 32-bit shift implicitly converted to 64
bits (was 64-bit shift intended?)
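The usual fix, for illustration:

#include <cstdint>

uint64_t bitMask(unsigned N) {
  // return 1 << N;         // C4334: shifts in 32 bits, then widens
  return uint64_t(1) << N;  // widen first, then shift in 64 bits
}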
llvm-svn: 205245
There are two general methods for expanding a BUILD_VECTOR node:
1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
them together.
2. Build the vector on the stack and then load it.
Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).
Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function: shouldExpandBuildVectorWithShuffles which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.
This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
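A target opting in might look roughly like this (the target name is
hypothetical):

// Return true to try method (1), SCALAR_TO_VECTOR plus shuffles; return
// false to always use method (2), building the vector on the stack.
bool MyTargetLowering::shouldExpandBuildVectorWithShuffles(
    EVT VT, unsigned DefinedValues) const {
  // For example: only bother with shuffles for one or two unique values.
  return DefinedValues < 3;
}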
llvm-svn: 205230
Unlike my previous commit, don't try to remove the corresponding VK_Mips_GOT yet,
even though it shares the same assembly text, since that one is still in use.
llvm-svn: 205196
Summary:
The FileHeader mapping now accepts an optional Flags sequence that accepts
the EF_<arch>_<flag> constants. When not given, Flags defaults to zero.
Reviewers: atanasyan
Reviewed By: atanasyan
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3213
llvm-svn: 205173
parameters rather than runtime parameters.
There is only one user of these parameters and they are compile time for
that user. Making these compile time seems to better reflect their
intended usage as well.
llvm-svn: 205143
That causes references to them to be weak references which can collapse
to null if no definition is provided. We call these functions
unconditionally, so a definition *must* be provided. Make the
definitions provided in the .cpp file weak by re-declaring them as weak
just prior to defining them. This should keep compilers which cannot
attach the weak attribute to the definition happy while actually
resolving the symbols correctly during the link.
You might ask yourself upon reading this commit log: how did *any* of
this work before? Well, fun story. It turns out we have some code in
Support (BumpPtrAllocator) which both uses virtual dispatch and has
out-of-line vtables used by that virtual dispatch. If you move the
virtual dispatch into its header in *just* the right way, the optimizer
gets to devirtualize, and remove all references to the vtable. Then the
sad part: the references to this one vtable were the only strong symbol
uses in the support library for llvm-tblgen AFAICT. At least, after
doing something just like this, these symbols stopped getting their weak
definition and random calls to them would segfault instead.
Yay software.
llvm-svn: 205137
The ARM64 backend uses it only as a container to keep an MCLOHType and
Arguments around so give it its own little copy. The other functionality
isn't used and we had a crazy method specialization hack in place to
keep it working. Unfortunately that was incompatible with MSVC.
Also range-ify a couple of loops while at it.
llvm-svn: 205114
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.
Everything will be easier with the target in-tree though, hence this
commit.
llvm-svn: 205090
ARM64 has compact-unwind information, but doesn't necessarily want to
emit .eh_frame directives as well. This teaches MC about such a
situation so that it will skip .eh_frame info when compact unwind has
been successfully produced.
For functions incompatible with compact unwind, the normal information
is still written.
llvm-svn: 205087
Given IR like:
%bit = and %val, #imm-with-1-bit-set
%tst = icmp %bit, 0
br i1 %tst, label %true, label %false
some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.
This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
llvm-svn: 205086
This is principally to allow neater mapping of fixups to relocations
in ARM64 ELF. Without this, there isn't enough information available
to GetRelocType, leading to many more fixup_arm64_... enumerators.
llvm-svn: 205085
Another part of the ARM64 backend (so tests will be following soon).
This is currently used by the linker to relax adrp/ldr pairs into nops
where possible, though could well be more broadly applicable.
llvm-svn: 205084
The upcoming ARM64 backend doesn't have section-relative relocations,
so we give each section its own symbol to provide this functionality.
Of course, it doesn't need to appear in the final executable, so
linker-private is the best kind for this purpose.
llvm-svn: 205081
ARM64 for iOS is going to want to emit these symbols in a
linker-private style for efficiency, but other targets probably don't
want that behaviour.
llvm-svn: 205080
This is like the LLVMMatchType, except the verifier checks that the
second argument is a vector with the same base type and half the
number of elements.
This will be used by the ARM64 backend.
llvm-svn: 205079
I started trying to fix a small issue, but this code has seen a small fix too
many.
The old code was fairly convoluted. Some of the issues it had:
* It failed to check if a symbol difference was in the same section when
converting a relocation to pcrel.
* It failed to check if the relocation was already pcrel.
* The pcrel value computation was wrong in some cases (relocation-pc.s)
* It was missing quite a few cases where it should not convert symbol
relocations to section relocations, leaving the backends to patch it up.
* It would not propagate the fact that it had changed a relocation to pcrel,
requiring a quite nasty workaround in ARM.
* It was missing comments.
llvm-svn: 205076
These are used in the ARM backends to aid type-checking on patterns involving
intrinsics. By making sure one argument is an extended/truncated version of
another.
However, there's no reason to limit them to just vectors types. For example
AArch64 has the instruction "uqshrn sD, dN, #imm" which would naturally use an
intrinsic taking an i64 and returning an i32.
llvm-svn: 205003
BumpPtrAllocator significantly less strange by making it a simple
function of the number of slabs allocated rather than by making it
a recurrence. I *think* the previous behavior was essentially that the
size of the slabs would be doubled after the first 128 were allocated,
and then doubled again each time 64 more were allocated, but only if
every allocation packed perfectly into the slab size. If not, the wasted
space wouldn't be counted toward increasing the size, but allocations
over the size threshold *would*. And since the allocations over the size
threshold might be much larger than the slab size, this could have
somewhat surprising consequences where we rapidly grow the slab size.
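The new behaviour is along these lines (a sketch; the exact doubling
interval is an assumption):

#include <cstddef>

// The slab size is now a pure function of how many slabs exist, e.g.
// doubling for every 128 slabs allocated, regardless of how tightly
// previous allocations packed or how many over-sized allocations
// happened along the way.
size_t computeSlabSize(size_t BaseSlabSize, unsigned NumSlabs) {
  return BaseSlabSize * ((size_t)1 << (NumSlabs / 128));
}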
This currently requires adding state to the allocator to track the
number of slabs currently allocated, but that isn't too bad. I'm
planning further changes to the allocator that will make this state fall
out even more naturally.
It still doesn't fully decouple the growth rate from the allocations
which are over the size threshold. That fix is coming later.
This specific fix will allow making the entire thing into a more
stateless device and lifting the parameters into template parameters
rather than runtime parameters.
llvm-svn: 204993
Construct a uniform Windows target triple nomenclature which is congruent to the
Linux counterpart. The old triples are normalised to the new canonical form.
This cleans up the long-standing issue of odd naming for various Windows
environments.
There are four different environments on Windows:
MSVC: The MS ABI, MSVCRT environment as defined by Microsoft
GNU: The MinGW32/MinGW32-W64 environment which uses MSVCRT and auxiliary libraries
Itanium: The MSVCRT environment + libc++ built with Itanium ABI
Cygnus: The Cygwin environment which uses custom libraries for everything
The following spellings are now written as:
i686-pc-win32 => i686-pc-windows-msvc
i686-pc-mingw32 => i686-pc-windows-gnu
i686-pc-cygwin => i686-pc-windows-cygnus
This should be sufficiently flexible to allow us to target other windows
environments in the future as necessary.
llvm-svn: 204977
1) When creating a .debug_* section, instead create a .zdebug_
section.
2) When creating a fragment in a .zdebug_* section, make it a compressed
fragment.
3) When computing the size of a compressed section, compress the data
and use the size of the compressed data.
4) Emit the compressed bytes.
Also, check that if a section has a compressed fragment, then that
is the only fragment in the section.
Assert-fail if the fragment's data is modified after it is compressed.
Initial review on llvm-commits by Eric Christopher and Rafael Espindola.
llvm-svn: 204958
This adds back r204781.
Original message:
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_func and my_alias are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
llvm-svn: 204934
rewrite some of them to be more clear.
The terminology being used in our allocators is making me really sad. We
call things slab allocators that aren't at all slab allocators. It is
quite confusing.
llvm-svn: 204907
It seems that gcov, when faced with a string that is apparently zero
length, just keeps reading words until it finds a length it likes
better. I'm not really sure why this is, but it's simple enough to
make llvm-cov follow suit.
llvm-svn: 204881
In CallInst, op_end() points at the callee, which we don't want to iterate over
when just iterating over arguments. Now take this into account when returning
an iterator_range from arg_operands. Similar reasoning applies to InvokeInst.
Also adds a unit test to verify this actually works as expected.
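The fix amounts to stopping the range one operand short of op_end(),
roughly (a simplified sketch):

// The callee is the last operand of a CallInst, so exclude it from the
// argument range. (InvokeInst similarly excludes its three trailing
// operands: the callee plus the two successor blocks.)
iterator_range<op_iterator> arg_operands() {
  return iterator_range<op_iterator>(op_begin(), op_end() - 1);
}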
llvm-svn: 204851
The edge data structure (EdgeEntry) now holds the indices of its entries in the
adjacency lists of the nodes it connects. This trades a little ugliness for
faster insertion/removal, which is now O(1) with a cheap constant factor. All
of this is implementation detail within the PBQP graph, the external API remains
unchanged.
Individual register allocations are likely to change, since the adjacency lists
will now be ordered differently (or rather, will now be unordered). This
shouldn't affect the average quality of allocations however.
llvm-svn: 204841
This patch is in a similar vein to what was done earlier for Module::globals/aliases
etc. It allows iterating over function arguments like this:
for (Argument &Arg : F.args()) {
...
}
llvm-svn: 204835
We've already got versions without the barriers, so this just adds IR-level
support for generating the new v8 ones.
rdar://problem/16227836
llvm-svn: 204813
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it will move from a compiler internal error to
a linker error, that the user can work around until LLVM is fixed.
I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.
llvm-svn: 204806
Implementing the LLVM part of the call to __builtin___clear_cache
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or nothing at all
in case the caches are unified.
Updating LangRef and adding some tests for the implemented architectures.
Other archs will have to implement the method in case this builtin
has to be compiled for it, since the default behaviour is to bail out
as unimplemented.
A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.
llvm-svn: 204802
This reverts commit r204781.
I will follow up to with msan folks to see what is what they
were trying to do with aliases to weak aliases.
llvm-svn: 204784
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_func and my_alias are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
llvm-svn: 204781
Implement Pass::releaseMemory() in BlockFrequencyInfo and
MachineBlockFrequencyInfo. Just delete the private implementation when
not in use. Switch to a std::unique_ptr to make the logic more clear.
<rdar://problem/14292693>
llvm-svn: 204741
Implement debug_loc.dwo, as well as llvm-dwarfdump support for dumping
this section.
Outlined in the DWARF5 spec and http://gcc.gnu.org/wiki/DebugFission, the
debug_loc.dwo section has more variation than the standard debug_loc,
allowing 3 different forms of entry (plus the end of list entry). GCC
seems to, and Clang certainly, only use one form, so I've just
implemented dumping support for that for now.
It wasn't immediately obvious that there was a good refactoring to share
the implementation of dumping support between debug_loc and
debug_loc.dwo, so they're separate for now - ideas welcome or I may come
back to it at some point.
As per a comment in the code, we could choose different forms that may
reduce the number of debug_addr entries we emit, but that will require
further study.
llvm-svn: 204697
This adds a function to Endian.h that reads from and updates a pointer
into a buffer with endian specific data. This is more convenient for
stream-like reading of data than endian::read.
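The helper has roughly this shape (hedged; the real template parameters
may differ):

#include "llvm/Support/Endian.h"

// Read one value from *Ptr and advance Ptr past it, for decoding a
// stream of endian-specific fields.
template <typename value_type, llvm::support::endianness E>
value_type readNext(const char *&Ptr) {
  value_type Val =
      llvm::support::endian::read<value_type, E, llvm::support::unaligned>(Ptr);
  Ptr += sizeof(value_type);
  return Val;
}
// e.g. uint32_t Magic = readNext<uint32_t, llvm::support::little>(Cursor);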
llvm-svn: 204693
Since the profile can come from 32-bit machines, we need to check the
pointer size. Change the magic number to facilitate this.
Adds tests for reading 32-bit and 64-bit binaries (both big- and
little-endian). The tests write a binary using printf in RUN lines
(like raw-magic-but-no-header.test). Assuming the bots don't complain,
this seems like a better way forward for testing RawInstrProfReader than
committing binary files.
<rdar://problem/16400648>
llvm-svn: 204557
This patch renames method 'isConstantSplat' as 'getConstantSplatValue'
(mainly for consistency reasons), and rewrites its logic to ensure
that we always perform a legal 'cast<ConstantSDNode>'.
Added test shift-combine-crash.ll to verify that DAGCombiner no longer
crashes with an assertion failure in the attempt to simplify a vector
shift by a vector of all undef counts.
llvm-svn: 204536
Read a raw binary profile that corresponds to a memory dump from the
runtime profile.
The test is a binary file generated from
cfe/trunk/test/Profile/c-general.c with the new compiler-rt runtime and
the matching text version of the input. It includes instructions on how
to regenerate.
<rdar://problem/15950346>
llvm-svn: 204496
This isn't a format we'll want to write out in practice, but moving it
to the writer library simplifies llvm-profdata and isolates it from
further changes to the format.
This also allows us to update the tests to not rely on the text output
format.
llvm-svn: 204489
This introduces the ProfileData library and updates llvm-profdata to
use this library for reading profiles. InstrProfReader is an abstract
base class that will be subclassed for both the raw instrprof data
from compiler-rt and the efficient instrprof format that will be used
for PGO.
llvm-svn: 204482
Some targets require more than one relocation entry to perform a relocation.
This change allows processRelocationRef to process more than one relocation
entry at a time by passing the relocation iterator itself instead of just
the relocation entry.
Related to <rdar://problem/16199095>
llvm-svn: 204439
Extend the target hook to take also the operand index into account when
calculating the cost of the constant materialization.
Related to <rdar://problem/16381500>
llvm-svn: 204435
The NumberOfRelocations field in the COFF section table is only 16 bits wide. If
an object has more than 65535 relocations, the number of relocations is stored
in the VirtualAddress field of the first relocation entry, and a special flag
(IMAGE_SCN_LNK_NRELOC_OVFL) is set in the Characteristics field.
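In code, the rule reads roughly like this (a sketch; the structures are
from llvm/Object/COFF.h):

#include "llvm/Object/COFF.h"
#include "llvm/Support/COFF.h"

// NumberOfRelocations is only 16 bits wide; when the overflow flag is
// set, the true count lives in the VirtualAddress field of the first
// relocation entry.
uint32_t getNumRelocations(const llvm::object::coff_section *Sec,
                           const llvm::object::coff_relocation *First) {
  if (Sec->Characteristics & llvm::COFF::IMAGE_SCN_LNK_NRELOC_OVFL)
    return First->VirtualAddress;
  return Sec->NumberOfRelocations;
}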
In the test we cheated a bit. I made up a test file so that it has the
IMAGE_SCN_LNK_NRELOC_OVFL flag but the number of relocations is much smaller
than 65535. This is to avoid checking in a large test file just to test a
file with many relocations.
Differential Revision: http://llvm-reviews.chandlerc.com/D3139
llvm-svn: 204418
RTDyldMemoryManager, regardless of whether it thinks they're "required for
execution".
Currently, RuntimeDyld only passes sections that are "required for execution"
to the RTDyldMemoryManager, and takes "required for execution" to mean exactly
"contains symbols or relocations". There are two problems with this:
(1) It can drop sections with anonymous data that is referenced by code.
(2) It leaves the JIT client no way to inspect interesting sections that aren't
actually required to run the program (e.g. dwarf sections).
A test case is still in the works.
Future work: We may want to replace this with a generic section filtering
mechanism, but that will require more consideration. For now, this flag at least
allows clients to volunteer to do the filtering themselves.
Fixes <rdar://problem/15177691>.
llvm-svn: 204398
This commit extends the coverage of the constant hoisting pass, adds additional
debug output and updates the function names according to the style guide.
Related to <rdar://problem/16381500>
llvm-svn: 204389
This option caused LowerInvoke to generate code using SJLJ-based
exception handling, but there is no code left that interprets the
jmp_buf stack that the resulting code maintained (llvm.sjljeh.jblist).
This option has been obsolete for a while, and replaced by
SjLjEHPrepare.
This leaves the default behaviour of LowerInvoke, which is to convert
invokes to calls.
Differential Revision: http://llvm-reviews.chandlerc.com/D3136
llvm-svn: 204388
Given
bar = foo + 4
.long bar
MC would eat the 4. GNU as includes it in the relocation. The rule seems to be
that a variable that defines a symbol is used in the relocation and one that
does not define a symbol is evaluated and the result included in the relocation.
Fixing this unfortunately required some other changes:
* Since the variable is now evaluated, it would prevent the ELF writer from
noticing the weakref marker the elf streamer uses. This patch then replaces
that with a VariantKind in MCSymbolRefExpr.
* Using VariantKind then requires us to look past other VariantKind to see
.weakref bar,foo
call bar@PLT
doing this also fixes
zed = foo +2
call zed@PLT
so that is a good thing.
* Looking past VariantKind means that the relocation selection has to use
the fixup instead of the target.
This is a reboot of the previous fixes for MC. I will watch the sanitizer
buildbot and wait for a build before adding back the previous fixes.
llvm-svn: 204294
The current state of affairs has auxiliary symbols described as a big
bag of bytes. This is less than satisfying; it detracts from the YAML
file's human readability.
Instead, allow for symbols to optionally contain their auxiliary data.
This allows us to have a much higher level way of describing things like
weak symbols, function definitions and section definitions.
This depends on D3105.
Differential Revision: http://llvm-reviews.chandlerc.com/D3092
llvm-svn: 204214
Summary: These definitions are useful to other aspects of LLVM; move them out.
Reviewers: rafael, nrieck, ruiu
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3105
llvm-svn: 204213