reason to expose a global symbol 'decodeInstruction' nor to pollute the global
scope with a bunch of external linkage entities (some of which conflict with
others elsewhere in LLVM).
This is just the initial transition to C++; more cleanups to follow.
llvm-svn: 206717
entirely clear whether this should be valid with modules enabled, but the fixed
code is cleaner regardless.
Also fix a TU-local type that accidentally had external linkage.
llvm-svn: 206714
This reverts commit r206677, reapplying my BlockFrequencyInfo rewrite.
I've done a careful audit, added some asserts, and fixed a couple of
bugs (unfortunately, they were in unlikely code paths). There's a small
chance that this will appease the failing bots [1][2]. (If so, great!)
If not, I have a follow-up commit ready that will temporarily add
-debug-only=block-freq to the two failing tests, allowing me to compare
the code path between what the failing bots and what my machines (and
the rest of the bots) are doing. Once I've triggered those builds, I'll
revert both commits so the bots go green again.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
[2]: http://llvm-amd64.freebsd.your.org/b/builders/clang-i386-freebsd/builds/18445
<rdar://problem/14292693>
llvm-svn: 206704
Win64 stack unwinder gets confused when execution flow "falls through" after
a call to 'noreturn' function. This fixes the "missing epilogue" problem by
emitting a trap instruction for IR 'unreachable' on x86_64-pc-windows.
A secondary use for it would be for anyone wanting to make double-sure that
'noreturn' functions, indeed, do not return.
llvm-svn: 206684
This reverts commit r206666, as planned.
Still stumped on why the bots are failing. Sanitizer bots haven't
turned anything up. If anyone can help me debug either of the failures
(referenced in r206666) I'll owe them a beer. (In the meantime, I'll be
auditing my patch for undefined behaviour.)
llvm-svn: 206677
expressions for mov instructions instead of silently truncating by default.
For the ARM assembler, we want to avoid misleadingly allowing something
like "mov r0, <symbol>" especially when we turn it into a movw and the
expression <symbol> does not have a :lower16: or :upper16: as part of the
expression. We don't want the behavior of silently truncating, which can be
unexpected and lead to bugs that are difficult to find since this is an easy
mistake to make.
This does change the previous behavior of llvm but actually matches an
older gnu assembler that would not allow this but printed less useful errors
like "invalid constant (0x927c0) after fixup" and "unsupported relocation on
symbol foo". The error from llvm is "immediate expression for mov requires
:lower16: or :upper16" with correct location information on the operand
as shown in the added test cases.
rdar://12342160
llvm-svn: 206669
This reverts commit r206628, reapplying r206622 (and r206626).
Two tests are failing only on buildbots [1][2]: i.e., I can't reproduce
on Darwin, and Chandler can't reproduce on Linux. Asan and valgrind
don't tell us anything, but we're hoping the msan bot will catch it.
So, I'm applying this again to get more feedback from the bots. I'll
leave it in long enough to trigger builds in at least the sanitizer
buildbots (it was failing for reasons unrelated to my commit last time
it was in), and hopefully a few others.... and then I expect to revert a
third time.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
[2]: http://llvm-amd64.freebsd.your.org/b/builders/clang-i386-freebsd/builds/18445
llvm-svn: 206666
This is important for symbolizing executables with debug info in
unavailable .dwo files. Even if all DIE entries are missing, we can
still symbolize an address: the function name can be fetched from the symbol
table, and file/line info can be fetched from the line table.
llvm-svn: 206665
Both ZLIB and the debug info compressed section header ("ZLIB" + the
size of the uncompressed data) take some constant overhead so in some
cases the compressed data is actually larger than the uncompressed data.
In these cases, just don't compress or rename the section at all.
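As a rough sketch of the decision (assuming the header described above is a
4-byte "ZLIB" magic plus a 64-bit uncompressed size; names are illustrative):

  #include <cstdint>

  // Keep the original bytes whenever the header plus the deflated payload
  // would not actually save space.
  static bool shouldUseCompressed(uint64_t OriginalSize, uint64_t CompressedSize) {
    const uint64_t HeaderSize = 4 /* "ZLIB" */ + 8 /* uncompressed size */;
    return HeaderSize + CompressedSize < OriginalSize;
  }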
llvm-svn: 206659
This adds support for an indexed instrumentation based profiling
format, which is just a small header and an on disk hash table. This
format will be used by clang's -fprofile-instr-use= for PGO.
llvm-svn: 206656
The LLVM_ENABLE_SPHINX option enables the "docs-llvm-html" and
"docs-llvm-man" targets but does not build them by default. The
following CMake options have been added to control which targets are
made available:
SPHINX_OUTPUT_HTML
SPHINX_OUTPUT_MAN
If LLVM_BUILD_DOCS is enabled then the enabled docs-llvm-* targets will
be built by default and if ``make install`` is run then docs-llvm-html
and docs-llvm-man will be installed (tested on Linux only).
The add_sphinx_target function is in its own file so it can be included
by other projects that use Sphinx for their documentation.
Patch by Daniel Liew <daniel.liew@imperial.ac.uk>!
llvm-svn: 206655
Immutable DILineInfo doesn't bring any benefits and complicates
code. Also, use std::string instead of SmallString<16> for file
and function names - their length can vary significantly.
No functionality change.
llvm-svn: 206654
While unnamed relocations are already cached in side tables in
ELFObjectWriter::RecordRelocation, symbols still need their fragments
updated to refer to the newly compressed fragment (even if that fragment
isn't big enough to fit the offset). Even though we only create
temporary symbols in debug info sections, this comes up in 32-bit builds
where even temporary symbols in mergeable sections (such as debug_str)
have to be emitted as named symbols.
I tried a few other ways to do this, but they all failed for various
reasons:
1) Canonicalize the MCSymbolData in RecordRelocation, nulling out the
Fragment (so it didn't have to be updated by CompressDebugSection). This
doesn't work because some code relies on symbols having fragments to
indicate that they're defined, I think.
2) Canonicalize the MCSymbolData in RecordRelocation to be "first
fragment + absolute offset" so it would be cheaper to just test and
update the fragment in CompressDebugSections. This doesn't work because
the offset computed in RecordRelocation isn't that of the symbol's
fragment, it's the passed in fragment (I haven't figured out what that
fragment is - perhaps it's the location where the relocation is to be
written). And if the fragment offset has to be computed only for this
use we might as well just do it when we need to, in
CompressDebugSection.
I also added an assert to help catch this a bit more clearly, even
though it is UB. The test case improvements would either assert fail
and/or valgrind fail without the fix, even if they wouldn't necessarily
fail the FileCheck output.
llvm-svn: 206653
Summary:
This port includes the rudimentary latencies that were provided for
the Cortex-A53 Machine Model in the AArch64 backend. It also changes
the SchedAlias for COPY in the Cyclone model to an explicit
WriteRes mapping to avoid conflicts in other subtargets.
Differential Revision: http://reviews.llvm.org/D3427
Patch by Dave Estes <cestes@codeaurora.org>!
llvm-svn: 206652
This pass was removed in r184459.
Also added a note that the InstCombine pass does library call
simplification.
Patch slightly modified from one by Daniel Liew
<daniel.liew@imperial.ac.uk>!
llvm-svn: 206650
This changes the on-disk hash to get the type to use for offsets from
the Info type, so that clients can be more flexible with the size of
table they support.
llvm-svn: 206643
This changes the on-disk hash to get the size of a hash value from the
Info type, so that clients can be more flexible with the types of hash
they use.
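As a rough sketch of what that Info parameter might look like from the client
side (the member names below are illustrative, not the exact trait interface):

  #include <cstdint>

  // Illustrative Info type for a table keyed by C strings. The table code
  // pulls the hash-value and offset types from here, so a client with small
  // tables can shrink them (e.g. 16-bit offsets) without touching the table.
  struct StringTableInfo {
    typedef const char *key_type;
    typedef uint32_t hash_value_type; // width of a stored hash value
    typedef uint32_t offset_type;     // type used for bucket/item offsets

    static hash_value_type ComputeHash(key_type K) {
      hash_value_type H = 5381; // simple djb2-style hash, for illustration
      for (; *K; ++K)
        H = H * 33 + static_cast<unsigned char>(*K);
      return H;
    }
  };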
llvm-svn: 206642
When address ranges for a compile unit are specified in the compile unit DIE
itself, there is no need to collect ranges from its child subprogram DIEs.
This change speeds up llvm-symbolizer on Clang-produced binaries with
full debug info. For instance, symbolizing the first address in a 1GB binary
is now 2x faster (1s vs. 2s).
llvm-svn: 206641
For a 256-bit BUILD_VECTOR consisting mostly of shuffles of 256-bit vectors,
both the BUILD_VECTOR and its operands may need to be legalized in multiple
steps. Consider:
(v8f32 (BUILD_VECTOR (extract_vector_elt (v8f32 %vreg0), Constant<1>),
(extract_vector_elt %vreg0, Constant<2>),
(extract_vector_elt %vreg0, Constant<3>),
(extract_vector_elt %vreg0, Constant<4>),
(extract_vector_elt %vreg0, Constant<5>),
(extract_vector_elt %vreg0, Constant<6>),
(extract_vector_elt %vreg0, Constant<7>),
%vreg1))
a. We can't build a 256-bit vector efficiently, so we need to split it into
two 128-bit vecs and combine them with VINSERTX128.
b. Operands like (extract_vector_elt (v8f32 %vreg0), Constant<7>) need to be
split into a VEXTRACTX128 and a further extract_vector_elt from the
resulting 128-bit vector.
c. The extract_vector_elt from b. is lowered into a shuffle to the first
element and a movss.
Depending on the order in which we legalize the BUILD_VECTOR and its
operands[1], buildFromShuffleMostly may be faced with:
(v4f32 (BUILD_VECTOR (extract_vector_elt
(vector_shuffle<1,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
Constant<0>),
(extract_vector_elt
(vector_shuffle<2,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
Constant<0>),
(extract_vector_elt
(vector_shuffle<3,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
Constant<0>),
%vreg1))
In order to figure out the underlying vectors and their identity, we need to
see through the shuffles.
[1] Note that the order in which operations and their operands are legalized is
only guaranteed in the first iteration of LegalizeDAG.
Fixes <rdar://problem/16296956>
llvm-svn: 206634
This reverts commit r206622 and the MSVC fixup in r206626.
Apparently the remotely failing tests are still failing, despite my
attempt to fix the nondeterminism in r206621.
llvm-svn: 206628
Add a helper method to get address ranges specified in a DIE
(either by DW_AT_low_pc/DW_AT_high_pc, or by DW_AT_ranges). Use it
to untangle and simplify the code.
No functionality change.
llvm-svn: 206624
This reverts commit r206556, effectively reapplying commit r206548 and
its fixups in r206549 and r206550.
In an intervening commit I've added target triples to the tests that
were failing remotely [1] (but passing locally). I'm hoping the mystery
is solved? I'll revert this again if the tests are still failing
remotely.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
llvm-svn: 206622
These tests were failing on some buildbots after r206548 (reverted in
r206556), but passing locally.
They were missing target triples, so maybe that's the problem?
llvm-svn: 206621
Doesn't make sense to restrict this to BumpPtrAllocator. While there,
replace an explicit loop with std::equal. Some standard libraries know
how to compile this down to a ::memcmp call if possible.
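For example (a minimal illustration, not the actual LLVM code):

  #include <algorithm>
  #include <vector>

  bool sameContents(const std::vector<int> &A, const std::vector<int> &B) {
    // Rather than a hand-written element-by-element loop; libstdc++/libc++
    // can lower this to a single memcmp for trivially comparable elements.
    return A.size() == B.size() && std::equal(A.begin(), A.end(), B.begin());
  }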
llvm-svn: 206615
This flag replaces inline instrumentation for checks and origin stores with
calls into MSan runtime library. This is a workaround for PR17409.
Disabled by default.
llvm-svn: 206585
Reality is that we're never going to copy one of these. Supporting this
was becoming a nightmare because nothing even causes it to compile most
of the time. Lots of subtle errors built up that wouldn't have been
caught by any "normal" testing.
Also, make the move assignment actually work rather than the bogus swap
implementation that would just infloop if used. As part of that, factor
out the graph pointer updates into a helper to share between move
construction and move assignment.
llvm-svn: 206583
implementation of the SpecificBumpPtrAllocator -- we have to actually
move the subobject. =] Noticed when using this code more directly.
llvm-svn: 206582
LazyCallGraph. This is the start of the whole point of this different
abstraction, but it is just the initial bits. Here is a run-down of
what's going on here. I'm planning to incorporate some (or all) of this
into comments going forward, hopefully with better editing and wording.
=]
The crux of the problem with the traditional way of building SCCs is
that they are ephemeral. The new pass manager however really needs the
ability to associate analysis passes and results of analysis passes with
SCCs in order to expose these analysis passes to the SCC passes. Making
this work is kind-of the whole point of the new pass manager. =]
So, when we're building SCCs for the call graph, we actually want to
build persistent nodes that stick around and can be reasoned about
later. We'd also like the ability to walk the SCC graph in more complex
ways than just the traditional postorder traversal of the current CGSCC
walk. That means that in addition to being persistent, the SCCs need to
be connected into a useful graph structure.
However, we still want the SCCs to be formed lazily where possible.
These constraints are quite hard to satisfy with the SCC iterator. Also,
using that would bypass our ability to actually add data to the nodes of
the call graph to facilitate implementing the Tarjan walk. So I've
re-implemented things in a more direct and embedded way. This
immediately makes it easy to get the persistence and connectivity
correct, and it also allows leveraging the existing nodes to simplify
the algorithm. I've worked somewhat to make this implementation more
closely follow the traditional paper's nomenclature and strategy,
although it is still a bit obtuse because it isn't recursive, using
an explicit stack and a tail call instead, and it is interruptible,
resuming each time we need another SCC.
The other tricky bit here, and what actually took almost all the time
and trials and errors I spent building this, is exactly *what* graph
structure to build for the SCCs. The naive thing to build is the call
graph in its newly acyclic form. I wrote about 4 versions of this which
did precisely this. Inevitably, when I experimented with them across
various use cases, they became incredibly awkward. It was all
implementable, but it felt like a completely wrong fit. Square peg, round
hole. There were two overriding aspects that pushed me in a different
direction:
1) We want to discover the SCC graph in a postorder fashion. That means
the root node will be the *last* node we find. Using the call-SCC DAG
as the graph structure of the SCCs results in an orphaned graph until
we discover a root.
2) We will eventually want to walk the SCC graph in parallel, exploring
distinct sub-graphs independently, and synchronizing at merge points.
This again is not helped by the call-SCC DAG structure.
The structure which, quite surprisingly, ended up being completely
natural to use is the *inverse* of the call-SCC DAG. We add the leaf
SCCs to the graph as "roots", and have edges to the caller SCCs. Once
I switched to building this structure, everything just fell into place
elegantly.
Aside from general cleanups (there are FIXMEs and too few comments
overall) that are still needed, the other missing piece of this is
support for iterating across levels of the SCC graph. These will become
useful for implementing #2, but they aren't an immediate priority.
Once SCCs are in good shape, I'll be working on adding mutation support
for incremental updates and adding the pass manager that this analysis
enables.
llvm-svn: 206581
This commit was attributed to a different person from the person who
posted the patch to the list, and the person who posted it to the list
claimed when they did so that they were not the author, but that the author
was yet a third person. I don't know what is going on here, but
reverting until the attribution is clear and the author has explicitly
contributed the patch.
Also, the review hasn't really involved any of the MC maintainers and
that seems questionable too.
llvm-svn: 206576
Code mostly copied from AArch64, just tidied up a trifle and plumbed
into the ARM64 way of doing things.
This also enables the AArch64 tests which inspired the previous
untested commits.
llvm-svn: 206574
A vector extract followed by a dup can become a single instruction even if the
types don't match. AArch64 handled this in ISelLowering, but a few reasonably
simple patterns can take care of it in TableGen, so that's where I've put it.
llvm-svn: 206573
ARM64 was scalarizing some vector comparisons which don't quite map to
AArch64's compare and mask instructions. AArch64's approach of sacrificing a
little efficiency to emulate them with the limited set available was better, so
I ported it across.
More "inspired by" than copy/paste since the backend's internal expectations
were a bit different, but the tests were invaluable.
llvm-svn: 206570
I enhanced it a little in the process. The decision shouldn't really be based
on whether a BUILD_VECTOR is a splat: any set of constants will do the job
provided they're related in the correct way.
Also, the BUILD_VECTOR could be any operand of the incoming AND nodes, so it's
best to check for all 4 possibilities rather than assuming it'll be the RHS.
llvm-svn: 206569
It's not actually used to handle C or C++ ABI rules on ARM64, but could well be
emitted by other language front-ends, so it's as well to have a sensible
implementation.
llvm-svn: 206568
Previously module verification was always enabled, with no way to turn it off.
As of this commit, module verification is on by default in Debug builds, and off
by default in release builds. The default behaviour can be overridden by calling
setVerifyModules(bool) on the JIT instance (this works for both the old JIT, and
MCJIT).
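A rough usage sketch (M is assumed to be an existing llvm::Module*; headers
and error handling omitted):

  llvm::ExecutionEngine *EE = llvm::EngineBuilder(M).create();
  EE->setVerifyModules(true); // opt back in to verification in a Release build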
<rdar://problem/16150008>
llvm-svn: 206561
Use scalar BFE with constant shift and offset when possible.
This is complicated by the fact that the scalar version packs
the two operands of the vector version into one.
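Purely for illustration, the kind of packing involved might look like this
(the exact bit positions are an assumption, not taken from the ISA
documentation):

  #include <cstdint>

  // Assumed layout: low bits hold the bitfield offset, a higher field holds
  // the width, folded into the scalar BFE's single descriptor operand.
  static uint32_t packScalarBFEOperand(uint32_t Offset, uint32_t Width) {
    return (Offset & 0x3f) | ((Width & 0x7f) << 16);
  }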
llvm-svn: 206558
Rewrite the shared implementation of BlockFrequencyInfo and
MachineBlockFrequencyInfo entirely.
The old implementation had a fundamental flaw: precision losses from
nested loops (or very wide branches) compounded past loop exits (and
convergence points).
The @nested_loops testcase at the end of
test/Analysis/BlockFrequencyAnalysis/basic.ll is motivating. This
function has three nested loops, with branch weights in the loop headers
of 1:4000 (exit:continue). The old analysis gives nonsensical results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
---- Block Freqs ----
entry = 1.0
for.cond1.preheader = 1.00103
for.cond4.preheader = 5.5222
for.body6 = 18095.19995
for.inc8 = 4.52264
for.inc11 = 0.00109
for.end13 = 0.0
The new analysis gives correct results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
block-frequency-info: nested_loops
- entry: float = 1.0, int = 8
- for.cond1.preheader: float = 4001.0, int = 32007
- for.cond4.preheader: float = 16008001.0, int = 128064007
- for.body6: float = 64048012001.0, int = 512384096007
- for.inc8: float = 16008001.0, int = 128064007
- for.inc11: float = 4001.0, int = 32007
- for.end13: float = 1.0, int = 8
Most importantly, the frequency leaving each loop matches the frequency
entering it.
The new algorithm leverages BlockMass and PositiveFloat to maintain
precision, separates "probability mass distribution" from "loop
scaling", and uses dithering to eliminate probability mass loss. I have
unit tests for these types out of tree, but it was decided in the review
to make the classes private to BlockFrequencyInfoImpl, and try to shrink
them (or remove them entirely) in follow-up commits.
The new algorithm should generally have a complexity advantage over the
old. The previous algorithm was quadratic in the worst case. The new
algorithm is still worst-case quadratic in the presence of irreducible
control flow, but it's linear without it.
The key difference between the old algorithm and the new is that control
flow within a loop is evaluated separately from control flow outside,
limiting propagation of precision problems and allowing loop scale to be
calculated independently of mass distribution. Loops are visited
bottom-up, their loop scales are calculated, and they are replaced by
pseudo-nodes. Mass is then distributed through the function, which is
now a DAG. Finally, loops are revisited top-down to multiply through
the loop scales and the masses distributed to pseudo nodes.
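For the nested_loops example above, this works out directly: with
exit:continue weights of 1:4000, a loop header runs an expected
(1 + 4000)/1 = 4001 times per entry into its loop, so the three nesting
levels get scales of 4001, 4001^2 = 16008001, and 4001^3 = 64048012001,
which are exactly the frequencies shown in the new output.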
There are some remaining flaws.
- Irreducible control flow isn't modelled correctly. LoopInfo and
MachineLoopInfo ignore irreducible edges, so this algorithm will
fail to scale accordingly. There's a note in the class
documentation about how to get closer. See also the comments in
test/Analysis/BlockFrequencyInfo/irreducible.ll.
- Loop scale is limited to 4096 per loop (2^12) to avoid exhausting
the 64-bit integer precision used downstream.
- The "bias" calculation proposed on llvmdev is *not* incorporated
here. This will be added in a follow-up commit, once comments from
this review have been handled.
llvm-svn: 206548
Summary:
This prevents the discriminator generation pass from triggering if
the DWARF version being used in the module is prior to 4.
Reviewers: echristo, dblaikie
CC: llvm-commits
Differential Revision: http://reviews.llvm.org/D3413
llvm-svn: 206507
Change the command line in vector-insertion.ll to explicitly set the neon syntax
to apple so that buildbots that default to other syntaxes won't fail.
llvm-svn: 206502
Having i128 as a legal type complicates the legalization phase. v4i32
is already a legal type, so we will use that instead.
This fixes several piglit tests.
llvm-svn: 206500
This patch improves the performance of vector creation in cases where
several of the lanes in the vector are a constant floating point value. It
also includes new patterns to fold together some of the instructions when the
value is 0.0f. Test cases included.
rdar://16349427
llvm-svn: 206496
After some discussions the preferred semantics of
the always_inline attribute is
inline always when the compiler can determine
that it is safe to do so.
llvm-svn: 206487
Previously, SSPBufferSize was assigned the value of the "stack-protector-buffer-size"
attribute after all uses of SSPBufferSize. The effect was that the default
SSPBufferSize was always used during analysis. I moved the check for the
attribute before the analysis; now --param ssp-buffer-size= works correctly again.
Differential Revision: http://reviews.llvm.org/D3349
llvm-svn: 206486
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.
At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).
llvm-svn: 206485
this code ages ago and lost track of it. Seems worth doing though --
this thing can get called from places that would benefit from knowing
that std::distance is O(1). Also add a very fledgling unittest for
Users and make sure various aspects of this seem to work reasonably.
llvm-svn: 206453
graph. This simplifies the custom move constructor operation to one of
walking the graph and updating the 'up' pointers to point to the new
location of the graph. Switch the nodes from a reference to a pointer
for the 'up' edge to facilitate this.
llvm-svn: 206450
Since LLVM currently only supports WinCOFF, assume that the input is WinCOFF
rather than another type of COFF file (ECOFF/XCOFF). If the architecture is
detected as thumb (e.g. the file has an IMAGE_FILE_MACHINE_ARMNT magic) then use
a triple of thumbv7-windows.
This allows for objdump to properly handle WoA object files without having to
specify the target triple manually.
llvm-svn: 206446
Visual Studio does not permit referencing a structure member as a static field
for sizeof calculations. Resort to a pointer cast which is compatible across
Visual Studio and other compilers.
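Roughly the pattern involved (illustrative, not the exact code that was
changed):

  #include <cstddef>

  struct Entry { int Member; };

  // sizeof(Entry::Member) is valid C++11 but rejected by some Visual Studio
  // versions; going through a never-dereferenced pointer cast is accepted by
  // MSVC, GCC and Clang alike because sizeof's operand is unevaluated.
  static const std::size_t MemberSize = sizeof(((Entry *)nullptr)->Member);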
llvm-svn: 206445
This introduces clang's Basic/OnDiskHashTable.h into llvm as
Support/OnDiskHashTable.h. I've taken the opportunity to add doxygen
comments and run the file through clang-format, but other than the
namespace changing from clang:: to llvm:: the API is identical.
llvm-svn: 206438
The commit of r205855:
Author: Arnold Schwaighofer <aschwaighofer@apple.com>
Date: Wed Apr 9 14:20:47 2014 +0000
SLPVectorizer: Only vectorize intrinsics whose operands are widened equally
The vectorizer only knows how to vectorize intrinsics by widening all operands by
the same factor.
Patch by Tyler Nowicki!
exposed a backend bug causing a regression (Cannot select ctpop).
The commit message is a bit confusing because the patch actually changes the
behavior of the loop-vectorizer as well. As things got refactored into a
helper, ctpop snuck into the trivially-vectorizable helper, which is now
used by both vectorizers. In other words, we started seeing vector-ctpops in
the backend.
This change makes ctpop LegalizeAction::Expand for the types not supported by
the byte-only CNT instruction. We may be able to custom-lower these later to
a single CNT but this is to fix the compiler crash first.
Fixes <rdar://problem/16578951>
llvm-svn: 206433
is set even when it contains an indirect branch.
The attribute overrules correctness concerns
like the escape of a local block address.
This is for rdar://16501761
llvm-svn: 206429
Summary:
When optimization remarks are enabled via the driver flag -Rpass, we
should allow the FE diagnostic handler to check if the given pass name
needs a diagnostic.
We were unconditionally checking the pattern defined in opt's
-pass-remarks flag. This was causing the FE to not emit any diagnostics.
Reviewers: qcolombet
CC: llvm-commits
Differential Revision: http://reviews.llvm.org/D3362
llvm-svn: 206400
This enables TableGen to generate an additional two-operand
matcher for our shift_rotate_imm and shift_rotate_reg class of instructions.
The tests were also updated so that they now include encoding information
for all affected instructions.
llvm-svn: 206398
This is so that EF_MIPS_NAN2008 is set if we are using IEEE 754-2008
NaN encoding (-mnan=2008). This patch also adds support for parsing
'.nan legacy' and '.nan 2008' assembly directives. The handling of
these directives should match GAS' behaviour i.e., the last directive
in use sets the ELF header bit (EF_MIPS_NAN2008).
Differential Revision: http://reviews.llvm.org/D3346
llvm-svn: 206396
These ones used completely different sets of intrinsics, so the only way to do
it is create a separate ARM64 copy and change them all.
Other than that, CodeGen was straightforward, no deficiencies detected here.
llvm-svn: 206392
Summary: This was a case of incorrect usage of hasMips64() vs isABI_N64()
Reviewers: matheusalmeida, dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D3398
llvm-svn: 206388
This should fix the ninja-x64-msvc-RA-centos6 builder.
I suspect the check in MipsSubtarget.cpp is incorrect and is really trying to
check for a bare-metal target rather than anything other than Linux. I'll
investigate this.
llvm-svn: 206385
The most important part here is that we should actually emit the stubs we refer
to in the exception table, but as a side issue this uses more sensible & GCC
compatible representations for some of the bits of information.
llvm-svn: 206380
If we know that a particular 64-bit constant has all high bits zero, then we
can rely on the fact that 32-bit ARM64 instructions automatically zero out the
high bits of an x-register. This gives the expansion logic fewer constraints to
satisfy and so sometimes allows it to pick better sequences.
Came up while porting test/CodeGen/AArch64/movw-consts.ll: this will allow a
32-bit MOVN to be used in @test8 soon.
llvm-svn: 206379
if not in micromips mode.
The test (elf_st_other.ll) was renamed because its name and description didn't
make sense: the test wasn't checking any symbol table entry.
Differential Revision: http://reviews.llvm.org/D3346
llvm-svn: 206377
It doesn't work. I'm still cleaning up all the places where I blindly
followed this pattern. There are more to come in this code too.
As a benefit, this lets the default copy and move operations Just Work.
llvm-svn: 206375
Summary:
I had difficulty finding tests for the N32 and N64 ABI so I've added a
collection of calling convention tests based on the document MIPS ABIs
Described (MD00305), the MIPSpro N32 Handbook, and the SYSV ABI. Where the
documents/implementations disagree, I've used GCC to resolve the conflict.
A few interesting details:
* For N32, LLVM uses 64-bit pointers when saving $ra despite pointers being
32-bit. I've yet to find a supporting statement in the ABI documentation but
the current behaviour matches GCC.
* For O32, the non-variable portion of a varargs argument list is also subject
to the rule that floating-point is passed via GPR's (on N32/N64 only the
variable portion is subject to this rule). This agrees with GCC's behaviour
and the SYSV ABI but contradicts part of the MIPSpro N32 Handbook which talks about O32's behaviour.
* The N32 implementation has the wrong callee-saved register list.
(I already have a fix for this but will commit it as a follow-up).
I've left RUN-TODO lines in for O32 on MIPS64. I don't plan to support this case
for now but we should revisit it.
Reviewers: matheusalmeida, vmedic
Reviewed By: matheusalmeida
Differential Revision: http://reviews.llvm.org/D3339
llvm-svn: 206370
This particular DAG combine is designed to kick in when both ConstantFPs will
end up being loaded via a litpool; however, those nodes have a semi-legal
status, dictated by isFPImmLegal, so in some cases there wouldn't have been a
litpool in the first place. Don't try to be clever in those circumstances.
Picked up while merging some AArch64 tests.
llvm-svn: 206365
Adjust the tests to validate the number of auxiliary entries used to store the
filename.
Thanks to majnemer's sharp eye for catching the missing - 1 in the round up
calculation.
llvm-svn: 206359
Add support for emitting .file records. This is mostly a quality of
implementation change (more complete support for COFF file emission) that was
noticed while working on COFF file emission for Windows on ARM.
A .file record is emitted as a symbol with storage class FILE (103) and the name
".file". A series of auxiliary format 4 records follow which contain the file
name. The filename is stored as an ANSI string and is padded with NULL if the
length is not a multiple of COFF::SymbolSize (18).
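For example, a 20-character filename occupies two 18-byte auxiliary records
(36 bytes), so the last 16 bytes of the second record are null padding.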
llvm-svn: 206355
Setting vector types to expand will result in scalarization on pre SI hw,
as those gpus don't have vector shifts either.
Also expand i32 vectors; this helps llvm make the correct decision
about scalarizing the vector ops.
v2: move setOperation() calls to R600ISelLowering.cpp.
Clean up the SI code to make it obvious that this patch is a nop for SI.
Patch by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 206348
Range'ify a bunch of loops, mainly. As a result, we refer to a variety
of objects by reference rather than by pointer, so propagate that
through the various helper functions where it makes sense.
llvm-svn: 206337
Print in decimal for inline immediates, and hex otherwise. Always use
hex for addressing offsets.
This approximately matches what the shader compiler does.
llvm-svn: 206335
because there is another (size_t, size_t) overload of Allocator, and the
only distinguishing factor is that one is a tempalte and the other
isn't. There was only one usage of this and that one was easily
converted to carry the alignment constraint in the type itself.
llvm-svn: 206325
handles Intrinsic::trap if TargetOptions::TrapFuncName is set.
This fixes a bug in which the trap function was not taken into consideration
when a program was compiled without optimization (at -O0).
<rdar://problem/16291933>
llvm-svn: 206323
This patch teaches the backend how to efficiently lower logical and
arithmetic packed shifts on both SSE and AVX/AVX2 machines.
When possible, instead of scalarizing a vector shift, the backend should try
to expand the shift into a sequence of two packed shifts by immediate count
followed by a MOVSS/MOVSD.
Example
(v4i32 (srl A, (build_vector < X, Y, Y, Y>)))
Can be rewritten as:
(v4i32 (MOVSS (srl A, <Y,Y,Y,Y>), (srl A, <X,X,X,X>)))
[with X and Y ConstantInt]
The advantage is that the two new shifts from the example would be lowered into
X86ISD::VSRLI nodes. This is always cheaper than scalarizing the vector into
four scalar shifts plus four pairs of vector insert/extract.
llvm-svn: 206316
Implement DebugInfoVerifier, which steals verification relying on
DebugInfoFinder from Verifier.
- Adds LegacyDebugInfoVerifierPassPass, a ModulePass which wraps
DebugInfoVerifier. Uses -verify-di command-line flag.
- Change verifyModule() to invoke DebugInfoVerifier as well as
Verifier.
- Add a call to createDebugInfoVerifierPass() wherever there was a
call to createVerifierPass().
This implementation as a module pass should sidestep efficiency issues,
allowing us to turn debug info verification back on.
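A sketch of what a typical call site looks like after this change (the pass
manager flavor and surrounding setup are assumed, not taken from the patch):

  // Headers omitted; PM is a legacy PassManager.
  void addVerification(llvm::PassManager &PM) {
    PM.add(llvm::createVerifierPass());
    PM.add(llvm::createDebugInfoVerifierPass()); // new: debug info checks run here
  }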
<rdar://problem/15500563>
llvm-svn: 206300
Sometimes we need to emit the bits that would actually be a MOVN when producing a
relocated MOVZ instruction (don't ask). But not always, a check which ARM64 got
wrong until now.
llvm-svn: 206289
Code is mostly copied directly across, with a slight extension of the
ISelDAGToDAG function so that it can cope with the floating-point constants
being behind a litpool.
llvm-svn: 206285
ARM64 suffered multiple -verify-machineinstrs failures (principally over the
xsp/xzr issue) because FastISel was completely ignoring which subset of the
general-purpose registers each instruction required.
More fixes are coming in ARM64 specific FastISel, but this should cover the
generic problems.
llvm-svn: 206283
by removing the MallocSlabAllocator entirely and just using
MallocAllocator directly. This makes all of these allocators expose and
utilize the same core interface.
The only ugly part of this is that it exposes the fact that the JIT
allocator has no real handling of alignment, any more than the malloc
allocator does. =/ It would be nice to fix both of these to support
alignments, and then to leverage that in the BumpPtrAllocator to do less
over-allocation in order to manually align pointers. But that's another
patch for another day. This patch has no functional impact, it just
removes the somewhat meaningless wrapper around MallocAllocator.
llvm-svn: 206267
allocation libraries, may allow more efficient allocation and
deallocation. It at least makes the interface implementable by the JIT
memory manager.
However, this highlights problematic overloading between the void* and
the T* deallocation functions. I'm looking into a better way to do this,
but as it happens, it comes up rarely in the codebase.
llvm-svn: 206265
overloads. This doesn't matter *that* much yet, but it will in
a subsequent patch. I had tested the original pattern, but not my
attempt to pacify MSVC. This at least appears to work. Still fixing the
rest of the fallout in the final patch that uses these overloads, but it
will follow shortly.
llvm-svn: 206259
'sizeof(T)' for T == void and produces a hard error. I cannot fathom why
this is OK. Oh well. switch to an explicit test for being the
(potentially qualified) void type, which is the only specific case I was
worried about. Hopefully this survives the libstdc++ build bots which
have limited type traits implementations...
llvm-svn: 206256
to types which we can compute the size of. The comparison with zero
isn't actually interesting here; it's mostly about putting sizeof into
an SFINAE context.
This is particularly important for Deallocate as otherwise the void*
overload can quickly become ambiguous.
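A skeletal illustration of the pattern (simplified names, not the exact LLVM
declarations):

  #include <type_traits>

  // Mentioning sizeof(T) in the enable_if puts it in a SFINAE context, so this
  // overload quietly drops out for types whose size can't be computed (such as
  // void), leaving the plain void* overload as the only candidate there.
  template <typename T>
  typename std::enable_if<sizeof(T) != 0, void>::type deallocate(T *Ptr) {}

  void deallocate(void *Ptr) {}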
llvm-svn: 206251
MCModule's ctor had to be moved out of line so the definition of
MCFunction was available. (ctor requires the dtor of members (in case
the ctor throws) which required access to the dtor of MCFunction)
llvm-svn: 206244
This patch re-introduces the MCContext member that was removed from
MCDisassembler in r206063, and requires that an MCContext be passed in at
MCDisassembler construction time. (Previously the MCContext member had been
initialized in an ad-hoc fashion after construction). The MCContext member
can be used by MCDisassembler sub-classes to construct constant or
target-specific MCExprs.
This patch updates disassemblers for in-tree targets, and provides the
MCRegisterInfo instance that some disassemblers were using through the
MCContext (previously those backends were constructing their own
MCRegisterInfo instances).
llvm-svn: 206241
*not* Subtarget->hasSSE1()
*but* __SSE__, the flag that the LLVM libraries are compiled with.
The callback calls internal LLVM JIT libraries. It may be built with -msse (or above).
FIXME: JIT may use "host" instead of "generic" by default.
llvm-svn: 206240
Currently, we bind those directives to the last symbol, so if none
has been defined, this would lead to a compiler crash.
<rdar://problem/15939159>
llvm-svn: 206236
along with templated overloads much like we have for Allocate. These
will facilitate switching the Deallocate interface of all the Allocator
classes to accept the size by pre-filling it from the type size where we
can do so. I plan to convert several uses to the template variants in
subsequent patches prior to adding the Size parameter.
No functionality changed, WIP.
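A rough sketch of the shape this is heading toward (simplified, illustrative
names):

  #include <cstddef>

  class SomeAllocator {
  public:
    // Core interface: the caller supplies the size of the region being freed.
    void Deallocate(const void *Ptr, std::size_t Size) {
      (void)Ptr; (void)Size; // actual freeing elided
    }

    // Templated overloads pre-fill the size from the static type, so call
    // sites can migrate to the sized interface without spelling it out.
    template <typename T> void Deallocate(T *Ptr) {
      Deallocate(static_cast<const void *>(Ptr), sizeof(T));
    }
  };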
llvm-svn: 206230
rather than defining them (differently!) in both allocators. This also
serves as a basis for documenting and even enforcing some of the
LLVM-style "allocator" concept methods which must exist with various
signatures.
I plan on extending and changing the signatures of these to further
simplify our allocator model in subsequent commits, so I wanted to
factor things as best as I could first. Notably, I'm working to add the
'Size' to the deallocation method of all allocators. This has several
implications, not the least of which is faster deallocation times on
certain allocation libraries (tcmalloc). It also will allow the JIT
allocator to fully model the existing allocation interfaces and allow
sanitizer poisoning of deallocated regions. The list of advantages goes
on. =] But by factoring things first I'll be able to make this easier by
first introducing template helpers for the deallocation path.
llvm-svn: 206225
Got bored, removed some manual memory management.
Pushed references (rather than pointers) through a few APIs rather than
replacing *x with x.get().
llvm-svn: 206222