Summary:
Ensure that fragments are bundle-aligned when instruction bundling
is enabled and the -mc-relax-all flag is set. The bundle padding
implementation implicitly assumes this, but the assumption does not
hold when custom alignment is used.
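For intuition, here is a minimal sketch (not the MC implementation) of the
kind of padding computation that relies on this invariant: the offset is
measured from the start of the fragment, so the bundle-boundary check is only
correct if the fragment itself starts on a bundle boundary.

    #include <cstdint>

    // Sketch of bundle padding: bytes to insert before an instruction of size
    // InstSize at OffsetInFragment so it does not cross a bundle boundary.
    // This implicitly assumes the fragment starts bundle-aligned; custom
    // alignment can break that assumption.
    uint64_t bundlePadding(uint64_t OffsetInFragment, uint64_t InstSize,
                           uint64_t BundleSize) {
      uint64_t OffsetInBundle = OffsetInFragment % BundleSize;
      if (OffsetInBundle + InstSize > BundleSize)
        return BundleSize - OffsetInBundle; // pad to the next bundle boundary
      return 0;
    }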
The change was tested by running PNaCl toolchain trybots with
-mc-relax-all flag set.
Fixes https://code.google.com/p/nativeclient/issues/detail?id=4063
Test Plan: Regression test attached
Reviewers: mseaborn
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D10044
llvm-svn: 240869
There are two main reasons why a linked list makes sense for
`DIEValueList`.
1. We want `DIE` to be on a `BumpPtrAllocator` to improve teardown
efficiency. Making `DIEValueList` array-based would make that much
more complicated.
2. The singly-linked list is fairly memory efficient. The histogram
[1] shows that most DIEs have relatively few values, so we often pay
less than the two- or three-pointer static overhead of a vector.
Furthermore, we don't know ahead of time exactly how many values a
`DIE` needs, so a vector-like scheme would on average over-allocate by
~50%. As it happens, that's the same memory overhead as the linked
list node.
[1]: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-May/085910.html
The comment I added to the code is a little more succinct, but I think
it's enough to give the idea.
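For a rough picture of the per-value cost (the types below are hypothetical,
not the real `DIE`/`DIEValueList` classes): each appended value costs exactly
one node holding the value plus a next pointer, allocated from the same bump
allocator, and teardown is just destroying that allocator.

    #include "llvm/Support/Allocator.h"
    #include <new>

    // Hypothetical sketch of a singly-linked value list on a BumpPtrAllocator.
    struct ValueNode {
      int Value;       // stand-in for a DIEValue
      ValueNode *Next;
    };

    struct ValueList {
      ValueNode *Head = nullptr;
      ValueNode *Tail = nullptr;

      // Appending allocates one node; nothing is freed individually, so
      // teardown happens when the allocator itself is destroyed.
      void push_back(llvm::BumpPtrAllocator &Alloc, int V) {
        ValueNode *N = new (Alloc.Allocate<ValueNode>()) ValueNode{V, nullptr};
        if (Tail)
          Tail->Next = N;
        else
          Head = N;
        Tail = N;
      }
    };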
llvm-svn: 240868
Allow callers of `Value::print()` and `Metadata::print()` to pass in a
`ModuleSlotTracker`. This allows them to pay only once for calculating
module-level slots (such as Metadata).
This is related to PR23865, where there was a huge cost for
`MachineFunction::print()`. Although I don't have a *particular* user
in mind for this new code, I have hit big slowdowns before when running
`opt -debug`, and I think this will be useful. Going forward, if
someone hits a big slowdown with `print()` statements, they can create a
`ModuleSlotTracker` and send it through. Similarly, adding support to
`Value::dump()` and `Metadata::dump()` should be trivial.
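As a rough usage sketch (assuming the overloads added here, e.g.
`Value::print(raw_ostream &, ModuleSlotTracker &)`), a caller that prints
many values builds the tracker once up front:

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/ModuleSlotTracker.h"
    #include "llvm/Support/raw_ostream.h"

    // Compute module-level slots once and reuse them for every value printed,
    // instead of recalculating them on each print() call.
    void printAllInstructions(const llvm::Module &M) {
      llvm::ModuleSlotTracker MST(&M); // pay the module-level cost once
      for (const llvm::Function &F : M)
        for (const llvm::BasicBlock &BB : F)
          for (const llvm::Instruction &I : BB) {
            I.print(llvm::errs(), MST);
            llvm::errs() << "\n";
          }
    }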
I added unit tests to be sure the `print()` functions actually behave
the same way with and without the slot tracker.
llvm-svn: 240867
It is possible for a global to be substituted with another global of a
different type or a different kind (i.e. an alias) at IR link time. One
example of this scenario is when a Microsoft ABI vtable is substituted with
an alias referring to a larger vtable containing an RTTI reference.
This causes the global to be RAUW'd with a possibly bitcasted reference
to the other global, which of course also affects any references to the
global in bitset metadata.
The right way to handle such metadata is simply to ignore it. This is sound
because the linked module should contain another copy of the bitset entries as
applied to the new global.
llvm-svn: 240866
The parser provides a convenient interface for reading LLVM stackmap v1
sections in object files.
This patch also includes a new option for llvm-readobj, '-stackmap', which uses
the parser to pretty-print stackmap sections for debugging/testing purposes.
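A rough usage sketch follows; the parser class and accessor names here are
assumptions based on this description, not verified against the header.

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/Support/Endian.h"
    #include "llvm/Support/StackMapParser.h"
    #include "llvm/Support/raw_ostream.h"
    #include <cstdint>

    // Hypothetical sketch: parse the raw bytes of a stackmap section and dump
    // a few header fields. The names below are assumptions.
    void dumpStackMapSummary(llvm::ArrayRef<uint8_t> StackMapSection) {
      llvm::StackMapV1Parser<llvm::support::little> Parser(StackMapSection);
      llvm::outs() << "functions: " << Parser.getNumFunctions() << "\n"
                   << "constants: " << Parser.getNumConstants() << "\n"
                   << "records:   " << Parser.getNumRecords() << "\n";
    }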
llvm-svn: 240860
Another follow-up related to r240848: try a little harder to share slot
tracking calculations within a single `MachineInstr` dump. This is
unrelated to `MachineFunction::print()`, since that should be passing
through the function's `ModuleSlotTracker` by now, but could affect the
speed of dumping from a debugger if there is more than one IR-level
operand.
llvm-svn: 240852
This commit serializes the global address machine operands. It doesn't
serialize the operand's offset and target flags yet; only the global value
reference is serialized.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10671
llvm-svn: 240851
This change extends the detection of base pointers for vector constructs to handle arbitrary phi and select nodes. The existing non-vector code already handles those, so this is basically just extending the vector special case to be less of a special case. It still isn't fully general vector handling, since we can't handle arbitrary vector instructions (e.g. shufflevector), but it's a lot closer.
The general structure of the change is as follows:
* Extend the base defining value relation over a subset of vector instructions and vector typed phi & select instructions.
* Move scalarization from before base pointer rewriting to after base pointer rewriting. The extension of the BDV relation is sufficient to find vector base phis for vector inputs.
* Preserve the existing special case logic for when the base of a vector element is locally obvious. This general idea could be extended to the scalar case as well.
Differential Revision: http://reviews.llvm.org/D10461#inline-84275
llvm-svn: 240850
Summary:
Some front ends already emit kernel pointer parameters as pointers to the
global address space. In that case, handlePointerParams does nothing.
Test Plan: more tests in lower-kernel-ptr-arg.ll
Reviewers: grosser
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10779
llvm-svn: 240849
For another 1% speedup on the testcase in PR23865, push the
`ModuleSlotTracker` through to metadata-related printing in
`MachineBasicBlock::print()`.
llvm-svn: 240848
Push `ModuleSlotTracker` through `MachineOperand`s, dropping the time
for `llc -print-machineinstrs` on the testcase in PR23865 from ~13
seconds to ~9 seconds. Now `SlotTracker::processFunctionMetadata()`
accounts for only 8% of the runtime, which seems reasonable.
llvm-svn: 240845
Expose enough of the IR-level `SlotTracker` so that
`MachineFunction::print()` can use a single one for printing
`BasicBlock`s. The next step would be to lift this through a few more APIs
so that we can make other print methods faster.
Fixes PR23865, changing the runtime of `llc -print-machineinstrs` on a
502185-line dump from many minutes (killed after 3 minutes, and it wasn't
close to finishing) to 13 seconds.
llvm-svn: 240842
Summary: We need to set MTYPE = 2 for VI shaders when targeting the HSA runtime.
Reviewers: arsenm
Differential Revision: http://reviews.llvm.org/D10777
llvm-svn: 240841
As Polly got a lot faster after the small-integer-optimization imath
patch, we now increase the compute-out to optimize larger kernels. This
should also expose additional slowdowns for us to address.
In LNT this gives us a 3.4x speedup on 3mm, at a cost of a 2x increase in
compile time (now 0.77s). reg_detect, oorafft and adi also show some
compile-time increases. This compile-time cost is divided between more time in
isl and more time in LLVM's backends due to increased code size (versioning
and tiling).
llvm-svn: 240840
There were a few issues with the previous delay-import tables.
- The "Attribute" field should have been 1 instead of 0.
  (I don't know the meaning of this field, though.)
- The LEA and CALL operands had wrong addresses.
- The address tables were emitted into .didat (which is read-only);
  they should be in .data.
llvm-svn: 240837
We support invoking a subset of LLVM's intrinsics, but the verifier didn't account for this. We had previously added a special case to verify invokes of statepoints. By generalizing the code in terms of CallSite, we can verify invokes of other intrinsics as well. Interestingly, this found one test case that was invalid.
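A rough sketch of the shape of that generalization (the helper below is a
hypothetical stand-in, not the actual verifier code):

    #include "llvm/IR/CallSite.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instruction.h"

    // Illustrative stand-in for the verifier's intrinsic-specific checks.
    static void verifyIntrinsicUse(llvm::CallSite CS) { (void)CS; }

    // CallSite wraps either a CallInst or an InvokeInst, so a single code path
    // can check intrinsic uses whether they are called or invoked.
    void checkInstruction(llvm::Instruction &I) {
      llvm::CallSite CS(&I);
      if (!CS.getInstruction())   // not a call or invoke
        return;
      if (llvm::Function *Callee = CS.getCalledFunction())
        if (Callee->isIntrinsic())
          verifyIntrinsicUse(CS); // same checks for call and invoke
    }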
Note: I'm deliberately leaving the naming change from CI to CS to a follow-up change. That will happen shortly; I just wanted to reduce the diff to make it clear what is happening with this one.
Differential Revision: http://reviews.llvm.org/D10118
llvm-svn: 240836
Summary:
This way the function symbol points to the start of amd_kernel_code_t
rather than the start of the function.
Reviewers: arsenm
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10705
llvm-svn: 240829
If we have a caller that knows a particular argument can never be null, we can exploit this fact while simplifying values in the inline cost analysis. This has the effect of reducing the cost for inlining when a null check is present in the callee, but the value is known non null in the caller. In particular, any dependent control flow can be discounted from the cost estimate.
Note that we use the parameter attributes at the call site to memoize the analysis within the caller's code. The setting of this attribute is done in InstCombine; the inline cost analysis just consumes it. This is intentional and important because we want the inline cost analysis results to be easily cacheable themselves. We're not currently doing so, but initial results on LTO indicate this will quickly become important.
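As a hedged illustration of the scenario (hypothetical functions, not from
the patch): the null check and its dependent control flow in the callee are
dead once inlined into a call site whose argument is known non-null, so they
should not count toward the inline cost.

    #include <cstdio>

    // Callee: contains a null check plus control flow that depends on it.
    static int sum3(const int *P) {
      if (!P) {               // dead when the caller passes a non-null pointer
        std::puts("null input");
        return -1;
      }
      return P[0] + P[1] + P[2];
    }

    int caller() {
      int Local[3] = {1, 2, 3};
      // &Local[0] is trivially non-null, so after inlining the branch above
      // folds away; discounting it makes inlining here look cheaper.
      return sum3(Local);
    }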
Differential Revision: http://reviews.llvm.org/D9129
llvm-svn: 240828
If pseudoToMCOpcode failed, we would return the original opcode, so operands
would be swapped, but the instruction would remain the same.
It resulted in LSHLREV a, b ---> LSHLREV b, a.
This fixes Glamor text rendering and
piglit/arb_sample_shading-builtin-gl-sample-mask on VI.
This is a candidate for stable branches.
v2: the test was simplified by Tom Stellard
llvm-svn: 240824
This patch corresponds to review:
http://reviews.llvm.org/D10637
This is the first round of additions of missing builtins listed in the ABI document. More to come (this builds on what seurer already added). This patch adds the following (a brief usage sketch follows the list):
vector signed long long vec_abs(vector signed long long)
vector double vec_abs(vector double)
vector signed long long vec_add(vector signed long long, vector signed long long)
vector unsigned long long vec_add(vector unsigned long long, vector unsigned long long)
vector double vec_add(vector double, vector double)
vector double vec_and(vector bool long long, vector double)
vector double vec_and(vector double, vector bool long long)
vector double vec_and(vector double, vector double)
vector signed long long vec_and(vector signed long long, vector signed long long)
vector double vec_andc(vector bool long long, vector double)
vector double vec_andc(vector double, vector bool long long)
vector double vec_andc(vector double, vector double)
vector signed long long vec_andc(vector signed long long, vector signed long long)
vector double vec_ceil(vector double)
vector bool long long vec_cmpeq(vector double, vector double)
vector bool long long vec_cmpge(vector double, vector double)
vector bool long long vec_cmpge(vector signed long long, vector signed long long)
vector bool long long vec_cmpge(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmpgt(vector double, vector double)
vector bool long long vec_cmple(vector double, vector double)
vector bool long long vec_cmple(vector signed long long, vector signed long long)
vector bool long long vec_cmple(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmplt(vector double, vector double)
vector bool long long vec_cmplt(vector signed long long, vector signed long long)
vector bool long long vec_cmplt(vector unsigned long long, vector unsigned long long)
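As a hedged illustration of how a few of these overloads are used (this is an
example, not part of the patch; it needs a POWER target with VSX enabled,
e.g. clang -mcpu=pwr8):

    #include <altivec.h>

    // Element-wise |A| + |B| using the new vector double overloads.
    vector double sum_abs(vector double A, vector double B) {
      return vec_add(vec_abs(A), vec_abs(B));
    }

    // Element-wise X[i] < Y[i] on 64-bit elements; each result element is
    // all-ones when true and all-zeros when false.
    vector bool long long less_than(vector signed long long X,
                                    vector signed long long Y) {
      return vec_cmplt(X, Y);
    }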
llvm-svn: 240821
This patch corresponds to review:
http://reviews.llvm.org/D10638
This is the back end portion of patch
http://reviews.llvm.org/D10637
It just adds the codegen and intrinsic functions necessary to support that patch in the back end.
llvm-svn: 240820
The body of the loops here only contained asserts. This triggered an
unused-variable warning in release builds, which -Werror turned into an
error on the bots.
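A hedged illustration of the pattern (not the original code): with NDEBUG
defined in a release build the assert expands to nothing, the variable is
never read, and -Wunused-variable fires, which -Werror turns into an error.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    void checkAll(const std::vector<int> &Values) {
      for (std::size_t I = 0, E = Values.size(); I != E; ++I) {
        int V = Values[I];   // only consumed by the assert below
        assert(V >= 0 && "negative value");
        (void)V;             // one common fix: reference the value explicitly
      }
    }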
llvm-svn: 240819