The enum types array by design contains pointers to MDNodes rather than DIRefs.
Unique them when handling the enum types in DwarfDebug.
rdar://17628609
llvm-svn: 214139
Since we're storing lots of these, save two pointers per vector with a
custom type rather than using the relatively heavy `SmallVector`.
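A minimal sketch of the trade-off (a hypothetical type, not the one added by
this commit): `SmallVector` keeps separate begin/end/capacity bookkeeping plus
inline storage, while an array that is built once and never resized only needs
a pointer and a size:

    #include <cstddef>
    #include <memory>

    // Hypothetical fixed-size owned array: one pointer plus one size field.
    template <typename T> class OwnedArray {
      std::unique_ptr<T[]> Data;
      size_t Count = 0;

    public:
      explicit OwnedArray(size_t N) : Data(new T[N]()), Count(N) {}
      T &operator[](size_t I) { return Data[I]; }
      size_t size() const { return Count; }
      T *begin() { return Data.get(); }
      T *end() { return Data.get() + Count; }
    };

    int main() {
      OwnedArray<int> A(4);
      A[0] = 42;
      return (A.size() == 4 && A[0] == 42) ? 0 : 1;
    }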
Part of PR5680.
llvm-svn: 214135
DITypeArray is an array of DITypeRef; at its creation, we will create
DITypeRefs (i.e., use the identifier if the type node has an identifier).
This is the last patch to unique the type array of a subroutine type.
rdar://17628609
llvm-svn: 214132
Predict and serialize use-list order in bitcode. This makes the option
`-preserve-bc-use-list-order` work *most* of the time, but this is still
experimental.
- Builds a full value-table up front in the writer, sets up a list of
use-list orders to write out, and discards the table. This is a
simpler first step than determining the order from the various
overlapping IDs of values on-the-fly.
- The shuffles stored in the use-list order list have an unnecessarily
large memory footprint.
- `blockaddress` expressions cause functions to be materialized
out-of-order. For now I've ignored this problem, so use-list orders
will be wrong for constants used by functions that have block
addresses taken. There are a couple of ways to fix this, but I
don't have a concrete plan yet.
- When materializing functions lazily, the use-lists for constants
will not be correct. This use case is out of scope: what should the
use-list order be, if it's incomplete?
This is part of PR5680.
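As a rough illustration of what gets stored (hypothetical types, not the
actual writer code), a use-list order can be recorded as the permutation that
maps a value's in-memory uses to their predicted positions after a round-trip:

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Hypothetical record of one value's use-list order: the value's ID in the
    // writer's table plus a shuffle of its current use indices.
    struct UseListShuffle {
      unsigned ValueID;
      std::vector<unsigned> Shuffle; // Shuffle[k] = current index of the use
                                     // that should come k-th after reload
    };

    // PredictedRank[i] is the predicted position of use i after a round-trip;
    // identity shuffles could simply be skipped when serializing.
    UseListShuffle makeShuffle(unsigned ValueID,
                               const std::vector<unsigned> &PredictedRank) {
      UseListShuffle S{ValueID, std::vector<unsigned>(PredictedRank.size())};
      std::iota(S.Shuffle.begin(), S.Shuffle.end(), 0u);
      std::stable_sort(S.Shuffle.begin(), S.Shuffle.end(),
                       [&](unsigned A, unsigned B) {
                         return PredictedRank[A] < PredictedRank[B];
                       });
      return S;
    }

    int main() {
      UseListShuffle S = makeShuffle(7, {2, 0, 1});
      for (unsigned I : S.Shuffle)
        std::printf("%u ", I); // prints: 1 2 0
      return 0;
    }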
llvm-svn: 214125
Typedef DIArray to DITypedArray<DIDescriptor>, and typedef DITypeArray to
DITypedArray<DITypeRef>.
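A rough sketch of the resulting typedefs, with stub classes standing in for
the real DebugInfo types:

    // Stub placeholders for the real LLVM DebugInfo classes.
    class DIDescriptor {};
    class DITypeRef {};

    // Hypothetical simplified shape of the typed-array wrapper.
    template <typename T> class DITypedArray {
      // ... wraps the underlying MDNode and yields T for each element ...
    };

    typedef DITypedArray<DIDescriptor> DIArray;
    typedef DITypedArray<DITypeRef> DITypeArray;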
This is the third of a series of patches to handle type uniquing of the
type array for a subroutine type.
This commit should have no functionality change.
llvm-svn: 214115
This is the second of a series of patches to handle type uniquing of the
type array for a subroutine type.
For vector and array types, getElements returns the array of subranges, so it
is a better name than getTypeArray. Even for class, struct and enum types,
getElements returns the members, which can be subprograms.
setArrays can set up to two arrays; the second is the templates.
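A hedged sketch of that interface (stub types, not the real DICompositeType):

    class DIArray {};

    // Hypothetical simplified interface after the rename.
    class DICompositeType {
      DIArray Elements; // subranges, members, or subprograms
      DIArray TParams;  // template parameters

    public:
      DIArray getElements() const { return Elements; }

      // Sets the element array and, optionally, the template-parameter array.
      void setArrays(DIArray Elts, DIArray TPs = DIArray()) {
        Elements = Elts;
        TParams = TPs;
      }
    };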
This commit should have no functionality change.
llvm-svn: 214112
This is the first of a series of patches to handle type uniquing of the
type array for a subroutine type.
This commit makes sure unspecified_parameter is a DIType to enable converting
the type array for a subroutine type to an array of DITypes.
This commit should have no functionality change. With this commit, we may
change the unspecified type to be a DITrivialType instead of a DIType.
llvm-svn: 214111
inspection in the process, and shuffle the logging in the DAG combiner
around a bit.
With this it is much easier to follow what the legalizer is doing. It
should even accurately present most of the strange legalization
operations where a single node is replaced by multiple nodes, etc. There
is still some information lost (we log SDNodes not SDValues so we don't
log which result is used for which thing), but I think this is much
closer to a usable system. Notably, this will make it *much* more
apparent when legalization is actually happening inside the combiner, or
when there is a cycle caused by interactions of the legalizer and the
combiner.
The "bug" I fixed here I'm not sure is remotely possible to trigger. We
were only adding one of the nodes in a replacement to the updated set
rather than all of the nodes in the replacement. Realistically, the
worst result of this is nodes not getting back onto the worklist in the
DAG combiner. I doubt it is possible to trigger this today, and
I certainly don't have any ideas about how, but this at least brings the
code into alignment with the principled operation of the routine.
llvm-svn: 214105
While LLVM now supports both ELFv1 and ELFv2 ABIs, their use is currently
hard-coded via the target triple: powerpc64-linux is always ELFv1, while
powerpc64le-linux is always ELFv2.
These are of course the most common scenarios, but in principle it is
possible to support the ELFv2 ABI on big-endian or the ELFv1 ABI on
little-endian systems (and GCC does support that), and there are some
special use cases for that (e.g. certain Linux kernel versions could
only be built using ELFv1 on LE).
This patch implements the LLVM side of supporting this. As precedent
on other platforms suggests, ABI options are passed to the back-end as
features. Thus, this patch implements two features "elfv1" and "elfv2"
that select the desired ABI if present. (If neither is present, LLVM uses
the same default rules as before.)
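A minimal sketch of the selection logic described above (a hypothetical
helper, not the actual PPC backend code):

    enum class PPC64ABI { ELFv1, ELFv2 };

    // Choose the ABI from the feature flags, falling back to the existing
    // triple-based default when neither "elfv1" nor "elfv2" is given.
    PPC64ABI selectABI(bool HasELFv1Feature, bool HasELFv2Feature,
                       bool IsLittleEndian) {
      if (HasELFv2Feature)
        return PPC64ABI::ELFv2;
      if (HasELFv1Feature)
        return PPC64ABI::ELFv1;
      // Default rules as before: powerpc64le-linux -> ELFv2, else ELFv1.
      return IsLittleEndian ? PPC64ABI::ELFv2 : PPC64ABI::ELFv1;
    }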
llvm-svn: 214072
The subtarget information is the ultimate source of truth for the feature set
that is enabled at this point. We would previously not propagate the feature
information to the subtarget. While this worked for the most part (features
would be enabled/disabled as requested), if another operation that changed the
feature bits was encountered (such as a mode switch via a .arm or .thumb
directive), we would end up resetting the behaviour of the architectural
extensions.
Handling this properly requires a slightly more complicated approach. We need
to check whether the feature's state is actually changing; only then do we
toggle it. In return, we no longer have to calculate the feature bits ourselves.
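The core idea, sketched with a plain bitmask (a hypothetical helper, not the
actual ARM AsmParser code): only flip a feature when its state actually
changes.

    #include <cstdint>

    // Toggle a single feature bit only if the requested state differs from
    // the current one; otherwise leave the feature bits untouched.
    void setFeature(uint64_t &FeatureBits, uint64_t FeatureMask, bool Enable) {
      bool CurrentlyEnabled = (FeatureBits & FeatureMask) != 0;
      if (CurrentlyEnabled == Enable)
        return;                   // nothing to toggle
      FeatureBits ^= FeatureMask; // flip just this feature
    }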
The test changes are mostly to the diagnostics, which are now more uniform (a nice
side effect!). Add an additional test to ensure that we handle this case
properly.
Thanks to Nico Weber for alerting me to this issue!
llvm-svn: 214057
Rename to allowsMisalignedMemoryAccess.
On R600, 8- and 16-byte accesses are mostly OK with 4-byte alignment,
and don't need to be split into multiple accesses. Vector loads with
an alignment of the element type are not uncommon in OpenCL code.
llvm-svn: 214055
'J' represents a negative number suitable for an add/sub alias
instruction, but while preparing it to become an int64_t we were
mangling the sign extension. So "i32 -1" became 0xffffffffLL, for
example.
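The underlying C++ pitfall in isolation (a standalone illustration, not the
AsmParser code itself):

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t V = -1;
      int64_t ZeroExtended = (uint32_t)V; // 0xffffffff: the mangled value
      int64_t SignExtended = (int64_t)V;  // -1: what the alias expansion needs
      std::printf("%llx %lld\n", (unsigned long long)ZeroExtended,
                  (long long)SignExtended);
      return 0;
    }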
Should fix one half of PR20456.
llvm-svn: 214052
instructions in the legalized DAG, and leverage it to combine long
sequences of instructions to PSHUFB.
Eventually, the other x86-instruction-specific shuffle combines will
probably all be driven out of this routine. But the real motivation is
to detect after we have fully legalized and optimized a shuffle to the
minimal number of x86 instructions whether it is profitable to replace
the chain with a fully generic PSHUFB instruction even though doing so
requires either a load from a constant pool or tying up a register with
the mask.
While the Intel manuals claim it should be used when it replaces 5 or
more instructions (!!!!), my experience is that it is actually very fast
on modern chips, and so I've gone with a much more aggressive model of
replacing any sequence of 3 or more instructions.
I've also taught it to do some basic canonicalization to special-purpose
instructions which have smaller encodings than their generic
counterparts.
There are still quite a few FIXMEs here, and I've not yet implemented
support for lowering blends with PSHUFB (where its power really shines
due to being able to zero out lanes), but this starts implementing real
PSHUFB support even when using the new, fancy shuffle lowering. =]
llvm-svn: 214042
over each node in the worklist prior to combining.
This allows the combiner to produce new nodes which need to go back
through legalization. This is particularly useful when generating
operands to target specific nodes in a post-legalize DAG combine where
the operands are significantly easier to express as pre-legalized
operations. My immediate use case will be PSHUFB formation where we need
to build a constant shuffle mask with a build_vector node.
This also refactors the relevant functionality in the legalizer to
support this, and updates relevant tests. I've spoken to the R600 folks
and these changes look like improvements to them. The avx512 change
needs to be investigated, I suspect there is a disagreement between the
legalizer and the DAG combiner there, but it seems a minor issue so
leaving it to be re-evaluated after this patch.
Differential Revision: http://reviews.llvm.org/D4564
llvm-svn: 214020
The tale starts with r212808 which attempted to fix inversion of the low
and high bits when lowering MUL_LOHI. Sadly, that commit did not include
any positive test cases, and just removed some operations from a test
case where the actual logic being changed isn't fully visible from the
test.
This commit did two things. First, it reversed the low and high
results in the formation of the MERGE_VALUES node for the multiple
results. This is entirely correct.
Second it changed the shuffles for extracting the low and high
components from the i64 results of the multiplies to extract them
assuming a big-endian-style encoding of the multiply results. This
second change is wrong. There is no big-endian encoding in x86, the
results of the multiplies are normal v2i64s: when cast to v4i32, the low
i32s are at offsets 0 and 2, and the high i32s are at offsets 1 and 3.
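That layout claim is easy to check on a little-endian host with ordinary C++
(a standalone illustration, not SelectionDAG code):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
      // Two 64-bit multiply results, viewed as a v2i64.
      uint64_t Wide[2] = {0x1111111100000000ULL, 0x3333333322222222ULL};
      uint32_t Narrow[4]; // the same bytes viewed as a v4i32
      std::memcpy(Narrow, Wide, sizeof(Wide));
      // Low i32s land at indices 0 and 2, high i32s at indices 1 and 3:
      // prints 00000000 11111111 22222222 33333333
      std::printf("%08x %08x %08x %08x\n", Narrow[0], Narrow[1], Narrow[2],
                  Narrow[3]);
      return 0;
    }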
However, the first change wasn't enough to actually fix the bug, which
is (I assume) why the second change was also made. There was another bug
in the MERGE_VALUES formation: we weren't using a VTList, and so were
getting a single result node! When grabbing the *second* result from the
node, we got... well, it could be anything. I think this *appeared* to
invert things, but had to be causing other problems as well.
Fortunately, I fixed the MERGE_VALUES issue in r213931, so we should
have been fine, right? NOOOPE! Because the core bug was never addressed,
the test in vector-idiv failed when I fixed the MERGE_VALUES node.
Because there are essentially no docs for this node, I had to guess at
how to fix it and tried swapping the operands, restoring the order of
the original code before r212808. While this "fixed" the test case (in
that we produced the right instructions), we were still extracting the
wrong elements of the i64s, and thus PR20355 was still broken.
This commit essentially reverts the big-endian-style extraction part of
r212808 and goes back to the original masks which were correct. Now that
the MERGE_VALUES node formation is also correct, everything works. I've
also included a more detailed test from PR20355 to make sure this stays
fixed.
llvm-svn: 214011
The clever way to implement signed multiplication with unsigned *is
already implemented* and tested and working correctly. The bug is
somewhere else. Re-investigating.
This will teach me to not scroll far enough to read the code that did
what I thought needed to be done.
llvm-svn: 214009
signed multiplication is requested. While there is not a difference in
the *low* half of the result, the *high* half (used specifically to
implement the signed division by these constants) certainly differs. The
test case I've nuked was actively asserting wrong code.
There is a delightful solution to doing signed multiplication even when
we don't have it that Richard Smith has crafted, but I'll add the
machinery back and implement that in a follow-up patch. This at least
restores correctness.
llvm-svn: 214007
instructions which happen to have a constant mask.
Currently, this only handles a very narrow set of cases, but those
happen to be the cases that I care about for testing shuffles sanely.
This is a bit trickier than other shuffle instructions because we're
decoding constants out of the constant pool. The current MC layer makes
it completely impossible to inspect a constant pool entry, so we have to
do it at the MI level and attach the comment to the streamer on its way
out. So no joy for disassembling, but it does make test cases and asm
dumps *much* nicer.
Sorry for no test cases, but it didn't really seem that valuable to go
trolling through existing old test cases and updating them. I'll have
lots of testing of this in the upcoming patch for SSSE3 emission in the
new vector shuffle lowering code paths.
llvm-svn: 213986
This patch removes the empty coverage mapping regions.
Those regions were produced by clang's old mapping region generation
algorithm, but the new algorithm doesn't generate them.
llvm-svn: 213981
This is the first commit in a series that adds an @llvm.assume intrinsic which
can be used to provide the optimizer with a condition it may assume to be true
(when the control flow would hit the intrinsic call). Some basic properties are added here:
- llvm.assume(true) is dead.
- llvm.assume(false) is unreachable (this directly corresponds to the
documented behavior of MSVC's __assume(0)), as is llvm.assume(undef).
The intrinsic is tagged as writing arbitrarily, in order to maintain control
dependencies. BasicAA has been updated, however, to return NoModRef for any
particular location-based query so that we don't unnecessarily block code
motion.
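The two listed properties, expressed as a conceptual folding rule over a
hypothetical Assume marker (a sketch, not LLVM's actual simplification code):

    enum class AssumeAction { Drop, MarkUnreachable, Keep };

    // assume(true) carries no information and can be dropped; assume(false)
    // (and assume(undef)) marks the program point as unreachable.
    AssumeAction simplifyAssume(bool CondKnownTrue, bool CondKnownFalse) {
      if (CondKnownTrue)
        return AssumeAction::Drop;
      if (CondKnownFalse)
        return AssumeAction::MarkUnreachable;
      return AssumeAction::Keep;
    }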
llvm-svn: 213973
address of the stack guard was being spilled to the stack.
Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. The register allocator
can now just remat the value when it can't keep it in a register.
<rdar://problem/12475629>
llvm-svn: 213967
This commit implements the frameaddress intrinsic for the AArch64 architecture
in FastISel.
There were two test cases that pretty much tested the same thing, so I combined
them into a single test case.
Fixes <rdar://problem/17811834>
llvm-svn: 213959
Ugh. Turns out not even transformation passes link in how to read IR.
I sincerely believe the buildbots will finally agree with my system
after this though. (I don't really understand why all of this has been
working on my system, but not on all the buildbots.)
Create a new tool called llvm-uselistorder to use for verifying use-list
order. For now, just dump everything from the (now defunct)
-verify-use-list-order pass into the tool.
This might be a better way to test use-list order anyway.
Part of PR5680.
llvm-svn: 213957
This recommits r208930, r208933, and r208975 (by reverting r209338) and
reverts r209529 (the FIXME to readd this functionality once the tools
were fixed) now that DWP has been fixed to cope with a single section
for all fission type units.
Original commit message:
"Since type units in the dwo file are handled by a debug aware tool,
they don't need to leverage the ELF comdat grouping to implement
deduplication. Avoid creating all the .group sections for these as a
space optimization."
llvm-svn: 213956
I think the compiler got confused by the nested DEBUG macros.
It was failing with:
UseListOrder.cpp(80) : error C2059: syntax error : '}'
llvm-svn: 213954
`ValueEnumerator::OptimizeConstants()` creates forward references within
the constant pools, which makes predicting constants' use-list order
difficult. For now, just disable the optimization.
This can be re-enabled in the future in one of two ways:
- Enable a limited version of this optimization that doesn't create
forward references. One idea is to categorize constants by their
"height" and make that the top-level sort.
- Enable it entirely. This requires predicting how many times each
constant will be recreated as its operands' and operands' operands'
(etc.) forward references get resolved.
This is part of PR5680.
llvm-svn: 213953
Reverted by Eric Christopher (Thanks!) in r212203 after Bob Wilson
reported LTO issues. Duncan Exon Smith and Aditya Nandakumar helped
provide a reduced reproduction, though the failure wasn't too hard to
guess, and even easier with the example to confirm.
The assertion that the subprogram metadata associated with an
llvm::Function matches the scope data referenced by the DbgLocs on the
instructions in that function is not valid under LTO. In LTO, a C++
inline function might exist in multiple CUs and the subprogram metadata
nodes will refer to the same llvm::Function. In this case, depending on
the order of the CUs, the first instance of the subprogram metadata may
not be the one referenced by the instructions in that function and the
assertion will fail.
A test case (test/DebugInfo/cross-cu-linkonce-distinct.ll) is added, the
assertion removed and a comment added to explain this situation.
This was then reverted again in r213581 as it caused PR20367. The root
cause of this was that the early exit in LiveDebugVariables meant that
spurious DBG_VALUE intrinsics that referenced dead variables were not
removed, causing an assertion/crash later on. The fix is to have
LiveDebugVariables strip all DBG_VALUE intrinsics in functions without
debug info as they're not needed anyway. Test case added to cover this
situation (that occurs when a debug-having function is inlined into a
nodebug function) in test/DebugInfo/X86/nodebug_with_debug_loc.ll
Original commit message:
If a function isn't actually in a CU's subprogram list in the debug info
metadata, ignore all the DebugLocs and don't try to build scopes, track
variables, etc.
While this is possibly a minor optimization, it's also a correctness fix
for an incoming patch that will add assertions to LexicalScopes and the
debug info verifier to ensure that all scope chains lead to debug info
for the current function.
Fix up a few test cases that had broken/incomplete debug info that could
violate this constraint.
Add a test case where this occurs by design (inlining a
debug-info-having function in an attribute nodebug function - we want
this to work because /if/ the nodebug function is then inlined into a
debug-info-having function, it should be fine (and will work fine - we
just stitch the scopes up as usual), but should the inlining not happen,
we need to not assert-fail either).
llvm-svn: 213952
This functionality is currently turned off by default.
Part of the motivation for introducing scoped-noalias metadata is to enable the
preservation of noalias parameter attribute information after inlining.
Sometimes this can be inferred from the code in the caller after inlining, but
often we simply lose valuable information.
The overall process is fairly simple:
1. Create a new unique scope domain.
2. For each (used) noalias parameter, create a new alias scope.
3. For each pointer, collect the underlying objects. Add a noalias scope for
each noalias parameter from which we're not derived (and which has not been
captured prior to that point).
4. Add an alias.scope for each noalias parameter from which we might be
derived (or which has been captured before that point).
Note that the capture checks apply only if one of the underlying objects is not
an identified function-local object.
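As a rough illustration of steps 3 and 4 (a hypothetical helper using string
scope names; the capture checks are omitted):

    #include <set>
    #include <string>
    #include <vector>

    struct ScopeLists {
      std::vector<std::string> NoAlias;    // scopes this access cannot alias
      std::vector<std::string> AliasScope; // scopes this access may belong to
    };

    // For one memory access: NoAliasParams are the callee's noalias parameters
    // (one new scope each); UnderlyingParams are the noalias parameters found
    // among the access's underlying objects.
    ScopeLists classifyAccess(const std::set<std::string> &NoAliasParams,
                              const std::set<std::string> &UnderlyingParams) {
      ScopeLists Result;
      for (const std::string &Param : NoAliasParams) {
        if (UnderlyingParams.count(Param))
          Result.AliasScope.push_back(Param); // might derive from this parameter
        else
          Result.NoAlias.push_back(Param);    // provably not derived from it
      }
      return Result;
    }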
llvm-svn: 213949