We already handle the no-slabs case when checking whether the current slab
is large enough: if no slabs have been allocated, CurPtr and End are both 0.
alignPtr(0) will still be 0, and so the "if (Ptr + Size <= End)" check fails.
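For reference, a minimal sketch of that fast path, with alignPtr reduced to its
pointer arithmetic (member names follow this message, not the exact sources):

#include <cstddef>
#include <cstdint>

struct BumpAllocSketch {
  char *CurPtr = nullptr; // both null while no slabs are allocated
  char *End = nullptr;

  static char *alignPtr(char *Ptr, size_t Alignment) {
    // alignPtr(nullptr) stays nullptr: 0 is already aligned.
    uintptr_t P = reinterpret_cast<uintptr_t>(Ptr);
    return reinterpret_cast<char *>((P + Alignment - 1) & ~(Alignment - 1));
  }

  bool fitsInCurrentSlab(size_t Size, size_t Alignment) const {
    char *Ptr = alignPtr(CurPtr, Alignment);
    // With CurPtr == End == nullptr this is "0 + Size <= 0", which fails,
    // so the no-slabs case falls through to allocating a new slab.
    return Ptr + Size <= End;
  }
};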
Differential Revision: http://reviews.llvm.org/D4943
llvm-svn: 215841
While *most* (X sdiv 1) operations will get caught by InstSimplify, it
is still possible for an sdiv to appear in the worklist which hasn't been
simplified yet.
This means that it is possible for 0 - (X sdiv 1) to get transformed
into (X sdiv -1); dividing by -1 can make the transform produce undef
values instead of the proper result.
Sorry for the lack of a testcase; it's a bit problematic because it relies
on the exact order of operations in the worklist.
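For the record, a sketch of the hazard (LLVM IR semantics noted in comments,
since signed overflow is undefined in C++ itself; the helper name is made up):

#include <cstdint>
#include <limits>

// In LLVM IR, 'sub 0, X' wraps, so 0 - (X sdiv 1) is well defined for every
// X. After the rewrite, 'sdiv X, -1' overflows for X == INT64_MIN and yields
// an undefined result, so the transform is not value-preserving there.
bool rewriteProducesUndef(int64_t X) {
  return X == std::numeric_limits<int64_t>::min(); // the only bad input
}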
llvm-svn: 215818
We used to assume that any fixed-offset stack object was not aliased. This
meant that no IR value could point to the memory contained in such an object.
This is a reasonable default, but is not a universally-correct
target-independent fact. For example, on PowerPC (both Darwin and non-Darwin),
some byval arguments are allocated at fixed offsets by the ABI. These, however,
certainly can be pointed to by IR values. This change moves the 'isAliased'
logic out of FixedStackPseudoSourceValue and into MFI, and allows the isAliased
property to be overridden for fixed-offset objects.
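As a hedged sketch, a backend can now mark such an object as aliased when
creating it (the isAliased parameter below matches the in-tree
CreateFixedObject signature around this time; treat details as illustrative):

#include "llvm/CodeGen/MachineFrameInfo.h"
#include <cstdint>

// Create a fixed-offset object (e.g. an ABI-allocated byval slot) that IR
// values may legitimately point into.
int createAliasedFixedObject(llvm::MachineFrameInfo &MFI, int64_t Offset,
                             uint64_t Size) {
  return MFI.CreateFixedObject(Size, Offset, /*Immutable=*/false,
                               /*isAliased=*/true);
}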
This will be used by an upcoming commit to the PowerPC backend to fix PR20280.
No functionality change intended (the behavior of
FixedStackPseudoSourceValue::isAliased has been made more conservative for
callers that don't pass an MFI object, but I don't see any in-tree callers that
do that).
llvm-svn: 215794
This reverts commit r215784 / 3f8a26f6fe16cc76c98ab21db2c600bd7defbbaa.
LLD has 3 StringSavers, one of which takes a lock when saving the
string... Need to investigate more closely.
llvm-svn: 215790
This class is generally useful.
In breaking it out, the primary change is that it has been made
non-virtual. It seems like being abstract led to there being 3 different
(2 in llvm + 1 in clang) concrete implementations which disagreed about
the ownership of the saved strings (see the manual call to free() in the
unittest StrDupSaver; yes this is different from the CommandLine.cpp
StrDupSaver which owns the stored strings; which is different from
Clang's StringSetSaver which just holds a reference to a
std::set<std::string> which owns the strings).
I've identified 2 other places in the
codebase that are open-coding this pattern:
memcpy(Alloc.Allocate<char>(strlen(S)+1), S, strlen(S)+1)
I'll be switching them over. They are
* llvm::sys::Process::GetArgumentVector
* The StringAllocator member of YAMLIO's Input class
This also will allow simplifying Clang's driver.cpp quite a bit.
Let me know if there are any other places that could benefit from
StringSaver. I'm also thinking of adding a saveStringRef member for
getting a stable StringRef.
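For reference, the open-coded pattern boils down to something like this (a
minimal sketch, with the allocator owning the copy):

#include "llvm/Support/Allocator.h"
#include <cstring>

// Copy a C string into the allocator so the copy has a stable lifetime.
const char *saveCStr(llvm::BumpPtrAllocator &Alloc, const char *S) {
  size_t Len = std::strlen(S) + 1;        // include the terminating NUL
  char *Copy = Alloc.Allocate<char>(Len); // storage owned by the allocator
  std::memcpy(Copy, S, Len);
  return Copy;
}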
llvm-svn: 215784
This reverts:
r215595 "[FastISel][X86] Add large code model support for materializing floating-point constants."
r215594 "[FastISel][X86] Use XOR to materialize the "0" value."
r215593 "[FastISel][X86] Emit more efficient instructions for integer constant materialization."
r215591 "[FastISel][AArch64] Make use of the zero register when possible."
r215588 "[FastISel] Let the target decide first if it wants to materialize a constant."
r215582 "[FastISel][AArch64] Cleanup constant materialization code. NFCI."
llvm-svn: 215673
As X86MCAsmInfoDarwin uses '##' as its CommentString even though a single '#'
starts a comment, a workaround for this special case is added.
Fixes divisions in constant expressions for the AArch64 assembler and other
targets which use '//' as CommentString.
Patch by Janne Grunau!
llvm-svn: 215615
This changes the order in which FastISel tries to materialize a constant.
Originally it would try to use a simple target-independent approach, which
can lead to the generation of inefficient code.
On X86 this would result in the use of movabsq to materialize any 64-bit
integer constant - even for simple and small values such as 0 and 1. Some
very odd floating-point materializations could be observed too.
On AArch64 it would materialize the constant 0 in a register even though the
architecture has an actual "zero" register.
On ARM it would generate unnecessary mov instructions or not use mvn.
This change simply reverses the order and always asks the target first if it
would like to materialize the constant. This doesn't fix all the issues
mentioned above, but it enables the targets to implement such
optimizations.
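A sketch of the reordered logic (the helper names here are hypothetical; the
real FastISel hooks differ in detail):

namespace llvm { class Constant; }

// Target hook, now tried first; returns 0 when the target declines.
unsigned targetMaterializeConstant(const llvm::Constant *C);
// The old target-independent path, now only the fallback.
unsigned materializeConstantGeneric(const llvm::Constant *C);

unsigned materializeConstant(const llvm::Constant *C) {
  if (unsigned Reg = targetMaterializeConstant(C))
    return Reg; // e.g. XOR for 0 on X86, or AArch64's zero register
  return materializeConstantGeneric(C);
}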
Related to <rdar://problem/17420988>.
llvm-svn: 215588
New function to erase a machine instruction and mark DBG_VALUE
for removal. A DBG_VALUE is marked for removal when it references
an operand defined in the instruction.
Use the new function to clean up code in the dead machine instruction
removal pass.
llvm-svn: 215580
critical edge has been split. The MachineDominatorTree will then lazily
update the underlying dominance properties when required.
** Context **
This is a follow-up of r215410.
Each time a critical edge is split this invalidates the dominator tree
information. Thus, subsequent queries of that interface will be slow until the
underlying information is actually recomputed (costly).
** Problem **
Prior to this patch, splitting a critical edge needed to query the dominator
tree to update the dominator information.
Therefore, splitting a bunch of critical edges will likely produce poor
performance as each query to the dominator tree will use the slow query path.
This happens a lot in passes like MachineSink and PHIElimination.
** Proposed Solution **
Splitting a critical edge is a local modification of the CFG. Moreover, as soon
as a critical edge is split, it is not critical anymore and thus cannot be a
candidate for critical edge splitting anymore. In other words, the edges between
a basic block inserted on a critical edge and its predecessor and successor
cannot themselves be candidates for critical edge splitting.
Using these observations, we can batch the critical edge splits and apply
them all at once before updating the DT information.
The core of this patch moves the update of the MachineDominatorTree information
from MachineBasicBlock::SplitCriticalEdge to a lazy MachineDominatorTree.
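A sketch of the batching idea with hypothetical names (the in-tree lazy
MachineDominatorTree is richer than this):

#include <vector>

class MachineBasicBlock; // opaque in this sketch

struct EdgeSplit {
  MachineBasicBlock *From, *To, *NewBB;
};

class LazyDomTreeSketch {
  std::vector<EdgeSplit> Pending; // splits recorded since the last query
public:
  // O(1): just remember the local CFG change; no tree update here.
  void recordSplitCriticalEdge(MachineBasicBlock *From, MachineBasicBlock *To,
                               MachineBasicBlock *NewBB) {
    Pending.push_back({From, To, NewBB});
  }
  bool dominates(MachineBasicBlock *A, MachineBasicBlock *B) {
    applyPending(); // pay the update cost once, right before a query
    return queryTree(A, B);
  }
private:
  void applyPending() { /* fold each pending split into the tree */ Pending.clear(); }
  bool queryTree(MachineBasicBlock *, MachineBasicBlock *) { return false; }
};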
** Performance **
Thanks to this patch, the motivating example compiles in 4- minutes instead of
6+ minutes. No test case is added, as the motivating example has nothing
special about it other than being huge!
The binaries are strictly identical for all the llvm test-suite + SPECs with and
without this patch for both Os and O3.
Regarding compile time, I observed only noise, although on average I saw a
small improvement.
<rdar://problem/17894619>
llvm-svn: 215576
Add header guards to files that were missing guards. Remove #endif comments,
as they don't seem common in LLVM (we can easily add them back if we decide
they're useful).
Changes made by clang-tidy with minor tweaks.
llvm-svn: 215558
Added avx512_movnt_vl multiclass for handling 256/128-bit forms of the instruction.
Added encoding and lowering tests.
Reviewed by Elena Demikhovsky <elena.demikhovsky@intel.com>
llvm-svn: 215536
This implements PPCTargetLowering::getTgtMemIntrinsic for Altivec load/store
intrinsics. As with the construction of the MachineMemOperands for the
intrinsic calls used for unaligned load/store lowering, the only slight
complication is that we need to represent a larger memory range than the
loaded/stored value-type size (because the address is rounded down to an
aligned address, and we need to conservatively represent the entire possible
range of the actual access). This required adding an extra size field to
TargetLowering::IntrinsicInfo, and this was done in a way that required no
modifications to other targets (the size defaults to the store size of the
provided memory data type).
This fixes test/CodeGen/PowerPC/unal-altivec-wint.ll (so it can be un-XFAILed).
llvm-svn: 215512
Unfortunately, our use of the SDNode class hierarchy for INTRINSIC_W_CHAIN and
INTRINSIC_VOID nodes is somewhat broken right now. These nodes sometimes are
used for memory intrinsics (those with MachineMemOperands), and sometimes not.
When not, the nodes are not created as instances of MemIntrinsicSDNode, but
rather created as some other subclass of SDNode using DAG::getNode. When they
are memory intrinsics, they are created using DAG::getMemIntrinsicNode as
instances of MemIntrinsicSDNode. MemIntrinsicSDNode is a subclass of
MemSDNode, but prior to r214452, we had a non-self-consistent setup whereby
MemIntrinsicSDNode::classof on INTRINSIC_W_CHAIN and INTRINSIC_VOID would
return true but MemSDNode::classof on INTRINSIC_W_CHAIN and INTRINSIC_VOID
would return false. In r214452, MemSDNode::classof was changed to return true
for INTRINSIC_W_CHAIN and INTRINSIC_VOID, which is now self-consistent. The
problem is that neither the pre-r214452 logic nor the post-r214452 logic is
really right. The truth is that not all INTRINSIC_W_CHAIN and INTRINSIC_VOID
nodes are instances of MemIntrinsicSDNode (or MemSDNode for that matter), and
the return value from classof needs to reflect that. This was broken before
r214452 (because MemIntrinsicSDNode::classof always returned true), and was
broken afterward (because MemSDNode::classof also always returned true), and
will now be correct.
The minimal solution is to grab one of the SubclassData bits (there is one left
for MemIntrinsicSDNode nodes) and use it to store whether or not a particular
INTRINSIC_W_CHAIN or INTRINSIC_VOID is really an instance of
MemIntrinsicSDNode or not. Doing this allows both MemIntrinsicSDNode::classof
and MemSDNode::classof to return the correct answer for the underlying object
for both the memory-intrinsic and non-memory-intrinsic cases.
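A standalone sketch of the resulting scheme (enum and field names are
illustrative, not the SelectionDAG types):

// The spare SubclassData bit is set only when the node is created through
// SelectionDAG::getMemIntrinsicNode.
enum OpcodeSketch { INTRINSIC_W_CHAIN, INTRINSIC_VOID, LOAD, STORE, OTHER };

struct NodeSketch {
  OpcodeSketch Opc;
  bool MemIntrinsicBit;
};

bool isMemIntrinsic(const NodeSketch &N) {
  return (N.Opc == INTRINSIC_W_CHAIN || N.Opc == INTRINSIC_VOID) &&
         N.MemIntrinsicBit;
}

// classof can now be exact for both flavors of INTRINSIC_* node.
bool memSDNodeClassof(const NodeSketch &N) {
  return N.Opc == LOAD || N.Opc == STORE || isMemIntrinsic(N);
}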
This fixes the problem that r214452 created in the SelectionDAGDumper (thanks
to Matt Arsenault for pointing it out).
Because PowerPC does not implement getTgtMemIntrinsic, this change breaks
test/CodeGen/PowerPC/unal-altivec-wint.ll. I've XFAILed it for now, and will
fix it in a follow-up commit.
llvm-svn: 215511
It's not clear what the semantics of a self-move should be. The
consensus appears to be that a self-move should leave the object in a
moved-from state, which is what our existing move assignment operator
does.
However, the MSVC 2013 STL will perform self-moves in some cases. In
particular, when doing a std::stable_sort of an already sorted APSInt
vector of an appropriate size, one of the merge steps will self-move
half of the elements.
We don't notice this when building with MSVC, because MSVC will not
synthesize the move assignment operator for APSInt. Presumably MSVC
does this because APInt, the base class, has user-declared special
members that implicitly delete move special members. Instead, MSVC
selects the copy-assign operator, which defends against self-assignment.
Clang, on the other hand, selects the move-assign operator, and we get
garbage APInts.
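A sketch of the kind of guard that makes a self-move harmless (modeled loosely
on APInt's heap-backed representation; simplified):

#include <cstdint>

struct IntSketch {
  unsigned BitWidth = 0;
  uint64_t *Words = nullptr; // heap storage for the multi-word case

  IntSketch &operator=(IntSketch &&That) {
    if (this == &That)  // MSVC's stable_sort may self-move an element;
      return *this;     // without this check we would free our own storage
    delete[] Words;
    BitWidth = That.BitWidth;
    Words = That.Words;
    That.BitWidth = 0;  // leave a valid moved-from state behind
    That.Words = nullptr;
    return *this;
  }
};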
llvm-svn: 215478
No functional change. To be used in future commits that need to look
for such instructions.
Reviewed By: rafael
Differential Revision: http://reviews.llvm.org/D4504
llvm-svn: 215413
This patch adds a new property: isRegSequence and the related target hooks:
TargetInstrInfo::getRegSequenceInputs and
TargetInstrInfo::getRegSequenceLikeInputs to specify that a target specific
instruction is a (kind of) REG_SEQUENCE.
<rdar://problem/12702965>
llvm-svn: 215394
Remove the MinGW32 and Cygwin types from the OSType enumeration. These values
are now represented via Windows environments instead. They were a source of
confusion and needlessly cluttered the code. The cost of doing this is that we
must sink the check for them into the normalization code path along with the
spelling.
Addresses PR20592.
llvm-svn: 215303
floating point exceptions, added use of flag to fold potentially exception
raising floating point math in selection DAG. No functionality change, as
targets have to explicitly ask for this behavior and none does today.
llvm-svn: 215222
be deleted. This will be reapplied as soon as possible and before
the 3.6 branch date at any rate.
Approved by Jim Grosbach, Lang Hames, Rafael Espindola.
This reverts commits r215111, 215115, 215116, 215117, 215136.
llvm-svn: 215154
it breaks the modules builds (where CallGraph.h can be quite reasonably
transitively included by an unimported portion of a module, and CallGraph.cpp
not linked in), and appears to have been entirely redundant since PR780 was
fixed back in 2008.
If this breaks anything, please revert; I have only tested this with a single
configuration, and it's possible that this is still somehow fixing something
(though I doubt it, since no other similar file uses this mechanism any more).
llvm-svn: 215142
I am sure we will be finding bits and pieces of dead code for years to
come, but this is a good start.
Thanks to Lang Hames for making MCJIT a good replacement!
llvm-svn: 215111
This changes Win64EHEmitter into a utility WinEH UnwindEmitter that can be
shared across multiple architectures and a target specific bit which is
overridden (Win64::UnwindEmitter). This enables sharing the section selection
code across X86 and the intended use in ARM for emitting unwind information for
Windows on ARM.
llvm-svn: 215050
to get the subtarget and that's accessible from the MachineFunction
now. This helps clear the way for smaller changes where getting
a subtarget will require passing in a MachineFunction/Function as
well.
llvm-svn: 214988
I initially used a `SmallVector<>` for `UseListOrder::Shuffle`, which
was a silly choice. When I realized my error I quickly rolled a custom
data structure.
This commit simplifies it to a `std::vector<>`. Now that I've had a
chance to measure performance, this data structure isn't part of a
bottleneck, so the additional complexity is unnecessary.
This is part of PR5680.
llvm-svn: 214979
This is mostly a cleanup, but it changes a fairly old behavior.
Every "real" LTO user was already disabling the silly internalize pass
and creating the internalize pass itself. The difference with this
patch is for "opt -std-link-opts" and the C api.
Now to get a usable behavior out of opt one doesn't need the funny
looking command line:
opt -internalize -disable-internalize -internalize-public-api-list=foo,bar -std-link-opts
llvm-svn: 214919
This is similar to what I did with the two-source permutation recently. (It's
almost too similar, so we should consider generating the masking variants
with some tablegen help.)
Both encoding and intrinsic tests are added as well. For the latter, the IR
is what the intrinsic test on the clang side generates.
Part of <rdar://problem/17688758>
llvm-svn: 214890
Some types, such as 128-bit vector types on AArch64, don't have any callee-saved registers. So if a value needs to stay live over a callsite, it must be spilled and refilled. This cost is now taken into account.
llvm-svn: 214859
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine based caching lookups from
the MachineFunction easily.
Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.
llvm-svn: 214838
This comment was referring to the DiagnosticSeverity with RS_
prefixes, but they're actually DS_. I've also modernized the comment
style since I was changing it anyway.
llvm-svn: 214787
This flag will be used by the coverage tool to help
compute the execution counts for each line in a source file.
Differential Revision: http://reviews.llvm.org/D4746
llvm-svn: 214740
path::const_iterator claims that it's a bidirectional iterator, but it
doesn't satisfy all of the contracts for a bidirectional iterator.
For example, n3376 24.2.5 p6 says "If a and b are both dereferenceable,
then a == b if and only if *a and *b are bound to the same object",
but this doesn't work with how we stash and recreate Components.
This means that our use of reverse_iterator on this type is invalid
and leads to many of the valgrind errors we're hitting, as explained
by Tilmann Scheller here:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140728/228654.html
Instead, we admit that path::const_iterator is only an input_iterator,
and implement a second input_iterator for path::reverse_iterator (by
changing const_iterator::operator-- to reverse_iterator::operator++).
All of the uses of this just traverse once over the path in one
direction or the other anyway.
llvm-svn: 214737
Summary:
This patch also fixes an issue with the way the Mips assembler enables/disables architecture
features. Before this patch, the assembler never disabled feature bits. For example,
.set mips64
.set mips32r2
would result in the 'OR' of mips64 with mips32r2 feature bits, which isn't right.
Unfortunately this isn't trivial to fix because there's not an easy way to clear
feature bits, as the algorithm in MCSubtargetInfo (ToggleFeature) only clears the
bits that imply the feature being cleared, not the bits implied by the feature
(there's a better explanation in the code I added).
Patch by Matheus Almeida and updated by Toma Tabacu
Reviewers: vmedic, matheusalmeida, dsanders
Reviewed By: dsanders
Subscribers: tomatabacu, llvm-commits
Differential Revision: http://reviews.llvm.org/D4123
llvm-svn: 214709
sequence - target independent framework
When the DAGCombiner selects instruction sequences
it could increase the critical path or resource length.
For example, on arm64 there are multiply-accumulate instructions (madd,
msub). If e.g. the equivalent multiply-add sequence is not on the
critical path, it makes sense to select it instead of the combined,
single accumulate instruction (madd/msub). The reason is that the
conversion from add+mul to the madd could lengthen the critical path
by the latency of the multiply.
But the DAGCombiner would always combine and select the madd/msub
instruction.
This patch uses machine trace metrics to estimate critical path length
and resource length of an original instruction sequence vs a combined
instruction sequence and picks the faster code based on its estimates.
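The selection rule reduces to something like this (illustrative; in-tree the
numbers come from the machine trace metrics):

// Prefer the combined sequence only when the estimates say it is no worse
// on the critical path and on resource length.
bool preferCombinedSequence(unsigned OldCritPathLen, unsigned NewCritPathLen,
                            unsigned OldResourceLen, unsigned NewResourceLen) {
  return NewCritPathLen <= OldCritPathLen &&
         NewResourceLen <= OldResourceLen;
}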
This patch only commits the target independent framework that evaluates
and selects code sequences. The machine instruction combiner is turned
off for all targets and expected to evolve over time by gradually
handling DAGCombiner patterns in the target specific code.
This framework lays the groundwork for fixing
rdar://16319955
llvm-svn: 214666
This makes EmitWindowsUnwindTables a virtual function and lowers the
implementation of the function to the X86WinCOFFStreamer. This method is a
target specific operation. This enables making the behaviour target dependent
by isolating it entirely to the target specific streamer.
llvm-svn: 214664
The frame information stored in this structure is driven by the requirements for
Windows NT unwinding rather than Windows 64 specifically. As a result, this
type can be shared across multiple architectures (ARM, AXP, MIPS, PPC, SH).
Rename this class in preparation for adding support for unwinding
information for Windows on ARM.
Take the opportunity to constify the members as everything except the
ChainedParent is read-only. This required some adjustment to the label
handling.
llvm-svn: 214663
`shuffleUseLists()` is only used in `verify-uselistorder`, so move it
there to avoid bloating other executables. As a drive-by, update some
of the header docs.
This is part of PR5680.
llvm-svn: 214592
This updates the instrumentation based profiling format so that when
we have multiple functions with the same name (but different function
hashes) we keep all of them instead of rejecting the later ones.
There are a number of scenarios where this can come up where it's more
useful to keep multiple function profiles:
* Name collisions in unrelated libraries that are profiled together.
* Multiple "main" functions from multiple tools built against a common
library.
* Combining profiles from different build configurations (i.e., asserts
and no-asserts)
The profile format now stores the number of counters between the hash
and the counts themselves, so that multiple sets of counts can be
stored. Since this is backwards incompatible, I've bumped the format
version and added some trivial logic to skip this when reading the old
format.
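Sketched with illustrative field names (not the exact on-disk encoding), a
per-function record now looks like:

#include <cstdint>
#include <vector>

struct FuncProfileRecordSketch {
  uint64_t FunctionHash;        // distinguishes same-named functions
  uint64_t NumCounters;         // new: stored between the hash and counts
  std::vector<uint64_t> Counts; // NumCounters execution counts
};
// One name can now map to several records, one per function hash.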
llvm-svn: 214585
variables (for example, by-value struct arguments passed in registers, or
large integer values split across several smaller registers).
On the IR level, this adds a new type of complex address operation OpPiece
to DIVariable that describes size and offset of a variable fragment.
On the DWARF emitter level, all pieces describing the same variable are
collected, sorted and emitted as DWARF expressions using the DW_OP_piece
and DW_OP_bit_piece operators.
http://reviews.llvm.org/D3373
rdar://problem/15928306
What this patch doesn't do / Future work:
- This patch only adds the backend machinery to make this work, patches
that change SROA and SelectionDAG's type legalizer to actually create
such debug info will follow. (http://reviews.llvm.org/D2680)
- Making the DIVariable complex expressions into an argument of dbg.value
will reduce the memory footprint of the debug metadata.
- The sorting/uniquing of pieces should be moved into DebugLocEntry,
to facilitate the merging of multi-piece entries.
llvm-svn: 214576
Although unlinked `BasicBlock`s can be created, there's currently no way
to insert them into `Function`s after the fact. In particular,
`moveAfter()` and `moveBefore()` require that the basic block is already
linked.
Extract the logic for initially linking a `BasicBlock` out of the
constructor and into a member function that can be used for lazy
insertion.
- Asserts that the basic block is currently unlinked.
- Matches the logic of the constructor.
- Changed the constructor to use it since the logic matches.
This is needed in a follow-up commit for PR5680.
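A usage sketch, assuming the member is named insertInto() as in the tree (the
surrounding API calls are standard, shown only as illustration):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

using namespace llvm;

void addEntryBlockLazily(Module &M) {
  LLVMContext &Ctx = M.getContext();
  auto *FTy = FunctionType::get(Type::getVoidTy(Ctx), /*isVarArg=*/false);
  auto *F = Function::Create(FTy, Function::ExternalLinkage, "f", &M);
  // Create the block unlinked, then link it after the fact.
  BasicBlock *BB = BasicBlock::Create(Ctx, "entry");
  BB->insertInto(F); // asserts BB is unlinked, then appends it to F
}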
llvm-svn: 214563
so that we can use it to get the old-style JIT out of the subtarget.
This code should be removed when the old-style JIT is removed
(imminently).
llvm-svn: 214560
`BlockAddress`es are interesting in that they can reference basic blocks
from *outside* the block's function. Since basic blocks are not global
values, this presents particular challenges for lazy parsing.
One corner case was found in PR11677 and fixed in r147425. In that
case, a global variable references a block address. It's necessary to
load the relevant function to resolve the forward reference before doing
anything with the module.
By inspection, I found (and have fixed here) two other cases:
- An instruction from one function references a block address from
another function, and only the first function is lazily loaded.
I fixed this the same way as PR11677: by eagerly loading the
referenced function.
- A function whose block address is taken is dematerialized, leaving
invalid references to it.
I fixed this by refusing to dematerialize functions whose block
addresses are taken (if you have to load it, you can't unload it).
llvm-svn: 214559
If INTRINSIC_W_CHAIN and INTRINSIC_VOID are MemIntrinsicSDNodes, and a
MemIntrinsicSDNode is a MemSDNode, then INTRINSIC_W_CHAIN and INTRINSIC_VOID
must be MemSDNodes too.
Noticed by inspection.
llvm-svn: 214452
Currently when DAGCombine converts loads feeding a switch into a switch of
addresses feeding a load, the new load inherits the isInvariant flag of the left
side. This is incorrect, since invariant loads can be reordered in cases where it
is illegal to reorder normal loads.
This patch adds an isInvariant parameter to getExtLoad() and updates all call
sites to pass in the data if they have it or false if they don't. It also
changes the DAGCombine to use that data to make the right decision when
creating the new load.
llvm-svn: 214449
MSVC [1] thinks `UseListShuffleVector` needs a copy constructor, but I
don't. Let's see if being explicit about `UseListOrder` is convincing.
[1]: http://lab.llvm.org:8011/builders/lld-x86_64-win7/builds/11664/steps/build_Lld/logs/stdio
Here's the failure:
C:/lld-x86_64_win7/lld-x86_64-win7/llvm.src/include\llvm/IR/UseListOrder.h(92): error C2248: 'llvm::UseListShuffleVector::operator =' : cannot access private member declared in class 'llvm::UseListShuffleVector' (C:\lld-x86_64_win7\lld-x86_64-win7\llvm.src\lib\Bitcode\Writer\ValueEnumerator.cpp) [C:\lld-x86_64_win7\lld-x86_64-win7\llvm.obj\lib\Bitcode\Writer\LLVMBitWriter.vcxproj]
C:/lld-x86_64_win7/lld-x86_64-win7/llvm.src/include\llvm/IR/UseListOrder.h(56) : see declaration of 'llvm::UseListShuffleVector::operator ='
C:/lld-x86_64_win7/lld-x86_64-win7/llvm.src/include\llvm/IR/UseListOrder.h(32) : see declaration of 'llvm::UseListShuffleVector'
This diagnostic occurred in the compiler generated function 'llvm::UseListOrder &llvm::UseListOrder::operator =(const llvm::UseListOrder &)'
llvm-svn: 214224
Remove the copy constructor added in r214178 to appease MSVC17 since it
shouldn't be called at all. My guess is that explicitly deleting it
will make the compiler happy. To round out the operations I've also
deleted copy assignment and added move assignment. Otherwise no
functionality change.
llvm-svn: 214213
This will let users in other libraries know which error occurred. In particular,
it will be possible to check if the parsing failed or if the file is not
bitcode.
llvm-svn: 214209
Per feedback on r214111, we are going to use null to represent an unspecified
parameter. If the type array is {null}, it means a function that returns void;
if the type array is {null, null}, it means a variadic function that returns
void. In summary, if we have more than one element in the type array and the
last element is null, it is a variadic function.
rdar://17628609
llvm-svn: 214189
Since we're storing lots of these, save two-pointers per vector with a
custom type rather than using the relatively heavy `SmallVector`.
Part of PR5680.
llvm-svn: 214135
DITypeArray is an array of DITypeRef; at its creation, we will create
DITypeRef (i.e., use the identifier if the type node has an identifier).
This is the last patch to unique the type array of a subroutine type.
rdar://17628609
llvm-svn: 214132
Predict and serialize use-list order in bitcode. This makes the option
`-preserve-bc-use-list-order` work *most* of the time, but this is still
experimental.
- Builds a full value-table up front in the writer, sets up a list of
use-list orders to write out, and discards the table. This is a
simpler first step than determining the order from the various
overlapping IDs of values on-the-fly.
- The shuffles stored in the use-list order list have an unnecessarily
large memory footprint.
- `blockaddress` expressions cause functions to be materialized
out-of-order. For now I've ignored this problem, so use-list orders
will be wrong for constants used by functions that have block
addresses taken. There are a couple of ways to fix this, but I
don't have a concrete plan yet.
- When materializing functions lazily, the use-lists for constants
will not be correct. This use case is out of scope: what should the
use-list order be, if it's incomplete?
This is part of PR5680.
llvm-svn: 214125
Typedef DIArray to DITypedArray<DIDescriptor>. Also typedef DITypeArray as
DITypedArray<DITypeRef>.
This is the third of a series of patches to handle type uniqueing of the
type array for a subroutine type.
This commit should have no functionality change.
llvm-svn: 214115
This is the second of a series of patches to handle type uniqueing of the
type array for a subroutine type.
For vector and array types, getElements returns the array of subranges, so it
is a better name than getTypeArray. Even for class, struct and enum types,
getElements returns the members, which can be subprograms.
setArrays can set up to two arrays; the second is the templates.
This commit should have no functionality change.
llvm-svn: 214112
This is the first of a series of patches to handle type uniqueing of the
type array for a subroutine type.
This commit makes sure unspecified_parameter is a DIType to enable converting
the type array for a subroutine type to an array of DITypes.
This commit should have no functionality change. After this commit, we may
change the unspecified type to be a DITrivialType instead of a DIType.
llvm-svn: 214111
Rename to allowsMisalignedMemoryAccesses.
On R600, 8 and 16 byte accesses are mostly OK with 4-byte alignment,
and don't need to be split into multiple accesses. Vector loads with
an alignment of the element type are not uncommon in OpenCL code.
llvm-svn: 214055
checking whether the ArrayRef is equal to an explicit list of arguments.
This is particularly easy to implement even without variadic templates
because ArrayRef happens to be homogeneously typed. As a consequence we
can use a "clever" wrapper type and default arguments to capture in
a single method many arguments as well as *how many* arguments the user
specified.
Thanks to Dave Blaikie for helping me pull together this little helper.
Suggestions for how to improve or generalize it are of course welcome.
I'll be using it immediately in my follow-up patch. =D
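A hedged sketch of the trick, capped at three arguments (the in-tree helper
generalizes this; names are made up):

#include <cstddef>

// A sentinel wrapper records whether each optional argument was supplied,
// so a single method can compare against 1..N explicit values.
template <typename T> struct OptArg {
  T Value{};
  bool Present = false;
  OptArg() = default;
  OptArg(const T &V) : Value(V), Present(true) {}
};

template <typename T>
bool equalsList(const T *Data, size_t Size, OptArg<T> A0, OptArg<T> A1 = {},
                OptArg<T> A2 = {}) {
  const OptArg<T> Args[] = {A0, A1, A2};
  size_t N = 0;
  while (N < 3 && Args[N].Present)
    ++N; // count how many arguments the caller actually passed
  if (N != Size)
    return false;
  for (size_t I = 0; I != N; ++I)
    if (!(Data[I] == Args[I].Value))
      return false;
  return true;
}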
llvm-svn: 214041
over each node in the worklist prior to combining.
This allows the combiner to produce new nodes which need to go back
through legalization. This is particularly useful when generating
operands to target specific nodes in a post-legalize DAG combine where
the operands are significantly easier to express as pre-legalized
operations. My immediate use case will be PSHUFB formation where we need
to build a constant shuffle mask with a build_vector node.
This also refactors the relevant functionality in the legalizer to
support this, and updates relevant tests. I've spoken to the R600 folks
and these changes look like improvements to them. The avx512 change
needs to be investigated; I suspect there is a disagreement between the
legalizer and the DAG combiner there, but it seems a minor issue, so I'm
leaving it to be re-evaluated after this patch.
Differential Revision: http://reviews.llvm.org/D4564
llvm-svn: 214020
This patch removes the empty coverage mapping regions.
Those regions were produced by clang's old mapping region generation
algorithm, but the new algorithm doesn't generate them.
llvm-svn: 213981
This is the first commit in a series that adds an @llvm.assume intrinsic which
can be used to provide the optimizer with a condition it may assume to be true
(when the control flow would hit the intrinsic call). Some basic properties
are added here:
- llvm.assume(true) is dead.
- llvm.assume(false) is unreachable (this directly corresponds to the
documented behavior of MSVC's __assume(0)), as is llvm.assume(undef).
The intrinsic is tagged as writing arbitrarily, in order to maintain control
dependencies. BasicAA has been updated, however, to return NoModRef for any
particular location-based query so that we don't unnecessarily block code
motion.
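For illustration, emitting the intrinsic with IRBuilder looks roughly like
this (CreateAssumption is the later-added convenience wrapper, not part of
this commit):

#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Tell the optimizer that X > 0 holds whenever control reaches this point.
void assumePositive(IRBuilder<> &Builder, Value *X) {
  Value *Cond = Builder.CreateICmpSGT(X, ConstantInt::get(X->getType(), 0));
  Builder.CreateAssumption(Cond); // emits a call to @llvm.assume(i1 %cond)
}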
llvm-svn: 213973
address of the stack guard was being spilled to the stack.
Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. The register
allocator can now just rematerialize the value when it can't keep it in a
register.
<rdar://problem/12475629>
llvm-svn: 213967
Ugh. Turns out not even transformation passes link in how to read IR.
I sincerely believe the buildbots will finally agree with my system
after this though. (I don't really understand why all of this has been
working on my system, but not on all the buildbots.)
Create a new tool called llvm-uselistorder to use for verifying use-list
order. For now, just dump everything from the (now defunct)
-verify-use-list-order pass into the tool.
This might be a better way to test use-list order anyway.
Part of PR5680.
llvm-svn: 213957
This recommits r208930, r208933, and r208975 (by reverting r209338) and
reverts r209529 (the FIXME to readd this functionality once the tools
were fixed) now that DWP has been fixed to cope with a single section
for all fission type units.
Original commit message:
"Since type units in the dwo file are handled by a debug aware tool,
they don't need to leverage the ELF comdat grouping to implement
deduplication. Avoid creating all the .group sections for these as a
space optimization."
llvm-svn: 213956
In the process of fixing the noalias parameter -> metadata conversion process
that will take place during inlining (which will be committed soon, but not
turned on by default), I have come to realize that the semantics provided by
yesterday's commit are not really what we want. Here's why:
void foo(noalias a, noalias b, noalias c, bool x) {
  q = x ? a : b;  // q is a local pointer, derived from a or from b
  *c = *q;
}
Generically, we know that *c does not alias with *a and with *b (so there is an
'and' in what we know we're not), and we know that *q might be derived from *a
or from *b (so there is an 'or' in what we know that we are). So we do not want
the current semantics, where any noalias scope matching any alias.scope
causes a NoAlias return. What we want to know is that the noalias scopes form a
superset of the alias.scope list (meaning that all the things we know we're not
form a superset of all of the things the other instruction might be).
Making that change, however, introduces a composability problem. If we inline
once, adding the noalias metadata, and then inline again, adding more, we
append new scopes onto the noalias and alias.scope lists each time. But this
means that we could change what was previously a NoAlias result into a MayAlias
result because we appended an additional scope onto one of the alias.scope
lists. So, instead of giving scopes the ability to have parents (which I had
borrowed from the TBAA implementation, but seems increasingly unlikely to be
useful in practice), I've given them domains. The subset/superset condition now
applies within each domain independently, and we only need it to hold in one
domain. Each time we inline, we add the new scopes in a new scope domain, and
everything now composes nicely. In addition, this simplifies the
implementation.
llvm-svn: 213948
The dragonegg buildbot (and others?) started failing after
r213945/r213946 because `llvm-as` wasn't linking in the bitcode reader.
I think moving the verify functions to the same file as the verify pass
should fix the build. Adding a command-line option for maintaining
use-list order in assembly as a drive-by to prevent warnings about
unused static functions.
llvm-svn: 213947
Add a -verify-use-list-order pass, which shuffles use-list order, writes
to bitcode, reads back, and verifies that the (shuffled) order matches.
- The utility functions live in lib/IR/UseListOrder.cpp.
- Moved (and renamed) the command-line option to enable writing
use-lists, so that this pass can return early if the use-list orders
aren't being serialized.
It's not clear that this pass is the right direction long-term (perhaps
a separate tool instead?), but short-term it's a great way to test the
use-list order prototype. I've added an XFAIL-ed testcase that I'm
hoping to get working pretty quickly.
This is part of PR5680.
llvm-svn: 213945
SDValues, fixing the two bugs left in the regression suite.
The key for both of these was the use of a single value type rather than
a VTList, which caused an unintentionally single-result merge-value node.
Fix this by getting the appropriate VTList in place.
Doing this exposed that the comments in x86's code abouth how MUL_LOHI
operands are handle is wrong. The bug with the use of out-of-range
result numbers was hiding the bug about the order of operands here (as
best i can tell). There are more places where the code appears to get
this backwards still...
llvm-svn: 213931
with a result number outside the range of results for the node.
I don't know how we managed to not really check this very basic
invariant for so long, but the code is *very* broken at this point.
I have over 270 test failures with the assert enabled. I'm committing it
disabled so that others can join in the cleanup effort and reproduce the
issues. I've also included one of the obvious fixes that I already
found. More fixes to come.
llvm-svn: 213926
This patch implements the data structures, the reader and
the writers for the new code coverage mapping system.
The new code coverage mapping system uses the instrumentation
based profiling to provide code coverage analysis.
llvm-svn: 213910
This patch minimizes the number of nops that must be emitted on X86 to satisfy
stackmap shadow constraints.
To minimize the number of nops inserted, the X86AsmPrinter now records the
size of the most recent stackmap's shadow in the StackMapShadowTracker class,
and tracks the number of instruction bytes emitted since that stackmap
instruction was encountered. Padding is emitted (if it is required at all)
immediately before the next stackmap/patchpoint instruction, or at the end of
the basic block.
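The bookkeeping reduces to something like this (field and method names are
illustrative):

struct ShadowTrackerSketch {
  unsigned RequiredShadowSize = 0; // shadow of the most recent stackmap
  unsigned InstrBytesEmitted = 0;  // bytes emitted since that stackmap

  void startShadow(unsigned Size) {
    RequiredShadowSize = Size;
    InstrBytesEmitted = 0;
  }
  void recordInstr(unsigned Bytes) { InstrBytesEmitted += Bytes; }

  // Nop bytes still owed before the next stackmap/patchpoint or block end.
  unsigned paddingNeeded() const {
    return InstrBytesEmitted >= RequiredShadowSize
               ? 0
               : RequiredShadowSize - InstrBytesEmitted;
  }
};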
This optimization should reduce code-size and improve performance for people
using the llvm stackmap intrinsic on X86.
<rdar://problem/14959522>
llvm-svn: 213892
This commit adds scoped noalias metadata. The primary motivations for this
feature are:
1. To preserve noalias function attribute information when inlining
2. To provide the ability to model block-scope C99 restrict pointers
Neither of these two abilities is added here, only the necessary
infrastructure. In fact, there should be no change to existing functionality,
only the addition of new features. The logic that converts noalias function
parameters into this metadata during inlining will come in a follow-up commit.
What is added here is the ability to generally specify noalias memory-access
sets. Regarding the metadata, alias-analysis scopes are defined similar to TBAA
nodes:
!scope0 = metadata !{ metadata !"scope of foo()" }
!scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
!scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
!scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
!scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }
Loads and stores can be tagged with an alias-analysis scope, and also with a
noalias tag for a specific scope:
... = load %ptr1, !alias.scope !{ !scope1 }
... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }
When evaluating an aliasing query, if one of the instructions is associated
with an alias.scope id that is identical to the noalias scope associated with
the other instruction, or is a descendant (in the scope hierarchy) of the
noalias scope associated with the other instruction, then the two memory
accesses are assumed not to alias.
Note that if the first element of the scope metadata is a string, then it can
be combined across functions and translation units. The string can be replaced
by a self-reference to create globally unique scope identifiers.
[Note: This overview is slightly stylized, since the metadata nodes really need
to just be numbers (!0 instead of !scope0), and the scope lists are also global
unnamed metadata.]
Existing noalias metadata in a callee is "cloned" for use by the inlined code.
This is necessary because the aliasing scopes are unique to each call site
(because of possible control dependencies on the aliasing properties). For
example, consider a function: foo(noalias a, noalias b) { *a = *b; } that gets
inlined into bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); } --
now just because we know that a1 does not alias with b1 at the first call site,
and a2 does not alias with b2 at the second call site, we cannot let inlining
these functions have the metadata imply that a1 does not alias with b2.
llvm-svn: 213864
truncstores to support EVTs and return expand for non-simple ones.
This makes them more consistent with the isLegal... query style methods
and makes using them simpler in many scenarios.
No functionality actually changed.
llvm-svn: 213860
In order to enable the preservation of noalias function parameter information
after inlining, and the representation of block-level __restrict__ pointer
information (etc.), additional kinds of aliasing metadata will be introduced.
This metadata needs to be carried around in AliasAnalysis::Location objects
(and MMOs at the SDAG level), and so we need to generalize the current scheme
(which is hard-coded to just one TBAA MDNode*).
This commit introduces only the necessary refactoring to allow for the
introduction of other aliasing metadata types, but does not actually introduce
any (that will come in a follow-up commit). What it does introduce is a new
AAMDNodes structure to hold all of the aliasing metadata nodes associated with
a particular memory-accessing instruction, and uses that structure instead of
the raw MDNode* in AliasAnalysis::Location, etc.
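The new structure is essentially one slot per aliasing-metadata kind (a
sketch; the in-tree definition adds comparison and merge helpers):

namespace llvm { class MDNode; }

struct AAMDNodesSketch {
  llvm::MDNode *TBAA = nullptr;    // previously the only kind carried around
  llvm::MDNode *Scope = nullptr;   // alias.scope list (follow-up commit)
  llvm::MDNode *NoAlias = nullptr; // noalias list (follow-up commit)
};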
No functionality change intended.
llvm-svn: 213859