If the SI_KILL operand is constant, we can clear the exec mask if the
operand is negative, and do nothing otherwise.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 202337
any ranges to the list of ranges for the CU as we don't want to emit
them anyway. This ensures that we will still emit ranges if we have
a compile unit compiled with only line tables and one compiled with
full debug info requested (we'll emit for the one with full debug info).
Update testcase metadata accordingly to continue emitting ranges.
llvm-svn: 202333
and update everything accordingly. This can be used to conditionalize
the amount of output in the backend based on the amount of debug info
requested and the metadata emission scheme used by a front end (e.g. clang).
Paired with a commit to clang.
llvm-svn: 202332
This handles pathological cases in which we see a 2x increase in spill
code for large blocks (~50k instructions). I don't have a unit test
for this behavior.
Fixes rdar://16072279.
llvm-svn: 202304
The current approach to lower a vsetult is to flip the sign bit of the
operands, swap the operands and then use a (signed) pcmpgt. psubus (unsigned
saturating subtract) can be used to emulate a vsetult more efficiently:
+ case ISD::SETULT: {
+ // If the comparison is against a constant we can turn this into a
+ // setule. With psubus, setule does not require a swap. This is
+ // beneficial because the constant in the register is no longer
+ // destructed as the destination so it can be hoisted out of a loop.
I also enable lowering via psubus in a few other cases where it's clearly
beneficial: setule and setuge if minu/maxu cannot be used.
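To illustrate the constant case at the intrinsics level (a minimal sketch;
the uint16 element type, the constant 42, and the function name are mine,
not taken from the patch):
#include <emmintrin.h>
// Lane-wise x <u 42 on eight uint16 lanes, with no sign-bit flip or swap:
// x <u C  <=>  x <=u C-1  <=>  psubus(x, C-1) == 0.
static __m128i ult42(__m128i x) {
  __m128i diff = _mm_subs_epu16(x, _mm_set1_epi16(41)); // psubusw: max(x-41,0)
  return _mm_cmpeq_epi16(diff, _mm_setzero_si128());    // all-ones where x < 42
}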
rdar://problem/14338765
Patch by Adam Nemet <anemet@apple.com>.
llvm-svn: 202301
The aggressive anti-dependency breaker scans instructions, bottom-up, within the
scheduling region in order to find opportunities where register renaming can
be used to break anti-dependencies.
Unfortunately, the aggressive anti-dep breaker was treating a register definition
as defining all of that register's aliases (including super registers). This behavior
is incorrect when the super register is live and there are other definitions of
subregisters of the super register.
For example, given the following sequence:
%CR2EQ<def> = CROR %CR3UN, %CR3UN<kill>
%CR2GT<def> = IMPLICIT_DEF
%X4<def> = MFOCRF8 %CR2
the analysis of the first subregister definition would work as expected:
Anti: %CR2GT<def> = IMPLICIT_DEF
Def Groups: CR2GT=g194->g0(via CR2)
Antidep reg: CR2GT (zero group)
Use Groups:
but the analysis of the second one would not:
Anti: %CR2EQ<def> = CROR %CR3UN, %CR3UN<kill>
Def Groups: CR2EQ=g195
Antidep reg: CR2EQ
Rename Candidates for Group g195: ...
because, when processing the %CR2GT<def>, we'd mark all super registers of
%CR2GT (%CR2 in this case) as defined. As a result, when processing
%CR2EQ<def>, %CR2 no longer appears to be live, and %CR2EQ<def>'s group is not
unioned with the %CR2 group.
I don't have an in-tree test case for this yet (and even if I did, I don't have
a small one).
llvm-svn: 202294
We should apply fastcc whenever profitable. We can expand this list,
but there are lots of conventions with performance implications that we
don't want to change.
Differential Revision: http://llvm-reviews.chandlerc.com/D2705
llvm-svn: 202293
COFF object files with a string table size of 0 are currently rejected. This
prevents us from reading object files written by tools like cvtres that
violate the PECOFF spec and write 0 instead of 4 for the size of an empty
string table.
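A minimal sketch of the relaxed validation (names are mine; the real check
lives in the COFF reader):
#include <cstdint>
// Accept 0 as a synonym for the minimal legal size of 4 (the size field
// itself), since cvtres writes 0 for an empty string table.
static bool isValidStringTableSize(uint32_t Size) {
  return Size == 0 || Size >= 4;
}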
llvm-svn: 202292
We don't have any test with more than 6 address spaces, so a DenseMap is
probably not the correct answer.
An unsorted array would also be OK, but we have to sort it for printing anyway.
llvm-svn: 202275
Summary:
This should fix the MCJIT unit tests that were broken by r201792 on the MIPS buildbot.
MIPS currently uses the default implementation of sys::getHostCPUName() which
always returns "generic". For now, we will accept "generic" and coerce it to
"mips32" or "mips64" depending on the target architecture like we do for empty
CPU names.
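A sketch of the coercion (helper name and signature are mine; the real
logic lives in the MIPS subtarget):
#include <string>
// Treat "generic" the same way we already treat an empty CPU name.
static std::string selectCPU(std::string CPU, bool IsMips64) {
  if (CPU.empty() || CPU == "generic")
    CPU = IsMips64 ? "mips64" : "mips32";
  return CPU;
}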
Reviewers: jacksprat, matheusalmeida
Reviewed By: jacksprat
Differential Revision: http://llvm-reviews.chandlerc.com/D2878
llvm-svn: 202253
the default.
Based on the patch by Matt Arsenault, D1764!
I switched one place to use the more direct pointer type to compute the
desired address space, and I reworked the memcpy rewriting section to
reflect significant refactorings that this patch helped inspire.
Thanks to several of the folks who helped review and improve the patch
as well.
llvm-svn: 202247
to work independently for the slice side and the other side.
This allows us to only compute the minimum of the two when we actually
rewrite to a memcpy that needs to take the minimum, and preserve higher
alignment for one side or the other when rewriting to loads and stores.
This fix was inspired by seeing the result of some refactoring that
makes addrspace handling better.
llvm-svn: 202242
target_link_libraries(INTERFACE) doesn't add inter-target dependencies in
add_library, although final targets still depend on all of their dependent
libraries. This lets most libraries be built in parallel.
target_link_libraries(PRIVATE) is used for shared libraries. Each dependent
library is linked into the target .so, and its users will not see its
grandchildren. For example:
- libclang.so contains all of the libclang*.a archives it needs.
- c-index-test requires only libclang.so.
FIXME: lld is tweaked minimally. Adding INTERFACE to each library would be
a better approach.
llvm-svn: 202241
D1764, which in turn set off the other refactorings to make
'getSliceAlign()' a sensible thing.
There are two possible inputs to the required alignment of a memory
transfer intrinsic: the alignment constraints of the source and the
destination. If we are *only* introducing a (potentially new) offset
onto one side of the transfer, we don't need to consider the alignment
constraints of the other side. Use this to simplify the logic feeding
into alignment computation for unsplit transfers.
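For reference, the clamp feeding this computation is essentially
llvm::MinAlign; a self-contained sketch of its behavior:
#include <cstdint>
// The alignment provable for a pointer adjusted by an offset is the largest
// power of two dividing both the original alignment and the offset, i.e.
// the lowest set bit of (A | B).
static uint64_t minAlign(uint64_t A, uint64_t B) {
  return (A | B) & (1 + ~(A | B));
}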
Also, hoist the clamping of these intrinsics' magical zero alignment to
the more customary alignment of one so that it happens early. This lets
several other conditions melt away.
No functionality changed. There is a further improvement this exposes
which *will* change functionality, but that's arriving in a separate
patch.
llvm-svn: 202232
rewriting logic: don't pass custom offsets for the adjusted pointer to
the new alloca.
We always passed NewBeginOffset here. Sometimes we spelled it
BeginOffset, but only when they were in fact equal. What's worse, the API
is set up so that you can't reasonably call it with anything else -- it
assumes that you're passing it an offset relative to the *original*
alloca that happens to fall within the new one. That's the whole point
of NewBeginOffset, it's the clamped beginning offset.
No functionality changed.
llvm-svn: 202231
alignment of the slice being rewritten, not any arbitrary offset.
Every caller is really just trying to compute the alignment for the
whole slice, never for some arbitrary offset within it. They are also just
passing a type when they have one to see if we can skip an explicit
alignment in the IR by using the type's alignment. This makes for a much
simpler interface.
Another refactoring inspired by the addrspace patch for SROA, although
only loosely related.
llvm-svn: 202230
consistency with memcpy rewriting, and fix a latent bug in the alignment
management for memset.
The alignment issue is that getAdjustedAllocaPtr is computing the
*relative* offset into the new alloca, but the alignment isn't being set
to the relative offset; it was using the absolute offset, which is
into the old alloca.
I don't think it's possible to write a test case that actually reaches
this code where the resulting alignment would be observably different,
but the intent was clearly to use the relative offset within the new
alloca.
llvm-svn: 202229
rather than passing them as arguments.
While I generally prefer actual arguments, in this case the readability
loss is substantial. By using members we avoid repeatedly calculating
the offsets, and once we're using members it is useful to ensure that
those names *always* refer to the original-alloca-relative new offset
for a rewritten slice.
No functionality changed. Follow-up refactoring, all toward getting the
address space patch merged.
llvm-svn: 202228
slice being rewritten.
We had the same code scattered across most of the visits. Instead,
compute the new offsets and the slice size once when we start to visit
a particular slice, and use the member variables from then on. This
reduces quite a bit of code duplication.
No functionality changed. Refactoring inspired to make it easier to
apply the address space patch to SROA.
llvm-svn: 202227
checking in SROA.
The primary change is to just rely on uge for checking that the offset
is within the allocation size. This removes the explicit checks against
isNegative which were terribly error prone (including the reversed logic
that led to PR18615) and prevented us from supporting stack allocations
larger than half the address space.... Ok, so maybe the latter isn't
*common* but it's a silly restriction to have.
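A minimal sketch of why the single unsigned comparison suffices (names are
mine):
#include <cstdint>
// A "negative" offset reinterpreted as unsigned is huge, so one uge-style
// check rejects both before-the-start and past-the-end offsets.
static bool offsetOutOfBounds(uint64_t Offset, uint64_t AllocSize) {
  return Offset >= AllocSize; // the uge check
}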
Also, we used to try to support a PHI node which loaded from before the
start of the allocation if any of the loaded bytes were within the
allocation. This doesn't make any sense, we have never really supported
loading or storing *before* the allocation starts. The simplified logic
just doesn't care.
We continue to allow loading past the end of the allocation in part to
support cases where there is a PHI and some loads are larger than others
and the larger ones reach past the end of the allocation. We could solve
this a different and more conservative way, but I'm still somewhat
paranoid about this.
llvm-svn: 202224
Eventually DataLayoutPass should go away, but for now that is the only easy
way to get a DataLayout in some APIs. This patch only changes the ones that
have easy access to a Module.
One interesting issue with sometimes using DataLayoutPass and sometimes
fetching it from the Module is that we have to make sure they are equivalent.
We can get most of the way there by always constructing the pass with a Module.
In fact, the pass could be changed to point to an external DataLayout instead
of owning one to make this stricter.
Unfortunately, the C API passes a DataLayout, so it has to be up to the caller
to make sure the pass and the module are in sync.
llvm-svn: 202204
No tool does this currently, but as with everything else in a module, we
should be able to change its DataLayout.
Most of the fix is in DataLayout to make sure it can be reset properly.
The test uses Module::setDataLayout since the fact that we mutate a DataLayout
is an implementation detail. The module could hold an OwningPtr<DataLayout> and
the DataLayout itself could be immutable.
Thanks to Philip Reames for pushing me in the right direction.
llvm-svn: 202198
their inputs come from std::stable_sort and they are not total orders.
I'm not a huge fan of this, but the really bad std::stable_sort is right
at the beginning of Reassociate. After we commit to stable-sort based
consistent respect of source order, the downstream sorts shouldn't undo
that unless they have a total order or they are used in an
order-insensitive way. Neither appears to be true for these cases.
I don't have particularly good test cases, but this jumped out by
inspection when looking for output instability in this pass due to
changes in the ordering of std::sort.
llvm-svn: 202196
implemented this way a long time ago and due to the overwhelming bugs
that surfaced, moved to a much more relaxed variant. Richard Smith would
like to understand the magnitude of this problem and it seems fairly
harmless to keep some flag-controlled logic to get the extremely strict
behavior here. I'll remove it if it doesn't prove useful.
llvm-svn: 202193
We need to abort the formation of counter-register-based loops where there are
128-bit integer operations that might become function calls.
llvm-svn: 202192
Now that DataLayout is not a pass, store one in Module.
Since the C API expects to be able to get a char* to the datalayout description,
we have to keep a std::string somewhere. This patch keeps it in Module and also
uses it to represent modules without a DataLayout.
Once DataLayout is mandatory, we should probably move the string to DataLayout
itself since it won't be necessary anymore to represent the special case of a
module without a DataLayout.
llvm-svn: 202190
Variadic functions have an unspecified parameter tag after the last
argument. In IR this is represented as an unspecified parameter in the
subroutine type.
Paired commit with CFE r202185.
rdar://problem/13690847
This re-applies r202184 + a bugfix in DwarfDebug's argument handling.
llvm-svn: 202188
Variadic functions have an unspecified parameter tag after the last
argument. In IR this is represented as an unspecified parameter in the
subroutine type.
Paired commit with CFE.
rdar://problem/13690847
llvm-svn: 202184
A function with the uwtable attribute might be visited by the
stack unwinder, so the link register should be considered
clobbered after the execution of a branch and link
instruction (i.e. the definition of the machine instruction
can't be ignored) even when the callee function is marked
with noreturn.
llvm-svn: 202165
The behaviour of the XCore's instruction buffer means that the performance
of the same code sequence can differ depending on whether it starts at a 4
byte aligned address or not. Since we don't model the instruction buffer
in the backend we have no way of knowing for sure if it is beneficial to
word align a specific function. However, in the absence of precise
modelling, it is better on balance to word align functions because:
* It makes a fetch-nop while executing the prologue slightly less likely.
* If we don't word align functions then a small perturbation in one
function can have a dramatic knock-on effect. If the size of the function
changes it might change the alignment and therefore the performance of
all the functions that happen to follow it in the binary. This butterfly
effect makes it harder to reason about and measure the performance of
code.
llvm-svn: 202163
just "load". This helps avoid pointless de-duping with order-sensitive
numbers as we already have unique names from the original load. It also
makes the resulting IR quite a bit easier to read.
llvm-svn: 202140
the pointer adjustment code. This is the primary code path that creates
totally new instructions in SROA and being able to lump them based on
the pointer value's name for which they were created causes
*significantly* fewer name collisions and general noise in the debug
output. This is particularly significant because it is making it much
harder to track down instability in the output of SROA, as name
de-duplication is a totally harmless form of instability that gets in
the way of seeing real problems.
The new fancy naming scheme tries to dig out the root "pre-SROA" name
for pointer values and associate that all the way through the pointer
formation instructions. Digging out the root is important to prevent the
multiple iterative rounds of SROA from just layering too much cruft on
top of cruft here. We already track the layers of SROAs iteration in the
alloca name prefix. We don't need to duplicate it here.
Should have no functionality change, and shouldn't have any really
measurable impact on NDEBUG builds, as most of the complex logic is
debug-only.
llvm-svn: 202139
using OldPtr more heavily. Lots of this code was written before the
rewriter had an OldPtr member set up ahead of time. There are already
asserts in place that should ensure this doesn't change any
functionality.
llvm-svn: 202135
the break statement, not just think it to yourself....
No idea how this worked at all, much less survived most bots, my
bootstrap, and some bot bootstraps!
The Polly one didn't survive, and this was filed as PR18959. I don't
have a reduced test case and honestly I'm not seeing the need. What we
probably need here are better asserts / debug-build behavior in
SmallPtrSet so that this madness doesn't make it so far.
llvm-svn: 202129
sorting it. This helps uncover latent reliance on the original ordering,
which isn't guaranteed to be preserved by std::sort (but often is),
and which derives from the use-def chain ordering, which also isn't
(technically) guaranteed.
Only available in C++11 debug builds, and behind a flag to prevent noise
at the moment, but this is generally useful so figured I'd put it in the
tree rather than keeping it out-of-tree.
llvm-svn: 202106
the destination operand or source operand of a memmove.
It so happens that it was impossible for SROA to try to rewrite
self-memmove where the operands are *identical*, because either such
a thing is volatile (and we don't rewrite) or it is non-volatile, and we
don't even register it as a use of the alloca.
However, making the 'IsDest' test *rely* on this subtle fact is... Very
confusing for the reader. We should use the direct and readily available
test of the Use* which gives us concrete information about which operand
is being rewritten.
No functionality changed, I hope! ;]
llvm-svn: 202103
ordering.
The fundamental problem that we're hitting here is that the use-def
chain ordering is *itself* not a stable thing to be relying on in the
rewriting for SROA. Further, we use a non-stable sort over the slices to
arrange them based on the section of the alloca they're operating on.
With a debugging STL implementation (or different implementations in
stage2 and stage3) this can cause stage2 != stage3.
The specific aspect of this problem fixed in this commit deals with the
rewriting and load-speculation around PHIs and Selects. This, like many
other aspects of the use-rewriting in SROA, is really part of the
"strong SSA-formation" that is doen by SROA where it works very hard to
canonicalize loads and stores in *just* the right way to satisfy the
needs of mem2reg[1]. When we have a select (or a PHI) with 2 uses of the
same alloca, we test that loads downstream of the select are
speculatable around it twice. If only one of the operands to the select
needs to be rewritten, then if we get lucky we rewrite that one first
and the select is immediately speculatable. This can cause the order of
operand visitation, and thus the order of slices to be rewritten, to
change an alloca from promotable to non-promotable and vice versa.
The fix is to defer all of the speculation until *after* the rewrite
phase is done. Once we've rewritten everything, we can accurately test
for whether speculation will work (once, instead of twice!) and the
order ceases to matter.
This also happens to simplify the other subtlety of speculation -- we
need to *not* speculate anything unless the result of speculating will
make the alloca fully promotable by mem2reg. I had a previous attempt at
simplifying this, but it was still pretty horrible.
There is actually already a *really* nice test case for this in
basictest.ll, but on multiple STL implementations and inputs, we just
got "lucky". Fortunately, the test case is very small and we can
essentially build it in exactly the opposite way to get reasonable
coverage in both directions even from normal STL implementations.
llvm-svn: 202092
The patch defines new or refines existing generic scheduling classes to match
the behavior of the SSE instructions.
It also maps those scheduling classes on the related SSE instructions.
<rdar://problem/15607571>
llvm-svn: 202065
After this I will set the default back to F_None. The advantage is that
before this patch, forgetting to set F_Binary would corrupt a file on Windows.
Forgetting to set F_Text produces one that cannot be read in Notepad, which
is a better failure mode :-)
llvm-svn: 202052
During the LTO phase, LICM will move loop-invariant global variables out of
loops (informed by GlobalModRef). This makes more loops countable, presenting
opportunities for the loop vectorizer.
Adding the loop vectorizer improves some TSVC benchmarks and twolf/ref dataset
(5%) on x86-64.
radar://15970632
llvm-svn: 202051
The .error directive is similar to .err in that it will halt assembly if it is
evaluated for assembly. However, it permits a user-supplied message to be
rendered.
llvm-svn: 201999
The .ifeqs directive assembles the following code if the quoted string
parameters are equal. The strings must be quoted using double quotes.
llvm-svn: 201998
.align is handled specially on certain targets. .align without any parameters
on ARM indicates a default alignment (4). Handle the special case in the target
parser, but fall back to the generic parser for the normal version.
llvm-svn: 201988
The .ifne directive assembles the following section of code if the argument
expression is non-zero. Effectively, it is equivalent to .if.
llvm-svn: 201986
If the strings are not quoted, the first string stops at the first comma, and
the second string stops at the end of the line. Strings which contain
whitespace should be quoted. Unquoted space is to be discarded.
llvm-svn: 201985
The .err directive produces an error whenever it is assembled. This can be
useful for preventing assembly when an unexpected condition occurs.
llvm-svn: 201984
Before this patch they would take a boolean argument to say if the path
already existed. This was redundant with the returned error_code, which is
able to represent that, and it allowed callers to incorrectly check only
the existed flag instead of first checking the error code.
Instead, pass in a boolean flag to say if the previous (non-)existence should be
an error or not.
Callers of the of the old simple versions are not affected. They still ignore
the previous (non-)existence as they did before.
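A POSIX sketch of the resulting shape (names and the default are mine,
mirroring the description above):
#include <cerrno>
#include <sys/stat.h>
#include <system_error>
// Pre-existence is an error only when the caller asks for it, instead of
// being reported through a separate out-parameter.
static std::error_code createDir(const char *Path, bool IgnoreExisting = true) {
  if (::mkdir(Path, 0755) == 0)
    return std::error_code();
  if (errno == EEXIST && IgnoreExisting)
    return std::error_code(); // existed before; caller opted not to care
  return std::error_code(errno, std::generic_category());
}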
llvm-svn: 201979
The LLVMSupport library implementation consolidates all dependencies on
system libraries. Move the logic gathering system libraries out of
'cmake/modules/LLVM-Config.cmake' and into 'lib/Support/CMakeLists.txt'.
Use the target_link_libraries() command there to tell CMake about the
link dependencies of the LLVMSupport implementation. CMake will
automatically propagate this to all targets that link LLVMSupport
directly or indirectly.
We still need to build knowledge of system library dependencies into
'llvm-config'. Store the list of libraries needed in a property on
LLVMSupport and teach 'tools/llvm-config/CMakeLists.txt' to retrieve it
from there.
Drop all calls to 'link_system_libs' and 'get_system_libs' from our
CMake code. Replace their implementations with a warning that explains
the calls are no longer necessary. Also drop from 'LLVMConfig.cmake'
the HAVE_* and related variables that were published there only to allow
'get_system_libs' to run outside our build process.
Contributed by Brad King.
llvm-svn: 201969
This adds support for the .short and its alias .hword for adding literal values
into the object file. This is similar to the .word directive; however, rather
than inserting a 4-byte value, it inserts a 2-byte value.
llvm-svn: 201968
Offsets past the range of single-slash encoding are encoded as base64,
padded to 6 characters, and prefixed with two slashes. This encoding is
undocumented but used by MSVC.
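A sketch of the layout as described (the digit order is inferred from
observed output; the function name is mine):
#include <cstdint>
#include <string>
// Six base64 digits, most significant first, behind a "//" prefix.
static std::string encodeBigOffset(uint64_t Offset) {
  static const char Table[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  std::string Name = "//AAAAAA";
  for (int I = 7; I >= 2; --I) {
    Name[I] = Table[Offset % 64];
    Offset /= 64;
  }
  return Name; // e.g. offset 1 encodes as "//AAAAAB"
}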
llvm-svn: 201940
This commit moves getSLEB128Size() and getULEB128Size() from
MCAsmInfo to LEB128.h and removes some copy-and-paste code.
Besides, this commit also adds some unit tests for the LEB128
functions.
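For reference, the size computation just counts 7-bit groups; a standalone
sketch matching getULEB128Size's documented behavior:
#include <cstdint>
// Number of bytes in the ULEB128 encoding of Value: one byte per 7 bits,
// with at least one byte for zero.
static unsigned ulebSize(uint64_t Value) {
  unsigned Size = 0;
  do {
    Value >>= 7;
    ++Size;
  } while (Value != 0);
  return Size;
}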
llvm-svn: 201937
The LLVM diagnostics are now wired up in clang (since r200931),
so the user experience will not be impacted by this change
anymore.
Related to <rdar://problem/15886697>
llvm-svn: 201915
CodeGenPrepare makes extensive use of TargetLowering, which is part of
libLLVMCodeGen. This is a layering violation which would eventually introduce
a dependence on CodeGen in ScalarOpts.
Move CodeGenPrepare into libLLVMCodeGen to avoid that.
Follow-up of <rdar://problem/15519855>
llvm-svn: 201912
shifted mask rather than masking and shifting separately.
The patch adds this transformation to the DAGCombiner:
(shl (and (setcc:i8v16 ...) N01C) N1C) -> (and (setcc:i8v16 ...) N01C<<N1C)
<rdar://problem/16054492>
Patch by Adam Nemet <anemet@apple.com>
llvm-svn: 201906
The lowering of the frame index for stackmaps and patchpoints requires some
target-specific magic and should therefore be handled in the target-specific
eliminateFrameIndex method.
This is related to <rdar://problem/16106219>
llvm-svn: 201904
This interface allows IRObjectFile to be implemented without dummy
implementations of all the section- and segment-related methods.
Both llvm-ar and llvm-nm are changed to use it. Unfortunately the mangler is
still not plugged in since it requires some refactoring to make a Module hold
a DataLayout.
llvm-svn: 201881
We were just emitting a label for this section for no real reason - this
caused us to emit the section even though we never put anything in it.
Not bothering with a test (though not adamantly anti-test) because it
seems somewhat arbitrary to test for the absence of this section any more
than the absence of any other section.
llvm-svn: 201876
in the dependence test, we used to discard some information that the
delinearization provides: the size of the innermost dimension of an array,
i.e., the size of scalars stored in the array, and the remainder of the
delinearization that provides the offset from which the array reads start,
i.e., the base address of the array.
To avoid losing this data in the rest of the data dependence analysis, the fix
is to multiply the access function in the last delinearized dimension by its
size, effectively making the size of the last dimension always be in bytes,
and then add the remainder of delinearization to the last subscript,
effectively making the last subscript start at the base address of the array.
llvm-svn: 201867
Because the delinearization is not a global analysis pass, it will compute the
delinearization independently of knowledge about the way the delinearization
happened for other data accesses to the same array: the dependence analysis will
only trigger the delinearization on a tuple of access functions, and thus
delinearization may compute different subscript sizes for the same array. When
that happens the safest is to discard the delinearized information.
llvm-svn: 201866
This replaces the old NoIntegratedAssembler with a TargetOption. This is
more flexible and will be used to forward clang's -no-integrated-as option.
llvm-svn: 201836
I am really sorry for the noise, but the current state where some parts of the
code use TD (from the old name: TargetData) and other parts use DL makes it
hard to write a patch that changes where those variables come from and how
they are passed along.
llvm-svn: 201827
The SuppressWarnings flag, unfortunately, isn't very useful for custom tools
that want to use the LLVM module linker. So I'm changing it to a parameter of
the Linker, and the flag itself moves to the llvm-link tool.
For the time being, as SuppressWarnings is pretty much the only "option", it
seems reasonable to propagate it to Linker objects. If we end up with more
options in the future, some sort of "struct collecting options" may be a
better idea.
llvm-svn: 201819
The va_start macro for AArch64 must set va_list.__stack to the address
following the last named argument on the stack, rounded up to an alignment
of 8 bytes.
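A sketch of the rounding (names are mine):
#include <cstdint>
// Round the address just past the last named argument up to 8 bytes; this
// is the value __stack must hold.
static uint64_t vaStackStart(uint64_t EndOfNamedArgs) {
  return (EndOfNamedArgs + 7) & ~uint64_t(7);
}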
llvm-svn: 201797
Summary:
This removes the need to coerce UnknownABI to the default ABI (O32 for
MIPS32, N64 for MIPS64 [*]) in both MipsSubtarget and MipsAsmParser.
Clang has been updated to disable both possible default ABIs before enabling
the ABI it intends to use.
[*] N64 being the default for MIPS64 is not actually correct.
However, N32 is not fully implemented/tested yet.
Depends on: D2830
Reviewers: jacksprat, matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D2832
Differential Revision: http://llvm-reviews.chandlerc.com/D2846
llvm-svn: 201792
Summary:
This is consistent with the integrated assembler.
All mips64 codegen tests previously passed -mcpu. Removed -mcpu from
blez_bgez.ll and const-mult.ll to cover the default case.
Ideally, the two implementations of selectMipsCPU() would be merged, but it
has proven difficult to find a home for the function that doesn't cause link
errors.
For now, we'll hoist the common functionality into a function and mark it with
FIXME's.
Reviewers: jacksprat, matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D2830
llvm-svn: 201782