Summary:
This drastically reduces lit test execution startup time on Windows. Our
previous strategy was to manually create one Process per job and manage
the worker pool ourselves. Instead, let's use the worker pool provided
by multiprocessing. multiprocessing.Pool(jobs) returns almost
immediately, and initializes the appropriate number of workers, so they
can all start executing tests immediately. This avoids the ramp-up
period that the old implementation suffers from. This appears to speed
up small test runs.
Here are some timings of the llvm-readobj tests on Windows using the
various execution strategies:
# multiprocessing.Pool:
$ for i in `seq 1 3`; do tim python ./bin/llvm-lit.py -sv ../llvm/test/tools/llvm-readobj/ --use-process-pool |& grep real: ; done
real: 0m1.156s
real: 0m1.078s
real: 0m1.094s
# multiprocessing.Process:
$ for i in `seq 1 3`; do tim python ./bin/llvm-lit.py -sv ../llvm/test/tools/llvm-readobj/ --use-processes |& grep real: ; done
real: 0m6.062s
real: 0m5.860s
real: 0m5.984s
# threading.Thread:
$ for i in `seq 1 3`; do tim python ./bin/llvm-lit.py -sv ../llvm/test/tools/llvm-readobj/ --use-threads |& grep real: ; done
real: 0m9.438s
real: 0m10.765s
real: 0m11.079s
I kept the old code to launch processes in case this change doesn't work
on all platforms that LLVM supports, but at some point I would like to
remove both the threading and old multiprocessing execution strategies.
Reviewers: modocache, rafael
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31677
llvm-svn: 299560
Commit r298799 changed code in a way that made the XFAIL on MachineBranchProb.ll
irrelevant, but some configurations still failed. I can't reproduce it
locally, so I'm hoping that enabling this will tell me if some
configurations will really fail or if they were just too slow.
llvm-svn: 299558
This test case depends on the loop being vectorized without forcing the
vectorization factor. If the profitability ever changes in the future (due to
cost model improvements), the test may no longer work as intended. Instead of
checking the resulting IR, we should just check the instruction costs. The
costs will be computed regardless of whether vectorization is profitable.
llvm-svn: 299545
This is a generic combine enabled via target hook to reduce icmp logic as discussed in:
https://bugs.llvm.org/show_bug.cgi?id=32401
It's likely that other targets will want to enable this hook for scalar transforms,
and there are probably other patterns that can use bitwise logic to reduce comparisons.
Note that we are missing an IR canonicalization for these patterns, and we will probably
prefer the pair-of-compares form in IR (shorter, more likely to fold).
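As an illustrative sketch (hypothetical function name; the commit itself performs the transform on SDAG nodes, not in IR), the pair-of-compares form and its bitwise-logic reduction look like:
```llvm
; Pair-of-compares form, as currently preferred in IR.
define i1 @both_zero(i32 %x, i32 %y) {
  %a = icmp eq i32 %x, 0
  %b = icmp eq i32 %y, 0
  %r = and i1 %a, %b     ; the DAG combine can reduce this to:
  ret i1 %r              ;   icmp eq (or i32 %x, %y), 0
}
```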
Differential Revision: https://reviews.llvm.org/D31483
llvm-svn: 299542
When DAGCombiner visits a SIGN_EXTEND_INREG of a BUILD_VECTOR with
constant operands, a new BUILD_VECTOR node will be created with the
transformed constants.
llvm-stress found a case where the new BUILD_VECTOR had constant operands
of an illegal type, because the (legal) element type is in fact not a legal
scalar type.
This patch changes this so that the new BUILD_VECTOR has the same operand
type as the old one.
Review: Eli Friedman, Nirav Dave
https://bugs.llvm.org/show_bug.cgi?id=32422
llvm-svn: 299540
Summary:
Temporaries are now allocated to operands instead of predicates and this
allocation is used to correctly pair up the rendered operands with the
matched operands.
Previously, ComplexPatterns were allocated temporaries independently in the
Src Pattern and Dst Pattern, leading to mismatches. Additionally, the Dst
Pattern failed to account for the allocated index and therefore always used
temporary 0, 1, ... when it should have used base+0, base+1, ...
Thanks to Aditya Nandakumar for noticing the bug.
Depends on D30539
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar
Reviewed By: rovka
Subscribers: igorb, dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D31054
llvm-svn: 299538
A number of backends (AArch64, MIPS, ARM) have been using
MCContext::reportError to report issues such as out-of-range fixup values in
their TgtAsmBackend. This is great, but because MCContext couldn't easily be
threaded through to the adjustFixupValue helper function from its usual
callsite (applyFixup), these backends ended up adding an MCContext* argument
and adding another call to adjustFixupValue from processFixupValue. Adding an
MCContext parameter to applyFixup makes this unnecessary, and even better -
applyFixup can take a reference to MCContext rather than a potentially null
pointer.
Differential Revision: https://reviews.llvm.org/D30264
llvm-svn: 299529
This is a latent bug that's been hanging around for a while. For a loop-invariant
pointer, expandBounds would return the range {Ptr, Ptr}, but this was interpreted
as a half-open range, not a closed range. So we ended up planting incorrect
bounds checks. Even worse, they were tautological, so we ended up incorrectly
executing the optimized loop.
llvm-svn: 299526
Fix a bug in ARC contract pass where an iterator that pointed to a
deleted instruction was dereferenced.
It appears that tryToContractReleaseIntoStoreStrong was incorrectly
assuming that a call to objc_retain would not immediately follow a call
to objc_release.
rdar://problem/25276306
llvm-svn: 299507
Before r294774, there was a problem when lowering broadcasts to use
128-bit subvectors.
When we looked through a bitcast to find the broadcast input, we'd keep
using the original type, so you'd end up with things like:
(v8f32 (broadcast
         (v4f32 (extract_subvector
                  (v8i32 V),
                  ...))
       ))
r294774 fixed it to always emit subvectors with the scalar type of the
original source.
It also introduced some asserts, to check that we use scalars with
the same size, and vectors with the same number of elements.
The scalar size equality is checked earlier when looking through bitcasts,
and is a useful assert.
However, the number of elements doesn't have to be identical: we're always
going to extract a 128-bit subvector, and we can have different size
inputs if we looked through a concat_vector to find a 256-bit source.
Relax the overzealous assert.
Replace it with a check of the original source vector being 256 or 512
bits. If it's 128 bits, we can't extract_subvector from it.
Fixes PR32371.
llvm-svn: 299490
Decouple this setting from EnableIRPA.
To support function calls on AMDGPU, it is necessary to
report the global register usage throughout the kernel's
call graph, so callees need to be handled first.
llvm-svn: 299487
stores with some fixes.
Summary:
This enables us to cache the clobbering access for stores, despite the
fact that we can't rewrite the use-def chains themselves.
Early testing shows that, after this change, for larger testcases, it
will be a significant net positive (memory and time) to remove the
walker caching.
Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299486
Set correct default flags and section type based on its name for .text,
.data, .bss, .init_array, .fini_array, .preinit_array, .tdata, and .tbss
and support section name suffixes for .data.*, .rodata.*, .text.*,
.bss.*, .tdata.* and .tbss.* which matches the behavior of GAS.
Fixes PR31888.
Differential Revision: https://reviews.llvm.org/D30229
llvm-svn: 299484
This improves upon r246462: that prevented FMOVs from being emitted
for the cross-class INSERT_SUBREGs by disabling the formation of
INSERT_SUBREGs of LOAD. But the ld1.s that we started selecting
caused us to introduce partial dependencies on the vector register.
Avoid that by using SCALAR_TO_VECTOR: it's a first-class citizen that
is folded away by many patterns, including the scalar LDRS that we
want in this case.
Credit goes to Adam for finding the issue!
llvm-svn: 299482
If an instruction has a true dependency, it makes sense to use that
register for any undef read operands in the same instruction (we'll have
to wait for that register to become available anyway). This logic
was already implemented. However, the code would then still try to
revisit that instruction and break the dependency (and always fail,
since by definition a true dependency has to be live before the
instruction). Avoid revisiting such instructions as a performance
optimization. No functional change.
Differential Revision: https://reviews.llvm.org/D30173
llvm-svn: 299467
Currently we only fold with ConstantInt RHS. This generalizes to any Constant RHS.
Differential Revision: https://reviews.llvm.org/D31610
llvm-svn: 299466
This mode is just like -mcmodel=small except that it moves the
thread pointer from TPIDR_EL0 to TPIDR_EL1.
Patch by Roland McGrath.
Differential Revision: https://reviews.llvm.org/D31624
llvm-svn: 299462
This shares detection logic with ARM(32), since AArch64-capable CPUs may
also run in 32-bit system mode.
We observe weird /proc/cpuinfo output for MSM8992 and MSM8994, where
they report all CPU cores as one single model, depending on which CPU
core the kernel is running on. As a workaround, we hardcode the known
CPU part name for these SoCs.
For big.LITTLE systems, this patch would only return the part name of
the first core (usually the little core). Proper support will be added
in a follow-up change.
Differential Revision: https://reviews.llvm.org/D31675
llvm-svn: 299458
MS assembly syntax provides the 'EVEN' directive as a synonym for the AT&T '.even' directive.
This patch includes the (small, simple) changes needed to allow it.
Test is provided at the following (clang-side) review:
https://reviews.llvm.org/D27418
Differential Revision: https://reviews.llvm.org/D27417
llvm-svn: 299453
When the ProcessAllSections flag (introduced in r204398) is set RuntimeDyld is
supposed to make a call to the client's memory manager for every section in each
object that is loaded. Due to some missing checks, this was not happening in all
cases. This patch adds the missing cases, and fixes the Orc unit test that
verifies correct behavior for ProcessAllSections (The unit test had been
silently bailing out due to an ordering issue: a change in the test order meant
that this unit-test was running before the native target was registered. This
issue has also been fixed in this patch).
This fixes <rdar://problem/22789965>
llvm-svn: 299449
https://reviews.llvm.org/D30537 / https://reviews.llvm.org/rL296977 added these transforms
and other related transforms to the generic DAGCombiner (with a hook that x86 sets to true),
so these patterns should not exist by the time we reach the target-specific combiner hook.
llvm-svn: 299448
This patch optimizes two memory intrinsic operations: memset and memcpy, based
on the profiled size of the operation. The high-level transformation is like:
mem_op(..., size)
==>
switch (size) {
case s1:
    mem_op(..., s1);
    goto merge_bb;
case s2:
    mem_op(..., s2);
    goto merge_bb;
...
default:
    mem_op(..., size);
    goto merge_bb;
}
merge_bb:
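As a concrete hypothetical instance, a memcpy whose profile shows a dominant size of 16 would be rewritten roughly as follows (a sketch, not the pass's exact output):
```llvm
define void @copy(i8* %dst, i8* %src, i64 %size) {
entry:
  switch i64 %size, label %default [ i64 16, label %case16 ]
case16:        ; constant size: this call can be lowered inline
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 16, i32 1, i1 false)
  br label %merge_bb
default:       ; unknown size: keep the generic call
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %size, i32 1, i1 false)
  br label %merge_bb
merge_bb:
  ret void
}
declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)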
Differential Revision: http://reviews.llvm.org/D28966
llvm-svn: 299446
Fix other cases of 'const StringRef' creeping back in at the same time.
This should fix the llvm-clang-x86_64-expensive-checks-win buildbot.
llvm-svn: 299433
This patch is part one of two reviews, one for clang and the other for LLVM.
The patch deletes the back-end intrinsics and adds support for them in the auto upgrade.
Differential Revision: https://reviews.llvm.org/D31393
llvm-svn: 299432
Summary:
Lift the restrictions that prevented the tree walking introduced in the
previous change and add support for patterns like:
(G_ADD (G_MUL (G_SEXT $src1), (G_SEXT $src2)), $src3) -> SMADDWrrr $dst, $src1, $src2, $src3
Also adds support for G_SEXT and G_ZEXT to support these cases.
One particular aspect of this that I should draw attention to is that I've
tried to be overly conservative in determining the safety of matches that
involve non-adjacent instructions and multiple basic blocks. This is intended
to be used as a cheap initial check and we may add a more expensive check in
the future. The current rules are:
* Reject if any instruction may load/store (we'd need to check for intervening
memory operations).
* Reject if any instruction has implicit operands.
* Reject if any instruction has unmodelled side-effects.
See isObviouslySafeToFold().
Reviewers: t.p.northover, javed.absar, qcolombet, aditya_nandakumar, ab, rovka
Reviewed By: ab
Subscribers: igorb, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D30539
llvm-svn: 299430
This patch teaches the hazard scheduler how to handle empty blocks
when searching for the next real instruction when dealing with forbidden
slots.
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D31293
llvm-svn: 299427
The PSADBW pattern currently supports only the 32-bit IR pattern and only the GLT (greater than) comparison.
The patch extends the pattern to also catch the 64-bit IR pattern and includes all other comparison types (not only GLT).
Differential Revision: https://reviews.llvm.org/D31577
llvm-svn: 299425
Otherwise, yamlize in YAMLTraits.h might be wrongly defined.
This makes some AMDGPU tests fail when LLVM_LINK_LLVM_DYLIB is set.
Differential Revision: https://reviews.llvm.org/D30508
llvm-svn: 299415
Summary:
The TypeTableBuilder provides stable storage for type records. We don't
need to copy all of the bytes into a flat vector before adding it to the
TpiStreamBuilder.
This makes addTypeRecord take an ArrayRef<uint8_t> and a hash code to go
with it, which seems like a simplification.
Reviewers: ruiu, zturner, inglorion
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31634
llvm-svn: 299406
Summary:
MASM can produce type streams that are not topologically sorted. It can
even produce type streams with circular references, but those are not
common in practice.
Reviewers: inglorion, ruiu
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31629
llvm-svn: 299403
Summary:
Add a hook for simplification of shufflevector's with the following rules:
- Constant folding - NFC, as it was already being done by the default handler.
- If only one of the operands is constant, constant fold the shuffle if the
mask does not select elements from the variable operand - to show the hook is firing and affecting the test-cases.
Reviewers: RKSimon, craig.topper, spatel, sanjoy, nlopes, majnemer
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31525
llvm-svn: 299393
Don't emit Mapping symbols for sections that contain only data.
Summary:
Don't emit mapping symbols for sections that contain only data.
Reviewers: rengolin, weimingz, kparzysz, t.p.northover, peter.smith
Reviewed By: t.p.northover
Patched by Shankar Easwaran <shankare@codeaurora.org>
Subscribers: alekseyshl, t.p.northover, llvm-commits
Differential Revision: https://reviews.llvm.org/D30724
llvm-svn: 299392
It can be costly to transfer from the GPRs to the XMM registers and can prevent load merging.
This patch splits vXi16/vXi32/vXi64 BUILD_VECTORs that use the same operand in multiple elements into a BUILD_VECTOR with only a single insertion of each of those elements and then performs a unary shuffle to duplicate the values.
There are a couple of minor regressions this patch unearths due to some missing MOVDDUP/BROADCAST folds that I will address in a future patch.
Note: Now that vector shuffle lowering and combining is pretty good we should be reusing that instead of duplicating so much in LowerBUILD_VECTOR - this is the first of several patches to address this.
Differential Revision: https://reviews.llvm.org/D31373
llvm-svn: 299387
It turns out that SimplifyDemandedInstructionBits will get called earlier and remove bits from C1 first. Effectively doing (X & (C1&~C2)) | C2. So by the time it got to this check there could be no common bits.
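For example (a sketch with illustrative constants):
```llvm
define i32 @and_or(i32 %x) {
  %a = and i32 %x, 65535    ; C1 = 0xFFFF
  %r = or i32 %a, 255       ; C2 = 0x00FF
  ; demanded bits shrink C1 to C1 & ~C2 = 0xFF00 before the
  ; (X & C1) | C2 combine ever sees it, so C1 and C2 share no bits.
  ret i32 %r
}
```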
I think the DAGCombiner has the same check but its check can be executed because it handles demanded bits later. I'll look at it next.
llvm-svn: 299384
The x86_64 ABI requires that the stack is 16 byte aligned on function calls. Thus, the 8-byte error code, which is pushed by the CPU for certain exceptions, leads to a misaligned stack. This results in bugs such as Bug 26413, where misaligned movaps instructions are generated.
This commit fixes the misalignment by adjusting the stack pointer in these cases. The adjustment is done at the beginning of the prologue generation by subtracting another 8 bytes from the stack pointer. These additional bytes are popped again in the function epilogue.
Fixes Bug 26413
Patch by Philipp Oppermann.
Differential Revision: https://reviews.llvm.org/D30049
llvm-svn: 299383
Summary:
Move the aarch64-type-promotion pass within the existing type promotion framework in CGP.
This change also supports forking sexts when a new sext is required for promotion.
Note that this change is based on D27853, and I am submitting it early to provide a better idea of D27853.
Reviewers: jmolloy, mcrosier, javed.absar, qcolombet
Reviewed By: qcolombet
Subscribers: llvm-commits, aemerson, rengolin, mcrosier
Differential Revision: https://reviews.llvm.org/D28680
llvm-svn: 299379
Summary:
This changes the static method TimerGroup::printAllJSONValues from private to
public, to match the static method TimerGroup::printAll. When trying to drive
the reporting machinery by hand, the existing API is _almost_ flexible enough,
but this entrypoint is required to intermix printing timers with other
non-timer output.
The underlying motive here is a Swift change to consolidate the collection of
timers, LLVM statistics and other (non-assert-dependent) counters into JSON
files, which requires a bit of manual intervention in LLVM's stat and timer
output routines. See https://github.com/apple/swift/pull/8477 for details.
Reviewers: MatzeB
Reviewed By: MatzeB
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31566
llvm-svn: 299371
Support for writing this module code was removed in r73220, which was well
before the LLVM 3.0 release, so we do not need to be able to understand it
for backwards compatibility.
Differential Revision: https://reviews.llvm.org/D31563
llvm-svn: 299370
This reverts commit r299047 which is incorrect because the
simplification may result in incorrect propagation of undefs to users of
the folded shuffle.
Thanks to Andrea Di Biagio for pointing this out.
llvm-svn: 299368
1. Improve enum, function, and variable names.
2. Improve comments.
3. Fix variable capitalization.
4. Run clang-format.
As an existing code comment suggests, this should work with vector types / splat constants too,
so making this look right first will reduce the diffs needed for that change.
llvm-svn: 299365
This moves the isMask and isShiftedMask functions to be class methods. They now use the MathExtras.h function for single word size and leading/trailing zeros/ones or countPopulation for the multiword size. The previous implementation made multiple temporary memory allocations to do the bitwise arithmetic operations to match the MathExtras.h implementation.
Differential Revision: https://reviews.llvm.org/D31565
llvm-svn: 299362
- we are now using immediate AsmOperands so that the range check functions are
tablegen'ed.
- Big bonus is that error messages become much more accurate, i.e. instead of a
useless "invalid operand" error message it will now say that the immediate
operand must be in range [x,y], which is why regression tests needed updating.
More tablegen operand descriptions could probably benefit from using
immediateAsmOperand, but this is a first good step to get rid of most of the
nearly identical range check functions. I will address the remaining immediate
operands in upcoming cleanups.
Differential Revision: https://reviews.llvm.org/D31333
llvm-svn: 299358
This patch is one step to attempt to unify the main APInt interface and the tc functions used by APFloat.
This patch adds a WordType to APInt and uses that in all the tc functions. I've added temporary typedefs to APFloat to alias it to integerPart to keep the patch size down. I'll work on removing that in a future patch.
In future patches I hope to reuse the tc functions to implement some of the main APInt functionality.
I may remove APINT_ from BITS_PER_WORD and WORD_SIZE constants so that we don't have the repetitive APInt::APINT_ externally.
Differential Revision: https://reviews.llvm.org/D31523
llvm-svn: 299341
The callers have already performed the necessary cast before calling. This allows us to remove a comment that says the instruction must be a BinaryOperator and make it explicit in the argument type.
Had to add a default case to the switch because BinaryOperator::getOpcode() returns a BinaryOps enum.
llvm-svn: 299339
As far as I can tell this combine is fully handled by SimplifyDemandedInstructionBits.
I was only looking at this because it is the only user of APIntOps::isShiftedMask, which is itself broken, as demonstrated by r299187. I was going to fix isShiftedMask and needed to make sure we had coverage for the new cases it would expose to this combine. But it looks like we can nuke it instead.
Differential Revision: https://reviews.llvm.org/D31543
llvm-svn: 299337
Summary:
Depends on D30928.
This adds support for coercion of stores and memory instructions that do not require insertion to process.
Another few tests down.
I added the relevant tests from rle.ll
Reviewers: davide
Subscribers: llvm-commits, Prazek
Differential Revision: https://reviews.llvm.org/D30929
llvm-svn: 299330
Disable bypassing if one of the operands looks like a hash value. Slow
division often occurs in hashtable implementations and fast division is
never taken there because a hash value is extremely unlikely to have
enough upper bits set to zero.
A value is considered to be hash-like if it is produced by:
1) XOR operation
2) Multiplication by a constant wider than the shorter type
3) PHI node with all incoming values being hash-like
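For example, a value like %h below (a hypothetical hash-table index computation, sketched for illustration) is now recognized as hash-like, so the bypass is not attempted for its division:
```llvm
define i64 @bucket(i64 %key, i64 %nbuckets) {
  %s = lshr i64 %key, 33
  %h = xor i64 %s, %key         ; produced by XOR => considered hash-like
  %idx = urem i64 %h, %nbuckets ; bypass not attempted here
  ret i64 %idx
}
```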
Differential Revision: https://reviews.llvm.org/D28200
llvm-svn: 299329
Summary:
This enables us to cache the clobbering access for stores, despite the
fact that we can't rewrite the use-def chains themselves.
Early testing shows that, after this change, for larger testcases, it will be a significant net positive (memory and time) to remove the walker caching.
Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299322
Summary:
GreatestCommonDivisor currently makes a copy of both its inputs. Then in the loop we do one move and two copies, plus any allocation the urem call does.
This patch changes it to take its inputs by value so that we can do a move of any rvalue inputs instead of copying. Then in the loop we do 3 move assignments and no copies. This way the only possible allocations we have in the loop is from the urem call.
Reviewers: dblaikie, RKSimon, hans
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31572
llvm-svn: 299314
The code already allowed vector types in via "isInteger" (which might want
a more specific name), so use splat-friendly constant predicates to match
those types.
llvm-svn: 299304
processing the congruence class of the store.
Because we use the stored value of a store as the def, it isn't dead
just because it appears as a def when it comes from a store.
Note: with the memory code as it is, I have not hit any cases where
this breaks anything, just because of what memory congruences we
actually allow. In a followup that improves memory congruence,
this bug actually breaks real stuff (but the verifier catches it).
llvm-svn: 299300
This can only happen when we have a mix of zero and undef elements and the two vectors have a different arrangement of zeros/undefs. The shuffle should eventually be constant folded to all zeros.
Fixes PR32484.
llvm-svn: 299291
REG_SEQUENCE falls into the same category as COPY for operands mapping:
- They don't have MCInstrDesc with register constraints
- The input variable could use whatever register classes
- It is possible to have register class already assigned to the operands
In particular, REG_SEQUENCE instructions are always target specific because of
the subreg indices. Those indices must apply to the register class of
the definition of the REG_SEQUENCE and therefore the target must set a
register class on that definition. As a result, the generic code can
always use that register class to derive a valid mapping for a
REG_SEQUENCE.
llvm-svn: 299285
This patch refactors the code used in llc such that all the users of the
addPassesToEmitFile API have access to a homogeneous way of handling
start/stop-after/before options right out of the box.
Previously each user would have needed to duplicate this logic and set
up its own options.
NFC
llvm-svn: 299282
This removes a parameter from the routine that was responsible for a lot of the issue. It was a bit count that had to be set to the BitWidth of the APInt and would get passed to getLowBitsSet. This guaranteed the call to getLowBitsSet would create an all ones value. This was then compared to (V | (V-1)). So the only shifted masks we detected had to have the MSB set.
The one in-tree user is a transform in InstCombine that never fires due to earlier transforms covering the case better. I've submitted a patch to remove it completely, but for now I've just adapted it to the new interface for isShiftedMask.
llvm-svn: 299273
Summary: GlobalValue has two getGUID methods: an instance method and a static method. The static method takes a string, which is expected to be what GlobalValue::getRealLinkageName() would return. In LTO.cpp, we were not doing this consistently, sometimes passing an IR name instead. This change makes it so that we call getRealLinkageName() first, making the static getGUID return value consistent with the instance method. Without this change, compiling FileCheck with ThinLTO on Windows fails with numerous undefined symbol errors. With the change, it builds successfully.
Reviewers: pcc, rnk
Reviewed By: pcc
Subscribers: tejohnson, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D31444
llvm-svn: 299268
This way we ensure we immediately revisit the instruction and do any additional optimizations before visiting the users. Otherwise we might visit the users, then the instruction, then users again, then instruction again.
llvm-svn: 299267
This is helpful when extracting objects from archives produced by MSVC's
lib.exe, which uses absolute paths to describe the archive members.
llvm-svn: 299264
(and (setlt X, 0), (setlt Y, 0)) --> (setlt (and X, Y), 0)
We have 7 similar folds, but this one got away. The fact that the
x86 test with a branch didn't change is probably a separate bug. We
may also be missing this and the related folds in instcombine.
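Sketched in IR terms (the commit itself implements the DAG-node form shown above):
```llvm
define i1 @both_negative(i32 %x, i32 %y) {
  %a = icmp slt i32 %x, 0
  %b = icmp slt i32 %y, 0
  %r = and i1 %a, %b     ; both sign bits set <=> sign bit of (x & y) set,
  ret i1 %r              ; so this can become: icmp slt (and i32 %x, %y), 0
}
```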
llvm-svn: 299252
A common way to implement nearbyint is by fiddling with the floating
point environment and calling rint. This is used at least by the BSD
libm and musl. As such, canonicalizing the latter to the former will
create infinite loops for libm and generally pessimize performance, at
least when the generic C versions are used.
This change preserves the rint in the libcall translation and also
handles the domain truncation logic, so that rint with float argument
will be reduced to rintf etc.
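For instance (an illustrative sketch), a double rint wrapped in conversions of a float value is shrunk to rintf rather than being rewritten to nearbyint:
```llvm
define float @round_to_even(float %x) {
  %d = fpext float %x to double
  %r = call double @rint(double %d) ; shrunk to: call float @rintf(float %x)
  %t = fptrunc double %r to float
  ret float %t
}
declare double @rint(double)
```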
llvm-svn: 299247
Add a new node to act as a fancy bitcast from f16 operations to
i32 that implicitly zero the high 16-bits of the result.
Alternatively could try making v2f16 legal and canonicalizing
on build_vectors.
llvm-svn: 299246
These are the same tests added for x86 with r299238,
but PPC doesn't specify all branches as cheap, so we
see different patterns in tests with branches.
llvm-svn: 299244
Summary:
This feature enables folding of logical shift operations of up to 3 places into addressing mode on Kryo and Falkor that have a fastpath LSL.
Reviewers: mcrosier, rengolin, t.p.northover
Subscribers: junbuml, gberry, llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D31113
llvm-svn: 299240
This wasn't covering the case where you have multiple latches, and hence
multiple uses of the same loop-id, which need to be mapped to the same new loop-id.
llvm-svn: 299237
Our _MM_HINT_T0/T1 constant values are 3/2, which matches gcc, but not icc or the Intel documentation. Interestingly, gcc had this same bug in their implementation of the gather/scatter builtins at one point too.
Fixes PR32411.
llvm-svn: 299234
Summary: Currently the VP metadata is dropped when InstCombine converts an indirect call to a direct call. This patch converts the VP metadata to branch_weights so that its hotness is recorded.
Reviewers: eraman, davidxl
Reviewed By: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31344
llvm-svn: 299228
Implementation of TargetInstrInfo::findCommutedOpIndices for MIPS target,
restricting commutativity to second and third operand only for
dpadd_[su].df instructions therein.
Prior to this change, there were cases where the vector that is to be added
to the dot product of the other two could take a position other than the
first one in the instruction, generating false output in the destination
vector.
Such behavior has been noticed in the two functions generating v2i64 output
values so far. Other ones may exhibit such behavior as well, just not for
the vector operands which are present in the test at the moment.
Tests altered so that the function's first operand is a constant splat so
that it can be loaded with a ldi instruction, since that is the case in
which the erroneous instruction operand placement has occurred. We check
that the register which is present in the ldi instruction is placed as the
first operand in the corresponding dpadd instruction.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D30827
llvm-svn: 299223
Since LOCR only accepts GR32 virtual registers, its operands must be copied
into this regclass in insertSelect() when an LOCR is built. Otherwise, the
case where the source operand is GRX32 would produce invalid IR.
Review: Ulrich Weigand
llvm-svn: 299220
Currently ComputeNumSignBits returns the minimum number of sign bits for all elements of vector data, when we may only be interested in one/some of the elements.
This patch adds a DemandedElts argument that allows us to specify the elements we actually care about. The original ComputeNumSignBits implementation calls the new one with a DemandedElts mask demanding all elements, to match current behaviour. Scalar types set this to 1.
I've only added support for BUILD_VECTOR and EXTRACT_VECTOR_ELT so far, all others will default to demanding all elements but can be updated in due course.
Followup to D25691.
Differential Revision: https://reviews.llvm.org/D31311
llvm-svn: 299219
Even on older subtargets that lack vector support, there may be vector values
with just one element in the input program. These are converted during DAG
legalization to scalar values.
The pre-legalize SystemZ DAGCombiner methods should in this circumstance not
touch these nodes. This patch adds a check for this in
SystemZTargetLowering::combineEXTRACT_VECTOR_ELT().
Review: Ulrich Weigand
llvm-svn: 299213
The patch rL298481 was reverted due to a crash on the clang-with-lto-ubuntu build.
The reason for the crash was a type mismatch between either a or b and RHS in the following situation:
LHS = sext(a +nsw b) > RHS.
This is a rare, but still possible, situation. Normally we need to cast all of {a, b, RHS} to their widest type.
But we try to avoid creating new SCEVs that are not constants, to avoid initiating recursive analysis that
can take a lot of time and/or cache a bad value for the iteration count. To deal with this, in this patch we
reject this case and will not try to analyze it if the type of the sum doesn't match the type of RHS. In this
situation we don't need to create any non-constant SCEVs.
This patch also adds an assertion to the method IsProvedViaContext so that we can fail on it and not
go further into range analysis etc. (because in some situations these analyses succeed even when the passed
arguments have wrong types, which should not normally happen).
The patch also contains a fix for a problem where the scope of the analysis was too narrow, caused by wrong
usage of predicates in recursive invocations.
The regression test on the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll
Reviewers: reames, apilipenko, anna, sanjoy
Reviewed By: sanjoy
Subscribers: mzolotukhin, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D31238
llvm-svn: 299205
Previously the compiler often extracted common immediates into a specific register, e.g.:
```
%vreg0 = S_MOV_B32 0xff;
%vreg2 = V_AND_B32_e32 %vreg0, %vreg1
%vreg4 = V_AND_B32_e32 %vreg0, %vreg3
```
Because of this, the SDWA peephole failed to find an SDWA-convertible pattern. E.g. the previous example could be converted into 2 SDWA src operands:
```
SDWA src: %vreg2 src_sel:BYTE_0
SDWA src: %vreg4 src_sel:BYTE_0
```
With this change, the peephole checks whether an operand is either an immediate or a register that is a copy of an immediate.
llvm-svn: 299202
Follow up to D25691, this sets up the plumbing necessary to support vector demanded elements support in known bits calculations in target nodes.
Differential Revision: https://reviews.llvm.org/D31249
llvm-svn: 299201
Did you know that 0 is a shifted mask? But 0x0000ff00 and 0x000000ff aren't? At least we get 0xff000000 right.
I only see one usage of this function in the code base today, and it's in InstCombine. I think it's protected against 0 being misreported as a mask. I guess we just don't have tests for the missed cases.
llvm-svn: 299187
Summary:
Triggered by commit r298620: "[LV] Vectorize GEPs".
If we encounter a vector GEP with scalar arguments, we splat the scalar
into a vector of appropriate size before we scatter the argument.
Reviewers: arsenm, mehdi_amini, bkramer
Reviewed By: arsenm
Subscribers: bjope, mssimpso, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D31416
llvm-svn: 299186
Summary:
Currently the Go binding only has a SetCurrentDebugLocation method.
I added a GetCurrentDebugLocation method to the IRBuilder instance.
I added this because I want to save the current debug location, change the debug location temporarily, and finally restore the saved one.
This is useful when the source location jumps around during LLVM IR generation.
I also added tests for this to ir_test.go.
I confirmed that all tests passed with this patch based on r298890.
Patch by Ryuichi Hayashida!
Differential Revision: https://reviews.llvm.org/D31415
llvm-svn: 299185
Introduce symbol table data structures that can be potentially written to
disk, have the LTO library build those data structures using temporarily
constructed modules and redirect the LTO library implementation to go through
those data structures. This allows us to remove the LLVMContext and Modules
owned by InputFile.
With this change I measured a peak memory consumption decrease from 5.4GB to
2.8GB in a no-op incremental ThinLTO link of Chromium on Linux. The impact on
memory consumption is larger in COFF linkers where we are currently forced
to materialize all metadata in order to read linker options. Peak memory
consumption linking a large piece of Chromium for Windows with full LTO and
debug info decreases from >64GB (OOM) to 15GB.
Part of PR27551.
Differential Revision: https://reviews.llvm.org/D31364
llvm-svn: 299168
Summary:
Assertions assuming that function calls may not have zero durations do
not seem to hold in the wild. There are valid cases where the conversion
of the tsc counters end up becoming zero-length durations. These
assertions don't really hold and the algorithms don't need those to be
true for them to work.
Reviewers: dblaikie, echristo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31519
llvm-svn: 299150
Add support for the new relocations and linking metadata section support in
https://github.com/WebAssembly/tool-conventions/blob/master/Linking.md. In
particular, this allows LLVM to indicate which variable is the stack pointer,
so that it can be linked with other objects.
This also adds support for emitting type relocations for call_indirect
instructions.
Right now, this is mainly tested by using wabt and hexdump to examine the
output on selected testcases. We'll add more tests as the design stabilizes
and more of the pieces are in place.
llvm-svn: 299141
What we really want to do is distinguish functions that may
be called by other functions, and graphics shaders are not
called kernels.
llvm-svn: 299140
Summary:
This document is an attempt at showing how XRay could be used to debug
latency issues with LLVM tools, and how to use the llvm-xray tool to
analyse XRay traces.
Reviewers: echristo, mehdi_amini, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31493
llvm-svn: 299133
Since there is no sdiv in SCEV, a 'udiv' is a better canonical form than an 'sdiv' as the user of an induction variable.
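A minimal sketch of the kind of rewrite this enables (hypothetical loop):
```llvm
define i32 @quarter(i32 %n) {
entry:
  br label %loop
loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %q = sdiv i32 %iv, 4              ; %iv is known non-negative, so this
  %iv.next = add nuw nsw i32 %iv, 1 ; can be canonicalized to udiv, which
  %cmp = icmp slt i32 %iv.next, %n  ; SCEV can then reason about
  br i1 %cmp, label %loop, label %exit
exit:
  ret i32 %q
}
```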
Differential Revision: https://reviews.llvm.org/D31488
llvm-svn: 299118
This is needed by TestCases/Posix/coverage-direct.cc
The problem is that the test does:
mkdir <dir>
cd <dir>
cd ..
rm -rf <dir>
<more commands>
The current directory then looks like "/.../<dir>/../", which
doesn't exist when <dir> is deleted.
At some point we should probably switch to using the OS current
directory (especially if we want to add subshell support), but this is a small
incremental improvement.
llvm-svn: 299113
This adds support for commands like
FileCheck < foobar*
which is used by some asan tests because the file they want to read
has a pid in the name.
llvm-svn: 299111
If set to false it does not remove global aliases. With this parameter
set to false it should be safe to run the pass before link.
Differential Revision: https://reviews.llvm.org/D31489
llvm-svn: 299108
BIND_OPCODE_DONE/REBASE_OPCODE_DONE may appear at the end of the opcode array,
but they are not required to. The linker only adds them as padding to align the
opcodes to pointer size.
This fixes rdar://problem/31285560.
llvm-svn: 299104
Mostly this change adds support converting to and from
YAML which will allow us to write more test cases for
the WebAssembly MC and lld ports.
Better support for objdump, readelf, and nm will be in
followup CLs.
I had to update the two wasm test binaries because they
used the old style 'name' section which is no longer
supported.
Differential Revision: https://reviews.llvm.org/D31099
Patch by Sam Clegg
llvm-svn: 299101
We do not want to close STDOUT, as there may have been several uses of it,
such as in the case: llc %s -o=- -pass-remarks-output=- -filetype=asm
which causes multiple closes of STDOUT_FILENO and/or use-after-close of it.
Using dup() in getFD doesn't work, as we end up with the original STDOUT_FILENO
open anyhow.
reviewed by Rafael Espindola
Differential Revision: https://reviews.llvm.org/D31505
llvm-svn: 299098
Now, alternatively to the TargetOption.AllowFPOpFusion global flag, FMUL->FADD
can also use the per-operation FMF to allow fusion.
The idea here is not to port everything to the new scheme (e.g. fused
multiply-and-sub will be ported later) but to make this work all the way from
clang.
The transformation is conditionalized on *both* the FADD and the FMUL having
the FMF contract flag.
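Illustratively, the fusion requires the 'contract' flag on both operations (a sketch; the exact IR spelling is per D31164):
```llvm
define double @muladd(double %a, double %b, double %c) {
  %m = fmul contract double %a, %b ; both operations carry the contract
  %r = fadd contract double %m, %c ; flag, so they may fuse into an fma
  ret double %r
}
```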
Differential Revision: https://reviews.llvm.org/D31169
llvm-svn: 299096
In the long-term, we want to replace statistics with something
finer-grained that lets us gather per-function data.
Remarks are that replacement.
Create an ORE instance in SelectionDAGISel, and pass it to
SelectionDAG.
SelectionDAG was used so that we can emit remarks from all
SelectionDAG-related code, including TargetLowering and DAGCombiner.
This isn't used in the current patch but Adam tells me he's interested
for the fp-contract combines.
Use the ORE instance to emit FastISel failures as remarks (instead of
the mix of dbgs() dumps and statistics that we currently have).
Eventually, we want to have an API that tells us whether remarks are
enabled (http://llvm.org/PR32352) so that we don't emit expensive
remarks (in this case, dumping IR) when it's not needed. For now, use
'isEnabled' as a crude replacement.
This does mean that the replacement for '-fast-isel-verbose' is now
'-pass-remarks-missed=isel'. Additionally, clang users also need to
enable remark diagnostics, using '-Rpass-missed=isel'.
This also removes '-fast-isel-verbose2': there are no static statistics
that we want to only enable in asserts builds, so we can always use
the remarks regardless of the build type.
Differential Revision: https://reviews.llvm.org/D31405
llvm-svn: 299093
Summary:
These tests were not being run because the yaml extension
wasn't being picked up by lit.
This change also fixes the tests themselves, which were broken.
Patch By: Sam Clegg
Reviewers: beanz
Reviewed By: beanz
Subscribers: fhahn
Differential Revision: https://reviews.llvm.org/D31436
llvm-svn: 299088
This reverts r299062, which caused build failures on Windows.
It also reverts the attempts to fix the windows builds in r299064 and r299065.
The introduction of namespace llvm::sys::detail makes MSVC, and seemingly also
mingw, complain about ambiguity with the existing namespace llvm::detail.
E.g.:
C:\b\slave\sanitizer-windows\llvm\include\llvm/Support/MathExtras.h(184): error C2872: 'detail': ambiguous symbol
C:\b\slave\sanitizer-windows\llvm\include\llvm/Support/PointerLikeTypeTraits.h(31): note: could be 'llvm::detail'
C:\b\slave\sanitizer-windows\llvm\include\llvm/Support/Host.h(80): note: or 'llvm::sys::detail'
In r299064 and r299065 I tried to fix these ambiguities, based on the errors
reported in the log files. It seems however that the build stops early when
this kind of error is encountered, and many build-then-fix-iterations on
Windows may be needed to fix this. Therefore reverting r299062 for now to
get the build working again on Windows.
llvm-svn: 299066
Functions that still return Expected<X> are now called createAndImport*()
Changing the return type was requested in the review comments for r299001
llvm-svn: 299063
This refactors getHostCPUName so that for the architectures that get the
host cpu info on linux from /proc/cpuinfo, the /proc/cpuinfo parsing
logic is present in the build, even if it wasn't built on a linux system
for that architecture.
Since the code is present in the build, we can then test that code also
on other systems, i.e. we don't need to have buildbots setup for all
architectures on linux to be able to test this. Instead, developers will
test this as part of the regression test run.
As an example, a few unit tests are added to test getHostCPUName for ARM
running linux. A unit test is preferred over a lit-based test, since the
expectation is that in the future, the functionality here will grow over
what can be tested with "llc -mcpu=native".
This is a preparation step to enable implementing the range of
improvements discussed on PR30516, such as adding AArch64 support,
support for big.LITTLE systems, reducing code duplication.
Differential Revision: https://reviews.llvm.org/D31236
llvm-svn: 299060
Turns out integerPartWidth only explicitly defines the width of the tc functions in the APInt class, functions that aren't used by the APInt implementation itself. Many places in the code base already assume APInt is made up of 64-bit pieces. Explicitly assuming 64-bit here doesn't make that situation much worse. A full audit would need to be done if it ever changes.
llvm-svn: 299059
This assert is just trying to test that processing each record adds
exactly one entry to the index map. The assert logic was wrong when the
first record in the type stream was a field list.
I've simplified the code by moving the LF_FIELDLIST-specific logic into
the callback for that record type.
llvm-svn: 299035
This should work on all platforms now that r299006 has landed. Tested locally
on Windows and Linux.
This moves exe symbol-specific method implementations out of NativeRawSymbol
into a concrete subclass. Also adds implementations for hasCTypes and
hasPrivateSymbols and a simple test to ensure the native reader can access the
summary information for the executable from the PDB.
Original Differential Revision: https://reviews.llvm.org/D31059
llvm-svn: 299019
Some of the GEP combines (e.g., descaling) can't handle vector GEPs. We have an
existing check that attempts to bail out if given a vector GEP. However, the
check only tests the GEP's pointer operand. A GEP results in a vector of
pointers if at least one of its operands is vector-typed (e.g., its pointer
operand could be a scalar, but its index could be a vector). We should just
check the type of the GEP itself. This should fix PR32414.
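For example, a hypothetical GEP with a scalar pointer operand but a vector index produces a vector of pointers, which a check on only the pointer operand misses:
```llvm
define <2 x i32*> @vgep(i32* %base, <2 x i64> %idx) {
  ; the pointer operand is scalar, but the GEP's own type is a vector
  %gep = getelementptr i32, i32* %base, <2 x i64> %idx
  ret <2 x i32*> %gep
}
```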
Reference: https://bugs.llvm.org/show_bug.cgi?id=32414
Differential Revision: https://reviews.llvm.org/D31470
llvm-svn: 299017
The -output-color option was successful at suppressing color changes, but
was still allowing color resets.
Differential Revision: https://reviews.llvm.org/D31468
llvm-svn: 299006
Summary:
But don't actually inspect the tree any deeper than we already do. This
change is NFC but the next one will enable full traversal of the
source/destination patterns.
Depends on D30535
Reviewers: t.p.northover, qcolombet, aditya_nandakumar, rovka, ab
Subscribers: igorb, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D30536
llvm-svn: 299001
Instantiation of the MachineVerifierPass through
PassInfo::getNormalCtor would yield a segfault since the default
constructor of the MachineVerifierPass takes a reference to nullptr.
Patch by Simone Pellegrini.
Differential Revision: https://reviews.llvm.org/D31387
llvm-svn: 298987
This will result in a KMOVW or KMOVD being emitted during register allocation. And in at least some cases this might allow the register coalescer to remove the copy altogether.
llvm-svn: 298984
Summary:
It is problematic for this reader that it expects to read data from
several threads, but the header or message format does not define
framing. Since the buffers are reused, we can't rely on skipping
zeroed out data as a synchronization method either.
There is an argument that this is not version compatible with the format
the reader expected previously. I argue that since the writer wrote garbage
past the end of the buffer record, there is no currently working reader to
compromise.
The corresponding writer change is posted to D31384.
Reviewers: dberris, pelikan
Reviewed By: dberris
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31385
llvm-svn: 298983
Summary:
Currently the llvm-xray command-line tool fails to handle the case
when no subcommand is provided in a graceful manner. This fixes that to
print the help message explaining the subcommands and the available
options.
Reviewers: pcc, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31409
llvm-svn: 298975
Properly propagate the FMF from the LLVM IR to this flag.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31165
llvm-svn: 298961
We should be masking the value and emitting a register copy like we do in non-fast isel. Instead we were just updating the value map and emitting nothing.
After r298928 we started seeing cases where we would create a copy from GR8 to GR32 because the source register in a VK1 to GR32 copy was replaced by the GR8 going into a truncate.
This fixes PR32451.
llvm-svn: 298957
In PPCBoolRetToInt, the bool value is changed to i32 type. On ppc64 this may introduce an extra zero extension for the return value. This patch changes the integer type to i64 to avoid the zero extension on ppc64.
This patch fixes PR32442.
Differential Revision: https://reviews.llvm.org/D31407
llvm-svn: 298955
This is less important than increasing the threshold for private memory,
but still brings performance improvements in a wide range of tests.
Unrolling more for local memory serves three purposes: it allows
combining ds operations if the offset becomes static, saves registers
used for offsets in the case of static offsets, and allows better LDS
latency hiding.
Differential Revision: https://reviews.llvm.org/D31412
llvm-svn: 298948
It is incorrect to record region boundaries before scheduling,
as they may change during scheduling. As a result the second pass may see fewer
instructions to schedule than it should.
Differential Revision: https://reviews.llvm.org/D31434
llvm-svn: 298945
We currently perform the various fp_to_sint XMM conversion and then transfer to the MMX register (on 32-bit via the stack).
This patch improves support for MOVDQ2Q XMM to MMX transfers and adds the XMM->MMX fp_to_sint direct conversion patterns. The SSE2 specifications are the same as for XMM->XMM and XMM->MMX rounding/exceptions/etc.
Differential Revision: https://reviews.llvm.org/D30868
llvm-svn: 298943
-ffp-contract=fast does not currently work with LTO because it's passed as a
TargetOption to the backend rather than in the IR. This adds it to
FastMathFlags.
This is toward fixing PR25721
Differential Revision: https://reviews.llvm.org/D31164
llvm-svn: 298939
Previously it was covered by the internalization. It turns out we cannot
run the internalizer in the FE; it breaks separate compilation tests. Thus the early
inliner gets its own option.
Differential Revision: https://reviews.llvm.org/D31429
llvm-svn: 298935
Deal with the case where the initial node is deleted during DAG-combine, leading
to an assertion failure in promoteIntShiftOp.
Fixes PR32420.
Reviewers: spatel, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31403
llvm-svn: 298931
We've had several bugs (PR32256, PR32241) recently that resulted from usages of AH/BH/CH/DH either before or after a copy to/from a mask register.
This ultimately occurs because we create COPY_TO_REGCLASS with VK1 and GR8. Then in CopyToFromAsymmetricReg in X86InstrInfo we find a 32-bit super register for the GR8 to emit the KMOV with. But as these tests are demonstrating, it's possible for the GR8 register to be a high register, and we end up doing an accidental extract or insert from bits 15:8.
I think the best way forward is to stop making copies directly between mask registers and GR8/GR16. Instead I think we should restrict to only copies between mask registers and GR32/GR64 and use EXTRACT_SUBREG/INSERT_SUBREG to handle the conversion from GR32 to GR16/8 or vice versa.
Unfortunately, this complicates fastisel a bit more now to create the subreg extracts where we used to create GR8 copies. We can probably make a helper function to bring down the repetition.
This does result in KMOVD being used for copies when BWI is available because we don't know the original mask register size. This caused a lot of deltas on tests because we have to split the checks for KMOVD vs KMOVW based on BWI.
Differential Revision: https://reviews.llvm.org/D30968
llvm-svn: 298928
We want to check each test on each target, so we need another prefix
when SSE and AVX diverge (as they will if we handle 32-byte and higher).
llvm-svn: 298926
Reorder work in PromoteIntBinOp to prevent stale (deleted) nodes from
being used.
Fixes PR32340 and PR32345.
Reviewers: hfinkel, dbabokin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31148
llvm-svn: 298923
Summary:
Similar to the ARM target in https://reviews.llvm.org/rL298380, this
patch adds identical infrastructure for disabling negative immediate
conversions, and converts the existing aliases to the new infrastructure.
Reviewers: rengolin, javed.absar, olista01, SjoerdMeijer, samparker
Reviewed By: samparker
Subscribers: samparker, aemerson, llvm-commits
Differential Revision: https://reviews.llvm.org/D31243
llvm-svn: 298908
Summary:
G_LOAD/G_STORE, add alternative RegisterBank mapping.
For G_LOAD, Fast and Greedy modes choose the same RegisterBank mapping (GprRegBank) for the G_LOAD + G_FADD pattern, so we can't get rid of the cross-register-bank copy GprRegBank->VecRegBank.
Reviewers: zvi, rovka, qcolombet, ab
Reviewed By: zvi
Subscribers: llvm-commits, dberris, kristof.beyls, eladcohen, guyblank
Differential Revision: https://reviews.llvm.org/D30979
llvm-svn: 298907
This patch enables schedulers to specify instructions that
cannot be issued with any other instructions.
It also fixes BeginGroup/EndGroup.
Reviewed by: Andrew Trick
Differential Revision: https://reviews.llvm.org/D30744
llvm-svn: 298885
rebase entry errors and test cases for each of the error checks.
Also verified with Nick Kledzik that a BIND_OPCODE_SET_ADDEND_SLEB
opcode is legal in a lazy bind table, so code that had that as an error
check was removed.
With the MachORebaseEntry and MachOBindEntry classes now returning
an llvm::Error in all cases for malformed input, the Malformed variables
and the logic to set and use them are no longer needed and have been removed
from those classes.
Also in a few places, removed the redundant Done assignment to true
when also calling moveToEnd() as it does that assignment.
This leaves only the dyld compact export entries still needing
error handling for the dyld compact info.
llvm-svn: 298883
The vectorizer tries to replace truncations of induction variables with new
induction variables having the smaller type. After r295063, this optimization
was applied to all integer induction variables, including non-primary ones.
When optimizing the truncation of a non-primary induction variable, we still
need to transform the new induction so that it has the correct start value.
This should fix PR32419.
Reference: https://bugs.llvm.org/show_bug.cgi?id=32419
llvm-svn: 298882
When using -debug with -gen-register-info, tablegen will crash when
trying to print a name of a non-native register unit. This patch only
affects the debug information generated while running llvm-tblgen,
and has no impact on the compilable code coming out of it.
llvm-svn: 298875
Several static functions from the signal API can be invoked
simultaneously; RemoveFileOnSignal for instance can be called indirectly
by multiple parallel loadModule() invocations, which might lead to
the assertion:
Assertion failed: (NumRegisteredSignals < array_lengthof(RegisteredSignalInfo) && "Out of space for signal handlers!"),
function RegisterHandler, file /llvm/lib/Support/Unix/Signals.inc, line 105.
RemoveFileOnSignal calls RegisterHandlers(), which isn't currently
mutex protected, leading to the behavior above. This potentially affect
a few other users of RegisterHandlers() too.
rdar://problem/30381224
llvm-svn: 298871
This method is pretty new and probably isn't used much in the code base, so this should have a negligible size impact. The OR and XOR operators are already inline.
llvm-svn: 298870
We're not to the point of supporting the load/store patterns yet
(because they extensively use PatFrags).
But in the meantime, we can implement some of the simplest addressing
modes.
llvm-svn: 298863
CBZ/CBNZ represent a substantial portion of all conditional branches.
Look through G_ICMP to select them.
We can't use tablegen yet because the existing patterns match an
AArch64ISD node.
llvm-svn: 298856
Enabled clamp and omod for v_cvt_* opcodes which have src0 of an integer type
Reviewers: vpykhtin, arsenm
Differential Revision: https://reviews.llvm.org/D31327
llvm-svn: 298852
Among other things, this allows Machine LICM to hoist a costly 'mrs'
instruction from within a loop.
Differential Revision: http://reviews.llvm.org/D31151
llvm-svn: 298851
As we introduced the target triple environments amdgiz and amdgizcl, the address
space values are no longer enums. We have to decide the values by target triple.
The basic idea is to use struct AMDGPUAS to represent address space values.
For address space values which do not depend on the target triple, use static
const members, so that they don't occupy extra memory space and are equivalent
to compile-time constants.
Since the struct is lightweight and cheap, it can be created on the fly at
the point of usage. Or it can be added as member to a pass and created at
the beginning of the run* function.
Differential Revision: https://reviews.llvm.org/D31284
llvm-svn: 298846
Summary:
We are incorrectly folding selects into phi nodes when the incoming value of a phi
node is a constant vector. This optimization is done in `FoldOpIntoPhi` when the
select condition is a phi node with constant incoming values.
Without the fix, we are miscompiling (i.e. incorrectly folding the
select into the phi node) when the vector contains non-zero
elements.
This patch fixes the miscompile and we will correctly fold based on the
select vector operand (see added test cases).
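A sketch of the problematic shape (hypothetical values, for illustration):
```llvm
define <2 x i32> @f(i1 %c, <2 x i32> %a, <2 x i32> %b) {
entry:
  br i1 %c, label %if, label %join
if:
  br label %join
join:
  %cond = phi <2 x i1> [ <i1 true, i1 false>, %if ], [ zeroinitializer, %entry ]
  ; For the <i1 true, i1 false> incoming value, neither %a nor %b is
  ; correct wholesale; the select must blend per element.
  %sel = select <2 x i1> %cond, <2 x i32> %a, <2 x i32> %b
  ret <2 x i32> %sel
}
```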
Reviewers: majnemer, sanjoy, spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31189
llvm-svn: 298845
Summary:
It should return <0, 0, or >0 for less-than, equal, and greater-than, like
strcmp() (according to the history, it used to be implemented with
strcmp()), but it actually returned 0 or 1 for not-equal and equal.
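For illustration only (hypothetical function, not the patched one), the
intended strcmp-style contract looks like:
  int compareKeys(unsigned A, unsigned B) {
    if (A < B) return -1;  // less-than
    if (A > B) return 1;   // greater-than
    return 0;              // equal
  }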
Reviewers: qcolombet
Reviewed By: qcolombet
Subscribers: qcolombet, llvm-commits
Differential Revision: https://reviews.llvm.org/D30996
llvm-svn: 298844
Summary:
The categories are emitted in a strange order in this patch due to a bug in the
CommandLine library.
Reviewers: ab
Reviewed By: ab
Subscribers: ab, llvm-commits
Differential Revision: https://reviews.llvm.org/D30995
llvm-svn: 298843
This is a patch for the ongoing bugzilla bug 21281 on the generated X86 code for a matrix transpose8x8 subroutine which requires vector interleaving. The generated AVX2 code is currently non-optimal and requires 60 instructions as opposed to only 40 instructions generated for AVX1.
The patch includes a fix for the AVX2 case where vector unpack instructions use fewer operations than the vector blend operations available in AVX2.
In this case using vector unpack instructions is more efficient.
Reviewers: zvi, delena, igorb, craig.topper, guyblank, eladcohen, m_zuckerman, aymanmus, RKSimon
llvm-svn: 298840
Do not force the backends to use the target name as the namespace.
Original patch by Mattias Eriksson
Reviewers: stoklund, craig.topper
Reviewed By: stoklund
Subscribers: materi, llvm-commits
Differential Revision: https://reviews.llvm.org/D31322
llvm-svn: 298834
Summary:
During post-commit review of a previous change I made it was pointed out that const casting 'this' is technically a bad practice. This patch re-implements all of the methods in BasicBlock that do this to use the const BasicBlock version and const_cast the return value instead.
I think there are still many other classes that do similar things. I may look at more in the future.
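A minimal sketch of the idiom (hypothetical class and method names, not the
actual BasicBlock code):
  struct Node;
  class Container {
    Node *First = nullptr;
  public:
    // The single implementation lives on the const path.
    const Node *getFirstNode() const { return First; }
    // The non-const overload forwards to the const version and casts the
    // return value, instead of const casting 'this'.
    Node *getFirstNode() {
      return const_cast<Node *>(
          static_cast<const Container *>(this)->getFirstNode());
    }
  };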
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31377
llvm-svn: 298827
We're seeing binutils ld produce binaries where the import address
table's NameRVA entry is actually a VA instead (i.e. it's already base
relocated), which llvm-readobj then chokes on. Both dumpbin and the
Windows loader are able to handle these binaries correctly, however, and
we can make llvm-readobj handle them correctly too by iterating the
import lookup table (which doesn't have a relocated NameRVA) rather than
the import address table.
The import lookup table and the import address table are supposed to be
identical on disk, and prior to r277298 the import lookup table would be
used by `llvm-readobj -coff-imports` anyway, so this shouldn't have any
functional change (except in the case of our malformed binaries). The
import lookup table can apparently be missing when using old Borland
linkers, so fall back to the import address table in that case.
Resolves PR31766.
Differential Revision: https://reviews.llvm.org/D31362
llvm-svn: 298812
instead of `Ty`.
The `Ty` suffix is much more commonly used for LLVM `Type` variable
names, so this seemed like a particularly confusing collision.
llvm-svn: 298808
Fixed -verify-machineinstrs errors in fast-isel-select-sse.ll (one of many in PR27481)
The VMOVSSZrr/VMOVSSZrrk and VMOVSDZrr/VMOVSDZrrk instructions were assuming both source registers were VR128X when the second is actually supposed to be FR32X/FR64X.
Differential Revision: https://reviews.llvm.org/D31200
llvm-svn: 298805
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.
The switch-to-lookup-table conversion results in code that is more
difficult to analyze and optimize by other passes. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.
Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6%
smaller binary.
Differential Revision: https://reviews.llvm.org/D30333
llvm-svn: 298799
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.
Also make the semantics of iterating match what every other iterator
uses and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing or removing and assigning fresh values to the
iterator and the end.
In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output, and I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyway.
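A sketch of the resulting pattern (the removal condition is illustrative
only):
  #include "llvm/IR/Instructions.h"
  using namespace llvm;
  void removeZeroCase(SwitchInst &SI) {
    for (auto I = SI.case_begin(), E = SI.case_end(); I != E;) {
      if (I->getCaseValue()->isZero()) {  // illustrative condition
        I = SI.removeCase(I);             // returns the next valid iterator
        E = SI.case_end();                // refresh the invalidated end
      } else {
        ++I;
      }
    }
  }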
llvm-svn: 298791
Patch to generalize combinePCMPAnd1 (for handling SETCC + ZEXT cases) to work for any input that has zero/all bits set masked with an 'all low bits' mask.
Replaced the implicit assumption of shift availability with a call to SupportedVectorShiftWithImm.
Part 1 of 3.
Differential Revision: https://reviews.llvm.org/D31347
llvm-svn: 298779
This is the payoff for D31156 - if a target has efficient comparison instructions for vector-sized equality,
we can replace memcmp calls with inline code that is both smaller and faster.
Differential Revision: https://reviews.llvm.org/D31290
llvm-svn: 298775
The first thing it did was get the User for the Use to get the instruction back. This requires looking through the Uses for the User using the waymarking walk. That's pretty fast, but it's probably still better to just pass the Instruction we already had.
llvm-svn: 298772
Switch the data layout based on the target triple environments amdgiz and amdgizcl, which indicate an address space mapping in which the generic address space is 0.
amdgiz is for non-OpenCL environments where the generic address space is 0.
amdgizcl is for OpenCL environments where the generic address space is 0.
Differential Revision: https://reviews.llvm.org/D31211
llvm-svn: 298758
When possible, put ASan ctor/dtor in comdat.
The only reason not to is global registration, which can be
TU-specific. This is not the case when there are no instrumented
globals. This is also limited to ELF targets, because MachO does
not have comdat, and COFF linkers may GC comdat constructors.
The benefit of this is far fewer __asan_init() calls: one per DSO
instead of one per TU. It's also necessary for the upcoming
gc-sections-for-globals change on Linux, where multiple references to
section start symbols trigger quadratic behaviour in the gold linker.
llvm-svn: 298756
If we have an array of user-defined aggregates for which there was an
ODR violation, then the array size will not necessarily match the number
of elements times the size of the element.
Fixes PR32383.
llvm-svn: 298750
When I tested r298734, I thought that red zones were enabled by default as on
X86. Since red zones are behind a flag on AArch64, the testing wasn't valid.
llvm-svn: 298747
This is another step towards implementing register classes with
parametrized register/spill sizes and value types.
This is an updated version of r298652. The difference is that
MCRegisterClass still contains the register size, available as
getPhysRegSize(). The old function getSize was retained as a temporary
measure to avoid build breakage for out-of-tree targets.
llvm-svn: 298739
If the branch condition for a loop is a phi which is itself fed by a
phi from a loop, it isn't safe to try to delete the phi until after
the loop is handled.
llvm-svn: 298737
Reason: breaks linking Chromium with LLD + ThinLTO (a pass crashes)
LLVM bug: https://bugs.llvm.org//show_bug.cgi?id=32413
Original change description:
[LV] Vectorize GEPs
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.
With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.
The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.
Original Differential Revision: https://reviews.llvm.org/D30710
llvm-svn: 298735
AArch64 doesn't require -mno-red-zone; stack fixups are sufficient here. This was
unnecessarily copied over from the X86 target.
(You can now outline with red zones! Yay!)
Removing the requirement passes all Single/MultiSource tests.
llvm-svn: 298734
Create the constructor in the module pass.
This is needed for the GC-friendly globals change, where the constructor can be
put in a comdat in some cases, but we don't know about that in the function
pass.
llvm-svn: 298731
StructurizeCFG can't handle cases with multiple
returns creating regions with multiple exits.
Create a copy of UnifyFunctionExitNodes that only
unifies exit nodes, skipping exit nodes
with uniform branch sources.
llvm-svn: 298729
Summary: Declarations need to be filtered out when counting functions.
Reviewers: eraman
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31336
llvm-svn: 298720
Summary: MSVC does this when producing a PDB.
Reviewers: ruiu
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31316
llvm-svn: 298717
SimplifyDemandedUseBits for Add/Sub already recursed down the LHS and RHS to simplify bits. If that didn't provide any simplifications, we fell back to calling computeKnownBits, which would recurse again. Instead, just take the known bits for the LHS and RHS we already have and call into a new function in ValueTracking that can calculate the known bits given the LHS/RHS bits.
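A self-contained sketch of that kind of computation (simplified to 64-bit
masks with a known-zero carry-in; not the actual ValueTracking code):
  #include <cstdint>
  struct Known {
    uint64_t Zero, One;  // masks of bits known to be 0 / known to be 1
  };
  Known knownBitsForAdd(Known L, Known R) {
    uint64_t MinSum = L.One + R.One;      // every unknown bit chosen as 0
    uint64_t MaxSum = ~L.Zero + ~R.Zero;  // every unknown bit chosen as 1
    // Carries are monotone in the operands, so a carry is known if it is
    // the same in the minimal and maximal sums.
    uint64_t CarryOne = MinSum ^ L.One ^ R.One;
    uint64_t CarryZero = ~(MaxSum ^ L.Zero ^ R.Zero);
    // A result bit is known only when both operand bits and the incoming
    // carry at that position are known.
    uint64_t K = (L.Zero | L.One) & (R.Zero | R.One) & (CarryZero | CarryOne);
    return Known{~MinSum & K, MinSum & K};
  }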
llvm-svn: 298711
Summary:
This script will automatically create a new stable merge request bug in
bugzilla for the given svn revision and release number.
Reviewers: hans
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D30905
llvm-svn: 298705
Summary:
I discovered accidentally that the operator== for TargetOptions
is stale - it is missing many fields that have been added over
the recent years. It isn't used, so remove it. Ditto for the
comparator in MCTargetOptions, which doesn't seem stale yet but is
unused.
Reviewers: echristo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D31301
llvm-svn: 298700
The patch rL298481 was reverted due to a crash on the clang-with-lto-ubuntu build.
The reason for the crash was a type mismatch between either a or b and RHS in the
following situation:
  LHS = sext(a +nsw b) > RHS.
This is a rare but still possible situation. Normally we need to cast all of {a, b, RHS}
to their widest type. But we try to avoid creating new SCEVs that are not constants, to
avoid initiating recursive analysis that can take a lot of time and/or cache a bad value
for the iteration count. To deal with this, in this patch we reject this case and will
not try to analyze it if the type of the sum doesn't match the type of RHS. In this
situation we don't need to create any non-constant SCEVs.
This patch also adds an assertion to the method IsProvedViaContext so that we fail there
and do not go further into range analysis etc. (because in some situations these analyses
succeed even when the passed arguments have wrong types, which should not normally happen).
The patch also contains a fix for a problem where the scope of the analysis was too
narrow, caused by wrong usage of predicates in recursive invocations.
The regression test on the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll
llvm-svn: 298690
This change allows cross-compiling compiler-rt builtins for
multiple targets as part of runtimes on Darwin. This functionality
is already supported on other platforms.
Differential Revision: https://reviews.llvm.org/D30957
llvm-svn: 298678
r298178 capitalized the fields in `ArgListEntry`. All the official
targets were updated accordingly, but AVR, as an experimental target,
was missed.
llvm-svn: 298677
This prevents crashes when attempting to instrument functions containing
C++ try.
Sanitizer coverage will still fail at runtime when an exception is
thrown through a sancov instrumented function, but that seems marginally
better than what we have now. The full solution is to color the blocks
in LLVM IR and only instrument blocks that have an unambiguous color,
using the appropriate token.
llvm-svn: 298662
Summary:
For the following CFG:
  A->B
  B->C
  A->C
If there is another edge B->D, then ABC should not be considered a triangle.
Reviewers: davidxl, iteratee
Reviewed By: iteratee
Subscribers: nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D31310
llvm-svn: 298661
Summary: In DeadArgumentElimination, the call instructions will be replaced. We also need to set the prof weights so that function inlining can find the correct profile.
Reviewers: eraman
Reviewed By: eraman
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31143
llvm-svn: 298660
Library functions can have specific semantics that affect the behavior of
certain passes. DSE, for instance, gives special treatment to malloc-ed pointers
but not to pointers returned from an equivalently typed (but differently named)
function.
MetaRenamer ought not to alter program semantics, so library functions must
remain untouched.
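A standalone sketch of the screening step (illustrative name list; the patch
itself relies on LLVM's library function knowledge rather than a hard-coded
set):
  #include <string>
  #include <unordered_set>
  static bool isKnownLibFunction(const std::string &Name) {
    // Illustrative subset only; the real check covers every recognized
    // library function.
    static const std::unordered_set<std::string> LibFuncs = {
        "malloc", "free", "memcpy", "memset", "strlen"};
    return LibFuncs.count(Name) != 0;
  }
  // During renaming: skip any function for which this returns true.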
Reviewers: mehdi_amini, majnemer, chandlerc, davide
Reviewed By: davide
Subscribers: davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D31304
llvm-svn: 298659
Summary: The current prefix-based function layout algorithm only looks at a function's entry count, which is not sufficient. A function should be grouped together if its entry count or any call edge count is hot.
Reviewers: davidxl, eraman
Reviewed By: eraman
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31225
llvm-svn: 298656
- Avoid explosive growth of the simplification queue by not queuing
expressions that are already in it.
- Add an iteration counter and abort after a sufficiently large number
of iterations (assuming that it's a symptom of an infinite loop).
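A minimal sketch of both mitigations (hypothetical types, helper, and limit):
  #include <deque>
  #include <unordered_set>
  #include <vector>
  struct Expr;                              // stand-in expression type
  std::vector<Expr *> simplifyOne(Expr *);  // hypothetical: yields new work
  void drainQueue(std::deque<Expr *> &Queue, unsigned MaxIterations) {
    std::unordered_set<Expr *> InQueue(Queue.begin(), Queue.end());
    unsigned Iterations = 0;
    while (!Queue.empty()) {
      if (++Iterations > MaxIterations)
        break;                              // assume an infinite loop; give up
      Expr *E = Queue.front();
      Queue.pop_front();
      InQueue.erase(E);
      for (Expr *Next : simplifyOne(E))
        if (InQueue.insert(Next).second)    // skip if already queued
          Queue.push_back(Next);
    }
  }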
llvm-svn: 298655
This could be used to either disable the runtimes build altogether
or avoid building them but still generate the build targets.
Differential Revision: https://reviews.llvm.org/D31060
llvm-svn: 298653
This is another step towards implementing register classes with
parametrized register/spill sizes.
Differential Revision: https://reviews.llvm.org/D31299
llvm-svn: 298652
Summary:
When dumping these records from an object file section, we should use
only one type database. However, when dumping from a PDB, we should use
two: one for the type stream and one for the IPI stream.
Certain type records that normally live in the .debug$T object file
section get moved over to the IPI stream of the PDB file and they get
new indices.
So far, I've noticed that the MSVC linker always moves these records
into IPI:
- LF_FUNC_ID
- LF_MFUNC_ID
- LF_STRING_ID
- LF_SUBSTR_LIST
- LF_BUILDINFO
- LF_UDT_MOD_SRC_LINE
These records have index fields that can point into TPI or IPI. In
particular, LF_SUBSTR_LIST and LF_BUILDINFO point to LF_STRING_ID
records to describe compilation command lines.
I've modified the dumper to have an optional pointer to the item DB, and
to do type name lookup of these fields in that DB. See printItemIndex.
The result is that our pdbdump-headers.test is more faithful to the PDB
contents and the output is less confusing.
Reviewers: ruiu
Subscribers: amccarth, zturner, llvm-commits
Differential Revision: https://reviews.llvm.org/D31309
llvm-svn: 298649
The old candidate collection method in the outliner caused some very large
regressions in compile time on large tests. For MultiSource/Benchmarks/7zip it
caused a 284.07 s or 1156% increase in compile time. On average, using the
SingleSource/MultiSource tests, it caused an average increase of 8 seconds in
compile time (something like 1000%).
This commit replaces that candidate collection method with a new one which
only visits each node in the tree once. This reduces the worst compile time
increase (still 7zip) to a 0.542 s overhead (22%) and the average compile time
increase on SingleSource and MultiSource to 0.018 s (4%).
llvm-svn: 298648
Summary:
Loop unrolling and ICP (indirect call promotion) make the sample profile annotation much harder in the backend, so disable these two optimizations in the ThinLTO compile phase.
Will add a test in cfe in a separate patch.
Reviewers: tejohnson
Reviewed By: tejohnson
Subscribers: mehdi_amini, llvm-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31217
llvm-svn: 298646
Now that we call ShrinkDemandedConstant on the RHS of sub, this should be taken care of. This code doesn't trigger on any in-tree regressions, but did before ShrinkDemandedConstant was added to the RHS.
llvm-svn: 298644
GCC has the alloc_align attribute, which is similar to assume_aligned, except that the attribute's argument is the index of the function parameter that specifies the alignment.
These are the required LLVM changes to make this happen.
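For reference, GCC-style usage looks like this (hypothetical allocator):
  #include <stddef.h>
  // The '2' says the returned pointer is aligned to the value passed as
  // the second parameter.
  void *my_alloc(size_t size, size_t alignment)
      __attribute__((alloc_align(2)));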
Differential Revision: https://reviews.llvm.org/D29598
llvm-svn: 298643
It is not guaranteed that the memory used for MachineBasicBlocks in
the previous MachineFunction hasn't been freed, so holding on to a
pointer to the last function's blocks isn't correct. In particular, I have
observed the sret.ll testcase failing because the first BasicBlock in
the new function happened to be allocated at the exact same memory as
the previously saved (and deleted) PrevInstBB.
llvm-svn: 298642
The new test asserts that scalarized memory operations get memcheck metadata
added even if the loop is only unrolled.
Differential Revision: https://reviews.llvm.org/D30972
llvm-svn: 298641
The LVI printer uses AssemblyAnnotationWriter, which prints annotations only
for instructions and basic blocks.
So, we explicitly need to print LVI info for the arguments of the function (these
are values and not instructions).
llvm-svn: 298640
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.
The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.
Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.
However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.
Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin linked bitcode, and typically is automatically generated from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).
Added various tests to ensure that we get identical index files out of
the thin link step.
Reviewers: mehdi_amini, pcc
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31027
llvm-svn: 298638
Given the case below:
  %y = shl %x, n
  %z = ashr %y, m
when n = m, SCEV models it as sext(trunc(x)). This patch handles
the case where n > m by using sext(mul(trunc(x), 2^(n-m))) as the SCEV
expression.
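A quick numeric check of the identity for n = 32, m = 16 on i64 (C++ is used
here only to model the IR semantics):
  #include <cassert>
  #include <cstdint>
  int main() {
    int64_t x = (int64_t{1} << 40) + 7;  // trunc to i32 keeps only the 7
    // %y = shl %x, 32 ; %z = ashr %y, 16
    int64_t z = (int64_t)((uint64_t)x << 32) >> 16;
    // sext(mul(trunc(x), 2^(32-16)))
    int64_t model = (int64_t)(int32_t)x * 65536;
    assert(z == model && z == 7 * 65536);
    return 0;
  }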
llvm-svn: 298631
Reverting until I can figure out the root cause.
Revert "Re-land: Make NativeExeSymbol a concrete subclass of NativeRawSymbol [PDB]"
This reverts commit f461a70cc376f0f91c8b4917be79479cc86330a5.
llvm-svn: 298626
Use the -color-output option explicitly to eliminate the ANSI color codes in
pdb-native-summary.test. (The default should have done this.)
llvm-svn: 298625
Summary:
The true and false operands for the CMOV are operands 0 and 1.
ARMISelLowering.cpp::computeKnownBits was looking at operands 1 and 2
instead. This can cause CMOV instructions to be incorrectly folded into
BFI if the value set by the CMOV is another CMOV, whose known bits are
computed incorrectly.
This patch fixes the issue and adds a test case.
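The underlying rule is that a bit is known in the result only if it is known,
with the same value, in both the true and false operands. A standalone model
(simplified to 64-bit masks; hypothetical names):
  #include <cstdint>
  struct Known {
    uint64_t Zero, One;
  };
  // Bits known in a select/CMOV are those known identically in both arms,
  // so reading the wrong pair of operands yields bogus known bits.
  Known knownForCMOV(Known TrueVal, Known FalseVal) {
    return Known{TrueVal.Zero & FalseVal.Zero, TrueVal.One & FalseVal.One};
  }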
Reviewers: kristof.beyls, jmolloy
Subscribers: llvm-commits, aemerson, srhines, rengolin
Differential Revision: https://reviews.llvm.org/D31265
llvm-svn: 298624
The new test should pass on all platforms now that llvm-pdbdump has the
`-color-output` option.
This moves exe symbol-specific method implementations out of NativeRawSymbol
into a concrete subclass. Also adds implementations for hasCTypes and
hasPrivateSymbols and a simple test to ensure the native reader can access
the summary information for the executable from the PDB.
Original Differential Revision: https://reviews.llvm.org/D31059
llvm-svn: 298623
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.
With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.
The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.
Differential Revision: https://reviews.llvm.org/D30710
llvm-svn: 298620