Adds a section to the User Guides page for articles related to building, packaging, and distributing LLVM. Includes sub-sections for CMake, Clang, and Docker.
llvm-svn: 373113
Summary:
This is valid for any `sext` bitwidth pair:
```
Processing /tmp/opt.ll..
----------------------------------------
%signed = sext %y
%r = shl %x, %signed
ret %r
=>
%unsigned = zext %y
%r = shl %x, %unsigned
ret %r
Done: 2016
Optimization is correct!
```
(This isn't so for funnel shifts; there it's illegal for e.g. i6->i7.)
Main motivation is the C++ semantics:
```
int shl(int a, char b) {
return a << b;
}
```
ends as
```
%3 = sext i8 %1 to i32
%4 = shl i32 %0, %3
```
https://godbolt.org/z/0jgqUq
which is, as this shows, too pessimistic.
There is another problem here - we can only do the fold
if the sext is one-use. But we can trivially have cases
where several shifts share the same sext'ed shift amount.
This should be resolved later; an example is sketched below.
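As a hedged C++ illustration (hypothetical function, not from the patch) of that multi-use situation:
```
// `c` is sign-extended once and the single sext feeds two shl instructions,
// so the sext is not one-use and the fold above is currently skipped.
int shl2(int a, int b, char c) {
  return (a << c) ^ (b << c);
}
```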
Reviewers: spatel, nikic, RKSimon
Reviewed By: spatel
Subscribers: efriedma, hiraditya, nlopes, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68103
llvm-svn: 373106
Summary: This is a cleanup patch for MachineDominatorTree. It would be an NFC, except for replacing custom DomTree verification with the generic one.
Reviewers: tstellar, tpr, nhaehnle, arsenm, NutshellySima, grosser, hliao
Reviewed By: arsenm
Subscribers: wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67976
llvm-svn: 373101
The static analyzer is warning about a potential null dereference, but we should be able to use cast<> directly and if not, the assert will fire for us.
llvm-svn: 373099
These two test cases use -march=systemz instead of a triple. In
particular, the file format used is then based on the default host
triple, which leads to different behaviour on different platforms.
The SystemZ implementation has used the integrated assembler for a
long time now, so the mature-mc-support test can be fully enabled.
Differential Revision: https://reviews.llvm.org/D68129
llvm-svn: 373098
The static analyzer is warning about a potential null dereference, but we should be able to use cast<FunctionSummary> directly and if not, the assert will fire for us.
llvm-svn: 373097
The new names for FPRs ensure that the Register values within the same class are
enumerated consecutively (the order is determined by the `LessRecordRegister`
function object). Where there were tables mapping between 32- and 64-bit FPRs
(and vice versa) this patch replaces them with Register arithmetic. The
enumeration order between different register classes is expected to continue to
be arbitrary, although it does impact the conversion from the (overloaded) asm
FPR names to Register values, and therefore might require updates to the target
if the sorting algorithm is changed. Static asserts were added to ensure that
changes to the ordering that would impact the current implementation are
detected.
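For illustration, a minimal sketch of the table-free conversion this enables; the parameter names are made up, since the exact register enumerators are target-specific:
```
// Assuming the FPRs of each class are enumerated consecutively (as the new
// names and the static asserts guarantee), a 32-bit FPR maps to its 64-bit
// counterpart with plain index arithmetic instead of a lookup table.
unsigned convertFPR32ToFPR64(unsigned Reg, unsigned FirstFPR32,
                             unsigned FirstFPR64) {
  return Reg - FirstFPR32 + FirstFPR64;
}
```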
Differential Revision: https://reviews.llvm.org/D67423
llvm-svn: 373096
The static analyzer is warning about a potential null dereference, but we should be able to use cast<StructType> directly and if not, the assert will fire for us.
llvm-svn: 373095
Abandon describing loaded values due to safety concerns. Loaded
values are described as a dereferenced memory location at the caller point.
At the callee we can unintentionally change that memory location, which
would lead to a different entry value being printed before and after
the memory location is clobbered. This problem is described in
llvm.org/PR43343.
Patch by Nikola Prica
Differential Revision: https://reviews.llvm.org/D67717
llvm-svn: 373089
Summary:
An erroneously negated if-statement in an earlier (March 2019) bugfix left phi replacement/simplification under optimizeMemoryInst() in CodeGenPrepare largely inactive. The error surfaced when csmith found that the same assert as in the original bug report could still be triggered, in a different way. This patch fixes the bugfix. The original bug was:
https://bugs.llvm.org/show_bug.cgi?id=41052
... and the previous fix was D59358.
Reviewers: aprantl, skatkov
Reviewed By: skatkov
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67838
llvm-svn: 373084
Summary:
Before this change the Executable function was made by duplicating the
snippet. This change adds a --repetition-mode={loop|duplicate} flag that
allows choosing between this behaviour and wrapping the snippet instructions
in a loop.
The new mode can help measurements when the snippet fits in the DSB by
short-circuiting decoding. The loop adds a dec + jmp to the measurements, but
since these are not part of the critical path, they execute in parallel
with the measured code and do not impact measurements in practice.
Overview of the change:
- New SnippetRepetitor abstraction that handles repeating the snippet.
The assembler delegates repeating the instructions to this class.
- ExegesisTarget learns how to decrement loop counter and jump.
- Some refactoring of the assembler into FunctionFiller/BasicBlockFiller.
Reviewers: gchatelet
Subscribers: mgorny, tschuett, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68125
llvm-svn: 373083
D64572 / rL365818 changed the way that the file paths were collected, which meant we lost the file pattern expansion necessary when working with the DOS command prompt.
llvm-svn: 373062
This caused severe compile-time regressions, see PR43455.
> Modern processors predict the targets of an indirect branch regardless of
> the size of any jump table used to glean its target address. Moreover,
> branch predictors typically use resources limited by the number of actual
> targets that occur at run time.
>
> This patch changes the semantics of the option `-max-jump-table-size` to limit
> the number of different targets instead of the number of entries in a jump
> table. Thus, it is now renamed to `-max-jump-table-targets`.
>
> Before, when `-max-jump-table-size` was specified, it could happen that
> cluster jump tables could have targets used repeatedly, but each one was
> counted and typically resulted in tables with the same number of entries.
> With this patch, when specifying `-max-jump-table-targets`, tables may have
> different lengths, since the number of unique targets is counted towards the
> limit; every table holds the same number of unique targets, except for the
> last one, which contains the balance of targets.
>
> Differential revision: https://reviews.llvm.org/D60295
llvm-svn: 373060
Summary:
The const-correctness of match() was fixed in rL372764, which allows
such static Regex objects to be marked const.
Reviewers: thopre
Reviewed By: thopre
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68091
llvm-svn: 373058
Summary:
Right now latency generation can incorrectly select the scratch register
as a dependency-carrying register.
- Move the logic for preventing register selection from the Uops
implementation to the common SnippetGenerator class.
- Aliasing detection now takes a set of forbidden registers just like
random register assignment does.
Reviewers: gchatelet
Subscribers: tschuett, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68084
llvm-svn: 373048
hasDedicatedExits.
For the compile time problem described in https://reviews.llvm.org/D67359,
it turns out the root cause is that there are many duplicates in ExitBlocks, so
the algorithmic complexity of hasDedicatedExits gets very high. If we remove
the duplicates, the compile time issue is gone.
Thanks to Philip Reames for raising a good question that led me to
find the root cause.
Differential Revision: https://reviews.llvm.org/D68107
llvm-svn: 373045
exits"
A better approach to solve the problem was found in https://reviews.llvm.org/D68107.
Revert the initial patch; the new one will be committed soon.
This reverts commit rL372990.
llvm-svn: 373044
We can't use short granules with stack instrumentation when targeting older
API levels because the rest of the system won't understand the short granule
tags stored in shadow memory.
Moreover, we need to be able to let old binaries (which won't understand
short granule tags) run on a new system that supports short granule
tags. Such binaries will call the __hwasan_tag_mismatch function when their
outlined checks fail. We can compensate for the binary's lack of support
for short granules by implementing the short granule part of the check in
the __hwasan_tag_mismatch function. Unfortunately we can't do anything about
inline checks, but I don't believe that we can generate these by default on
aarch64, nor did we do so when the ABI was fixed.
A new function, __hwasan_tag_mismatch_v2, is introduced that lets code
targeting the new runtime avoid redoing the short granule check. Because tag
mismatches are rare this isn't important from a performance perspective; the
main benefit is that it introduces a symbol dependency that prevents binaries
targeting the new runtime from running on older (i.e. incompatible) runtimes.
Differential Revision: https://reviews.llvm.org/D68059
llvm-svn: 373035
We have isel patterns that can put an IMPLICIT_DEF on one of
the sources for these instructions. So we should make sure
we break any dependencies there. This should be done by
just using one of the other sources.
llvm-svn: 373025
Similar for f64 and having a non-zero passthru value.
We were previously not trying to fold the load at all. Using
a CodeGenOnly instruction allows us to use FR32X/FR64X as the
register class to avoid a bunch of COPY_TO_REGCLASS.
llvm-svn: 373021
Summary:
This patch extends the current capabilities in loop fusion to fuse guarded loops
(as defined in https://reviews.llvm.org/D63885). The patch adds the necessary
safety checks to ensure that it is safe to fuse the guarded loops (control flow
equivalent, no intervening code, and same guard conditions). It also provides an
alternative method to perform the actual fusion of guarded loops. The mechanics
of fusing guarded loops are slightly different than those for non-guarded loops, so I
opted to keep them as separate methods. I will be cleaning this up in later
patches, and hope to converge on a single method to fuse both guarded and
non-guarded loops, but for now I think the review will be easier with them kept
separate.
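For reference, a hedged C++ sketch (made up, not from the patch) of two guarded loops that satisfy the new safety checks: control flow equivalent, identical guard conditions, and no intervening code:
```
void fusable(int *a, int *b, int n) {
  // Two adjacent loops behind identical guards; with this patch, loop fusion
  // can consider merging them into a single guarded loop.
  if (n > 0)
    for (int i = 0; i < n; ++i)
      a[i] = i;
  if (n > 0)
    for (int i = 0; i < n; ++i)
      b[i] = a[i] * 2;
}
```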
Reviewers: jdoerfert, Meinersbur, dmgreen, etiotto, Whitney
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65464
llvm-svn: 373018
For a runtime loop, if we can compute its trip count upperbound:
Don't unroll if:
1. loop is not guaranteed to run either zero or upperbound iterations; and
2. trip count upperbound is less than UnrollMaxUpperBound
Unless user or TTI asked to do so.
If unrolling, limit unroll factor to loop's trip count upperbound.
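As a hedged example (made up, not from the patch) of a runtime loop whose trip count upperbound is computable:
```
// The trip count depends on n at runtime, but the mask bounds it by 7, so
// the unroll factor can now be limited to that upperbound.
void bump(int *a, unsigned n) {
  for (unsigned i = 0; i < (n & 7u); ++i)
    a[i] += 1;
}
```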
Differential Revision: https://reviews.llvm.org/D62989
Change-Id: I6083c46a9d98b2e22cd855e60523fdc5a4929c73
llvm-svn: 373017
Summary: As discussed in the loop group meeting: with the current
definition of loop guard, we should not allow multiple loop-exiting
blocks. For loops that have multiple loop-exiting blocks, we simply
report that the loop guard cannot be found.
When getUniqueExitBlock() obtains a vector whose size does not equal one,
that means there are either no exit blocks or more than one
unique block that the loop exits to.
If we don't disallow loops with multiple loop-exit blocks, then with our
current implementation, there can exist exit blocks that are not post-dominated
by the non-pre-header successor of the guard block.
Reviewer: reames, Meinersbur, kbarton, etiotto, bmahjour
Reviewed By: Meinersbur, kbarton
Subscribers: fhahn, hiraditya, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D66529
llvm-svn: 373011
This patch emits the function descriptor csect for functions with definitions
under both 32-bit/64-bit mode on AIX.
Differential Revision: https://reviews.llvm.org/D66724
llvm-svn: 373009
Summary:
The syntax table was originally based on and attributed to
jasmin.el, but was rewritten in r45192, so the comment that
says the code comes from jasmin.el is no longer accurate. This
change removes the comment, shortening the code a bit.
Reviewers: MaskRay, lattner
Reviewed By: MaskRay
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68042
llvm-svn: 373008
The test case here previously infinite looped. Only one element from the GEP is used, so SimplifyDemandedVectorElts would replace the other lanes in each index with undef, leading to the first index being <0, undef, undef, undef>. But there's a GEP transform that tries to replace an index into a 0 sized type with a zero index, and its zero-index check only works on ConstantInt 0 or ConstantAggregateZero, so it would turn the index back to zeroinitializer, resulting in a loop.
The fix is to use m_Zero() to allow a vector of zeroes and undefs.
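As a minimal sketch of the matcher in use (the helper function is made up; m_Zero() is the real PatternMatch API):
```
#include "llvm/IR/PatternMatch.h"

// m_Zero() matches ConstantInt 0 and ConstantAggregateZero, and also vector
// constants whose lanes are all zero or undef, so the
// <0, undef, undef, undef> index above is accepted without being rewritten.
static bool isZeroIndex(llvm::Value *Idx) {
  using namespace llvm::PatternMatch;
  return match(Idx, m_Zero());
}
```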
Differential Revision: https://reviews.llvm.org/D67977
llvm-svn: 373000
The static analyzer is warning about a potential null dereference, but we should be able to use cast<TypedInit> directly and if not, the assert will fire for us.
I've also pulled out the repeated getType() call which was the only user of the pointer.
llvm-svn: 372997
The static analyzer is warning about a potential null dereference, but we should be able to use cast<ExtractValueInst> directly and if not, the assert will fire for us.
llvm-svn: 372993
The static analyzer is warning about potential null dereferences, but we should be able to use cast<> directly and if not, the assert will fire for us.
llvm-svn: 372992
for extreme large case.
We had a case where a single loop has 4000 exits and the average number
of predecessors of each exit is > 1000, and we found that compiling the case spent
a significant amount of time on checking whether a loop has dedicated exits.
This patch adds a limit on the iterations of the check. With the patch, the
time to compile our testcase was reduced from 1000s to 200s (clang release build).
Differential Revision: https://reviews.llvm.org/D67359
llvm-svn: 372990
Summary:
FlattenCFG merges two 'if' basic blocks by inserting one basic block
into another basic block. The inserted basic block can have a successor
that contains a PHI node whose incoming basic block is the inserted
basic block. Since the existing code does not handle this, the PHI operand
becomes a badref.
if (cond1)
  statement
if (cond2)
  statement
successor - contains PHI node whose predecessor is cond2

-->

if (cond1 || cond2)
  statement
(BB for cond2 was deleted)
successor - contains PHI node whose predecessor is cond2 --> bad ref!
Author: Jaebaek Seo
Reviewers: asbirlea, kuhar, tstellar, chandlerc, davide, dexonsmith
Reviewed By: kuhar
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68032
llvm-svn: 372989
Summary:
This was found during review of https://reviews.llvm.org/D66050.
In the simple test of fdiv, we fail to fold
```
fneg 2, 2
xsmaddasp 3, 2, 0
```
to
```
xsnmsubasp 3, 2, 0
```
We have the patterns for Double Precision and vectors; Single
Precision was just missing, and this patch adds it.
Reviewers: #powerpc, hfinkel, nemanjai, steven.zhang
Reviewed By: #powerpc, steven.zhang
Subscribers: wuzish, hiraditya, kbarton, MaskRay, shchenz, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67595
llvm-svn: 372985
The static analyzer is warning about a potential null dereference, but we should be able to use cast<BranchInst> directly and if not, the assert will fire for us.
llvm-svn: 372977
llvm/test/Object/ contains tests for the ArchiveWriter library; however,
support for MRI scripts is found in llvm-ar and not the library. This
diff moves the MRI-related tests and removes those that are duplicates.
Differential Revision: https://reviews.llvm.org/D68038
llvm-svn: 372973
Summary:
Removing an assumption (assert) that the CmpInst already has been
simplified in getFlippedStrictnessPredicateAndConstant. The solution is
to simply bail out instead of hitting the assertion. Instead, we
assume that any profitable rewrite will happen in the next iteration
of InstCombine.
The reason why we can't assume that the CmpInst already has been
simplified is that the worklist does not guarantee such an ordering.
Solves https://bugs.llvm.org/show_bug.cgi?id=43376
Reviewers: spatel, lebedev.ri
Reviewed By: lebedev.ri
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68022
llvm-svn: 372972
The static analyzer is warning about a potential null dereference, but we should be able to use cast<MDNode> directly and if not, the assert will fire for us.
llvm-svn: 372966
The static analyzer is warning about potential null dereferences, but since the pointer is only used in a switch statement on Operator::getOpcode() (with an empty default), it's easiest just to wrap this in a null test, as the dyn_cast might return null here.
llvm-svn: 372962
The static analyzer is warning about potential null dereferences, but we should be able to use cast<> directly and if not, the assert will fire for us.
llvm-svn: 372960
The static analyzer is warning about a potential null dereference, but we should be able to use cast<MemIntrinsic> directly and if not, the assert will fire for us.
llvm-svn: 372959
Implement splitting of aggregate structures into simpler types in splitToValueTypes.
splitToValueTypes is used for return values.
According to MipsABIInfo from clang/lib/CodeGen/TargetInfo.cpp,
aggregate structure arguments for O32 always get simplified and thus
will remain unsupported by MIPS GlobalISel for the time being.
For O32, aggregate structures can be encountered only for complex number
returns e.g. 'complex float' or 'complex double' from <complex.h>.
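For reference, a minimal example (not from the patch) of the kind of return value that exercises this path under O32; _Complex is used here as the Clang extension spelling of C's complex types:
```
// Returning 'complex float' under O32 produces an aggregate-structure return
// value, which splitToValueTypes can now split into simpler types.
float _Complex caddf(float _Complex a, float _Complex b) {
  return a + b;
}
```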
Differential Revision: https://reviews.llvm.org/D67963
llvm-svn: 372957
The static analyzer is warning about a potential null dereference, but we should be able to use cast<MCConstantExpr> directly and if not, the assert will fire for us.
llvm-svn: 372956
SLM is 2 x slower for <2 x i64> comparison ops than other vector types, we should account for this like we do for SLM <2 x i64> add/sub/mul costs.
This should remove some of the SLM codegen diffs in D43582
llvm-svn: 372954
With -pg -mfentry -mnop-mcount, a nop is emitted instead of the call to
fentry.
Review: Ulrich Weigand
https://reviews.llvm.org/D67765
llvm-svn: 372950
This matches what's done for VRNDSCALE and most other instructions.
This mainly determines which instruction will be preferred by
disassembler and assembly parser. The printing and encoding
information is the same.
We prefer the _Int form since it uses the VR128 class due to the
intrinsic interface. For some EVEX features like embedded
rounding, we only select from intrinsics today, so there is
only a VR128 version. Making the VR128 version the preferred
one is therefore consistent overall.
llvm-svn: 372947
Summary:
Previously the case
  EBB
  | \_
  |  |
  | TBB
  |  /
  FBB
was treated as a valid triangle even when TBB and FBB were the same basic
block. This could then lead to an invalid CFG when we removed the edge
from EBB to TBB, since that meant we would also remove the edge from EBB
to FBB.
Since TBB == FBB is quite a degenerate case of a triangle, we no longer
treat it as a valid triangle, and thus we avoid the
trouble of updating the CFG.
Reviewers: efriedma, dmgreen, kparzysz
Reviewed By: efriedma
Subscribers: bjope, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67832
llvm-svn: 372943
Previously we might attempt to use a BitCast to turn bits into vectors of pointers,
but that requires an inttoptr cast to be legal. Add an assertion to detect the formation of illegal bitcast attempts
early (in the tests, we often constant-fold away the result before getting to this assertion check),
while being careful to still handle the early-return conditions without adding extra complexity in the result.
Patch by Jameson Nash <jameson@juliacomputing.com>.
Differential Revision: https://reviews.llvm.org/D65057
llvm-svn: 372940
atomicrmw and cmpxchg have a volatile flag, so allow it to be read and set with LLVM{Get,Set}Volatile. atomicrmw and fence have orderings, so allow them to be read and set with LLVM{Get,Set}Ordering. Add the missing LLVMAtomicRMWBinOpFAdd and LLVMAtomicRMWBinOpFSub enum constants. AtomicCmpXchg also has a weak flag; add a getter/setter for that too. Add a getter/setter for the binary-op of an atomicrmw.
Add LLVMIsA## for CatchSwitchInst, CallBrInst and FenceInst, as well as AtomicCmpXchgInst and AtomicRMWInst.
Update llvm-c-test to include atomicrmw and fence, and to copy volatile for the four applicable instructions.
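A minimal sketch of the additions in use, assuming RMW and CmpXchg are existing LLVMValueRefs referring to an atomicrmw and a cmpxchg instruction respectively:
```
#include "llvm-c/Core.h"

void demo(LLVMValueRef RMW, LLVMValueRef CmpXchg) {
  LLVMSetVolatile(RMW, 1);                                // volatile flag
  LLVMSetOrdering(RMW, LLVMAtomicOrderingAcquireRelease); // ordering
  LLVMSetAtomicRMWBinOp(RMW, LLVMAtomicRMWBinOpFAdd);     // binary op
  LLVMSetWeak(CmpXchg, 1);                                // weak flag on cmpxchg
  (void)LLVMGetVolatile(RMW);                             // read it back
}
```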
Differential Revision: https://reviews.llvm.org/D67132
llvm-svn: 372938
Rename the old function to explicitly show that it cares only about alignment.
The new allowsMemoryAccess calls the alignment-related function by default,
and can be overridden by a target to report whether a memory access is legal or
not.
Differential Revision: https://reviews.llvm.org/D67121
llvm-svn: 372935
Previously we had an assert, but this can actually occur in valid user
code, so we need to handle it in release builds too.
Differential Revision: https://reviews.llvm.org/D67997
llvm-svn: 372934
This pass is only concerned with ZMM0-15 and YMM0-15. For YMM
we use VR256 which only contains YMM0-15, but for ZMM we were
using VR512 which contains ZMM0-31. Using VR512_0_15 is more
correct.
Given that the ABI and register allocator will use registers in
order, it's unlikely that registers 16-31 would be used
without also using 0-15, so this probably doesn't matter
functionally.
llvm-svn: 372933
Summary:
If a block has all incoming values with the same MemoryAccess (ignoring
incoming values from unreachable blocks), then use that incoming
MemoryAccess and do not create a Phi in the first place.
Revert IDF work-around added in rL372673; it should not be required unless
the Def inserted is the first in its block.
The patch also cleans up a series of tests, added during the many
iterations on insertDef.
The patch also fixes PR43438.
The same issue that occurs in insertDef with "adding phis, hence the IDF of
Phis is needed", can also occur in fixupDefs: the `getPreviousRecursive`
call only adds Phis walking on the predecessor edges, which means there
may be the case of a Phi added while walking the CFG "backwards", which
triggers the need for an additional Phi in successor blocks.
Such Phis are added during fixupDefs only in the presence of unreachable
blocks.
Hence this highlights the need to avoid adding Phis in blocks with
unreachable predecessors in the first place.
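A hedged sketch of the incoming-access check (illustrative helper, not the actual MemorySSAUpdater code):
```
#include <vector>

// If every reachable predecessor provides the same MemoryAccess, return it so
// the caller can reuse it instead of creating a MemoryPhi; a null entry here
// stands in for an incoming value from an unreachable block and is ignored.
template <typename AccessT>
AccessT *findUniqueIncomingAccess(const std::vector<AccessT *> &Incoming) {
  AccessT *Unique = nullptr;
  for (AccessT *A : Incoming) {
    if (!A)
      continue; // unreachable predecessor
    if (Unique && Unique != A)
      return nullptr; // two distinct accesses: a MemoryPhi is still needed
    Unique = A;
  }
  return Unique;
}
```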
Reviewers: george.burgess.iv
Subscribers: Prazek, sanjoy.google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67995
llvm-svn: 372932
For large functions, verifying the whole function after each loop takes
non-linear time.
Differential Revision: https://reviews.llvm.org/D67571
llvm-svn: 372924