Summary:
* add missing comma.
* remove "having to register them here" phrasing, since register it
is what we're doing, which made the comment a bit confusing.
* remove duplicate code.
* clarify link to chapter 3, since "folder" doesn't appear in that
chapter.
Differential Revision: https://reviews.llvm.org/D75263
Summary:
* Use bold font (not monospace) for legal/illegal.
* Say a few words about operation<->dialect precedence.
* Omit duplicate code samples.
* Indent items in bullet-point list.
Differential Revision: https://reviews.llvm.org/D75262
Summary:
* Let's use "override" when we're just doing standard baseclassing.
("Specialization" makes it sound like template specialization, which
this is not.)
* CallInterfaces.td has an include guard, so #ifdef not needed anymore.
* Omit duplicate code in code samples.
* Clarify which algorithm we're talking about.
* Mention that the ShapeInference code is a code snippet belonging to the
algorithm discussed in the paragraph above it.
* Add missing definition for createShapeInferencePass.
Differential Revision: https://reviews.llvm.org/D75260
MathExtras.h was just wrapping SwapByteOrder.h functionality, so have
the callers use it directly. Use the MathExtras.h name (ByteSwap_NN) as
the standard naming, since it appears to be the most popular.
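For reference, a small sketch of the ByteSwap_NN helpers kept as the
standard names. The include below reflects this change's intent to have
callers use SwapByteOrder.h directly; treat the header location as an
assumption.
```cpp
#include "llvm/Support/SwapByteOrder.h" // assumed home after this change
#include <cassert>

void byteSwapExample() {
  // ByteSwap_NN reverses the byte order of an NN-bit integer.
  assert(llvm::ByteSwap_16(0xbeef) == 0xefbe);
  assert(llvm::ByteSwap_32(0x11223344u) == 0x44332211u);
  assert(llvm::ByteSwap_64(0x1122334455667788ull) == 0x8877665544332211ull);
}
```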
Summary:
For now just insert the callback for stores, similar to how MSan tracks
origins. In the future we may want to add callbacks for loads, memcpy,
function calls, CMPs, etc.
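As a conceptual sketch of what the instrumented code looks like (the
callback name and signature below are assumptions based on this
description, not necessarily what the pass emits):
```cpp
#include <cstdint>

using dfsan_label = uint16_t;

// Hypothetical runtime hook a tool could define to observe each store.
extern "C" void __dfsan_store_callback(dfsan_label Label) {
  // A tool would record or check the label here.
}

void instrumentedStore(int *Ptr, int Val, dfsan_label ValLabel) {
  // The pass inserts a call like this immediately before the store.
  __dfsan_store_callback(ValLabel);
  *Ptr = Val;
}
```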
Reviewers: pcc, vitalybuka, kcc, eugenis
Reviewed By: vitalybuka, kcc, eugenis
Subscribers: eugenis, hiraditya, #sanitizers, llvm-commits, kcc
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D75312
DSE would mistakenly remove store (2):
a = calloc(n+1)
for (int i = 0; i < n; i++) {
  store 1, a[i+1] // (1)
  store 0, a[i]   // (2)
}
The fix is to do PHI translation while looking for clobbering
instructions between the store and the calloc.
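For concreteness, a standalone harness (hypothetical, not from the
patch) that would be miscompiled if store (2) were removed: calloc's
zero-fill makes store (2) look dead, but store (1) from the previous
iteration clobbers a[i] first.
```cpp
#include <cstdio>
#include <cstdlib>

int main() {
  const int n = 4;
  char *a = static_cast<char *>(calloc(n + 1, 1));
  for (int i = 0; i < n; i++) {
    a[i + 1] = 1; // (1) overwrites the zero that calloc wrote
    a[i] = 0;     // (2) not dead: a[i] is the a[(i-1)+1] written by (1)
                  //     in the previous iteration
  }
  for (int i = 0; i <= n; i++)
    printf("%d", a[i]); // correct output: 00001
  free(a);
  return 0;
}
```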
Reviewed By: efriedma, bjope
Differential Revision: https://reviews.llvm.org/D68006
Hopefully fixes compile errors on some bots, like:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux/builds/13383/steps/ninja%20check%201/logs/stdio
```
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/unittests/ADT/CoalescingBitVectorTest.cpp:452:3: required from here
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/utils/unittest/googletest/include/gtest/gtest-printers.h:377:56: error: ‘const class llvm::CoalescingBitVector<long unsigned int>::const_iterator’ has no member named ‘begin’
for (typename C::const_iterator it = container.begin();
^
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/utils/unittest/googletest/include/gtest/gtest-printers.h:378:11: error: ‘const class llvm::CoalescingBitVector<long unsigned int>::const_iterator’ has no member named ‘end’
it != container.end(); ++it, ++count) {
^
```
Summary:
Sanitizer tests don't entirely pass on an R device. Fix up all the
incompatibilities with the new system.
Reviewers: eugenis, pcc
Reviewed By: eugenis
Subscribers: #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D75303
On some bots, using gtest asserts to compare iterators does not compile,
and I'm not sure why (this certainly compiles with clang). Disable the
checks for now :/.
```
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\utils\unittest\googletest\include\gtest/gtest-printers.h(377): error C2039: 'begin': is not a member of 'llvm::CoalescingBitVector<unsigned int,16>::const_iterator'
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\include\llvm/ADT/CoalescingBitVector.h(243): note: see declaration of 'llvm::CoalescingBitVector<unsigned int,16>::const_iterator'
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\utils\unittest\googletest\include\gtest/gtest-printers.h(478): note: see reference to function template instantiation 'void testing::internal::DefaultPrintTo<T>(testing::internal::IsContainer,testing::internal::false_type,const C &,std::ostream *)' being compiled
with
[
T=T1,
C=T1
]
```
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-win-fast/builds/12006/steps/test-check-llvm-unit/logs/stdio
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/34521/steps/ninja%20check%201/logs/stdio
This is part 3 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.
---
Start encoding register locations within VarLoc IDs, and take advantage
of this encoding to speed up transferRegisterDef.
There is no fundamental algorithmic change: this patch simply swaps out
SparseBitVector in favor of CoalescingBitVector. That changes iteration
order (hence the test updates), but otherwise this patch is NFCI.
The only interesting change is in transferRegisterDef. Instead of doing:
```
KillSet = {}
for (ID : OpenRanges.getVarLocs())
  if (DeadRegs.count(ID))
    KillSet.add(ID)
```
We now do:
```
KillSet = {}
for (Reg : DeadRegs)
  for (ID : intervalsReservedForReg(Reg, OpenRanges.getVarLocs()))
    KillSet.add(ID)
```
By not visiting each open location every time we visit an instruction,
this eliminates some potentially quadratic behavior. The new
implementation basically does a constant amount of work per instruction
because the interval map lookups are very fast.
For a file in WebKit, this brings the time spent in LiveDebugValues down
from ~2.5 minutes to 4 seconds, reducing compile time spent in that pass
from 28% of the total to just over 1%.
Before:
```
2.49 min 27.8% 0 s LiveDebugValues::process
2.41 min 27.0% 5.40 s LiveDebugValues::transferRegisterDef
1.51 min 16.9% 1.51 min LiveDebugValues::VarLoc::isDescribedByReg() const
32.73 s 6.1% 8.70 s llvm::SparseBitVector<128u>::SparseBitVectorIterator::operator++()
```
After:
```
4.53 s 1.1% 0 s LiveDebugValues::process
3.00 s 0.7% 107.00 ms LiveDebugValues::transferRegisterCopy
892.00 ms 0.2% 406.00 ms LiveDebugValues::transferSpillOrRestoreInst
404.00 ms 0.1% 32.00 ms LiveDebugValues::transferRegisterDef
110.00 ms 0.0% 2.00 ms LiveDebugValues::getUsedRegs
57.00 ms 0.0% 1.00 ms std::__1::vector<>::push_back
40.00 ms 0.0% 1.00 ms llvm::CoalescingBitVector<>::find(unsigned long long)
```
FWIW, I tried the same approach using SparseBitVector, but got bad
results. To do that, I had to extend SparseBitVector to support 64-bit
indices and expose its lower bound operation. The problem with this is
that the performance is very hard to predict: SparseBitVector's lower
bound operation falls back to O(n) linear scans in a std::list if you're
not /very/ careful about managing iteration order. When I profiled this
the performance looked worse than the baseline.
You can see the full CoalescingBitVector-based implementation here:
https://github.com/vedantk/llvm-project/commits/try-coalescing
You can see the full SparseBitVector-based implementation here:
https://github.com/vedantk/llvm-project/commits/try-sparsebitvec-find
Depends on D74984 and D74985.
Differential Revision: https://reviews.llvm.org/D74986
This is part 2 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.
---
Each VarLoc has a unique ID: this ID is used to look up a VarLoc in the
VarLocMap, and to virtually insert a VarLoc into a VarLocSet. Instead of
inserting the VarLoc /itself/ into the VarLocSet, we insert just the ID,
because this can be represented efficiently with a SparseBitVector.
This change introduces LocIndex, a layer of abstraction on top of VarLoc
IDs. Prior to this change, an ID was just an index into a vector. With
this change, an ID encodes both an index /and/ a register location. The
type-checker ensures that conversions to and from LocIndex are correct.
For the moment the register location is always 0 (undef). We have plenty
of bits left over to encode physregs, stack slots, and other locations
in the future.
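A minimal sketch of the encoding (the exact field widths and method
names are modeled on this description and may differ from the patch):
```cpp
#include <cstdint>

// Packs a register location and a VarLocMap index into one 64-bit ID,
// so a VarLocSet bitvector can hold it while keeping all IDs for the
// same location contiguous.
struct LocIndex {
  uint32_t Location; // 0 = undef for now; physregs etc. later.
  uint32_t Index;    // Index of the VarLoc within the VarLocMap.

  uint64_t getAsRawInteger() const {
    return (static_cast<uint64_t>(Location) << 32) | Index;
  }

  static LocIndex fromRawInteger(uint64_t ID) {
    return {static_cast<uint32_t>(ID >> 32),
            static_cast<uint32_t>(ID & 0xffffffff)};
  }
};
```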
Differential Revision: https://reviews.llvm.org/D74985
Add CoalescingBitVector to ADT. This is part 1 of a 3-part series to
address a compile-time explosion issue in LiveDebugValues.
---
CoalescingBitVector is a bitvector that, under the hood, relies on an
IntervalMap to coalesce elements into intervals.
CoalescingBitVector efficiently represents sets which predominantly
contain contiguous ranges (e.g. the VarLocSets in LiveDebugValues,
which are very long sequences that look like {1, 2, 3, ...}). OTOH,
CoalescingBitVector isn't good at representing sets with lots of gaps
between elements. The first N coalesced intervals of set bits are stored
in-place (in the initial heap allocation).
Compared to SparseBitVector, CoalescingBitVector offers more predictable
performance for non-sequential find() operations. This provides a
crucial speedup in LiveDebugValues.
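A short usage sketch (method names follow LLVM's usual set interfaces;
treat the details as assumptions):
```cpp
#include "llvm/ADT/CoalescingBitVector.h"
#include <cassert>
#include <cstdint>

void coalescingExample() {
  llvm::CoalescingBitVector<uint64_t>::Allocator Alloc;
  llvm::CoalescingBitVector<uint64_t> Set(Alloc);
  // A long contiguous run coalesces into a single interval.
  for (uint64_t I = 0; I < 1000; ++I)
    Set.set(I);
  Set.set(1ull << 40); // A distant element becomes its own interval.
  assert(Set.test(512) && !Set.test(2000));
}
```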
Differential Revision: https://reviews.llvm.org/D74984
I've been sitting on this change for a while and have been using
it to build the bot images, so it should be upstreamed.
This re-configures the docker build files to use docker-compose
more heavily. This allows for composing large images with multiple
compilers without invalidating the docker caches.
After this commit I'll quickly switch all the current buildbots
over to a new docker image, followed by another update to add new
compilers.
The code changes here are hopefully straightforward:
1. Use MachineInstruction flags to decide if FP ops can be reassociated
   (use both "reassoc" and "nsz" to be consistent with IR transforms;
   we probably don't need "nsz", but that's a safer interpretation of
   the FMF; see the sketch after this list).
2. Check that both nodes allow reassociation to change instructions.
   This is a stronger requirement than we've usually implemented in
   IR/DAG, but this is needed to solve the motivating bug (see below),
   and it seems unlikely to impede optimization at this late stage.
3. Intersect/propagate MachineIR flags to enable further reassociation
   in MachineCombiner.
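As a sketch of the check in item 1 (a hypothetical helper built on the
MachineInstr flag API, not the patch's exact code):
```cpp
#include "llvm/CodeGen/MachineInstr.h"

// Both fast-math MI flags must be present before we reassociate,
// mirroring the conservative interpretation described in item 1.
static bool hasReassocFlags(const llvm::MachineInstr &MI) {
  return MI.getFlag(llvm::MachineInstr::FmReassoc) &&
         MI.getFlag(llvm::MachineInstr::FmNsz);
}
```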
We managed to make MachineCombiner flexible enough that no changes are
needed to that pass itself. So this patch should only affect x86
(assuming no other targets have implemented the hooks using MachineIR
flags yet).
The motivating example in PR43609 is another case of fast-math transforms
interacting badly with special FP ops created during lowering:
https://bugs.llvm.org/show_bug.cgi?id=43609
The special fadd ops used for converting int to FP assume that they will
not be altered, so those are created without FMF.
However, the MachineCombiner pass was being enabled for FP ops using the
global/function-level TargetOption for "UnsafeFPMath". We managed to run
instruction/node-level FMF all the way down to MachineIR sometime in the
last 1-2 years though, so we can do better now.
The test diffs require some explanation:
1. llvm/test/CodeGen/X86/fmf-flags.ll - no target option for unsafe math was
   specified here, so MachineCombiner kicks in where it did not previously;
   to make it behave consistently, we need to specify a CPU schedule model,
   so use the default model, and there are no code diffs.
2. llvm/test/CodeGen/X86/machine-combiner.ll - replace the target option for
   unsafe math with the equivalent IR-level flags, and there are no code diffs;
   we can't remove the NaN/nsz options because those are still used to drive
   x86 fmin/fmax codegen (special SDAG opcodes).
3. llvm/test/CodeGen/X86/pow.ll - similar to #1
4. llvm/test/CodeGen/X86/sqrt-fastmath.ll - similar to #1, but MachineCombiner
   does some reassociation of the estimate sequence ops; presumably these are
   perf wins based on latency/throughput (and we get some reduction of move
   instructions too); I'm not sure how it affects numerical accuracy, but the
   test reflects reality better now because we would expect MachineCombiner to
   be enabled if the IR was generated via something like "-ffast-math" with clang.
5. llvm/test/CodeGen/X86/vec_int_to_fp.ll - this is the test added to model PR43609;
   the fadds are not reassociated now, so we should get the expected results.
6. llvm/test/CodeGen/X86/vector-reduce-fadd-fast.ll - similar to #1
7. llvm/test/CodeGen/X86/vector-reduce-fmul-fast.ll - similar to #1
Differential Revision: https://reviews.llvm.org/D74851
The lldb sanitizer bot is flagging a container-overflow error after we
introduced test TestWasm.py. MemoryCache::Read didn't behave correctly
in case of partial reads that can happen with object files whose size is
smaller than the cache size. It should return the actual number of bytes
read and not try to fill the buffer with random memory.
Module::GetMemoryObjectFile needs to be modified accordingly, to resize
its buffer to only the size that was read.
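A conceptual sketch of the corrected read behavior (simplified
signatures, not lldb's actual API):
```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

// Return how many bytes were actually copied; never pretend the whole
// destination buffer was filled when the source is smaller.
size_t readPartial(const char *Src, size_t SrcSize, size_t Offset,
                   char *Dst, size_t DstLen) {
  if (Offset >= SrcSize)
    return 0;
  size_t N = std::min(DstLen, SrcSize - Offset);
  std::memcpy(Dst, Src + Offset, N);
  return N; // The caller resizes its buffer to N, not DstLen.
}
```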
Differential Revision: https://reviews.llvm.org/D75200
Summary:
This revision splits out a new CRunnerUtils library that supports
MLIR execution on targets without a C++ runtime.
Differential Revision: https://reviews.llvm.org/D75257
Summary:
This change does not add any functionality but merely exposes existing
static functions to make the associated transformations available
outside of their testing passes.
Differential Revision: https://reviews.llvm.org/D75232
Display the list of dialects known to mlir-opt. This is useful
for ensuring that linkage has happened correctly, for instance.
Differential Revision: https://reviews.llvm.org/D74865
Summary: Instead of dropping all the ranges associated with a Diagnostic when
converting it to a ClangTidyError, attach them to the ClangTidyError,
so they can be consumed by other APIs.
Patch by Joe Turner <joturner@google.com>.
Differential Revision: https://reviews.llvm.org/D69782
Summary:
We need to handle the MCSA_LGlobal case in emitSymbolAttribute for functions marked internal in the IR so that the
appropriate storage class is emitted on the function descriptor csect. As part of this we need to make sure that external
labels are not emitted into the symbol table, so we don't emit the descriptor label in the object writing path.
Reviewers: jasonliu, DiggerLin, hubert.reinterpretcast
Reviewed By: jasonliu
Subscribers: Xiangling_L, wuzish, nemanjai, hiraditya, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74968
These tests check that an operation happens within a specified
deadline, which causes flaky failures on slow machines or machines
under heavy load.
Adding the // FLAKY_TEST. tag allows the test suite to retry or
ignore these tests.
FileManager.h is an expensive header (~350ms for me in isolation), so
try to do without it.
Notably, we need to avoid checking the alignment of FileEntry, which
happens for DenseMap<FileEntry*> and PointerUnion<FileEntry*>. I
adjusted the code to avoid PointerUnion, and moved the DenseMap
insertion to the .cpp file.
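A sketch of the pattern (class names hypothetical): forward-declare
FileEntry in the header and keep the operations that need the complete
type in the .cpp file.
```cpp
// In the header: a forward declaration suffices for pointer members
// and parameters, so FileManager.h stays out of the include graph.
namespace clang { class FileEntry; }

class UsesFileEntry {
public:
  void record(const clang::FileEntry *FE); // body lives in the .cpp
private:
  const clang::FileEntry *Current = nullptr;
};

// In the .cpp: include FileManager.h there, where operations needing
// the complete type (e.g. DenseMap insertion, alignment checks) live.
```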
Globally, this only saved ~17 includes of the related headers
because SourceManager.h still includes FileManager.h, and it is more
popular than Module.h.
Summary:
The affected LIT test intends to test the correct use of divergence
analysis to detect a divergent branch with a uniform predicate. The
passes involved are LLVM IR passes, but the test runs llc and tries to
match against generated ISA, which makes it hard to demonstrate that
the intended behavior was really tested. Replaced this with a test
that invokes opt on the required passes and then checks for the
appropriate changes in the LLVM IR.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D75267
When InstCombine initially populates the worklist, it already
performs constant folding and DCE. However, as the instructions
are initially visited in program order, this DCE can pick up only
the last instruction of a dead chain, the rest would only get
picked up in the main InstCombine run.
To avoid this, we instead perform the DCE in a separate pass over the
collected instructions in reverse order, which will allow us to
pick up full dead instruction chains. We already need to do this
reverse iteration anyway to populate the worklist, so this
shouldn't add extra cost.
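A conceptual sketch of the reverse-order sweep (simplified, not the
pass's actual code):
```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Transforms/Utils/Local.h"

// Visiting in reverse program order sees users before their operands,
// so erasing one dead instruction exposes its whole chain as dead.
static void sweepDeadChains(
    llvm::SmallVectorImpl<llvm::Instruction *> &Collected) {
  for (llvm::Instruction *I : llvm::reverse(Collected))
    if (llvm::isInstructionTriviallyDead(I))
      I->eraseFromParent();
}
```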
This by itself only fixes a small part of the problem though:
The same basic issue also applies during the main InstCombine loop.
We generally always want DCE to occur as early as possible,
because it will allow one-use folds to happen. Address this by also
performing DCE while adding deferred instructions to the main worklist.
This drops the number of tests that perform more than 2 InstCombine
iterations from ~80 to ~40. There are some spurious test changes due
to operand order / icmp toggling.
Differential Revision: https://reviews.llvm.org/D75008
When constructing a ParsedAttr, the ParsedAttrInfo gets looked up in the
AttrInfoMap, which is auto-generated using tablegen. If that lookup fails,
we look through the ParsedAttrInfos that plugins have added to the registry
and check if any has a spelling that matches.
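In toy form (all names here are hypothetical), the two-stage lookup
reads:
```cpp
#include <map>
#include <string>
#include <vector>

struct AttrInfo {
  std::string Spelling;
};

// Stage 1: the tablegen-generated table; stage 2: plugin registrations.
std::map<std::string, AttrInfo> GeneratedAttrInfoMap;
std::vector<AttrInfo> PluginAttrInfos;

const AttrInfo *lookupAttrInfo(const std::string &Name) {
  auto It = GeneratedAttrInfoMap.find(Name);
  if (It != GeneratedAttrInfoMap.end())
    return &It->second;
  // Fall back to infos that plugins have added to the registry.
  for (const AttrInfo &Info : PluginAttrInfos)
    if (Info.Spelling == Name)
      return &Info;
  return nullptr;
}
```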
Differential Revision: https://reviews.llvm.org/D31338
Use UnaryOperator::CreateFNeg instead.
Summary:
With the introduction of the native fneg instruction, the
fsub -0.0, %x idiom is obsolete. This patch makes LLVM
emit fneg instead of the idiom in all places.
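As a sketch of the replacement pattern (UnaryOperator::CreateFNeg is the
real API named above; insertion into a basic block is elided):
```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/InstrTypes.h"

llvm::Value *negate(llvm::Value *X) {
  // Before: the obsolete idiom
  //   BinaryOperator::CreateFSub(ConstantFP::getNegativeZero(Ty), X)
  // After: the native unary instruction.
  return llvm::UnaryOperator::CreateFNeg(X);
}
```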
Reviewed By: cameron.mcinally
Differential Revision: https://reviews.llvm.org/D75130