Prefer to use the 32-bit AND with immediate instead.
Primarily I'm doing this to ensure that immediates created by shrinkAndImmediate will always get absorbed into the AND. But I do believe this would be a reduction in the number of uops that need to execute. Ideally we should shrink the 'and' and the 'load' during DAG combine to re-enable the fold.
Fixes PR37063.
llvm-svn: 329667
While reading Codeview records which contain variable-length encoded integers,
such as LF_BCLASS, LF_ENUMERATE, LF_MEMBER, LF_VBCLASS or LF_IVBCLASS,
the record's size would be improperly calculated in cases where the value was
indeed of a variable length (>= LF_NUMERIC). This caused bad alignment of
the next record, which could crash later on.
Differential Revision: https://reviews.llvm.org/D45104
llvm-svn: 329659
The key idea is to lower COPY nodes populating EFLAGS by scanning the
uses of EFLAGS and introducing dedicated code to preserve the necessary
state in a GPR. In the vast majority of cases, these uses are cmovCC and
jCC instructions. For such cases, we can very easily save and restore
the necessary information by simply inserting a setCC into a GPR where
the original flags are live, and then testing that GPR directly to feed
the cmov or conditional branch.
However, things are a bit more tricky if arithmetic is using the flags.
This patch handles the vast majority of cases that seem to come up in
practice: adc, adcx, adox, rcl, and rcr; all without taking advantage of
partially preserved EFLAGS as LLVM doesn't currently model that at all.
There are a large number of operations that technically observe EFLAGS
currently but shouldn't in this case -- they typically are using DF.
Currently, they will not be handled by this approach. However, I have
never seen this issue come up in practice. It is already pretty rare to
have these patterns come up in practical code with LLVM. I had to resort
to writing MIR tests to cover most of the logic in this pass already.
I suspect even with its current amount of coverage of arithmetic users
of EFLAGS it will be a significant improvement over the current use of
pushf/popf. It will also produce substantially faster code in most of
the common patterns.
This patch also removes all of the old lowering for EFLAGS copies, and
the hack that forced us to use a frame pointer when EFLAGS copies were
found anywhere in a function so that the dynamic stack adjustment wasn't
a problem. None of this is needed as we now lower all of these copies
directly in MI and without requiring stack adjustments.
Lots of thanks to Reid who came up with several aspects of this
approach, and Craig who helped me work out a couple of things tripping
me up while working on this.
Differential Revision: https://reviews.llvm.org/D45146
llvm-svn: 329657
Summary:
Another clean up, following D43208.
Interleaved memory access analysis/optimization has nothing to do with vectorization legality. It doesn't really belong there. On the other hand, the cost model certainly has to know about it.
In principle, vectorization should proceed like Legality ==> Optimization ==> CostModel ==> CodeGen, and this change just does that,
by moving the interleaved access analysis/decision out of Legal and running it just before the CostModel object is created.
After this, I can move the LoopVectorizationLegality and Hints/Requirements classes into their own header file, making them shareable within the Transforms tree. I have the patch already but I don't want to mix it with this change. The eventual goal is to move to the Analysis tree, but I first need to move RecurrenceDescriptor/InductionDescriptor from Transform/Util/LoopUtil.* to Analysis.
Reviewers: rengolin, hfinkel, mkuper, dcaballe, sguggill, fhahn, aemerson
Reviewed By: rengolin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45072
llvm-svn: 329645
Summary:
SSAUpdater is a bottleneck in JumpThreading, and this patch improves the
situation by using SSAUpdaterBulk instead.
Compile time impact: no noticeable changes on CTMark, a big improvement
on the test from PR16756.
Reviewers: dberlin, davide, MatzeB
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D44282
llvm-svn: 329644
Summary:
SSAUpdater is a bottleneck in a number of passes, and one of the reasons
is that it performs a lot of unnecessary computations (DT/IDF) over and
over again. This patch adds a new SSAUpdaterBulk that uses existing DT
and avoids recomputing IDF when possible.
Reviewers: dberlin, davide, MatzeB
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D44282
llvm-svn: 329643
The caching walker used to hold its own caches, which made its `reset()`
function meaningful. Since caching has been moved out of it, there's no
reason to continue to have these cache-related methods.
Similarly, the EXPENSIVE_CHECKS block that's getting removed used to
rerun the query with caching disabled. Since that's how we always do
queries now, it's redundant.
llvm-svn: 329638
Lower is slightly odd. It often doesn't change the type but the lowerings
do use the new type to decide what code to create. Treat it like a mutation
but provide convenience functions that re-use the existing type.
Re-uses the existing tests:
test/CodeGen/AArch64/GlobalISel/legalize-rem.mir
test/CodeGen/AArch64/GlobalISel/legalize-mul.mir
test/CodeGen/AArch64/GlobalISel/legalize-cmpxchg-with-success.mir
llvm-svn: 329623
Fix PR36484, as suggested:
<quote>
during moves, mark the direct users of the erased things that were phis as "not to be optimized"
</quote>
llvm-svn: 329621
1. Remove max_scratch_backing_memory_byte_size from kernel header
2. Make it a reserved field
3. Ignore it while parsing assembly for backwards compatibility
4. Bump up minor version of kernel header
Differential Revision: https://reviews.llvm.org/D45452
llvm-svn: 329620
LowerIntUnary, as its name says, has an assert for integer types. But for the bitcast case one side might be an FP type.
Rather than making sure the function really works for FP types and renaming it, just do really basic splitting directly. LowerIntUnary has the advantage that it can peek through BUILD_VECTOR because every other call is during lowering. But these calls are during legalization and will be followed by a DAG combine round.
Revert some changes to LowerVectorIntUnary that were originally made just to make these two calls work even in pure integer cases.
This was found purely by compiling the avx512f-builtins.c test from clang, so I've copied over the offending function from that.
llvm-svn: 329616
This is a code size win in code that takes offseted addresses
frequently, such as C++ constructors that typically need to compute
an offseted address of a vtable. It reduces the size of Chromium for
Android's .text section by 46KB, or 56KB with ThinLTO (which exposes
more opportunities to use a direct access rather than a GOT access).
Because the addend range is limited in COFF and Mach-O, this is
enabled for ELF only.
Differential Revision: https://reviews.llvm.org/D45199
llvm-svn: 329611
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
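For illustration, the mechanical change this series makes looks roughly like the
following (hypothetical container and function names; llvm::sort is the real
wrapper from llvm/ADT/STLExtras.h):

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ADT/SmallVector.h"

  // In expensive-checks builds the wrapper shuffles the range before sorting,
  // exposing any reliance on the relative order of equal keys.
  static void sortOffsets(llvm::SmallVectorImpl<unsigned> &Offsets) {
    llvm::sort(Offsets.begin(), Offsets.end()); // was: std::sort(...)
  }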
Reviewers: sunfish, RKSimon
Reviewed By: sunfish
Subscribers: jfb, dschuff, sbc100, jgravelle-google, aheejin, llvm-commits
Differential Revision: https://reviews.llvm.org/D44873
llvm-svn: 329607
In some cases fast-isel fails to remove the and/shifts and uses blends or conditional moves.
But once masking gets involved, fast-isel aborts on the mask portion and we DAG combine more thoroughly.
llvm-svn: 329604
This allows MachineMemOperand::getSize()'s result to be fed directly into
MachineMemOperand::MachineMemOperand() without a narrowing type conversion
warning.
llvm-svn: 329602
building.
https://reviews.llvm.org/D45067
This change attempts to do two things:
1) It separates out the state stored in the
MachineIRBuilder (InsertionPt, MF, MRI, InsertFunction, etc.) into a
separate object called MachineIRBuilderState.
2) It adds the ability to (optionally) constant fold operations while building
instructions. MachineIRBuilder is now refactored into a MachineIRBuilderBase,
which contains lots of non-foldable build methods and their implementations.
Instructions which can be constant folded/transformed are now in a class
called FoldableInstructionBuilder which uses CRTP to use the implementation
of the derived class for buildBinaryOps. Additionally buildInstr in the derived
class can be used to implement other kinds of transformations.
Also, because of the separation of state, given a MachineIRBuilder in an API,
if one wishes to use another MachineIRBuilder, a new one can be
constructed locally from that state. For example:
  void doFoo(MachineIRBuilder &B) {
    MyCustomBuilder CustomB(B.getState());
    // Use CustomB for building.
  }
Reviewed by: aemerson
llvm-svn: 329596
While it appears to be correct information based on Intel's optimization manual and Agner's data, it causes perf regressions on a couple of the benchmarks in our internal list.
llvm-svn: 329593
Author: Samuel Pitoiset
ds_read_b128 and ds_write_b128 have been recently enabled
under the amdgpu-ds128 option because the performance benefit
is unclear.
However, using 128-bit loads/stores for the local address space
appears to introduce regressions in tessellation shaders. Not
sure what is broken, but as ds_read_b128/ds_write_b128 are not
enabled by default, just introduce a global option and enable
128-bit only if requested (until it's fixed/used correctly).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105464
llvm-svn: 329591
This patch teaches llvm-mca how to parse code comments in search for special
"markers" used to select regions of code.
Example:
  # LLVM-MCA-BEGIN My Code Region
  ....
  # LLVM-MCA-END
The MCAsmLexer now delegates to an object of class MCACommentParser (i.e. an
AsmCommentConsumer) the parsing of code comments to search for begin/end code
region markers.
A comment starting with substring "LLVM-MCA-BEGIN" marks the beginning of a new
region of code. A comment starting with substring "LLVM-MCA-END" marks the end
of the last region.
This implementation doesn't allow regions to overlap. Each region can have an
optional description; internally, each region is identified by a range of source
code locations (SMLoc).
MCInst objects are added to a region R only if the source location for the
MCInst is in the range of locations specified by R.
By default, the tool allocates an implicit "Default" code region which contains
every source location. See new tests llvm-mca-marker-*.s for a few examples.
A new Backend object is created for every region. So, the analysis is conducted
on every parsed code region. The final report is the union of the reports
generated for every code region. Note that empty regions are skipped.
Special "[#] Code Region - ..." strings are used in the report to mark the
portion which is specific to a code region only. For example, see
llvm-mca-markers-5.s.
Differential Revision: https://reviews.llvm.org/D45433
llvm-svn: 329590
Summary:
This fixes AMDGPU GlobalISel test failures when enabling the AMDGPU
target without any other targets that use GlobalISel.
Reviewers: arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D45353
llvm-svn: 329588
Without fast math flags, the llvm.experimental.vector.reduce.fadd/fmul intrinsics must be expanded in order.
This patch scalarizes the reduction, applying the accumulator at the start of the sequence: ((((Acc + Scl[0]) + Scl[1]) + Scl[2]) + ...) + Scl[NumElts-1]
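A minimal scalar sketch of the in-order expansion (hypothetical names, not the
actual DAG lowering code):

  // Strictly sequential fadd reduction: the accumulator is applied first and
  // every addition keeps its left-to-right association, so nothing is
  // reassociated.
  static float reduceFAddOrdered(float Acc, const float *Scl, unsigned NumElts) {
    for (unsigned I = 0; I != NumElts; ++I)
      Acc = Acc + Scl[I];
    return Acc;
  }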
Differential Revision: https://reviews.llvm.org/D45366
llvm-svn: 329585
Summary:
The option is helpful for large projects where it's not feasible to specify the sources the
user would like to see in the report. Instead, it allows blacklisting specific sources via
regular expressions (e.g., it's now possible to skip all files that have "test" in their names).
This also partially fixes https://bugs.llvm.org/show_bug.cgi?id=34277
Reviewers: vsk, morehouse, liaoyuke
Reviewed By: vsk
Subscribers: kcc, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D43907
llvm-svn: 329581
This patch fixes an issue exposed on the SystemZ build bots when committing
https://reviews.llvm.org/rL327856. The hoisting was temporarily disabled with
an option. This patch now re-enables hoisting and checks that we only hoist a
store instruction when all its operands are either constant caller preserved
registers or immediates.
Differential Revision: https://reviews.llvm.org/D45286
llvm-svn: 329577
Summary:
If an input DICompileUnit is completely empty (e.g., the result of
running "clang -g" on an empty file), we don't bother emitting an empty
DWARF CU. When we do that, we must make sure we don't also emit a DWARF v5
name index, as DWARF specifies that each index must reference at least
one compilation unit.
Reviewers: JDevlieghere, aprantl, dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45435
llvm-svn: 329575
Summary: The bit widths are wrong.
Reviewers: bkramer, lhames, hans
Reviewed By: hans
Subscribers: hans, nemanjai, kbarton, llvm-commits
Differential Revision: https://reviews.llvm.org/D45361
llvm-svn: 329573
Summary:
We do not try to move the instructions and split the block until we
know the blocks can be split, i.e., the BCE-cmp-insts can be separated from
the non-BCE-cmp-insts.
Reviewers: davide, courbet
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D44443
llvm-svn: 329564
- When tuning for the SCE debugger (the default for PS4 targets), we do not emit
DW_AT_linkage_name, which this test needs. I explicitly set the
debugger tuning parameter so the attribute is always emitted.
- Darwin targets did not like the "section .text.startup" fragment of
the test. It is not actually needed for the test, so I removed it.
llvm-svn: 329555
With the threading refactoring, loading of object files happens before
checking whether we're dealing with a Swift AST. While that's not an
issue per se, it causes a warning to be printed:
warning: /path/to/a.swiftmodule: The file was not recognized as a valid object file
note: while processing /path/to/a.swiftmodule
This suppresses the warning by checking for a Swift AST before
attempting to load it as an object file.
rdar://39240444
llvm-svn: 329553
Summary:
We were emitting accelerator entries for functions with no name, which
is contrary to the DWARF v5 spec: "All other (i.e., *not*
DW_TAG_namespace) debugging information entries without a DW_AT_name
attribute are excluded." Besides that, a name table entry with an empty
string as a key is fairly useless.
We can sometimes end up with functions which have a DW_AT_linkage_name but no
DW_AT_name. One such example is the global-constructor-initialization functions,
which C++ compilers synthesize for each compilation unit with global
constructors.
A very strict reading of the DWARF v5 spec would suggest that we should not even
emit the accelerator entry for the linkage name in this case, but I don't think
we should go that far.
I found this when running the dwarf verifier over the llvm codebase compiled
with DWARF v5 accelerator tables.
Reviewers: JDevlieghere, aprantl, dblaikie
Subscribers: vleschuk, clayborg, echristo, probinson, llvm-commits
Differential Revision: https://reviews.llvm.org/D45367
llvm-svn: 329552
Recommitting r329283, third time lucky...
If the SRL node is only used by an AND, we may be able to set the
ExtVT to the width of the mask, making the AND redundant. To support
this, another check has been added in isLegalNarrowLoad which queries
whether the load is valid.
Differential Revision: https://reviews.llvm.org/D41350
llvm-svn: 329551
These were previously grouped in small groups of similar intrinsics. But all the intrinsics have the same number of arguments and the same order. So we can move them all into a larger group for handling.
llvm-svn: 329549
In IRCE, we have a very old legacy check that works when we collect comparisons that we
treat as range checks. It ensures that the value against which the indvar is compared is
loop invariant and is also positive.
This latter condition has remained there since the times when IRCE was only able to handle
signed latch comparisons. As the optimization evolved, it learned how to intersect
signed or unsigned ranges, and this logic no longer relies on the right border
of each range being positive.
The old implementation of this non-negativity check was also naive and just looked
at ranges (while most other IRCE logic tries to use the power of SCEV implications), so this
check did not allow IRCE to deal with the simplest case, which looks as follows:
  int size;   // not known to be non-negative
  int length; // known to be non-negative
  int i = 0;
  if (size != 0) {
    do {
      range_check(i < size);
      range_check(i < length);
      ++i;
    } while (i < size);
  }
In this case, even if IRCE could parse the loop structure from some dominating
conditions, it could only remove the range check against `length` and simply
ignored the check against `size`.
In this patch we remove this obsolete check. It allows IRCE to pick the comparison
against `size` as a potential range check and then let the Range Intersection logic
decide whether it is OK to eliminate it or not.
Differential Revision: https://reviews.llvm.org/D45362
Reviewed By: samparker
llvm-svn: 329547
Summary:
This change consolidates the always/never lists that may be provided to
clang to externally control which functions should be XRay instrumented
by imbuing attributes. The files follow the same format as defined in
https://clang.llvm.org/docs/SanitizerSpecialCaseList.html for the
sanitizer blacklist.
We also deprecate the existing `-fxray-instrument-always=` and
`-fxray-instrument-never=` flags, in favour of `-fxray-attr-list=`.
This fixes http://llvm.org/PR34721.
Reviewers: echristo, vlad.tsyrklevich, eugenis
Reviewed By: vlad.tsyrklevich
Subscribers: llvm-commits, cfe-commits
Differential Revision: https://reviews.llvm.org/D45357
llvm-svn: 329543
Summary:
Currently MachineLoopInfo is used in only two places:
1) computing the IsBasicBlockInsideInnermostLoop field of MCCodePaddingContext, which is never used;
2) emitBasicBlockLoopComments, which is called only if `isVerbose()` is true.
Despite that, we currently have a dependency on MachineLoopInfo, which makes
the pass manager compute it and the MachineDominatorTree. This patch removes
use (1) and makes use (2) lazy, thus avoiding some redundant
recomputations.
Reviewers: opaparo, gadi.haber, rafael, craig.topper, zvi
Subscribers: rengolin, javed.absar, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D44812
llvm-svn: 329542
The TargetSchedModel is always initialized using the TargetSubtargetInfo's
MCSchedModel and TargetInstrInfo, so we don't need to extract those and
pass 3 parameters to init().
Differential Revision: https://reviews.llvm.org/D44789
llvm-svn: 329540
Summary:
Cmov and setcc previously used WriteALU, but on Intel processors at least they are more restricted than basic ALU ops.
This patch adds new SchedWrites for them and removes the InstRWs. I had to leave some InstRWs for CMOVA/CMOVBE and SETA/SETBE because those have an extra uop relative to the other condition codes on Intel CPUs.
The test changes are due to fixing a missing ZnAGU dependency on the memory form of setcc.
Reviewers: RKSimon, andreadb, GGanesh
Reviewed By: RKSimon
Subscribers: GGanesh, llvm-commits
Differential Revision: https://reviews.llvm.org/D45380
llvm-svn: 329539
Summary:
This removes the InstRWs for BLENDVPS/PD in favor of WriteFVarBlend. The latency listed was 3 cycles but WriteFVarBlend is defined as 1 cycle latency. The 1 cycle latency matches Agner Fog's data.
The patterns were missing the VEX forms, which is why there are no test changes. We don't test "-mcpu=znver1 -mattr=-avx".
Reviewers: RKSimon, GGanesh
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D44841
llvm-svn: 329538
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
Reviewers: chandlerc, jordan_rose, bkramer
Reviewed By: bkramer
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45140
llvm-svn: 329536
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
Reviewers: hfinkel, RKSimon
Reviewed By: RKSimon
Subscribers: nemanjai, kbarton, llvm-commits
Differential Revision: https://reviews.llvm.org/D44870
llvm-svn: 329535
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
Reviewers: chandlerc, craig.topper, RKSimon
Reviewed By: chandlerc, craig.topper
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D44874
llvm-svn: 329534
Previously MapVector assumed `Map::mapped_type` was `unsigned`.
This caused problems when using MapVector with a user-specified
map where this didn't hold (For example StringMap<unsigned>).
This patch adjusts MapVector to use the same type as the underlying
map, avoiding reference binding errors in functions like `insert`.
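A hypothetical instantiation of the kind this change is meant to support
(illustrative only, not taken from the tree):

  #include "llvm/ADT/MapVector.h"
  #include "llvm/ADT/StringMap.h"

  // The index map is a user-specified StringMap<unsigned> instead of the
  // default DenseMap<KeyT, unsigned>; the stored vector index now uses the
  // map's own mapped_type.
  static llvm::MapVector<llvm::StringRef, int, llvm::StringMap<unsigned>> Counts;
  static void record(llvm::StringRef Name) { ++Counts[Name]; }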
llvm-svn: 329523
Explicitly link LLVMTestingSupport library against LLVMSupport. This
is necessary to fix linking errors when LLVMTestingSupport is built
as a shared library (with BUILD_SHARED_LIBS=ON) and -Wl,-z,defs is used.
Differential Revision: https://reviews.llvm.org/D45408
llvm-svn: 329522
In our real world application, we found the following optimization is missed in DAGCombiner
(zext (and/or/xor (shl/shr (load x), cst), cst)) -> (and/or/xor (shl/shr (zextload x), (zext cst)), (zext cst))
If the user of the original zext is an add, it may enable further lea optimization on x86.
This patch adds a new function, CombineZExtLogicopShiftLoad, to do this optimization.
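A hypothetical source-level example of the kind of pattern that now folds (not
taken from the original application):

  // The 32-bit load is shifted and masked, then zero-extended into a 64-bit
  // add; forming the zextload lets the address computation fold into an lea.
  unsigned long long index(const unsigned *P, unsigned long long Base) {
    return Base + ((P[0] >> 2) & 0xffu);
  }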
Differential Revision: https://reviews.llvm.org/D44402
llvm-svn: 329516
Previously we used a custom lowering for this because of the AVX1 splitting requirement. But we can do the split during DAG combine if we check the types and subtarget.
llvm-svn: 329510
Summary: Fixes the bots - I moved LLVMSetSubprogram into the DIBuilder bindings, so the Go bindings need to move as well.
Reviewers: whitequark
Reviewed By: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45402
llvm-svn: 329505
There are a pair of folds that try to merge fneg into fsub
with an intervening cast, but as shown in the FIXME tests,
they can create extra instructions.
llvm-svn: 329501
Should fix UBSan bot by also checking there's no "uwtable" attribute
before skipping. Otherwise the unwind table will be useless since its
moves expect CSRs to actually be preserved.
A noreturn nounwind function can be expected to never return in any way, and by
never returning it will also never have to restore any callee-saved registers
for its caller. This makes it possible to skip spills of those registers during
function entry, saving some stack space and time in the process. This is rather
useful for embedded targets with limited stack space.
Should fix PR9970.
Patch mostly by myeisha (pmb).
llvm-svn: 329494
Summary:
D44883 extends -Wself-assign to also work on C++ classes.
In its current state (as suggested by @rjmccall), it is not under its own sub-group.
Since that diag is enabled by `-Wall`, stage2 testing showed that:
* It does not fire on any llvm code
* It does fire for these 3 unittests
* It does fire for libc++ tests
This diff simply silences those new warnings in llvm's unittests.
A similar diff will be needed for libcxx. (`libcxx/test/std/language.support/support.types/byteops/`, maybe something else)
Since I don't think we want to repeat rL322901, let's talk about it.
I've subscribed everyone who I think might be interested...
There are several ways forward:
* Not extend -Wself-assign and close D44883. Not a very productive outcome, I'd say.
* Keep D44883 in its current state.
Unless your custom overloaded operators do something unusual when self-assigning,
the warning is no less of a false positive than the current -Wself-assign.
Except for tests, of course; there you'd want to silence it. The current suggestion is:
```
S a;
a = (S &)a;
```
* Split the diagnostic in two - `-Wself-assign-builtin` (i.e. what is `-Wself-assign` in trunk),
and `-Wself-assign-overloaded` - the new part in D44883.
Since, as I said, I'm not really sure why it would be less of an error than the current `-Wself-assign`,
both would still be in `-Wall`. That way one could simply pass `-Wno-self-assign-overloaded` for all the tests.
Pretty simple to do, and will surely work.
* Split the diagnostic in two - `-Wself-assign-trivial`, and `-Wself-assign-nontrivial`.
The choice of which diag to emit would depend on trivial-ness of that particular operator.
The current `-Wself-assign` would be `-Wself-assign-trivial`.
https://godbolt.org/g/gwDASe - `A`, `B` and `C` case would be treated as trivial, and `D`, `E` and `F` as non-trivial.
Will be the most complicated to implement.
Thoughts?
Reviewers: aaron.ballman, rsmith, rtrieu, rjmccall, dblaikie, atrick, gottesmm
Reviewed By: dblaikie
Subscribers: lebedev.ri, phosek, vsk, rnk, thakis, sammccall, mclow.lists, llvm-commits, rjmccall
Differential Revision: https://reviews.llvm.org/D45082
llvm-svn: 329491
r327219 added wrappers to std::sort which randomly shuffle the container before
sorting. This will help in uncovering non-determinism caused by the undefined
sorting order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of
std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with
llvm::sort. Refer to the comments section in D44363 for a list of all the
required patches.
llvm-svn: 329475
Summary:
The LLVM SourceMgr class (which is used indirectly by Swift, though not Clang)
has a routine for looking up line numbers of SMLocs. This routine uses a
shared, special-purpose cache that handles exactly one access pattern
efficiently: looking up the line number of an SMLoc that points into the same
buffer as the last query made to the SourceMgr, at a location in the buffer at
or ahead of the last query.
When this works it's fine, but when it fails it's catastrophic for performance:
one recent out-of-order access from a Swift utility routine ran for tens of
seconds, spending 99% of its time repeatedly scanning buffers for '\n'.
This change removes the shared cache from the SourceMgr and installs a new
cache in each SrcBuffer. The per-SrcBuffer caches are also "full", in the sense
that rather than caching a single last-query pointer, they cache _all_ the
line-ending offsets, in a binary-searchable array, such that once it's
populated (on first access), all subsequent access patterns run at the same
speed.
Performance measurements I've done show this is actually a little bit faster on
real codebases (though only a couple fractions of a percent). Memory usage is
up by a few tens to hundreds of bytes per SrcBuffer that has a line lookup done
on it; I've attempted to minimize this by using dynamic selection of integer
size when storing offset arrays. But the main motive here is to
make impossible the cases we don't always see, which show up by surprise when
there is an out-of-order access pattern.
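A simplified, self-contained sketch of the per-buffer cache idea (not the
actual SourceMgr/SrcBuffer API):

  #include <algorithm>
  #include <cstdint>
  #include <string>
  #include <vector>

  struct LineOffsetCache {
    std::vector<uint64_t> NewlineOffsets; // filled once, on the first query

    void populate(const std::string &Buffer) {
      for (uint64_t I = 0, E = Buffer.size(); I != E; ++I)
        if (Buffer[I] == '\n')
          NewlineOffsets.push_back(I);
    }

    // 1-based line number: count the newlines strictly before Offset.
    unsigned getLineNumber(uint64_t Offset) const {
      auto It = std::lower_bound(NewlineOffsets.begin(), NewlineOffsets.end(),
                                 Offset);
      return unsigned(It - NewlineOffsets.begin()) + 1;
    }
  };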
Reviewers: jordan_rose
Reviewed By: jordan_rose
Subscribers: probinson, llvm-commits
Differential Revision: https://reviews.llvm.org/D45003
llvm-svn: 329470
llvm-mc (and tools that use Path.inc on Windows) assume that strings are utf-8
encoded; however, this is not always the case. On Windows the default codepage
is not utf-8, so most of the time the strings are not utf-8 encoded.
The lld test 'format-binary-non-ascii' uses llvm-mc with a file with non-ascii
characters in the name which is how this bug was found. The test fails when run
using Python 3 because it uses properly encoded unicode strings (Python 2 actually
ends up using a byte string which is not utf-8 encoded, so the test passes, but
that's a separate issue).
Patch by Stella Stamenova!
llvm-svn: 329468
- In Python 2.x, basestring is the base string type, but in
Python 3.x basestring is not defined and instead str includes
unicode strings.
- When Python is in a path that includes spaces, it needs to
be specified with quotes in the test files for it to run.
- The cache.ll test relies on files of a specific size being
created by Python, but on some versions of Windows the
files that are created by the current code are one byte
larger than expected. To fix the test, update file creation
to always make files of the expected size.
Patch by Stella Stamenova!
llvm-svn: 329466
Previously HalfTy was not handled, which would either trigger an assertion
or result in an array initialized with garbage.
Differential Revision: https://reviews.llvm.org/D45391
llvm-svn: 329463
It was reported that this change measurably regressed -plugin-opt=O3
performance.
There is an ongoing discussion on llvm-dev about the correct way to
set the CG opt level, see thread "[llvm-dev] [RFC] Adding function
attributes to represent codegen optimization level".
llvm-svn: 329458
v2f16 is a special case in NVPTX. v4f16 may be loaded as a pair of v2f16
and that was not previously handled correctly by tryLDGLDU()
Differential Revision: https://reviews.llvm.org/D45339
llvm-svn: 329456
Summary:
This patch implements a tablegen-driven Instruction Compression
mechanism for generating RISCV compressed instructions
(C Extension) from the expanded instruction form.
This tablegen backend processes CompressPat declarations in a
td file and generates all the compile-time and runtime checks
required to validate the declarations, validate the input
operands and generate correct instructions.
The checks include validating register operands, immediate
operands, fixed register operands and fixed immediate operands.
Example:
  class CompressPat<dag input, dag output> {
    dag Input = input;
    dag Output = output;
    list<Predicate> Predicates = [];
  }

  let Predicates = [HasStdExtC] in {
    def : CompressPat<(ADD GPRNoX0:$rs1, GPRNoX0:$rs1, GPRNoX0:$rs2),
                      (C_ADD GPRNoX0:$rs1, GPRNoX0:$rs2)>;
  }
The result is an auto-generated header file
'RISCVGenCompressEmitter.inc' which exports two functions for
compressing/uncompressing MCInst instructions, plus
some helper functions:
  bool compressInst(MCInst &OutInst, const MCInst &MI,
                    const MCSubtargetInfo &STI,
                    MCContext &Context);

  bool uncompressInst(MCInst &OutInst, const MCInst &MI,
                      const MCRegisterInfo &MRI,
                      const MCSubtargetInfo &STI);
The clients that include this auto-generated header file and
invoke these functions can compress an instruction before emitting
it, in the target-specific ASM or ELF streamer, or can uncompress
an instruction before printing it, when the expanded instruction
format is favored over the compressed alias.
The following clients were added to implement compression/uncompression
for RISCV:
1) RISCVAsmParser::MatchAndEmitInstruction:
Inserted a call to compressInst() to compress instructions
parsed by llvm-mc coming from an ASM input.
2) RISCVAsmPrinter::EmitInstruction:
Inserted a call to compressInst() to compress instructions that
were lowered from Machine Instructions (MachineInstr).
3) RVInstPrinter::printInst:
Inserted a call to uncompressInst() to print the expanded
version of the instruction instead of the compressed one (e.g.,
add s0, s0, a5 instead of c.add s0, a5) when -riscv-no-aliases
is not passed.
This patch squashes D45119, D42780 and D41932. It was reviewed in smaller patches by
asb, efriedma, apazos and mgrang.
Reviewers: asb, efriedma, apazos, llvm-commits, sabuasal
Reviewed By: sabuasal
Subscribers: mgorny, eraman, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, niosHD, kito-cheng, shiva0217, zzheng
Differential Revision: https://reviews.llvm.org/D45385
llvm-svn: 329455
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
Reviewers: stoklund, kparzysz, dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45144
llvm-svn: 329451
Summary:
The 'strong' StackProtector heuristic takes into consideration call instructions.
Certain intrinsics, such as lifetime.start, can cause the
StackProtector to protect functions that do not need to be protected.
Specifically, a volatile variable (not optimized away) belonging to a stack
allocation will encourage an llvm.lifetime.start to be inserted during
compilation. Because that intrinsic is a 'call', the strong StackProtector
will see that the alloca'd variable is being passed to a call instruction and
insert a stack protector. In this case the intrinsic isn't really lowered to a
call. This can cause unnecessary stack checking, at the cost of additional
(wasted) CPU cycles.
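A hypothetical C-level reproducer of the kind of function that previously
received an unneeded protector under -fstack-protector-strong:

  // When optimizing, the volatile local typically gets llvm.lifetime.start/end
  // markers; because those intrinsics are calls that take the alloca as an
  // argument, the strong heuristic treated the variable as address-taken and
  // inserted a stack protector.
  int counter(void) {
    volatile int X = 0;
    for (int I = 0; I < 8; ++I)
      X = X + I;
    return X;
  }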
In the future we should rely on TargetTransformInfo::isLoweredToCall, but as of
now that routine considers all intrinsics as not being lowerable. That needs
to be corrected, and such a change is on my list of things to get moving on.
As a side note, the updated stack-protector-dbginfo.ll test always seems to
pass. I never see the dbg.declare/dbg.value reaching the
StackProtector::HasAddressTaken, but I don't see any code excluding dbg
intrinsic calls either, so I think it's the safest thing to do.
Reviewers: void, timshen
Reviewed By: timshen
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45331
llvm-svn: 329450
Summary:
This patch updates MC tests related to compression in RISCV to
ensure they work correctly with automatic compression and relaxation
enabled. This is the first part of a series of patches to implement
automatic compression for RISCV.
Reviewers: asb, apazos
Reviewed By: asb
Subscribers: shiva0217, efriedma, llvm-commits, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, niosHD, kito-cheng
Differential Revision: https://reviews.llvm.org/D43328
llvm-svn: 329441
The compiler is generating a packet with the following instructions,
which causes an undefined register assert in the verifier.
  $r0 = IMPLICIT_DEF
  $r1 = IMPLICIT_DEF
  S2_storerd_io killed $r29, 0, killed %d0
The problem is that the packetizer is not saving the IMPLICIT_DEF
instructions, which are needed when checking if it is legal to
add the store instruction. The fix is to add the IMPLICIT_DEF
instructions to the CurrentPacketMIs structure.
Patch by Brendon Cahoon.
llvm-svn: 329439
Packetizer keeps two zero-latency bound instructions in the same packet ignoring
the stalls on the later instruction. This should not be the case if there is no
data dependence.
Patch by Sumanth Gundapaneni.
llvm-svn: 329437
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
Reviewers: bogner, rnk, MatzeB, RKSimon
Reviewed By: rnk
Subscribers: JDevlieghere, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D45133
llvm-svn: 329435
Summary:
This patch removes InstRW overrides for basic arithmetic/logic instructions. To do this I've added the store address port to RMW and used a WriteSequence to make the latency additive. It does not cover ADC/SBB because they have different latency.
Apparently we were inconsistent about whether the store has latency or not, hence the test changes.
I've also left out Sandy Bridge because the load latency there is currently 4 cycles and should be 5.
Reviewers: RKSimon, andreadb
Reviewed By: andreadb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45351
llvm-svn: 329416
D45344 is proposing to remove the use restriction that made the calloc
transform safe, but it doesn't currently address the problematic example
given in D16337. Add a test to make sure that doesn't break.
llvm-svn: 329412
Summary:
Fixing an issue where initializations of globals whose constructors use
casts were silently translated to 0-initialization.
Reviewers: davidxl, evgeny777
Reviewed By: evgeny777
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45198
llvm-svn: 329409