The lock tables were being reallocated if kmp_set_defaults() was called.
The env_init code says that the user should be able to switch between
different KMP_CONSISTENCY_CHECK values, which is what this change enables.
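For illustration, a minimal sketch of the scenario this enables, assuming
libomp's kmp_set_defaults() extension declared in omp.h; the specific
KMP_CONSISTENCY_CHECK values shown are only placeholders:

  #include <omp.h>

  int main() {
    // libomp extension: apply an environment-variable style setting at runtime.
    kmp_set_defaults("KMP_CONSISTENCY_CHECK=all");
    #pragma omp parallel
    { /* ... */ }

    // Switching to a different KMP_CONSISTENCY_CHECK value is the case that
    // the env_init code says should be supported.
    kmp_set_defaults("KMP_CONSISTENCY_CHECK=none");
    #pragma omp parallel
    { /* ... */ }
    return 0;
  }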
llvm-svn: 292349
There is no corresponding free() for this expandable array, so one is
added in __kmp_cleanup() next to the freeing of __kmp_nested_nth.
llvm-svn: 292348
Adding `path::operator=(string_type&&)` made the expression `p = {}`
ambiguous. This patch fixes that ambiguity by making the `string&&`
overload a template so it ranks lower during overload resolution.
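A self-contained sketch of the technique (not libc++'s exact code; the member
and function names below are invented for illustration): giving the
string_type&& assignment an unused template parameter makes it lose the
non-template-vs-template tiebreak, so `p = {}` unambiguously picks the move
assignment again.

  #include <string>
  #include <utility>

  struct path {
    using string_type = std::string;

    path() = default;
    path(const path&) = default;
    path(path&&) = default;
    path& operator=(const path&) = default;
    path& operator=(path&&) = default;

    // As a template, this overload ranks below the non-template copy/move
    // assignments when the conversion sequences are otherwise equal, so
    // `p = {}` is no longer ambiguous.
    template <class = void>
    path& operator=(string_type&& s) noexcept {
      pn_ = std::move(s);
      return *this;
    }

    string_type pn_;
  };

  int main() {
    path p;
    p = {};                    // selects operator=(path&&), not the template
    p = std::string("/a/b");   // still selects the string_type&& overload
  }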
llvm-svn: 292345
These two are independent: it's possible to use LLD without LTO,
and it's possible to do an LTO build without LLD.
Differential Revision: https://reviews.llvm.org/D28821
llvm-svn: 292343
This patch adds a test for an invalid output path passed to the -Map option,
though the test does not specifically verify that we use error()
instead of fatal() in writeMapFile.
llvm-svn: 292336
Ported from the amd-builtins branch.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <Matthew.Arsenault@amd.com>
CC: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 292335
Ported from the amd-builtins branch.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <Matthew.Arsenault@amd.com>
CC: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 292334
Fix this test so it actually exercises what it claims to test.
LoopSimplify was unifying the multiple exits in this test case, making
it never even test the multiple exit handling of LoopDeletion. Doh.
Now it works (thanks to a great idea from mkuper) and will fail if we
ever change something to make it stop working.
llvm-svn: 292331
Although this relocation type is not part of the x86-64 psABI, I intend to
use it internally as part of the ThinLTO implementation.
Differential Revision: https://reviews.llvm.org/D28841
llvm-svn: 292330
You can now define the register class of a virtual register on the
operand itself, avoiding the need to use a "registers:" block.
Example: "%0:gr64 = COPY %rax"
Differential Revision: https://reviews.llvm.org/D22398
llvm-svn: 292321
Summary: The parameter `input` to `subprocess.Popen.communicate(...)` must be an object of type `bytes`. This is strictly enforced in Python 3. This patch (1) allows `to_bytes` to be safely called redundantly, and (2) explicitly converts `input` within `executeCommand`. This allows for usages like `executeCommand(['clang++', '-'], input='int main() {}\n')`.
Reviewers: ddunbar, BinaryKhaos, modocache, dim, EricWF
Reviewed By: EricWF
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28736
llvm-svn: 292308
Summary:
This change also lets us use max.{s,u}16. There's a vague warning in a
test about this maybe being less efficient, but I could not come up with
a case where the resulting SASS (sm_35 or sm_60) was different with or
without max.{s,u}16. It's true that nvcc seems to emit only
max.{s,u}32, but even ptxas 7.0 seems to have no problem generating
efficient SASS from max.{s,u}16 (the casts up to i32 and back down to
i16 seem to be implicit and nops, happening via register aliasing).
In the absence of evidence, better to have fewer special cases, emit
more straightforward code, etc. In particular, if a new GPU has 16-bit
min/max instructions, we want to be able to use them.
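For illustration only (my sketch, not code from the patch): source-level
16-bit max idioms of the kind that can now be selected to max.{s,u}16 when
compiling for NVPTX, rather than being widened to 32 bits first.

  #include <cstdint>

  // Sketch: 16-bit signed and unsigned max written as compare-and-select,
  // the shape the backend can now match to max.s16 / max.u16.
  int16_t  smax16(int16_t a, int16_t b)    { return a > b ? a : b; }
  uint16_t umax16(uint16_t a, uint16_t b)  { return a > b ? a : b; }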
Reviewers: tra
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D28732
llvm-svn: 292304
Summary: Previously we lowered it literally, to shifts and xors.
Reviewers: tra
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D28722
llvm-svn: 292303
Summary:
Avoid an unnecessary conversion operation when using the result of
ctpop.i32 or ctpop.i16 as an i32, since in both cases the PTX instruction
we emit returns an i32.
(Previously if we used the value as an i32, we'd do an unnecessary
zext+trunc.)
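As an illustration (the builtin-to-intrinsic mapping is my assumption, not
part of the patch): __builtin_popcount lowers to ctpop.i32, so using its
result directly as a 32-bit value should need no conversions around the
underlying popc instruction.

  // Sketch: a 32-bit population count whose result is consumed as an i32;
  // with this change the backend should not wrap it in a zext+trunc pair.
  unsigned popcount32(unsigned x) {
    return __builtin_popcount(x);
  }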
Reviewers: tra
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D28721
llvm-svn: 292302
Summary:
* Disable "ctlz speculation", which inserts a branch on every ctlz(x) which
has defined behavior on x == 0 to check whether x is, in fact zero.
* Add DAG patterns that avoid re-truncating or re-expanding the result
of the 16- and 64-bit ctz instructions.
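A hedged source-level sketch (the guarded clz below is my illustration, not
code from the patch): a count-leading-zeros that is defined at zero is exactly
the kind of ctlz that the speculation would otherwise wrap in a
compare-and-branch.

  // Sketch: __builtin_clz(0) is undefined, so guarding it yields a ctlz that
  // is defined at x == 0. With the speculation disabled for NVPTX, this can
  // stay branch-free instead of being turned back into a zero check + branch.
  int leading_zeros32(unsigned x) {
    return x == 0 ? 32 : __builtin_clz(x);
  }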
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: https://reviews.llvm.org/D28719
llvm-svn: 292299
Summary: Partial unrolling should have a threshold separate from the one used for full unrolling.
Reviewers: efriedma, mzolotukhin
Reviewed By: efriedma, mzolotukhin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28831
llvm-svn: 292293
The patch solves the performance problem described in PR27827.
Register coalescing sometimes cannot remove a copy because of interference.
But if we can find a reverse copy in one of the predecessor blocks of the copy,
the copy is partially redundant, and we may remove it partially by moving
it to the predecessor block without the reverse copy.
Differential Revision: https://reviews.llvm.org/D28585
llvm-svn: 292292