Rework getLongestCommonPrefixLen() so that it doesn't access string null
terminators. The old version with std::mismatch would do this:
                   |
                   v
Strings[0] = ['a', nil]
Strings[1] = ['a', 'a', nil]
                    ^
                    |
This should silence a warning from the MSVC runtime (PR30515). As
before, I tested this out by preparing a coverage report for FileCheck.
Thanks to Yaron Keren for the report!
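To make the bounded approach concrete, here is a minimal sketch of a common-prefix computation that stops at the shorter string's length, so neither terminator is ever compared (illustrative only, not the exact llvm-cov code):
```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Compare only within min(A.size(), B.size()); the null terminators are
// never touched, unlike a std::mismatch over C strings of unequal length.
size_t getLongestCommonPrefixLen(const std::string &A, const std::string &B) {
  size_t Limit = std::min(A.size(), B.size());
  size_t I = 0;
  while (I < Limit && A[I] == B[I])
    ++I;
  return I;
}
```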
llvm-svn: 282422
This patch ensures that we actually scalarize instructions marked scalar after
vectorization. Previously, such instructions may have been vectorized instead.
Differential Revision: https://reviews.llvm.org/D23889
llvm-svn: 282418
Summary:
If a coroutine has no suspend points, remove the heap allocation and turn the coroutine into a normal function.
Also, if we detect a pattern where the coroutine resumes or destroys itself prior to the coro.suspend call, turn the suspend point into a simple jump to the resume or cleanup label. This pattern occurs when coroutines are used to propagate errors in functions that return expected<T>.
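As an illustration of the first case, here is a minimal modern-C++ sketch (the commit itself predates standard <coroutine>; the FireAndForget type is hypothetical) of a coroutine with no suspend points, which this change can lower to a plain function with no frame allocation:
```cpp
#include <coroutine>

// Hypothetical eager task type: every await point uses suspend_never.
struct FireAndForget {
  struct promise_type {
    FireAndForget get_return_object() { return {}; }
    std::suspend_never initial_suspend() noexcept { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() {}
  };
};

// No suspend point ever fires here, so the coroutine frame allocation
// can be elided and the call lowered like an ordinary function call.
FireAndForget doWork(int x) {
  (void)x; // ... work that never suspends ...
  co_return;
}
```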
Reviewers: majnemer
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D24408
llvm-svn: 282414
Don't match the UXTW extended reg forms of ADD/ADDS/SUB/SUBS if the
32-bit to 64-bit zero-extend can be done for free by taking advantage
of the 32-bit defining instruction zeroing the upper 32 bits of the X
register destination. This enables better instruction selection in a
few cases, such as:
  sub x0, xzr, x8
instead of:
  mov x8, xzr
  sub x0, x8, w9, uxtw

  madd x0, x1, x1, x8
instead of:
  mul x9, x1, x1
  add x0, x9, w8, uxtw

  cmp x2, x8
instead of:
  sub x8, x2, w8, uxtw
  cmp x8, #0

  add x0, x8, x1, lsl #3
instead of:
  lsl x9, x1, #3
  add x0, x9, w8, uxtw
Reviewers: t.p.northover, jmolloy
Subscribers: mcrosier, aemerson, llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D24747
llvm-svn: 282413
Many high-performance processors have a dedicated branch predictor for
indirect branches, commonly used with jump tables. As sophisticated as such
branch predictors are, they tend to have well-defined limits beyond which
their effectiveness is hampered or even nullified. One such limit is the
number of possible destinations for a given indirect branch that such
branch predictors can handle.
This patch introduces a limit, which a target may set, on the number of
destination addresses in a jump table.
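To make the mechanism concrete, here is a minimal sketch of the kind of decision this enables; the names (TargetInfo, MaxJumpTableSize, isSuitableForJumpTable) are illustrative stand-ins, not the exact LLVM interface:
```cpp
#include <cstdint>

// Hypothetical target description: 0 means "no limit".
struct TargetInfo {
  unsigned MaxJumpTableSize = 0;
};

// A switch cluster covering case values [Lo, Hi] needs Hi - Lo + 1
// table entries; reject it if the target caps jump table size so the
// indirect branch predictor's destination limit is not exceeded.
bool isSuitableForJumpTable(uint64_t Lo, uint64_t Hi,
                            const TargetInfo &TI) {
  uint64_t NumEntries = Hi - Lo + 1;
  if (TI.MaxJumpTableSize != 0 && NumEntries > TI.MaxJumpTableSize)
    return false;
  return true;
}
```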
Patch by: Evandro Menezes <e.menezes@samsung.com>, Aditya Kumar
<aditya.k7@samsung.com>, Sebastian Pop <s.pop@samsung.com>.
Differential revision: https://reviews.llvm.org/D21940
llvm-svn: 282412
This is a follow up to r282152.
More extensive testing on real apps revealed a subtle bug in r282152.
The revision made shadow mapping non-linear even within a single
user region. But there is lots of code in the runtime that processes
memory ranges and assumes that the mapping is linear. For example,
region memory access handling simply increments the shadow address
to advance to the next shadow cell group. Similarly, DontNeedShadowFor,
the Java memory mover, the search for heap memory block headers, etc.
make similar assumptions.
To trigger the bug, a user range would need to cross the 0x008000000000
boundary. This was observed for a module data section.
Make shadow mapping linear within a single user range again.
Add a startup CHECK for linearity.
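For illustration, here is a self-contained sketch of such a linearity check; the mapping, constants, and names below are simplified stand-ins for TSan's real platform-specific mapping:
```cpp
#include <cassert>
#include <cstdint>

using uptr = uintptr_t;

// Toy xor+scale mapping: it is linear within any user range that does
// not cross the 0x008000000000 boundary (the bit flipped by the xor).
constexpr uptr kAppMemXor = 0x008000000000ull;
constexpr uptr kShadowMul = 4;

uptr MemToShadow(uptr x) { return (x ^ kAppMemXor) * kShadowMul; }

// Startup check in the spirit of this commit: within one user range,
// the shadow must advance by a fixed step per application byte.
void CheckShadowLinearity(uptr beg, uptr end, uptr step = 8) {
  uptr sbeg = MemToShadow(beg);
  for (uptr p = beg; p < end; p += step)
    assert(MemToShadow(p) - sbeg == (p - beg) * kShadowMul);
}
```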
llvm-svn: 282405
Don't xor user address with kAppMemXor in meta mapping.
The only purpose of kAppMemXor is to raise shadow for ~0 user addresses,
so that they don't map to ~0 (which would cause overlap between
user memory and shadow).
For meta mapping we explicitly add kMetaShadowBeg offset,
so we don't need to additionally raise meta shadow.
llvm-svn: 282403
The index of the new insertelement instruction was evaluated in the
wrong way: it was treated as the index of the inserted value instead
of the index of the position where the value should be inserted.
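For context, a hedged sketch using LLVM's IRBuilder (the helper is hypothetical, not the vectorizer code): in CreateInsertElement the index operand names the destination lane, not a property of the inserted value, which is exactly the distinction this fix restores.
```cpp
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Illustrative helper: Lane is the position being written in Vec; it
// says nothing about where Val came from.
Value *insertAtLane(IRBuilder<> &B, Value *Vec, Value *Val, unsigned Lane) {
  return B.CreateInsertElement(Vec, Val, B.getInt32(Lane));
}
```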
llvm-svn: 282401
This patch fixes PR30366.
Function foldUDivShl() worked under the assumption that one of the values
passed to the function was always an instance of llvm::Instruction.
However, function visitUDivOperand() (the only user of foldUDivShl) was
clearly violating that precondition; internally, visitUDivOperand() uses pattern
matchers to check the operands of a udiv. Pattern matchers for binary operators
know how to handle both Instruction and ConstantExpr values.
This patch fixes the problem in foldUDivShl(). Now we use pattern matchers
instead of explicit casts to Instruction. The reduced test case from PR30366
has been added to test file InstCombine/udiv-simplify.ll.
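As a hedged sketch of the idea (the helper below is illustrative, not the exact foldUDivShl body), matching with PatternMatch succeeds for both Instructions and ConstantExprs, where a cast<Instruction> would fail:
```cpp
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Recognize "shl Base, Amt" whether Op is an Instruction or a
// ConstantExpr, without ever casting Op to Instruction.
bool matchShl(Value *Op, Value *&Base, Value *&Amt) {
  return match(Op, m_Shl(m_Value(Base), m_Value(Amt)));
}
```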
Differential Revision: https://reviews.llvm.org/D24565
llvm-svn: 282398
It's wrong to pass MsanReallocate a pointer that the MSan allocator doesn't own.
Use nullptr instead of ptr to prevent a possible (still unlikely) failure.
llvm-svn: 282390
After applying `clang-rename` to a vim buffer (using `clang-rename.py` as part
of the vim integration), the buffer gets reloaded using `bufdo`. This solution is
suboptimal, since syntax highlighting is turned off for performance reasons and
never turned back on after all changes to the source file have been applied.
A better solution is to use `checktime`. It is designed exactly for this
kind of task and doesn't have the syntax highlighting issue.
Patch by Kai Wolf!
Reviewers: omtcyfz
Differential Revision: https://reviews.llvm.org/D24791
llvm-svn: 282388
If a constant is unnamed_addr and is only used within one function, we can save
on the code size and runtime cost of an indirection by changing the global's storage
to inside the constant pool. For example, instead of:

  ldr r0, .CPI0
  bl printf
  bx lr
.CPI0: &format_string
format_string: .asciz "hello, world!\n"

We can emit:

  adr r0, .CPI0
  bl printf
  bx lr
.CPI0: .asciz "hello, world!\n"
This can cause significant code size savings when many small strings are used in one
function (4 bytes per string).
This recommit contains fixes for a nasty bug related to fast-isel fallback:
because fast-isel doesn't know about this optimization, if it runs and emits
references to a string that we inline (because fast-isel fell back to SDAG),
we will end up with both an inlined string and an out-of-line string, and we
won't emit the out-of-line string, causing backend failures.
It also contains fixes for emitting .text relocations, which made the sanitizer
bots unhappy.
llvm-svn: 282387
This patch extends clang-tidy's readability-redundant-smartptr-get to produce
warnings for previously unsupported cases:
```
std::unique_ptr<int> ptr;
if (ptr.get()) {}
if (ptr.get() == NULL) {}
if (ptr.get() != NULL) {}
```
This is intended to fix https://llvm.org/bugs/show_bug.cgi?id=25804, a bug
report opened by @Eugene.Zelenko.
However, there are still cases not detected by the check. They can be found in
the `void Negative()` function defined in
test/clang-tidy/readability-redundant-smartptr-get.cpp.
Reviewers: alexfh
Differential Revision: https://reviews.llvm.org/D24893
llvm-svn: 282386
Summary:
Replace a LEA instruction of the form 'lea (%esp), %ebx' with 'mov %esp, %ebx'.
MOV is preferable to LEA because there are usually more issue slots available to execute MOVs than LEAs. Recent processors also support zero-latency MOVs.
Fixes PR29022.
Reviewers: hfinkel, delena, igorb, myatsina, mkuper
Differential Revision: https://reviews.llvm.org/D24705
llvm-svn: 282385
This patch extends clang-tidy's readability-redundant-smartptr-get to produce
warnings for previously unsupported cases:
```
std::unique_ptr<int> ptr;
if (ptr.get()) {}
if (ptr.get() == NULL) {}
if (ptr.get() != NULL) {}
```
This is intended to fix https://llvm.org/bugs/show_bug.cgi?id=25804, a bug
report opened by @Eugene.Zelenko.
However, there are still cases not detected by the check. They can be found in
the `void Negative()` function defined in
test/clang-tidy/readability-redundant-smartptr-get.cpp.
llvm-svn: 282382
Avoid failing in the backend when the rewrite map does not exist. Instead, check
that the map exists in the frontend before handing it off to the backend. Add
the missing rewrite maps that the tests were referencing.
llvm-svn: 282379
Add a unittest covering analysis invalidation with
a function pass nested inside of a CGSCC pass manager.
This is very similar to the previous unittest but makes sure the
invalidation logic works across all the layers here.
llvm-svn: 282378
This reinstates r280447. Original commit log:
This wasn't really explicitly tested with a nice unittest before.
It seems good to have reasonably broken-out unittests for this kind of
functionality as I'm working on other invalidation features, to make sure
none of the existing ones regress.
This still has too much duplicated code; I plan to factor that out in
a subsequent commit to use common helpers for the repeated parts of this.
llvm-svn: 282377
In a previous change I collapsed two different caches into one. When
doing that I noticed that ScalarEvolution's move constructor was not
moving those caches.
To keep the previous change simple, I've moved that bugfix into this
separate change.
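As an illustration of the bug class (with hypothetical fields, not ScalarEvolution's actual members), a move constructor that omits a cache member leaves the moved-to object with a default-constructed, empty cache:
```cpp
#include <map>
#include <utility>

// Hypothetical stand-in for an analysis with a memoization cache.
struct Analysis {
  std::map<int, bool> PredicateCache;

  Analysis() = default;

  // The fix: explicitly move the cache. Before, the member was simply
  // missing from the init list, so the moved-to object silently started
  // over with a cold cache.
  Analysis(Analysis &&Other) noexcept
      : PredicateCache(std::move(Other.PredicateCache)) {}
};
```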
llvm-svn: 282376
Both `loopHasNoSideEffects` and `loopHasNoAbnormalExits` involve walking
the loop and maintaining similar sorts of caches. This commit changes
SCEV to compute both the predicates via a single walk, and maintain a
single cache instead of two.
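A minimal sketch of the single-cache shape described here (simplified names modeled on the change; Loop is just a placeholder type):
```cpp
#include <map>

struct Loop; // placeholder for llvm::Loop

// Both predicates are computed by one walk and memoized together.
struct LoopProperties {
  bool HasNoSideEffects;
  bool HasNoAbnormalExits;
};

std::map<const Loop *, LoopProperties> LoopPropertiesCache;

LoopProperties getLoopProperties(const Loop *L) {
  auto It = LoopPropertiesCache.find(L);
  if (It != LoopPropertiesCache.end())
    return It->second;

  LoopProperties LP = {/*HasNoSideEffects=*/true,
                       /*HasNoAbnormalExits=*/true};
  // ... single walk over the loop body, clearing either flag as side
  //     effects or abnormal exits are found ...
  return LoopPropertiesCache[L] = LP;
}
```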
llvm-svn: 282375
__attribute__((amdgpu_flat_work_group_size(<min>, <max>))) - request minimum and maximum flat work group size
__attribute__((amdgpu_waves_per_eu(<min>[, <max>]))) - request minimum and/or maximum waves per execution unit
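A usage sketch for these attributes in OpenCL kernel syntax (the kernel and the bounds chosen here are illustrative):
```c
// Bound the flat work group size to [64, 256] and request 2 to 4
// waves per execution unit for this kernel.
__attribute__((amdgpu_flat_work_group_size(64, 256)))
__attribute__((amdgpu_waves_per_eu(2, 4)))
__kernel void scale(__global float *out, float k) {
  size_t i = get_global_id(0);
  out[i] *= k;
}
```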
Differential Revision: https://reviews.llvm.org/D24513
llvm-svn: 282371