Summary:
This change adds a new AST matcher for block expressions.
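A minimal usage sketch, assuming the matcher is exposed as blockExpr() (the
binding name below is arbitrary):

  #include "clang/ASTMatchers/ASTMatchers.h"
  using namespace clang::ast_matchers;

  // Matches block literals such as: ^{ return 42; }
  StatementMatcher BlockLiteral = blockExpr().bind("block");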
Test Notes:
Ran the clang unit tests.
Reviewers: aaron.ballman
Reviewed By: aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D55546
llvm-svn: 349004
Statement memoization was removed in r348822 because it was noticed to cause
memory corruption. This was happening because a reference to an object
in a DenseMap was used after being invalidated by inserting a new key
into the map.
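A minimal sketch of the unsafe pattern (names hypothetical): holding a
reference into a DenseMap across an insertion is unsafe because the insertion
may grow and rehash the map.

  #include "llvm/ADT/DenseMap.h"

  void unsafePattern(llvm::DenseMap<int, int> &M) {
    int &Ref = M[1]; // reference into the map's current storage
    M[2] = 42;       // inserting a new key may reallocate the buckets...
    Ref = 7;         // ...leaving Ref dangling (caught by ASan)
  }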
This test case crashes reliably under ASan (i.e., when Clang is built with
-DLLVM_USE_SANITIZER="Address") on at least some machines before r348822
and doesn't crash after it.
llvm-svn: 349000
For anyone curious, the first test example illustrates a real code idiom produced by branching on the result of a three-way comparison.
llvm-svn: 348997
The rename_internal function used for Windows has a minor bug where the
filename length is passed as a character count instead of a byte count.
Windows internally ignores this field, but other tools that hook NT
APIs may rely on the documented behavior. MSDN documentation specifying
that the size should be in bytes:
https://docs.microsoft.com/en-us/windows/desktop/api/winbase/ns-winbase-_file_rename_info
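A rough sketch of the corrected call (helper name and error handling are
hypothetical); the key point is that FileNameLength is the character count
multiplied by sizeof(wchar_t):

  #include <windows.h>
  #include <cstring>
  #include <cwchar>
  #include <vector>

  bool renameByHandle(HANDLE H, const wchar_t *Dest) {
    size_t Chars = wcslen(Dest);
    size_t Bytes = Chars * sizeof(wchar_t); // byte count, not character count
    std::vector<char> Buf(sizeof(FILE_RENAME_INFO) + Bytes);
    auto *Info = reinterpret_cast<FILE_RENAME_INFO *>(Buf.data());
    Info->ReplaceIfExists = TRUE;
    Info->RootDirectory = nullptr;
    Info->FileNameLength = static_cast<DWORD>(Bytes); // documented as bytes
    std::memcpy(Info->FileName, Dest, Bytes);
    return SetFileInformationByHandle(H, FileRenameInfo, Info,
                                      static_cast<DWORD>(Buf.size())) != 0;
  }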
Patch by Ben Hillis.
Differential Revision: https://reviews.llvm.org/D55624
llvm-svn: 348995
Also, add tests making sure that vector and deque both catch the problem
when assertions are enabled. Otherwise, deque would segfault and vector
would never terminate.
llvm-svn: 348994
On Windows, we won't go into the `host_os != "win"` block, so `defines`
won't have been defined, and we'll run into an undefined identifier
error when we try to later append to it. Unconditionally define it at
the start and append to it everywhere else.
Differential Revision: https://reviews.llvm.org/D55617
llvm-svn: 348993
Summary:
In addition to knowing that an instruction has changed, it's also useful to
know when it's about to change. For example, an observer might print the
instruction so you can track the changes in a debug log, remove it from some
queue while it's being worked on, or change several instructions as a single
transaction and act on all the changes at once.
Added changingInstr() to all existing uses of changedInstr().
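A rough sketch (assuming the GISelChangeObserver interface these callbacks
belong to) of pairing the new notification with the existing one to keep a
before/after debug log:

  #include "llvm/CodeGen/GlobalISel/GISelChangeObserver.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/Support/raw_ostream.h"

  namespace {
  class LoggingObserver : public llvm::GISelChangeObserver {
  public:
    void changingInstr(llvm::MachineInstr &MI) override {
      llvm::errs() << "about to change: " << MI; // instruction still intact
    }
    void changedInstr(llvm::MachineInstr &MI) override {
      llvm::errs() << "changed: " << MI;         // inspect the updated instruction
    }
    void createdInstr(llvm::MachineInstr &MI) override {}
    void erasingInstr(llvm::MachineInstr &MI) override {}
  };
  } // namespace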
Reviewers: aditya_nandakumar
Reviewed By: aditya_nandakumar
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D55623
llvm-svn: 348992
The previous assertion was relatively easy to trigger, and likely will
be easy to trigger going forward. EmitDelegateCallArg is relatively
popular.
This cleanly diagnoses PR28299 while I work on a proper solution.
llvm-svn: 348991
When loops are deleted, we don't keep track of variables modified inside
the loops, so the debug info will contain the wrong value for these, e.g.:
int b() {
  int i;
  for (i = 0; i < 2; i++)
    ;
  patatino();
  return a;
}

-> 6    patatino();
   7    return a;
   8  }
   9  int main() { b(); }

(lldb) frame var i
(int) i = 0
Instead, we mark these values as unavailable by inserting an
@llvm.dbg.value with an undef operand, to make sure we don't end up
printing an incorrect value in the debugger. We could consider doing
something fancier, e.g. for constants, in the future.
PR39868.
rdar://problem/46418795
Differential Revision: https://reviews.llvm.org/D55299
llvm-svn: 348988
This fixes https://bugs.llvm.org/show_bug.cgi?id=39908.
The evaluateGEPOffsetExpression() function simplifies GEP offsets for
use in comparisons against zero, basically by converting X*Scale+Offset==0
to X+Offset/Scale==0 if Scale divides Offset. However, before this is done,
Offset is masked down to the pointer size. This results in incorrect
results for negative Offsets, because we basically end up dividing the
32-bit offset *zero*-extended to 64 bits (rather than sign-extended).
Fix this by explicitly sign extending the truncated value.
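A small standalone illustration (values hypothetical): dividing a negative
32-bit offset that was zero-extended to 64 bits produces a bogus quotient,
while the sign-extended value divides as intended.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int32_t Offset = -16;                          // a negative GEP offset
    uint64_t Zext = static_cast<uint32_t>(Offset); // 0x00000000FFFFFFF0
    int64_t Sext = Offset;                         // -16
    // With a scale of 8: the zero-extended value yields a huge positive
    // quotient, the sign-extended value yields the intended -2.
    std::printf("zext/8 = %llu, sext/8 = %lld\n",
                (unsigned long long)(Zext / 8), (long long)(Sext / 8));
    return 0;
  }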
Differential Revision: https://reviews.llvm.org/D55449
llvm-svn: 348987
Summary:
In both Elf{32,64}_Phdr, the field Elf{32,64}_Word p_type is uint32_t.
Also reorder the fields to be similar to Elf64_Phdr (which is different
from Elf32_Phdr but quite similar).
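For reference, the two program header layouts from the ELF specification
(not this patch's code): p_type is a 32-bit word in both, but the field
order differs.

  #include <cstdint>

  struct Elf32_Phdr_Sketch {
    uint32_t p_type;
    uint32_t p_offset;
    uint32_t p_vaddr;
    uint32_t p_paddr;
    uint32_t p_filesz;
    uint32_t p_memsz;
    uint32_t p_flags; // flags come after the sizes in the 32-bit layout
    uint32_t p_align;
  };

  struct Elf64_Phdr_Sketch {
    uint32_t p_type;
    uint32_t p_flags; // flags come right after p_type in the 64-bit layout
    uint64_t p_offset;
    uint64_t p_vaddr;
    uint64_t p_paddr;
    uint64_t p_filesz;
    uint64_t p_memsz;
    uint64_t p_align;
  };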
Reviewers: rupprecht, jhenderson, jakehehrlich, alexshap, espindola
Reviewed By: rupprecht
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D55618
llvm-svn: 348985
Summary:
The TLS_SLOT_TSAN slot is available starting in N, but its location (8)
is incompatible with the proposed solution for implementing ELF TLS on
Android (i.e. bump ARM/AArch64 alignment to reserve an 8-word TCB).
Instead, starting in Q, Bionic replaced TLS_SLOT_DLERROR(6) with
TLS_SLOT_SANITIZER(6). Switch compiler-rt to the new slot.
Reviewers: eugenis, srhines, enh
Reviewed By: eugenis
Subscribers: ruiu, srhines, kubamracek, javed.absar, kristof.beyls, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D55581
llvm-svn: 348984
Summary:
The change is needed to support ELF TLS in Android. See D55581 for the
same change in compiler-rt.
Reviewers: srhines, eugenis
Reviewed By: eugenis
Subscribers: srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D55592
llvm-svn: 348983
As mentioned in D55604, there are 2 bugs here:
1. The new pass manager is speculating wildly by default.
2. The old pass manager is not converting this to funnel shift.
llvm-svn: 348980
Summary:
Add a check that TLS_SLOT_TSAN / TLS_SLOT_SANITIZER, whichever
android_get_tls_slot is using, is not conflicting with
TLS_SLOT_DLERROR.
Reviewers: rprichard, vitalybuka
Subscribers: srhines, kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D55587
llvm-svn: 348979
__builtin_cpu_supports and __builtin_cpu_is use information in __cpu_model to decide CPU features. Before this change, __cpu_model was not declared as dso_local, so the generated code looked up its address in the GOT when reading __cpu_model. This made it impossible to use these functions in an ifunc resolver, because GOT entries have not been relocated at that point. This change makes __cpu_model dso_local.
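A minimal sketch of the scenario (function names hypothetical): an ifunc
resolver that calls __builtin_cpu_supports runs during relocation, before GOT
entries are filled in, so __cpu_model must be reachable without going through
the GOT.

  // Build for x86 Linux with Clang or GCC.
  extern "C" void fast_impl() { /* e.g. AVX2 code path */ }
  extern "C" void slow_impl() { /* portable fallback */ }

  extern "C" void (*resolve_dispatch())() {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? fast_impl : slow_impl;
  }

  // dispatch() is bound at load time by the resolver above.
  extern "C" void dispatch() __attribute__((ifunc("resolve_dispatch")));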
Differential Revision: https://reviews.llvm.org/D53850
llvm-svn: 348978
Summary:
Currently the Clang AST doesn't store information about how the callee of a CallExpr was found, specifically whether it was found using ADL.
However, this information is invaluable to tooling. Consider a tool which renames usages of a function. If the original CallExpr was formed using ADL, then the tooling may need to additionally qualify the replacement.
Without information about how the callee was found, the tooling is left scratching its head. Additionally, we want to be able to match ADL calls as quickly as possible, which means avoiding computing the answer on the fly.
This patch changes `CallExpr` to store whether its callee was found using ADL. It does not change the size of any AST nodes.
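A short illustration of why the bit matters for tooling (names hypothetical):

  namespace lib {
  struct Widget {};
  void frobnicate(Widget) {} // candidate visible only via ADL
  } // namespace lib

  void use(lib::Widget W) {
    frobnicate(W); // unqualified call, resolved through ADL
    // A tool renaming or moving lib::frobnicate may need to qualify this
    // call, which it can only know if the AST records that ADL was used.
  }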
Reviewers: fowles, rsmith, klimek, shafik
Reviewed By: rsmith
Subscribers: aaron.ballman, riccibruno, calabrese, titus, cfe-commits
Differential Revision: https://reviews.llvm.org/D55534
llvm-svn: 348977
Summary:
There's little of interest that can be done to an already-erased instruction.
You can't inspect it, write it to a debug log, etc. It ought to be a notification
that we're about to erase it. Rename the function to clarify the timing of the
event and reflect current usage.
Also fixed one case where we were trying to print an erased instruction.
Reviewers: aditya_nandakumar
Reviewed By: aditya_nandakumar
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D55611
llvm-svn: 348976
MULX has somewhat improved register allocation constraints compared to the legacy MUL instruction. Both output registers are encoded instead of fixed to EAX/EDX, but EDX is used as input. It also doesn't touch flags. Unfortunately, the encoding is longer.
Preferring it whenever BMI2 is enabled is probably not optimal. Choosing it should somehow be a function of register allocation constraints, like converting adds to three-address form. gcc and icc definitely don't pick MULX by default. Not sure what, if any, rules they have for using it.
Differential Revision: https://reviews.llvm.org/D55565
llvm-svn: 348975
A future patch may stop using MULX by default so use MIR to ensure we're always testing MULX.
Add the 32-bit case that we couldn't do in the 64-bit mode IR test due to it being promoted to a 64-bit mul.
llvm-svn: 348972
Updated the annotate-kernel-features pass to support the propagation of the uniform-work-group attribute from the kernel to the called functions. Once this pass is run, all kernels, even the ones which initially did not have the attribute, will be able to indicate whether or not they have a uniform work group size, depending on the value of the attribute.
Differential Revision: https://reviews.llvm.org/D50200
llvm-svn: 348971
Use the replacement execute-once threading support in LLVM for PPC64le. It
seems that GCC does not define `__ppc__`, so we would actually call out to
the C++ runtime there, which is not what the current code intended. Check both
`__ppc__` and `__PPC__`. This avoids the need for checking the endianness.
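A sketch of the macro check described above (the surrounding feature switch
is hypothetical, not the actual LLVM source):

  // GCC on PPC64le defines __PPC__ but not necessarily __ppc__, so test both.
  #if defined(__ppc__) || defined(__PPC__)
  #  define USE_NATIVE_EXECUTE_ONCE 1 // hypothetical feature switch
  #else
  #  define USE_NATIVE_EXECUTE_ONCE 0
  #endif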
Thanks to nemanjai for the hint about GCC's behaviour and the fact that the
reviewed condition could be simplified.
Original patch by Sarvesh Tamba!
llvm-svn: 348970
The __builtin_unpredictable implementation is confused by any implicit
casts, which happen in C++. This patch strips those off so that
if/switch statements now work with it in C++.
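A minimal C++ usage sketch (the condition is arbitrary) of the pattern this
change makes work; the implicit bool conversion no longer hides the hint:

  int classify(int X) {
    // Tell the optimizer the branch outcome has no predictable pattern.
    if (__builtin_unpredictable(X > 1000))
      return 1;
    return 0;
  }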
Change-Id: I73c3bf4f1775cd906703880944f4fcdc29fffb0a
llvm-svn: 348969
Doesn't handle varargs and other fun things, but it's a start. (It also
doesn't print these strictly as valid C++ when it's a pointer to function;
it'll print "void(int)*" instead of "void (*)(int)".)
llvm-svn: 348965
Continue to present HSA metadata as YAML in ASM and when output by tools
(e.g. llvm-readobj), but encode it as MessagePack in the code object.
Differential Revision: https://reviews.llvm.org/D48179
llvm-svn: 348963
This lays the foundation for dumping types not referenced by DW_AT_type
attributes (in the near-term, that'll be DW_AT_containing_type for a
DW_TAG_ptr_to_member_type; in the future, potentially dumping the
pretty-printed name next to the DW_TAG for the type, rather than only
when the type is referenced from elsewhere).
llvm-svn: 348961
I'm hoping we can just replace SETCC_CARRY with SBB. This is another step towards that.
I've explicitly used zero as the input to the setcc to avoid a false dependency that we've had with the SETCC_CARRY. I changed one of the patterns that used NEG to instead use an explicit compare with 0 on the LHS. We needed the zero anyway to avoid the false dependency. The negate would clobber its input register. By using a CMP we can avoid that which could be useful.
Differential Revision: https://reviews.llvm.org/D55414
llvm-svn: 348959
Indices for getelementptr can be signed so we should use
getMinSignedBits instead of getActiveBits here. The function later calls
getSExtValue to get the int64_t value, which also checks
getMinSignedBits.
This fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=11647.
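A small sketch of the distinction using llvm::APInt (values arbitrary): for
a negative index, getActiveBits() reports the full bit width, while
getMinSignedBits() reports the width needed as a signed value.

  #include "llvm/ADT/APInt.h"
  #include <cassert>

  void example() {
    llvm::APInt NegOne(/*numBits=*/64, (uint64_t)-1, /*isSigned=*/true);
    assert(NegOne.getActiveBits() == 64);   // looks huge when read as unsigned
    assert(NegOne.getMinSignedBits() == 1); // fits easily as a signed value
    assert(NegOne.getSExtValue() == -1);    // safe: getMinSignedBits() <= 64
  }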
Reviewers: mssimpso, efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D55536
llvm-svn: 348957
This patch introduces a generic function to determine whether a given vector type is known to be a splat value for the specified demanded elements, recursing up the DAG looking for BUILD_VECTOR or VECTOR_SHUFFLE splat patterns.
It also keeps track of the elements that are known to be UNDEF; it returns true if all the demanded elements are UNDEF (as this may be useful under some circumstances), so this needs to be handled by the caller.
A wrapper variant is also provided that doesn't take the DemandedElts or UndefElts arguments for cases where we just want to know if the SDValue is a splat or not (with/without UNDEFS).
I had hoped to completely remove the X86 local version of this function, but I'm seeing some regressions in shift/rotate codegen that will take a little longer to fix and I hope to get this in sooner so I can continue work on PR38243 which needs more capable splat detection.
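A rough shape of the wrapper pattern described above (names and signatures
are simplified assumptions, not the exact SelectionDAG API): the wrapper
demands every element and folds the undef mask into a yes/no answer.

  #include "llvm/ADT/APInt.h"
  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using llvm::APInt;
  using llvm::SDValue;

  // Assumed, simplified declaration of the demanded-elements query.
  bool isSplatValueDemanded(SDValue V, const APInt &DemandedElts,
                            APInt &UndefElts);

  bool isSplatValueSimple(SDValue V, bool AllowUndefs = false) {
    unsigned NumElts = V.getValueType().getVectorNumElements();
    APInt UndefElts;
    APInt DemandedElts = APInt::getAllOnesValue(NumElts);
    return isSplatValueDemanded(V, DemandedElts, UndefElts) &&
           (AllowUndefs || UndefElts.isNullValue());
  }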
Differential Revision: https://reviews.llvm.org/D55426
llvm-svn: 348953