LLVM has changed the semantics of dbg.declare for describing function
arguments. After this patch a dbg.declare always takes the *address*
of a variable as the first argument, even if the argument is not an
alloca.
https://bugs.llvm.org/show_bug.cgi?id=32382
rdar://problem/31205000
llvm-svn: 300523
The DWARF specification knows 3 kinds of non-empty simple location
descriptions:
1. Register location descriptions
- describe a variable in a register
- consist of only a DW_OP_reg
2. Memory location descriptions
- describe the address of a variable
3. Implicit location descriptions
- describe the value of a variable
- end with DW_OP_stack_value & friends
The existing DwarfExpression code is pretty much ignorant of these
restrictions. This used to not matter because we only emitted very
short expressions that we happened to get right by accident. This
patch makes DwarfExpression aware of the rules defined by the DWARF
standard and now chooses the right kind of location description for
each expression being emitted.
This would have been an NFC commit (for the existing testsuite) if not
for the way that clang describes captured block variables. Based on
how the previous code in LLVM emitted locations, DW_OP_deref
operations that should have come at the end of the expression are put
at its beginning. Fixing this means changing the semantics of
DIExpression, so this patch bumps the version number of DIExpression
and implements a bitcode upgrade.
There are two major changes in this patch:

1. I had to fix the semantics of dbg.declare for describing function
arguments. After this patch a dbg.declare always takes the *address*
of a variable as the first argument, even if the argument is not an
alloca. (A sketch of the new convention follows the examples below.)
2. When lowering a DBG_VALUE, the decision of whether to emit a register
location description or a memory location description depends on the
MachineLocation: register machine locations may get promoted to memory
locations based on their DIExpression. (Future) optimization passes
that want to salvage implicit debug locations for variables may do so
by appending a DW_OP_stack_value. For example:
  DBG_VALUE, [RBP-8]                        --> DW_OP_fbreg -8
  DBG_VALUE, RAX                            --> DW_OP_reg0 +0
  DBG_VALUE, RAX, DIExpression(DW_OP_deref) --> DW_OP_breg0 +0
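A sketch of the new dbg.declare convention from change 1 (the helper is
hypothetical and only illustrates the rule; it is not code from this
patch):

  #include "llvm/IR/DIBuilder.h"
  #include "llvm/IR/Instructions.h"

  // Under the new semantics the first operand of a dbg.declare is always
  // the *address* of the variable, so the declare is attached to the
  // variable's storage (here an alloca), with no leading DW_OP_deref in
  // the DIExpression.
  static void declareLocal(llvm::DIBuilder &DIB, llvm::AllocaInst *Storage,
                           llvm::DILocalVariable *Var, llvm::DILocation *Loc,
                           llvm::Instruction *InsertBefore) {
    DIB.insertDeclare(Storage, Var, DIB.createExpression(), Loc, InsertBefore);
  }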
All testcases that were modified were regenerated from clang. I also
added source-based testcases for each of these to the debuginfo-tests
repository over the last week to make sure that no synchronized bugs
slip in. The debuginfo-tests compile from source and run the debugger.
https://bugs.llvm.org/show_bug.cgi?id=32382
<rdar://problem/31205000>
Differential Revision: https://reviews.llvm.org/D31439
llvm-svn: 300522
These tests were unconditionally asserting that the hashes of optional and unique_ptr may throw, but MSVC++ implements a conditional noexcept that forwards that of the underlying hash function. As a result we were failing these tests, but nothing forbids strengthening noexcept in that way.
Changed the ASSERT_NOT_NOEXCEPT asserts to use types which themselves have non-noexcept hash functions.
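A reduced sketch of that change (the key type is made up for
illustration): with an underlying hash that may itself throw, the
wrapper's hash cannot be noexcept under MSVC++'s conditional forwarding
either, so the assertion holds there too.

  #include <cstddef>
  #include <optional>

  struct ThrowingHashKey {};

  // Deliberately not noexcept.
  template <> struct std::hash<ThrowingHashKey> {
    std::size_t operator()(const ThrowingHashKey &) const { return 0; }
  };

  // Passes both on implementations that never mark this hash noexcept and
  // on MSVC++, whose conditional noexcept forwards 'false' from the
  // underlying hash.
  static_assert(!noexcept(std::hash<std::optional<ThrowingHashKey>>()(
                    std::optional<ThrowingHashKey>())),
                "hashing optional<ThrowingHashKey> may throw");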
llvm-svn: 300516
Previously, if an escaped newline was followed by a newline or a nul, we'd lex
the escaped newline as a bogus space character. This led to a bunch of
different broken corner cases:
For the pattern "\\\n\0#", we would then have a (horizontal) space whose
spelling ends in a newline, and would decide that the '#' is at the start of a
line, and incorrectly start preprocessing a directive in the middle of a
logical source line. If we were already in the middle of a directive, this
would result in our attempting to process multiple directives at the same time!
This resulted in crashes, asserts, and hangs on invalid input, as discovered by
fuzz-testing.
For the pattern "\\\n" at EOF (with an implicit following nul byte), we would
produce a bogus trailing space character with spelling "\\\n". This was mostly
harmless, but would lead to clang-format getting confused and misformatting in
rare cases. We now produce a trailing EOF token with spelling "\\\n",
consistent with our handling for other similar cases -- an escaped newline is
always part of the token containing the next character, if any.
For the pattern "\\\n\n", this was somewhat more benign, but would produce an
extraneous whitespace token to clients who care about preserving whitespace.
However, it turns out that our lexing for line comments was relying on this bug
due to an off-by-one error in its computation of the end of the comment, on the
slow path where the comment might contain escaped newlines.
llvm-svn: 300515
Instead of storing an UncommonIndex on the Symbol, use a flag bit to store
whether the Symbol has an Uncommon. This shrinks Chromium's .bc files (after
D32061) by about 1%.
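A hedged sketch of the encoding idea (field and bit names hypothetical;
this is not the actual symbol table layout):

  #include <cstdint>

  struct Symbol {
    enum : uint32_t { FB_HasUncommon = 1u << 5 }; // hypothetical flag bit
    uint32_t Flags = 0;
    // One bit answers "does this Symbol have an Uncommon?", so no
    // per-Symbol index field is needed; Uncommons go in a side table.
    bool hasUncommon() const { return Flags & FB_HasUncommon; }
  };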
Differential Revision: https://reviews.llvm.org/D32070
llvm-svn: 300514
The IR builder can constant-fold null checks if the pointer operand
points to a constant. If the "is-non-null" check is folded away to
"true", don't emit the null check + branch.
Testing: check-clang, check-ubsan.
This slightly reduces the amount of null checks we emit when compiling
X86ISelLowering.cpp. Here are the numbers from patched/unpatched clangs
based on r300371.
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 25251 |
| patched, -O0 | 23925 | (-5.3%)
-------------------------------------
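A hedged sketch of the logic (simplified; not the actual clang CodeGen
code): CreateIsNotNull constant-folds when its operand is a constant,
and a folded 'true' means the runtime check and branch would be dead.

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"

  // Returns false when the "is-non-null" check folds away to true, so the
  // caller can skip emitting the null check + branch entirely.
  static bool needsNullCheck(llvm::IRBuilder<> &Builder, llvm::Value *Ptr) {
    llvm::Value *IsNonNull = Builder.CreateIsNotNull(Ptr);
    if (auto *C = llvm::dyn_cast<llvm::ConstantInt>(IsNonNull))
      return !C->isOne();
    return true; // not foldable: emit the check as before
  }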
llvm-svn: 300509
Pointers to the start of an alloca are non-null, so we don't need to
emit runtime null checks for them.
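A hedged sketch of the idea (helper name hypothetical): the address of
an alloca can never be null, so a runtime check on it is provably
redundant.

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Value.h"

  // True if Ptr is, modulo pointer casts, the start of an alloca; such a
  // pointer is always non-null, so no runtime null check is required.
  static bool isStartOfAlloca(const llvm::Value *Ptr) {
    return llvm::isa<llvm::AllocaInst>(Ptr->stripPointerCasts());
  }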
Testing: check-clang, check-ubsan.
This significantly reduces the amount of null checks we emit when
compiling X86ISelLowering.cpp. Here are the numbers from patched /
unpatched clangs based on r300371.
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 45439 |
| patched, -O0 | 25251 | (-44.4%)
-------------------------------------
llvm-svn: 300508
Summary: If a suffix is added to a function name (e.g. a module hash added by ThinLTO), we will not be able to find a match in the profile, as the suffix does not exist in the profile. This patch builds a map from function name to Function *. The map includes an entry for the stripped function name so that inlineHotFunctions can find the corresponding function to promote/inline.
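A hedged sketch of the map construction (simplified; the helper name is
hypothetical):

  #include "llvm/ADT/StringMap.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"

  static void buildSymbolMap(llvm::Module &M,
                             llvm::StringMap<llvm::Function *> &SymbolMap) {
    for (llvm::Function &F : M) {
      llvm::StringRef Name = F.getName();
      SymbolMap[Name] = &F;
      // ThinLTO may append ".<module hash>"; also register the stripped
      // name so profile entries without the suffix still resolve.
      llvm::StringRef Base = Name.split('.').first;
      if (Base != Name)
        SymbolMap[Base] = &F;
    }
  }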
Reviewers: davidxl, dnovillo, tejohnson
Reviewed By: davidxl
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D31952
llvm-svn: 300507
The use list is a linked list, so getNumUses requires a linear scan through the whole list. hasNUses stops scanning at N and checks whether that is the end of the list.
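For example (illustrative helper, not from this patch):

  #include "llvm/IR/Value.h"

  static bool hasExactlyTwoUses(const llvm::Value *V) {
    return V->hasNUses(2);          // walks at most three links, then stops
    // return V->getNumUses() == 2; // same answer, but walks the whole list
  }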
llvm-svn: 300505
Summary:
Certain implicitly generated coroutine statements, such as the calls to `return_value()`, `return_void()`, or `get_return_object_on_allocation_failure()`, cannot be built until the promise type is no longer dependent. This means they are not built until after the coroutine body statement has been transformed.
This patch fixes an issue where these statements would never be built for coroutine templates.
It also fixes a small issue where diagnostics about `get_return_object_on_allocation_failure()` were incorrectly suppressed.
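A hedged illustration (the task type below is a minimal made-up
coroutine type for a coroutines-TS compiler, not code from the patch):
in the template, the promise type depends on T, so the implicit
promise.return_value(t) call for the co_return cannot be built until
the body is transformed at instantiation time.

  #include <experimental/coroutine>

  template <class T>
  struct task {
    struct promise_type {
      T value;
      task get_return_object() { return {}; }
      std::experimental::suspend_never initial_suspend() { return {}; }
      std::experimental::suspend_never final_suspend() { return {}; }
      void return_value(T v) { value = v; } // signature depends on T
      void unhandled_exception() {}
    };
  };

  template <class T>
  task<T> make_value(T t) {
    co_return t; // implicitly builds promise.return_value(t)
  }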
Reviewers: rsmith, majnemer, GorNishanov, aaron.ballman
Reviewed By: GorNishanov
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D31487
llvm-svn: 300504
This merges the two different multiword shift right implementations into a single version located in tcShiftRight. lshrInPlace now calls tcShiftRight for the multiword case.
I retained the memmove fast path from lshrInPlace and used a memset for the zeroing. The for loop is basically tcShiftRight's implementation with the zeroing and the intra-shift of 0 removed.
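A hedged sketch of the shared shape (simplified; not the actual APInt
code): the whole-word part of the shift is a memmove of the surviving
words plus a memset of the vacated high words, and the remaining bit
shift pulls bits across adjacent words.

  #include <cstdint>
  #include <cstring>

  static void shiftRightWords(uint64_t *Words, unsigned NumWords,
                              unsigned Shift) {
    unsigned WordShift = Shift / 64, BitShift = Shift % 64;
    if (WordShift >= NumWords) {
      std::memset(Words, 0, NumWords * sizeof(uint64_t));
      return;
    }
    if (WordShift) {
      std::memmove(Words, Words + WordShift,
                   (NumWords - WordShift) * sizeof(uint64_t));
      std::memset(Words + NumWords - WordShift, 0,
                  WordShift * sizeof(uint64_t));
    }
    if (BitShift)
      for (unsigned I = 0, E = NumWords - WordShift; I != E; ++I) {
        Words[I] >>= BitShift; // then pull low bits of the next word in
        if (I + 1 != E)
          Words[I] |= Words[I + 1] << (64 - BitShift);
      }
  }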
Differential Revision: https://reviews.llvm.org/D32114
llvm-svn: 300503
Summary:
A refactoring changed paramHasAttr(1 + i) to paramHasAttr(0); fix that
to paramHasAttr(i).
Add more tests to WebAssemblyOptimizeReturned that catch that
regression.
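For reference, a hedged illustration of the new indexing convention (the
helper is hypothetical): paramHasAttr takes a 0-based argument index,
whereas the old attribute index was 1-based with 0 meaning the return
value.

  #include "llvm/IR/CallSite.h"

  static bool argIsReturned(llvm::CallSite CS, unsigned i) {
    // 0-based argument index; the pre-refactoring call was
    // paramHasAttr(1 + i, ...).
    return CS.paramHasAttr(i, llvm::Attribute::Returned);
  }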
Reviewers: dschuff
Subscribers: jfb, sbc100, llvm-commits
Differential Revision: https://reviews.llvm.org/D32136
llvm-svn: 300502
It sounds like MSVC is adding support for two-phase name lookup in a
future version, enabled by this flag (see bug).
Differential Revision: https://reviews.llvm.org/D32138
llvm-svn: 300501
Summary:
This patch adds a very simple linker script to version the lib's symbols,
to try to avoid crashes if an application loads two different
LLVM versions (as long as they do not share data between them).
Note that we deliberately *don't* make LLVM_5.0 depend on LLVM_4.0:
they're incompatible and the whole point of this patch is
to tell the linker that.
Avoid unexpected crashes when two LLVM versions are used in the same process.
Author: Rebecca N. Palmer <rebecca_palmer@zoho.com>
Author: Lisandro Damían Nicanor Pérez Meyer <lisandro@debian.org>
Author: Sylvestre Ledru <sylvestre@debian.org>
Bug-Debian: https://bugs.debian.org/848368
Reviewers: beanz, rnk
Reviewed By: rnk
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D31524
llvm-svn: 300496
`cast<Instruction>` is not guaranteed to succeed. Change the
code so that we create a new constant and use it in the newly
created instruction, as is done in other places in InstCombine.
OK'ed by Sanjay/Craig. Fixes PR32686.
llvm-svn: 300495
Fix the exponential behavior of getZeroExtendExpr and getSignExtendExpr.

The patch fixes PR32043. getZeroExtendExpr and getSignExtendExpr may call
themselves recursively more than once, which is potentially 2^N complexity
behavior. The exponential behavior was not commonly exposed before because
of the existing global cache mechanism (UniqueSCEVs) and the early returns
taken when the FlagNSW or FlagNUW flags are seen. However, there are still
cases that expose the exponential behavior, like the one in PR32043, so we
add a local cache to getZeroExtendExpr and getSignExtendExpr. If the input
of the functions -- a (SCEV, type) pair -- has been seen before, the
extended expression can be found directly in the local cache.
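A hedged sketch of the caching shape (a toy doubly-recursive function,
not the SCEV code): two recursive self-calls per level are exponential
until results are memoized by input.

  #include "llvm/ADT/DenseMap.h"
  #include <cstdint>

  static uint64_t doublyRecursive(unsigned N,
                                  llvm::DenseMap<unsigned, uint64_t> &Cache) {
    auto It = Cache.find(N);
    if (It != Cache.end())
      return It->second; // input seen before: answer from the cache
    uint64_t R = N < 2 ? N
                       : doublyRecursive(N - 1, Cache) +
                         doublyRecursive(N - 2, Cache);
    Cache[N] = R;
    return R;
  }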
Differential Revision: https://reviews.llvm.org/D30350
llvm-svn: 300494
Summary:
On Darwin, we need to track thread and tid as separate values.
This patch splits out the implementation of the suspended threads list
to be OS-specific.
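A hedged sketch of the split (names hypothetical; not the actual
sanitizer_common declarations):

  #include <cstdint>

  // Common interface over the suspended-threads list; each OS supplies
  // its own storage. The Darwin version can keep a mach thread port
  // alongside each tid, while other OSes only need the tid.
  class SuspendedThreadsList {
  public:
    virtual ~SuspendedThreadsList() = default;
    virtual uint64_t GetThreadID(unsigned Index) const = 0;
    virtual unsigned ThreadCount() const = 0;
  };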
Reviewers: glider, kubamracek, kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31474
llvm-svn: 300491
Avoid looping through the program to determine register counts.
This avoids needing to look at regmask operands.
Also fixes some counting errors with flat_scr when there
are no stack objects.
llvm-svn: 300482
While the incoming stack for a kernel is 256-byte aligned,
this refers to the base address of the entire wave. This isn't
useful information for most of codegen. Fixes unnecessarily
aligning stack objects in callees.
llvm-svn: 300481
The splitIndirectCriticalEdges function generates an invalid CFG when the
'Target' basic block loops back to itself. When this occurs, the code that
updates the predecessor terminator needs to update the terminator in the split
basic block.
This occurs when there is an edge from block D back to D. Since D is split in
to D0 and D1, the code needs to update the terminator in D1. But D1 is not in
the OtherPreds vector, so it was not getting updated.
Differential Revision: https://reviews.llvm.org/D32126
llvm-svn: 300480
This was added to work around a bug in MSVC 2013's implementation of stable_sort. That bug has been fixed as of MSVC 2015 so we shouldn't need this anymore.
Technically the current implementation has undefined behavior, because we only protect the deletion of the pVal array with the self-move check. There is still a memcpy of that.VAL to VAL that isn't protected. In the case of self-move those are the same memory, and memcpy is undefined when src and dst overlap.
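A hedged illustration of the hazard (simplified; not the APInt code):
the self-move check has to cover the memcpy as well, because memcpy
with identical source and destination is undefined.

  #include <cstring>

  struct Buf {
    unsigned Words[4];
    Buf &operator=(Buf &&That) {
      if (this == &That)
        return *this; // self-move: skip the otherwise-overlapping memcpy
      std::memcpy(Words, That.Words, sizeof(Words));
      return *this;
    }
  };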
This reduces the size of the opt binary on my local x86-64 build by about 4k.
Differential Revision: https://reviews.llvm.org/D32116
llvm-svn: 300477
Currently we use getTypeSizeInBits, which contains a switch statement to dispatch based on what the Type is. We know we always have a pointer type here, but the compiler isn't able to figure that out and remove the switch.
This patch changes it to handle the pointer type directly by calling getPointerSizeInBits without going through a switch.
getPointerTypeSizeInBits is called pretty often, particularly by getOrEnforceKnownAlignment which is used by InstCombine. This should speed that up a little bit.
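A hedged sketch of the shortcut (reduced to the scalar-pointer case):
when the type is already known to be a pointer, the width for its
address space can be queried directly rather than dispatching through
getTypeSizeInBits.

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"

  static unsigned pointerWidthInBits(const llvm::DataLayout &DL,
                                     llvm::Type *PtrTy) {
    // No switch over all Type kinds; asserts that PtrTy is a pointer type.
    return DL.getPointerSizeInBits(PtrTy->getPointerAddressSpace());
  }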
Differential Revision: https://reviews.llvm.org/D31841
llvm-svn: 300475
It's basically a terrible idea anyway but objc_msgSend gets emitted like that.
We can decide on a better way to deal with it in the unlikely event that anyone
actually uses it.
llvm-svn: 300474