Adds the RV32I and RV64I assembler pseudo instructions that can be
mapped to a single canonical instruction. The missing pseudo
instructions (e.g., call, tail, ...) are marked as TODO; other
prerequisites, such as PCREL_LO, have to be implemented first.
Currently, alias emission is disabled by default to keep the patch
minimal. Alias emission will be enabled by default in a subsequent
patch, which will also update all affected tests. Note that this patch
should actually break the floating-point MC tests; however, the
FileCheck configuration used is not tight enough to detect the
breakage.
Differential Revision: https://reviews.llvm.org/D40902
Patch by Mario Werner.
llvm-svn: 320487
Summary:
* The "Symbol" class represents a C++ symbol in the codebase and contains all
the information about a C++ symbol that clangd needs (see the sketch after
this summary). clangd will use it in its AST/dynamic index and in the
global/static index (for code completion and code navigation).
* The SymbolCollector (another IndexAction) will be used to re-collect the
symbols when a source file changes (for the ASTIndex), or to generate
all C++ symbols for the whole project.
In the long term (when index-while-building is ready), clangd should share
the same "Symbol" structure and IndexAction with index-while-building, but
for now we want to have some of this working in clangd.
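A rough sketch of the kind of information such a Symbol might carry (the
field names below are illustrative assumptions, not necessarily the exact
fields in the patch):

  struct Symbol {
    SymbolID ID;                         // stable unique ID (e.g. derived from the USR)
    std::string QualifiedName;           // e.g. "clang::clangd::SymbolCollector"
    index::SymbolInfo SymInfo;           // symbol kind and language
    SymbolLocation CanonicalDeclaration; // jump target for code navigation
  };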
Reviewers: ioeric, sammccall, ilya-biryukov, malaperle
Reviewed By: sammccall
Subscribers: malaperle, klimek, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D40897
llvm-svn: 320486
If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
&V2)))), bitcast)` but the load is also used by other instructions,
InstCombiner ends up looping. This patch adds an additional check that all
users of the load instruction are stores, and only then replaces all uses of
the load instruction with a new one of the new type.
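A hypothetical C++ reduction of source code that produces this pattern (names
are illustrative; the integer punning is what introduces the bitcasts):

  void copyMinBits(float V1, float V2, float *Out) {
    float *Sel = (V1 < V2) ? &V1 : &V2;       // select(cmp(V1, V2), &V1, &V2)
    int Bits = *reinterpret_cast<int *>(Sel); // load(bitcast(...))
    *reinterpret_cast<int *>(Out) = Bits;     // store(..., bitcast)
  }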
Reviewers: RKSimon, spatel, majnemer
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D41072
llvm-svn: 320483
Recognize constant arrays with the following values:
0x0, 0x1, 0x3, 0x7, 0xF, 0x1F, ..., 2^(size - 1) - 1
where //size// is the number of elements in the array.
The result of a load with index //idx// from this array is equivalent to the
result of (0xFFFFFFFF >> (sub 32, idx)) (assuming an array of 32-bit
integers). The result of an 'AND' of such a loaded value with another input
is exactly equivalent to the behavior of the X86 BZHI instruction.
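A minimal C++ illustration of the equivalence (the table and function are
hypothetical examples, using an 8-entry array):

  #include <cstdint>

  // MaskTable[i] == 2^i - 1, i.e. the low i bits set.
  static const uint32_t MaskTable[8] = {0x0, 0x1, 0x3, 0x7,
                                        0xF, 0x1F, 0x3F, 0x7F};

  uint32_t maskedLoad(uint32_t X, unsigned Idx) {
    // For Idx >= 1 this equals X & (0xFFFFFFFFu >> (32 - Idx)), which is
    // exactly BZHI: zero the bits of X from position Idx upwards.
    return X & MaskTable[Idx & 7];
  }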
See test cases in the LIT test for better understanding.
Differential Revision: https://reviews.llvm.org/D34141
llvm-svn: 320481
Summary:
Currently, in InstCombineLoadStoreAlloca, we have simplification
rules for the following cases:
1. load off a null
2. load off a GEP with null base
3. store to a null
This patch adds support for the fourth case, which is a store into a
GEP with a null base. Since this is UB as well (and directly analogous to
the load off a GEP with a null base), we can substitute the stored value
with undef in instcombine, so that SimplifyCFG can optimize this code
into unreachable code.
Note: right now, SimplifyCFG hasn't been taught to optimize
this to unreachable and add an llvm.trap (this is already done for
the three cases above).
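A hypothetical C++ example of the fourth case (names are illustrative):

  struct S { int A; int B; };

  void storeToNullGEP() {
    S *P = nullptr;
    // &P->B is a GEP with a null base; storing through it is UB, so the
    // stored value can be replaced with undef.
    P->B = 42;
  }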
Reviewers: majnemer, hfinkel, sanjoy, davide
Reviewed by: sanjoy, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D41026
llvm-svn: 320480
This patch improves detection of ObjC header files.
Right now many ObjC headers, especially short ones, are categorized as C/C++.
The filtering approach still isn't ideal; most likely it should be token-based.
Contributed by jolesiak!
llvm-svn: 320479
Moving the SHF_LINK_ORDER processing out of OutputSection::finalize()
means that we no longer need to copy all InputSections as we now only need
the first one.
Differential Revision: https://reviews.llvm.org/D40966
llvm-svn: 320478
By moving this step before thunk creation and other processing that depends
on the size of sections, we permit removal of duplicates in the .ARM.exidx
section.
Differential Revision: https://reviews.llvm.org/D40964
llvm-svn: 320477
This flag was missing, but it wasn't an issue, as nothing depended on it
for these asm-parser-only instructions. Now that LLDB support is slowly
landing, it is important to get this right.
Committing on behalf of Leonardo Bianconi.
Differential revision: https://reviews.llvm.org/D40846
llvm-svn: 320475
The last of the three patches that https://reviews.llvm.org/D40348 was
broken up into.
Canonicalize the materialization of constants so that they are more likely
to be CSE'd regardless of the bit-width of the use. If a constant can be
materialized using PPC::LI, always materialize it the same way.
For example:
li 4, -1
li 4, 255
li 4, 65535
are equivalent if the uses only use the low byte, since all three set the low
byte of the register to 0xFF. Canonicalize them to the first form.
Differential Revision: https://reviews.llvm.org/D40348
llvm-svn: 320473
The size of an OutputSection is calculated early, to aid handling of compressed
debug sections. However, subsequent to this point, unused synthetic sections are
removed. In the event that an OutputSection, from which such an InputSection is
removed, is still required (e.g. because it has a symbol assignment), and no longer
has any InputSections, dot assignments, or BYTE()-family directives, the size
member is never updated when processing the commands. If the removed InputSection
had a non-zero size (such as a .got.plt section), the section ends up with the
wrong size in the output.
The fix is to reset the OutputSection size prior to processing the linker script
commands relating to that OutputSection. This ensures that the size is correct even
in the above situation.
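A simplified sketch of the fix (identifier names are assumptions, not lld's
actual code):

  // Reset the stale size before processing this OutputSection's linker
  // script commands, so the size of a removed InputSection cannot survive.
  Sec->Size = 0;
  for (BaseCommand *Cmd : Sec->Commands)
    processSectionCommand(Sec, Cmd); // dot assignments, BYTE(), sections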
Additionally, to reduce the risk of developers misusing OutputSection Size and
InputSection OutSecOff, they are set to, respectively, simply the number of
InputSections in an OutputSection and the corresponding index. We cannot
completely stop using them, because SHF_LINK_ORDER sections require them.
Compressed debug sections also require the full size. This is now calculated in
maybeCompress for these kinds of sections.
Reviewers: ruiu, rafael
Differential Revision: https://reviews.llvm.org/D38361
llvm-svn: 320472
[X86] Use regular expressions more aggressively to reduce the number of scheduler entries needed for FMA3 instructions.
When the scheduler tables are generated by tablegen, the instructions are divided up into groups based on their default scheduling information and how they are referenced by InstRW entries for each processor. Any set of instructions matched by a specific InstRW line is guaranteed not to share a group with any other instructions. So in general, the more InstRW class definitions are created, the more groups we end up with in the generated files. This is particularly pronounced when many of the InstRW lines each match only a single instruction, which is true of a large number of the Intel scheduler models.
This change alone reduces the number of instruction groups from ~6000 to ~5500, and there's lots more we could do.
llvm-svn: 320470
This patch removes the hard-coded check for DWARFv2 line tables. Now
dsymutil accepts line tables for DWARF versions 2 to 5 (inclusive).
Differential revision: https://reviews.llvm.org/D41084
rdar://35968319
llvm-svn: 320469
Summary:
It will be used to pass around things like Logger and Tracer throughout
clangd classes.
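A minimal sketch of the idea with an assumed shape (the actual class
introduced by the patch may be organized quite differently):

  // Bundles cross-cutting services so they need not be threaded through
  // every function signature individually.
  struct Context {
    Logger *Log = nullptr;   // assumed member names
    Tracer *Trace = nullptr;
  };

  void handleRequest(Context &Ctx) {
    if (Ctx.Log)
      Ctx.Log->log("handling request"); // hypothetical Logger::log()
  }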
Reviewers: sammccall, ioeric, hokein, bkramer
Reviewed By: sammccall
Subscribers: klimek, bkramer, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D40485
llvm-svn: 320468
Introduces the AvailableValueSet alias for DenseSet<const Value *> for
better readability.
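Concretely, the alias is:

  using AvailableValueSet = DenseSet<const Value *>;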
Patch Author: Daniil Suchkov
Reviewers: mkazantsev, anna, apilipenko
Reviewed By: anna
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D41002
llvm-svn: 320465
VecValuesToIgnore holds values that will not appear in the vectorized loop.
We should therefore ignore their cost when VF > 1.
Differential Revision: https://reviews.llvm.org/D40883
llvm-svn: 320463
When the scheduler tables are generated by tablegen, the instructions are divided up into groups based on their default scheduling information and how they are referenced by InstRW entries for each processor. Any set of instructions matched by a specific InstRW line is guaranteed not to share a group with any other instructions. So in general, the more InstRW class definitions are created, the more groups we end up with in the generated files. This is particularly pronounced when many of the InstRW lines each match only a single instruction, which is true of a large number of the Intel scheduler models.
This change alone reduces the number of instruction groups from ~6000 to ~5500, and there's lots more we could do.
llvm-svn: 320461
Summary:
This solves PR35616.
We don't want the compiler to generate different code when we compile
with/without -g, so we now ignore debug intrinsics when determining if
the optimization can trigger or not.
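A simplified sketch of the usual way this is done (assuming a BasicBlock &BB
in scope; the patch's actual code may differ):

  for (Instruction &I : BB) {
    if (isa<DbgInfoIntrinsic>(I))
      continue; // presence of debug info must not change the decision
    // ... existing legality/profitability checks ...
  }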
Reviewers: junbuml
Subscribers: davide, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D41068
llvm-svn: 320460
Correct the committed version to match the intended, accepted review D40051 (id 123417):
- Rename Bonaire target to be gfx704.
- Eliminate gfx800 and make Iceland and Tonga both use gfx802 as they use the same code.
- List target features supported by each processor in the processor table together with the default value.
- Add xnack flag to e_flags.
- Remove xnack from kernel metadata and kernel descriptor since it is now a whole code object property.
Differential Revision: https://reviews.llvm.org/D40051
llvm-svn: 320457
The new check introduced in r318705 is useful, but suffers from a particular
class of false positives: namely, it does not account for the
dispatch_barrier_sync() API, which allows one to ensure that an asynchronously
executed block that captures a pointer to a local variable does not actually
outlive that variable.
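A sketch of the now-suppressed false positive (the 'use' helper is
hypothetical; requires <dispatch/dispatch.h> and clang's blocks extension):

  #include <dispatch/dispatch.h>

  void use(int *);

  void foo(dispatch_queue_t Q) {
    int Local = 42;
    int *P = &Local; // pointer to a stack variable, captured by the block
    dispatch_barrier_sync(Q, ^{
      use(P); // safe: foo() cannot return before the block completes
    });
  }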
The new check is split into a separate checker, under the name of
alpha.core.StackAddressAsyncEscape, which is likely to be enabled by default
again once these false positives are fixed. The rest of the
StackAddressEscapeChecker is still enabled by default.
Differential Revision: https://reviews.llvm.org/D41042
llvm-svn: 320455
I tested on x86-64, and Jason tested on embedded architectures.
This cleans up another couple of reported unexpected successes.
<rdar://problem/28623427>
llvm-svn: 320452
This is a follow-up from r314910. When a checker developer attempts to
dereference a location in memory through ProgramState::getSVal(Loc) or
ProgramState::getSVal(const MemRegion *) without specifying the second,
optional QualType parameter for the type of the value expected at this
location, the type is auto-detected from the location type. If the location
represents the pointee of a void pointer, we thought that auto-detecting the
type as 'char' was a good idea. However, in most practical cases the correct
behavior is to specify the type explicitly, as it is available from other
sources, and the few cases where we actually need to read a 'char' are
workarounds rather than intended behavior. Therefore, try to fail with an
easy-to-understand assertion when asked to read from a void pointer location.
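A checker-side sketch of the two overloads (the parameter types are the real
ones; the surrounding function is illustrative):

  void example(ProgramStateRef State, Loc L, ASTContext &ACtx) {
    SVal AutoTyped = State->getSVal(L);            // type inferred from L;
                                                   // asserts for void* locations
    SVal Explicit = State->getSVal(L, ACtx.IntTy); // explicit QualType: fine
    (void)AutoTyped;
    (void)Explicit;
  }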
Differential Revision: https://reviews.llvm.org/D38801
llvm-svn: 320451
By using an index instead of a pointer for verdef we can put the index
next to the alignment field. This uses the otherwise wasted area and
reduces the shared symbol size.
By itself the performance change of this is in the noise, but I have a
followup patch to remove another 8 bytes that improves performance
when combined with this.
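A rough sketch of the layout change (field names and surrounding layout are
assumptions, not lld's actual declarations):

  struct SharedSymbolBefore {
    // ...
    uint32_t Alignment;
    // 4 bytes of padding to align the pointer
    const void *Verdef; // 8-byte pointer
  };

  struct SharedSymbolAfter {
    // ...
    uint32_t Alignment;
    uint32_t VerdefIndex; // reuses the formerly wasted padding
  };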
llvm-svn: 320449
An internal linker has support for merging identical data, and in some
cases it can be a significant win.
This is behind an off-by-default flag, so it has to be requested
explicitly.
llvm-svn: 320448
After discussing this with Jim and Jason, I think my commit was
actually sweeping the issue under the carpet rather than fixing it.
I'll take a closer look between tonight and tomorrow.
llvm-svn: 320447
This also slightly refactors the code that checks for the directory's
presence, which allows eliminating one unnecessary variable.
Differential Revision: https://reviews.llvm.org/D40637
llvm-svn: 320446
Some tests are failing on macOS when building with the in-tree
clang, because they're conditional on the released version.
Apple releases use a different versioning scheme, and as these tests are
conditional on clang < 7, they fail for clang ToT (which is 6.0).
As a general solution, we would need a mapping between
Apple's internal release versions and the public ones.
That said, I discussed this with Fred, and Apple Clang 6.0 seems
to be old enough that we can remove this altogether (which means I
can delay implementing the general-purpose solution for a bit).
Differential Revision: https://reviews.llvm.org/D41101
llvm-svn: 320444
This adds the install-*-stripped targets to LLDB, which are required for
the install-distribution-stripped option. We also need to create some
install-*-stripped targets manually, which are modeled after their
corresponding install-* targets.
Differential Revision: https://reviews.llvm.org/D41099
llvm-svn: 320443
Also make function bodies unique so they can be distinguished
in the output. This is helpful for adding support for --gc-sections.
Differential Revision: https://reviews.llvm.org/D41093
llvm-svn: 320441
When an output section has no byte commands and no input sections, it
would be ideal if the type of the section were SHT_NOBITS so that the file can
take up less space. This change sets the default type of output sections to
SHT_NOBITS instead of SHT_PROGBITS to allow this. This required some minor test
changes (which double as tests for the new behavior), but extend-pt-load.s had
to be changed in a non-trivial way. Since it seems to me that the point of that
test is to point out the consequences of how flags are assigned to output
sections that don't have input sections, I changed the test to keep working
while still showing how the memsize of the executable segment is changed.
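A simplified sketch of the decision (flag names are assumptions, not lld's
actual code):

  // Start with the space-saving default; upgrade only if the output
  // section actually has contents.
  uint32_t Type = SHT_NOBITS;
  if (HasInputSections || HasByteCommands)
    Type = SHT_PROGBITS;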
Differential Revision: https://reviews.llvm.org/D41082
llvm-svn: 320437