Emit predefined macros for the GPU family, e.g. for gfx9xx GPUs emit
__GFX9__, etc.
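A minimal sketch of how such a macro might be consumed (the `__GFX9__` name is
taken from the description above; the wavefront-size values below are purely
illustrative):
```
// Hypothetical translation unit compiled for an AMD GPU target: pick a
// compile-time constant per GPU family using the new predefined macro.
#ifdef __GFX9__
constexpr unsigned WavefrontSize = 64; // gfx9xx family (illustrative value)
#else
constexpr unsigned WavefrontSize = 32; // other families (illustrative value)
#endif
```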
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D125909
AMDGPUAsmParser::validateSOPLiteral already knew about this but
SIInstrInfo::verifyInstruction did not.
Differential Revision: https://reviews.llvm.org/D125976
This summarises the changes made by d9398a91e2, which forms the bulk of
the fixes needed for non-address bit handling.
Note that previous releases mentioned memory tagging support, which is a
subset of non-address bit support. The recent changes enable debugging of
programs that use memory tagging, pointer authentication and top byte
ignore (all at once) on AArch64.
Fix __has_builtin to return 1 only if the required target features of a
builtin are enabled. This is done by refactoring the code that checks a
builtin's required target features and reusing it when evaluating
__has_builtin.
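As a hedged illustration (assuming a feature-gated builtin such as
`__builtin_ia32_crc32qi`, which requires the SSE4.2/CRC32 target feature on
x86), the guard below now only takes the builtin path when the required
feature is actually enabled:
```
// Guard a target-feature-gated builtin with __has_builtin. Previously
// __has_builtin could return 1 even when the required target feature was
// disabled, so the guarded code could fail to compile.
unsigned crc8(unsigned crc, unsigned char b) {
#if __has_builtin(__builtin_ia32_crc32qi)
  return __builtin_ia32_crc32qi(crc, b);
#else
  return crc ^ b; // portable fallback, for illustration only
#endif
}
```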
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D125829
The runtime check threshold should also restrict the interleave count;
otherwise, too many runtime checks will be generated in some cases.
Reviewed By: fhahn, dmgreen
Differential Revision: https://reviews.llvm.org/D122126
VPWidenMemoryInstruction also models stores, which may not produce a value.
This can trip up analyses. Improve the modeling by only adding
VPValues for VPWidenMemoryInstructionRecipes that model loads.
The _LIBUNWIND_HAS_NO_THREADS macro is only picked up by libunwind
inside its sources, so it is only required when building libunwind. It
doesn't need to be defined when running the tests.
Also, add a CI job that tests this configuration. The exact configuration
is that we build a shared libc++ and merge objects for the ABI library
and the unwinder library into it.
Differential Revision: https://reviews.llvm.org/D125903
This patch basically extends https://reviews.llvm.org/D122008 with
support for MacOSX/Darwin.
To facilitate this, I've added `MacOSX` to the list of supported OSes in
Target.cpp. Flang already supports `Darwin` and it doesn't really do
anything OS-specific there (it could probably safely skip checking the
OS for now).
Note that generating executables remains hidden behind the
`-flang-experimental-exec` flag. Also, we don't need to add `-lm` on
MacOSX as `libm` is effectively included in `libSystem` (which is linked
in unconditionally).
Differential Revision: https://reviews.llvm.org/D125628
Convert the Fortran parse tree into MLIR for the collapse clause.
Includes a simple Fortran-to-LLVM-IR test with auto-generated
check lines (some of which have been edited by hand).
Reviewed By: kiranchandramohan, shraiysh, peixin
Differential Revision: https://reviews.llvm.org/D125302
This change allows writing whitespace before the `ERROR` keyword
in semantic tests, for consistency with other testing infrastructure.
Also, one test is changed to verify that the change works correctly.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D125884
We require move semantics in C++03 anyway, so let's enable them for the containers.
Reviewed By: ldionne, #libc
Spies: libcxx-commits
Differential Revision: https://reviews.llvm.org/D123802
This allows index implementations to fill in container details when required, especially when computing the containerID is expensive.
Differential Revision: https://reviews.llvm.org/D125925
When we generate a horizontal reduction of floating adds fed by a vectorized
tree rooted at floating multiplies, we should account for the cost of no
longer being able to generate scalar FMAs. Similarly, if we vectorize a
list of floating multiplies that each feeds a single floating add, we should
again account for this cost.
The first test was reduced from a case where the vectorizable tree looked
barely profitable (cost -1) with a horizontal reduction, but produced
substantially worse code than allowing the FMAs to be generated. The second
test was derived from the first: we again generate a horizontal reduction
here, but even if the horizontal reduction is forced to be unprofitable, we
try to vectorize the multiplies. I have follow-up patches to address these
issues.
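As an illustration (a hypothetical snippet, not taken from the tests), a
dot-product-style expression has exactly this shape: a horizontal reduction of
floating adds fed by floating multiplies, where each multiply feeds a single
add and would otherwise lower to FMAs:
```
// Scalar codegen can fuse each multiply/add pair into an FMA; vectorizing
// the multiplies loses that fusion, a cost the SLP cost model should
// account for.
double dot4(const double *a, const double *b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}
```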
Differential Revision: https://reviews.llvm.org/D124867
This is off by default. When --show-tags is given and the found memory
has memory tags, you'll see the tags inline with the memory content.
```
(lldb) memory read mte_buf mte_buf+64 --show-tags
<...>
0xfffff7ff8020: 00 00 00 00 00 00 00 00 0d f0 fe ca 00 00 00 00 ................ (tag: 0x2)
<...>
(lldb) memory find -e 0xcafef00d mte_buf mte_buf+64 --show-tags
data found at location: 0xfffff7ff8028
0xfffff7ff8028: 0d f0 fe ca 00 00 00 00 00 00 00 00 00 00 00 00 ................ (tags: 0x2 0x3)
0xfffff7ff8038: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ (tags: 0x3 0x4)
```
The logic for handling alignments is the same as for memory read,
so in the above example, because the line starts misaligned to the
granule, it covers 2 granules.
Depends on D125089
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D125090
A TBL instruction will fill out-of-range values with 0's, something used
in D121139 to turn tbl2 with a zero input into tbl1s. This works OK for
v16i8, but for v8i8 the input is still treated as a v16i8, so
out-of-range values (like a lane index of 8) would end up loading values
from the top half of the input register. Clean this up by detecting the
out-of-range values and making sure they map to indices that are genuinely
out of range for the underlying v16i8 register.
There is also a fix for swapped indices of 64-bit input vectors, which
could be incorrectly adjusted if the zero vector was the first operand.
Fixes #55545
Differential Revision: https://reviews.llvm.org/D125865
This reverts commit 3e928c4b9d.
This fixes an issue seen on Windows where we did not properly
get the section names of regions if they overlapped. Windows
has regions like:
[0x00007fff928db000-0x00007fff949a0000) ---
[0x00007fff949a0000-0x00007fff949a1000) r-- PECOFF header
[0x00007fff949a0000-0x00007fff94a3d000) r-x .hexpthk
[0x00007fff949a0000-0x00007fff94a85000) r-- .rdata
[0x00007fff949a0000-0x00007fff94a88000) rw- .data
[0x00007fff949a0000-0x00007fff94a94000) r-- .pdata
[0x00007fff94a94000-0x00007fff95250000) ---
I assumed that you could just resolve the address and get the section
name using the start of the region, but here you'd always get
"PECOFF header" because they all have the same start point.
The usual command-repeating loop used the end address of the previous
region when requesting the next region or getting the section name,
so I've matched this behaviour in the --all scenario.
In the example above, somehow asking for the region at
0x00007fff949a1000 would get you a region that starts at
0x00007fff949a0000 but has a different end point. Using the load
address you get (what I assume is) the correct section name.
`-module-dir` is Flang's equivalent for `-J` from GFortran (in fact,
`-J` is an alias for `-module-dir` in Flang). Currently, only
`-module-dir <value>` is accepted. However, `-J` (and other options for
specifying various paths) accepts `-J<value>` as well as `-J <value>`.
This patch makes sure that `-module-dir` behaves consistently with other
such flags.
Differential Revision: https://reviews.llvm.org/D125957
Most clients only used these methods because they wanted to be able to
extend or truncate to the same bit width (which is a no-op). Now that
the standard zext, sext and trunc allow this, there is no reason to use
the OrSelf versions.
The OrSelf versions additionally have the strange behaviour of allowing
extending to a *smaller* width, or truncating to a *larger* width, which
are also treated as no-ops. A small amount of client code relied on this
(ConstantRange::castOp and MicrosoftCXXNameMangler::mangleNumber) and
needed rewriting.
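A minimal sketch of the migration this enables (assuming `llvm::APInt`;
`widenTo64` is an illustrative helper, not code from the patch):
```
#include "llvm/ADT/APInt.h"

// Callers that might already be at the target width previously had to use
// zextOrSelf/truncOrSelf. Plain zext/trunc now treat a same-width request
// as a no-op, so the OrSelf variants are redundant.
llvm::APInt widenTo64(const llvm::APInt &V) {
  return V.zext(64); // fine even if V is already 64 bits wide
}
```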
Differential Revision: https://reviews.llvm.org/D125557
Some functions like `stpncpy` are implemented in terms of `memset` but are not
currently using `-fno-builtin-memset`. This is somewhat hidden by the fact that
we use `-ffreestanding` globally and that `-ffreestanding` implies
`-fno-builtin` for Clang.
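A hedged illustration of the hazard (a simplified stand-in, not the actual
libc sources):
```
// The tail zero-fill loop in an stpncpy-style routine is a classic candidate
// for the compiler to recognize and replace with a call to memset. Inside
// the libc that itself provides memset this is undesirable, hence
// -fno-builtin-memset (or -ffreestanding/-fno-builtin).
char *stpncpy_like(char *dst, const char *src, unsigned long n) {
  unsigned long i = 0;
  for (; i < n && src[i] != '\0'; ++i)
    dst[i] = src[i];
  char *end = dst + i; // end of the copied string, as stpncpy returns
  for (; i < n; ++i)   // zero-fill the remainder
    dst[i] = '\0';
  return end;
}
```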
This patch also removes `-mllvm -combiner-global-alias-analysis` that is Clang
specific and that does not bring substantial gains on modern processors.
Also, we keep `-mllvm --tail-merge-threshold=0` for aarch64 in CMakeLists.txt
but omit it in the Bazel config. This is because Bazel consumes the source
files directly, and so it can use PGO to take optimal decisions locally.
Differential Revision: https://reviews.llvm.org/D125894
Much of the size of PCH/PCM files comes from stored SourceLocations.
These are encoded using (almost) their raw value, VBR-encoded. Absolute
SourceLocations can be relatively large numbers, so this commonly takes
20-30 bits per location.
We can reduce this by exploiting redundancy: many "nearby" SourceLocations are
stored differing only slightly and can be delta-encoded.
Random-access loading of AST nodes constrains how long these sequences
can be, but we can do it at least within a node that always gets
deserialized as an atomic unit.
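A minimal sketch of the encoding idea (illustrative only; not the serializer's
actual code or API):
```
#include <cstdint>
#include <vector>

// Encode a run of raw SourceLocation values as deltas against the previous
// value. Nearby locations differ only slightly, so the deltas are small and
// take far fewer bits than the absolute values.
std::vector<int64_t> deltaEncode(const std::vector<uint64_t> &RawLocs) {
  std::vector<int64_t> Deltas;
  uint64_t Prev = 0;
  for (uint64_t Loc : RawLocs) {
    Deltas.push_back(static_cast<int64_t>(Loc - Prev));
    Prev = Loc;
  }
  return Deltas;
}
```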
TypeLoc is implemented in this patch as it's a relatively small change
that shows most of the API.
This saves ~3.5% of PCH size. I have local changes applying this technique
further that save another 3%; I think it's possible to get to 10% total.
Differential Revision: https://reviews.llvm.org/D125403
Explicitly sizing the Kind enum suggests that too-large values are allowed
and that putting it in a bitfield is dangerous.
GCC doesn't like `condition ? integer : enum`.