Commit Graph

423511 Commits

Craig Topper edbf390d10 [CodeGenPrepare] Use const reference to avoid unnecessary APInt copy. NFC
Spotted while looking at Matthias' patches.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D124985
2022-05-11 12:06:45 -07:00
Philip Reames ca81c0f8fc [test, riscv] Add test illustrating missing handling for fallthrough blocks in 541c9ba 2022-05-11 11:58:51 -07:00
River Riddle 5a9a438a54 [TableGen] Refactor TableGenParseFile to no longer use a callback
Now that TableGen no longer relies on global Record state, we can allow
for the client to own the RecordKeeper and SourceMgr. Given that TableGen
internally still relies on the global llvm::SrcMgr, this method unfortunately
still isn't thread-safe.
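
As a rough sketch of what the callback-free flow could look like for a client that owns both objects (the exact entry-point signature and header are assumptions, not taken from the patch):

```
// Rough caller-side sketch of the callback-free flow; the entry-point
// signature and header shown here are assumptions, not quoted from D125277.
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/TableGen/Parser.h"
#include "llvm/TableGen/Record.h"

#include <memory>

bool parseTdBuffer(std::unique_ptr<llvm::MemoryBuffer> Buffer) {
  llvm::SourceMgr SrcMgr;                 // caller-owned, not the global one
  SrcMgr.AddNewSourceBuffer(std::move(Buffer), llvm::SMLoc());

  llvm::RecordKeeper Records;             // caller-owned record state
  if (llvm::TableGenParseFile(SrcMgr, Records))
    return true;                          // parse failed; diagnostics emitted

  // Records stays valid for as long as the caller keeps it alive.
  return false;
}
```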

Differential Revision: https://reviews.llvm.org/D125277
2022-05-11 11:55:33 -07:00
River Riddle 2ac3cd20ca [TableGen] Remove the use of global Record state
This commit removes TableGen's reliance on managed static global record state
by moving the RecordContext into the RecordKeeper. The RecordKeeper is now
treated similarly to a (LLVM|MLIR|etc)Context object and is passed to static
construction functions. This is an important step forward in removing TableGen's
reliance on global state, and in a followup will allow users that parse tablegen
to parse multiple tablegen files without worrying about Record lifetime.

Differential Revision: https://reviews.llvm.org/D125276
2022-05-11 11:55:33 -07:00
Qiongsi Wu 3ca6328637 [clang][ppc] Creating Separate Install Target for PPC htm Headers
This patch splits out the htm intrinsic headers from the PPC headers list.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D125386
2022-05-11 14:48:40 -04:00
Craig Topper f499ec6b3d [RISCV] Add caching to the gather/scatter to strided load/store conversion.
If we have multiple gather/scatter instructions using the same strided
address, we would scalarize it multiple times. I guess a later pass
cleans this up, but I don't know if that's guaranteed.

This patch adds a cache to remember the scalarization we already
created for a previous gather/scatter.
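
A minimal sketch of the caching idea, with illustrative names and a hypothetical buildStridedAddr helper rather than the actual pass code:

```
// Illustrative only: cache the (base pointer, stride) scalarization so a
// second gather/scatter with the same strided address reuses the first one.
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Value.h"

#include <utility>

// Hypothetical helper that emits the scalar base/stride computation.
std::pair<llvm::Value *, llvm::Value *>
buildStridedAddr(llvm::Value *BasePtr, llvm::Value *Stride,
                 llvm::IRBuilder<> &Builder);

llvm::DenseMap<std::pair<llvm::Value *, llvm::Value *>,
               std::pair<llvm::Value *, llvm::Value *>>
    StridedAddrCache;

std::pair<llvm::Value *, llvm::Value *>
getOrCreateStridedAddr(llvm::Value *BasePtr, llvm::Value *Stride,
                       llvm::IRBuilder<> &Builder) {
  auto It = StridedAddrCache.find({BasePtr, Stride});
  if (It != StridedAddrCache.end())
    return It->second;                    // reuse the previous scalarization
  auto Result = buildStridedAddr(BasePtr, Stride, Builder);
  StridedAddrCache.insert({{BasePtr, Stride}, Result});
  return Result;
}
```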

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D125326
2022-05-11 11:47:27 -07:00
Yaxun (Sam) Liu 84db355949 [clang] Fix KEYALL
Update KEYALL to cover KEYCUDA. Introduce KEYMAX and
a generic way to update KEYALL.

Reviewed by: Dan Liew

Differential Revision: https://reviews.llvm.org/D125396
2022-05-11 14:28:08 -04:00
Xiang Li 4dae38ebfb [HLSL] add -D option for dxc mode.
Create dxc_D as an alias to option D, which defines <macro> as <value> (or 1 if <value> is omitted).

Reviewed By: aaron.ballman

Differential Revision: https://reviews.llvm.org/D125338
2022-05-11 11:26:31 -07:00
Craig Topper 09f48c6b80 [RISCV] Move implementation of getVLOpNum and getSEWOpNum from RISCVInsertVSETVLI to RISCVBaseInfo.h. NFC
We should consolidate the operand counting and ordering into
RISCVBaseInfo.h and stop spreading it around.

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D125344
2022-05-11 11:14:58 -07:00
Craig Topper 0ebb02b90a [RISCV] Override TargetLowering::shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd.
This hook determines if SimplifySetcc transforms (X & (C l>>/<< Y))
==/!= 0 into ((X <</l>> Y) & C) ==/!= 0, where C is a constant and
X might be a constant.

The default implementation favors doing the transform if X is not
a constant. Otherwise the code is left alone. There is a provision
that if the target supports a bit test instruction then the transform
will favor ((1 << Y) & X) ==/!= 0. RISCV does not say it has a variable
bit test operation.

RISCV with Zbs does have a BEXT instruction that performs (X >> Y) & 1.
Without Zbs, (X >> Y) & 1 still looks preferable to ((1 << Y) & X) since
we can use ANDI instead of putting a 1 in a register for SLL.

This patch overrides this hook to favor bit extract patterns and
otherwise falls back to the "do the transform if X is not a constant"
heuristic.
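
In source-level terms, the two equivalent forms the hook chooses between look roughly like this for the C == 1 case (illustrative only):

```
// Illustrative only: two equivalent ways to test bit y of x (the C == 1
// case). The overridden hook steers RISCV toward the extract form.
bool hoistedConstForm(unsigned x, unsigned y) {
  return (x & (1u << y)) != 0;  // needs the 1 materialized in a register for SLL
}
bool extractForm(unsigned x, unsigned y) {
  return ((x >> y) & 1u) != 0;  // BEXT with Zbs, or SRL + ANDI without it
}
```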

I've added tests where both C and X are constants with both the shl form
and lshr form. I've also added a test for a switch statement that lowers
to a bit test. That was my original motivation for looking at this.

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D124639
2022-05-11 11:13:17 -07:00
Sanjay Patel 99ef341ce9 [InstCombine] freeze operand in sdiv expansion
As discussed in issue #37809, this transform is not safe
if the input is an undefined value.

This is similar to a recent change for urem:
d428f09b2c

There is no difference in codegen on the basic examples,
but this could lead to regressions. We may need to
improve freeze analysis or lowering if that happens.

Presumably, in real cases that are similar to the tests
where a subsequent transform removes the select, we
will also be able to remove the freeze by seeing that
the parameter has 'noundef'.
2022-05-11 14:01:28 -04:00
Sanjay Patel 5fdfcf4892 [InstCombine] update auto-generated CHECK lines in test file; NFC
These are all cosmetic (value naming) diffs that would distract from
real changes in this file.
2022-05-11 14:01:28 -04:00
Craig Topper 0781742785 [RISCV] Add a DAG combine to pre-promote (i32 (and (srl X, Y), 1)) with Zbs on RV64.
Type legalization will want to turn (srl X, Y) into RISCVISD::SRLW,
which will prevent us from using a BEXT instruction.

I don't think there is any precedent for type promotion checking
users to decide how to promote. Instead, I've added this DAG combine to
do it before type legalization.
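
For illustration, one source-level instance of the pattern (the combine itself runs on the SelectionDAG, not on C code):

```
// Illustrative only: on RV64 with Zbs this bit extract should lower to a
// single BEXT; without the pre-promotion combine, type legalization turns
// the i32 srl into SRLW first and the BEXT pattern no longer matches.
unsigned extractBit(unsigned x, unsigned y) {
  return (x >> y) & 1u;
}
```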

Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D124109
2022-05-11 10:49:16 -07:00
Philip Reames 72925d98bf [riscv] Canonicalize vsetvli (vsetvli avl, vtype1) vtype2 transitions
This patch is an alternative to a piece of D125270. If we have one vsetvli which uses as its AVL the output of another, and the prior AVL can be proven to produce the same VL value as the defining one, we can use the AVL from the prior instruction. This has the effect of removing a state transition on AVL, and will let us use the cheaper 'vsetvli x0, x0, vtype1' form or possibly even skip emitting it entirely.

This builds on the same infrastructure as D125337, and does the analogous extension to working on abstract states instead of only prior explicit vsetvli instructions. This is where the (relatively minor) code improvements come from.

More importantly, this fixes the last case where the state computed in phase 1 and 2 of the algorithm differs from the state computed during phase 3. Note that such differences can cause miscompiles by creating disagreements about contents of the VL and VTYPE registers at block boundaries.

Doing this transform inside the dataflow can cause the compatibility of a later store to change with regards to the current state. test15 in the diff illustrates this case well. What we have is a vsetvli which is mutated by one following vector op, but whose GPR result is used by another. The compatibility logic walks back to the def in this case, and checks to see if it matches the immediate prior state. In phase 1 and 2, it doesn't, and in phase 3 (after mutation) it does because we remove a transition which caused it to differ.

Differential Revision: https://reviews.llvm.org/D125392
2022-05-11 10:45:29 -07:00
Arthur Eubanks f37e6faf52 [gn build] Use llvm-ar when clang_base_path is specified
Only applies to Linux for now.

This prevents warnings with use_thinlto like
  bfd plugin: LLVM gold plugin has failed to create LTO module: Not an int attribute (Producer: 'LLVM15.0.0git' Reader: 'LLVM 13.0.1')

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D125399
2022-05-11 10:40:54 -07:00
Peter Klausler d80d812df0 [flang] Fix check for assumed-size arguments to SHAPE() & al.
The predicate that is used to detect an invalid assumed-size argument
to the intrinsic functions SHAPE, SIZE, & LBOUND gives false results
for arguments whose shapes are not calculable at compilation time.
Replace it with an explicit test for an assumed-size array dummy argument
symbol.

Differential Revision: https://reviews.llvm.org/D125342
2022-05-11 10:33:17 -07:00
Philip Reames 9873623425 [riscv] Add tests for vsetvli reuse across iterations of a loop
These variations are chosen to exercise both FRE and PRE cases involving loops which don't change state in the iteration and can thus perform vsetvli in the preheader of the loop only.  At the moment, these are essentially all TODOs.
2022-05-11 10:30:54 -07:00
David Tenty d9c1d3cbcb [clang][AIX] Don't ignore XCOFF visibility by default
D87451 added -mignore-xcoff-visibility for AIX targets and made it the default (which mimicked the behaviour of the XL 16.1 compiler on AIX).

However, ignoring hidden visibility has unwanted side effects and some libraries depend on visibility to hide non-ABI facing entities from user headers and
reserve the right to change these implementation details based on this (https://libcxx.llvm.org/DesignDocs/VisibilityMacros.html). This forces us to use
internal linkage fallbacks for these cases on AIX and creates an unwanted divergence in implementations on the platform.

For these reasons, it's preferable to not add -mignore-xcoff-visibility by default, which is what this patch does.

Reviewed By: DiggerLin

Differential Revision: https://reviews.llvm.org/D125141
2022-05-11 13:27:48 -04:00
Mircea Trofin 0cc607345f [mlgo] Fix test
Updated reference file for dev-mode-logging.ll and expected output.
2022-05-11 10:07:40 -07:00
Peter Klausler 3a26596af3 [flang] Fold complex component references
Complex component references (z%RE, z%IM) of complex named constants
should be evaluated at compilation time.

Differential Revision: https://reviews.llvm.org/D125341
2022-05-11 10:04:13 -07:00
Sanjay Patel d428f09b2c [InstCombine] freeze operand in urem expansion
As discussed in issue #37809, this transform is not safe
if the input is an undefined value.

There is no difference in codegen on the basic examples,
but this could lead to regressions. We may need to
improve freeze analysis or lowering if that happens.
2022-05-11 12:47:26 -04:00
Amir Ayupov 8cb7a873ab [BOLT][NFC] Add MCPlus::primeOperands iterator_range
Reviewed By: yota9

Differential Revision: https://reviews.llvm.org/D125397
2022-05-11 09:34:51 -07:00
Joseph Huber f933c896d1 [OpenMP] Add a check for alignment in the offload packager
Summary:
These sections need to be aligned correctly to be extracted later; add
a check to indicate if they aren't.
2022-05-11 12:25:44 -04:00
Vibhuti Sawant 6b6e796b74 [Bazel] Add support for s390x build target
While executing the test suite for Tensorflow(v2.8.0), we encountered multiple TC failures with the below error
```
'z14' is not a recognized processor for this target
```
This patch adds the s390x target to the build target list. It fixes TC failures in multiple modules of Tensorflow on s390x arch. It is also tested to have no effect on x86 machines.

Reviewed By: GMNGeoffrey

Differential Revision: https://reviews.llvm.org/D125096
2022-05-11 09:23:16 -07:00
Aaron Ballman 65860a9f5d Fix the Clang sphinx build
This should address:
https://lab.llvm.org/buildbot/#/builders/92/builds/26609
2022-05-11 12:09:21 -04:00
Matthias Braun de9ad98d2d Fix endless loop in optimizePhiConst with integer constant switch condition
Avoid endless loop in degenerate case with an integer constant as switch
condition as reported in https://reviews.llvm.org/D124552
2022-05-11 08:49:01 -07:00
Alban Bridonneau e6635377e5 [NFC] Change comment number in aarch64 isel 2022-05-11 15:41:33 +00:00
python3kgae 0c7f7f1b01 [DirectX backend] Add pass to emit dxil metadata.
A new pass DxilEmitMetadata is added to translate information saved in LLVM IR into metadata that matches the DXIL spec.

Only generate DXIL validator version in this PR.

In LLVM IR, the validator version is saved in a module flag with "dx.valver" as the key.

  !llvm.module.flags = !{!0, !1}
  !1 = !{i32 6, !"dx.valver", !2}
  !2 = !{i32 1, i32 1}

DXIL validator version has major and minor versions that are specified as named metadata:

  !dx.valver = !{!2}
  !2 = !{i32 1, i32 7}
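
A minimal sketch of that translation, assuming only the standard Module metadata APIs; this is not the actual DxilEmitMetadata implementation:

```
// Sketch only: copy the "dx.valver" module flag into the !dx.valver named
// metadata expected by DXIL; not the actual DxilEmitMetadata pass.
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

static void emitValidatorVersion(llvm::Module &M) {
  llvm::Metadata *Flag = M.getModuleFlag("dx.valver");
  if (!Flag)
    return;                                  // no validator version recorded
  auto *ValVer = llvm::dyn_cast<llvm::MDNode>(Flag);
  if (!ValVer)
    return;
  llvm::NamedMDNode *NMD = M.getOrInsertNamedMetadata("dx.valver");
  NMD->addOperand(ValVer);                   // !dx.valver = !{!N}
}
```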

Reviewed By: kuhar, beanz

Differential Revision: https://reviews.llvm.org/D125158
2022-05-11 08:40:13 -07:00
Joe Nash a0a406b257 [AMDGPU] gfx11 Decode wider instructions. NFC
Refactor to pass a templatized size parameter to the decoder to allow wider than
64bit decodes in a later patch.

Contributors:
Jay Foad <jay.foad@amd.com>

Depends on D125261

Patch 5/N for upstreaming of AMDGPU gfx11 architecture.

Reviewed By: dp

Differential Revision: https://reviews.llvm.org/D125316
2022-05-11 11:05:58 -04:00
Florian Hahn 301fe084bf
[ConstraintElimination] Add test where ssub result is not used.
Extra tests for D125264.
2022-05-11 16:10:25 +01:00
Joe Nash 18ed279a3a [AMDGPU] gfx11 subtarget features & early tests
Tablegen definitions for subtarget features and cpp predicate functions to
access the features.
New Sub-TargetProcessors and common latencies.
Simple changes to MIR codegen tests which pass on gfx11 because they have the
same output as previous subtargets or operate on pseudo instructions which
are reused from previous subtargets.

Contributors:
Jay Foad <jay.foad@amd.com>
Petar Avramovic <Petar.Avramovic@amd.com>

Patch 4/N for upstreaming of AMDGPU gfx11 architecture

Depends on D124538

Reviewed By: Petar.Avramovic, foad

Differential Revision: https://reviews.llvm.org/D125261
2022-05-11 10:31:49 -04:00
Shao-Ce SUN b049eb1fec [RISCV] Remove some TODOs in tests
Reviewed By: asb

Differential Revision: https://reviews.llvm.org/D125289
2022-05-11 22:53:50 +08:00
Nikita Popov 6001bfcedc [InstCombine] Freeze other uses of frozen value
If there is a freeze %x, we currently replace all other uses of %x
with freeze %x -- as long as they are dominated by the freeze
instruction. This patch extends this behavior to cases where we
did not originally dominate the use by moving the freeze
instruction directly after the definition of the frozen value.

The motivation can be seen in test @combine_and_after_freezing_uses:
Canonicalizing everything to freeze %x allows folds that are based
on value identity (i.e. same operand occurring in two places) to
trigger. This also covers the case from D125248.
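
The gist, as a rough sketch rather than the actual InstCombine code:

```
// Rough sketch only: hoist the freeze to just after the definition of the
// frozen value, then route every other user through the freeze.
#include "llvm/IR/Instructions.h"

static void canonicalizeFreeze(llvm::FreezeInst &FI) {
  auto *Def = llvm::dyn_cast<llvm::Instruction>(FI.getOperand(0));
  if (!Def)
    return;
  FI.moveAfter(Def);                  // the freeze now dominates all users
  Def->replaceUsesWithIf(&FI, [&](llvm::Use &U) {
    return U.getUser() != &FI;        // keep the freeze's own operand as-is
  });
}
```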

Differential Revision: https://reviews.llvm.org/D125321
2022-05-11 16:47:12 +02:00
Philip Reames cc0283a635 [riscv] Prefer to use previous VL for scalar move instructions
This patch is an alternative to a piece of D125270. Its direct motivation is to fix a wrong code bug (described below), but somewhat unexpectedly, it also results in a significant code quality improvement for idiomatic fixed length vector patterns.

The existing transform is simply wrong in its current location. We are correct about the fact that the scalar move itself can use the previous vsetvli, but we lose track of the fact that later instructions might depend on the state change represented. That is, the actual value of VL in the register is different than the abstract state thinks it is. Not simply due to precision of modeling, but e.g. the VL register could contain 3 when the abstract state says it is 1. This is annoyingly hard to demonstrate in practice due to differences in policy flags on the intrinsics, but this is at least a latent wrong code bug.

The code quality benefit comes from the fact we don't need to tie this to explicit vsetvli instructions at all. We can propagate the abstract state, and reduce a) the number of transitions, or b) the cost of those transitions. It turns out we have a bunch of cases - in tests at least - where fixed length AVLs are known non-zero, and we can leave VL unchanged while changing VTYPE.

Differential Revision: https://reviews.llvm.org/D125337
2022-05-11 07:37:50 -07:00
Matthias Springer 248e113e9f [mlir][bufferize][NFC] Move helper functions to BufferizationOptions
Move helper functions for creating allocs/deallocs/memcpys to BufferizationOptions.

Differential Revision: https://reviews.llvm.org/D125375
2022-05-11 16:23:22 +02:00
Louis Dionne c631e33f31 [runtimes] Print the testing configuration in use in libunwind and libc++abi
We do it for libc++, and it's rather useful for debugging e.g. CI.
2022-05-11 10:18:09 -04:00
Nico Weber 53342c6bcf [gn build] (manually) port 26eb04268f (clang-offload-packager) 2022-05-11 15:14:00 +01:00
Joseph Huber 26eb04268f [Clang] Introduce clang-offload-packager tool to bundle device files
In order to do offloading compilation we need to embed files into the
host and create fatbinaries. Clang uses a special binary format to
bundle several files along with their metadata into a single binary
image. This is currently performed using the `-fembed-offload-binary`
option. However this is not very extensible since it requires changing
the command flag every time we want to add something and makes optional
arguments difficult. This patch introduces a new tool called
`clang-offload-packager` that behaves similarly to CUDA's `fatbinary`.
This tool takes several input files with metadata and embeds them into a
single image that can then be embedded in the host.

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D125165
2022-05-11 09:39:13 -04:00
Matt Devereau 75bb815231 [AArch64][SVE] Add aarch64_sve_pcs attribute to Clang
Enable function attribute aarch64_sve_pcs at the C level, which corresponds to
aarch64_sve_vector_pcs at the LLVM IR level.

This requirement was created by this addition to the ARM C Language Extension:
https://github.com/ARM-software/acle/pull/194
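
A hypothetical usage example, assuming the GNU __attribute__ spelling for the new attribute:

```
// Hypothetical example; the GNU __attribute__ spelling is assumed here. The
// function should then use the aarch64_sve_vector_pcs calling convention in IR.
#include <arm_sve.h>

__attribute__((aarch64_sve_pcs)) svint32_t add_one(svint32_t v) {
  return svadd_n_s32_x(svptrue_b32(), v, 1);
}
```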

Differential Revision: https://reviews.llvm.org/D124998
2022-05-11 13:33:56 +00:00
Alexey Bataev f5d45d70a5 [SLP]Further improvement of the cost model for scalars used in buildvectors.
Further improvement of the cost model for the scalars used in
buildvectors sequences. The main functionality is outlined into
a separate function.
The cost is calculated in the following way:
1. If the Base vector is not an undef vector, resize the very first mask to
have a common VF and perform the action for 2 input vectors (including the
non-undef Base). Other shuffle masks are combined with the result of the first stage and processed as a shuffle of 2 elements.
2. If the Base is an undef vector and there is only 1 shuffle mask, perform the
action only for 1 vector with the given mask, if it is not the identity
mask.
3. If > 2 masks are used, perform a series of shuffle actions for 2 vectors,
combining the masks properly between the steps.

The original implementation misses the very first analysis for the Base
vector, so the cost might be too optimistic in some cases. But it improves
the cost for the insertelements which are part of the current SLP graph.

Part of D107966.

Differential Revision: https://reviews.llvm.org/D115750
2022-05-11 06:08:55 -07:00
Sanjay Patel 400587ba0c [InstCombine] improve auto-generated test checks by matching function signature; NFC
Without this, miscompiles go undetected here as shown in D125352.
2022-05-11 08:47:05 -04:00
Whisperity 06a98328fc [ASTMatchers][NFC] Fix name of matcher in docs and add a missing test 2022-05-11 14:15:53 +02:00
Sergey Semushin dab5e10ea5 [clang-format] fix nested angle brackets parse inside concept definition
Due to how parseBracedList always stopped on the first closing angle
bracket and was used in parsing angle-bracketed expressions inside concept
definitions, nested brackets inside concepts were parsed incorrectly.
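
For example, a definition of roughly this shape (illustrative input only) exercises the nested angle brackets:

```
// Illustrative input only; the nested angle brackets inside the requires
// expression are the kind of construct that used to be parsed incorrectly.
#include <concepts>
#include <memory>
#include <vector>

template <typename T>
concept Node = requires(T t) {
  { t.children() } -> std::same_as<std::vector<std::unique_ptr<T>>>;
};
```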

The nextToken() call before calling parseBracedList is required because
we were otherwise processing the opening angle bracket inside parseBracedList
a second time, leading to incorrect logic after my fix.

Fixes https://github.com/llvm/llvm-project/issues/54943
Fixes https://github.com/llvm/llvm-project/issues/54837

Reviewed By: HazardyKnusperkeks, curdeius

Differential Revision: https://reviews.llvm.org/D123896
2022-05-11 14:02:51 +02:00
Fraser Cormack 27c7e922fe [RISCV][NFC] Rename variable to appease code style 2022-05-11 12:41:25 +01:00
Fraser Cormack 874b802a6d [RISCV][NFC] Move variable down closer to its first use 2022-05-11 12:33:01 +01:00
Joseph Huber f49d576a88 [CUDA] Add wrapper code generation for registering CUDA images
This patch adds the necessary code generation to create the wrapper code
that registers all the globals in CUDA. We create the necessary
functions and iterate through the list of
`__start_cuda_offloading_entries` to find which globals must be
registered. This is very similar to the code generation done currently
in Clang for non-rdc builds, but here we are registering a fully linked
fatbinary and finding the globals via the above sections.

With this we should be able to fully support basic RDC / LTO building of CUDA
code.

It's also worth noting that this does not include the necessary PTX to JIT the
image, so to use this support the offloading architecture must match the
system's architecture.

Depends on D123810

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D123812
2022-05-11 07:30:25 -04:00
Joseph Huber e7858a9fab [Cuda] Add initial support for wrapping CUDA images in the new driver.
This patch adds the initial support for wrapping CUDA images. This
requires changing some of the logic for how we bundle images. We now
need to copy the image for all kinds that are active for the
architecture. Then we need to run a separate wrapping job if the Kind is
Cuda. For cuda wrapping we need to use the `fatbinary` program from the
CUDA SDK to bundle all the binaries together. This is then passed to a
new function to perform the actual module code generation that will be
implemented in a later patch.

Depends on D120273 D123471

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D123810
2022-05-11 07:30:23 -04:00
Joseph Huber 0035f7154c [CUDA] Create offloading entries when using the new driver
The changes made in D123460 generalized the code generation for OpenMP's
offloading entries. We can use the same scheme to register globals for
CUDA code. This patch adds the code generation to create these
offloading entries when compiling using the new offloading driver mode.
The offloading entries are simple structs that contain the information
necessary to register the global. The struct used is as follows:

```
Type struct __tgt_offload_entry {
  void    *addr;      // Pointer to the offload entry info.
                      // (function or global)
  char    *name;      // Name of the function or global.
  size_t  size;       // Size of the entry info (0 if it is a function).
  int32_t flags;
  int32_t reserved;
};
```

Currently CUDA handles RDC code generation by deferring the registration
of globals in the current TU to a callback function containing the
module's ID. Later all the module IDs will be used to register all of the
globals at once. Rather than mimic this, offloading entries allow us to
mimic the way OpenMP registers globals. That is, we create a simple
global struct for each device global to be registered. These are placed
at a special section `cuda_offloading_entries`. Because this section name is
a valid C-identifier, the linker will provide a `__start` and `__stop`
pointer that we can use to iterate and register all globals at runtime.
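
A rough sketch of what the consuming side could look like; apart from the section-derived __start/__stop symbols and the entry layout above, the names here are hypothetical:

```
// Rough sketch of the consumer side; everything except the section-derived
// __start/__stop symbols and the entry layout is hypothetical.
#include <cstddef>
#include <cstdint>

struct __tgt_offload_entry {
  void *addr;
  char *name;
  size_t size;
  int32_t flags;
  int32_t reserved;
};

// Linker-provided bounds of the cuda_offloading_entries section.
extern "C" __tgt_offload_entry __start_cuda_offloading_entries[];
extern "C" __tgt_offload_entry __stop_cuda_offloading_entries[];

// Hypothetical registration helpers.
void registerKernel(void *Fn, const char *Name);
void registerVariable(void *Var, const char *Name, size_t Size, int32_t Flags);

static void registerAllEntries() {
  for (__tgt_offload_entry *E = __start_cuda_offloading_entries;
       E != __stop_cuda_offloading_entries; ++E) {
    if (E->size == 0)
      registerKernel(E->addr, E->name);   // size 0 marks a function entry
    else
      registerVariable(E->addr, E->name, E->size, E->flags);
  }
}
```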

The registration requires a flag variable to indicate which registration
function to use. I have assigned the flags somewhat arbitrarily, but
these use the following values.

Kernel: 0
Variable: 0
Managed: 1
Surface: 2
Texture: 3

Depends on D120272

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D123471
2022-05-11 07:30:21 -04:00
Aaron Ballman c7ba568f40 Fix test; we now expect a pedantic warning
This fixes:
https://lab.llvm.org/buildbot/#/builders/109/builds/38337
2022-05-11 06:52:21 -04:00
Ken Matsui 786c721c2b Add extension diagnostic for linemarker directives
This adds the -Wgnu-line-marker diagnostic flag, grouped under -Wgnu,
to warn about use of the GNU linemarker preprocessor extension.
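
For example, a translation unit containing a GNU linemarker now gets this pedantic warning (or can opt in directly with -Wgnu-line-marker):

```
// A GNU linemarker directive: resets the presumed file and line for the
// code that follows. This is the extension the new diagnostic flags.
# 42 "generated.cpp"
int answer = 42;
```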

Fixes #55067

Differential Revision: https://reviews.llvm.org/D124534
2022-05-11 06:42:00 -04:00