Currently tagged these as system instructions; once we have uses for them (ASAN?) and they are faster, we will need to improve on this.
llvm-svn: 320173
These are aliases, but the thing we're checking here is that the target has
vpsllv*, not that the data type is 256-bit. Those instructions exist for
128-bit vectors too...but sadly, not for all element sizes.
llvm-svn: 320170
Summary:
llvm-objdump's Mach-O parser was updated in r306037 to display external
relocations for MH_KEXT_BUNDLE file types. This change extends the Mach-O
parser to display local relocations for MH_PRELOAD files. When used with
the -macho option, relocations will be displayed in a historical format.
rdar://35778019
Reviewers: enderby
Reviewed By: enderby
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40867
llvm-svn: 320166
Summary:
If we have code like this:
```
float a, b;
a = std::max(a, b);
```
it is converted into something like this:
```
%call = call dereferenceable(4) float* @_ZSt3maxIfERKT_S2_S2_(float* nonnull dereferenceable(4) %a.addr, float* nonnull dereferenceable(4) %b.addr)
%1 = bitcast float* %call to i32*
%2 = load i32, i32* %1, align 4
%3 = bitcast float* %a.addr to i32*
store i32 %2, i32* %3, align 4
```
After inlining, this code is converted to the following:
```
%1 = load float, float* %a.addr
%2 = load float, float* %b.addr
%cmp.i = fcmp fast olt float %1, %2
%__b.__a.i = select i1 %cmp.i, float* %a.addr, float* %b.addr
%3 = bitcast float* %__b.__a.i to i32*
%4 = load i32, i32* %3, align 4
%5 = bitcast float* %arrayidx to i32*
store i32 %4, i32* %5, align 4
```
This pattern is not recognized as a minmax pattern.
The patch solves this problem by converting the sequence
```
store (bitcast, (load bitcast (select ((cmp V1, V2), &V1, &V2))))
```
to a sequence
```
store (load (select ((cmp V1, V2), &V1, &V2)))
```
After this, the code is recognized as a minmax pattern.
Reviewers: RKSimon, spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40304
llvm-svn: 320157
Summary:
+DumpCode is a hack to embed disassembly in the ELF file. This commit
fixes it to include labels, to make it slightly more useful.
Reviewers: arsenm, kzhuravl
Subscribers: nhaehnle, timcorringham, dstuttard, llvm-commits, t-tye, yaxunl, wdng, kzhuravl
Differential Revision: https://reviews.llvm.org/D40169
llvm-svn: 320146
In this method, we invoke `SimplifyICmpOperands`, which takes the `Cond` predicate
by reference and may change it along with the `LHS` and `RHS` SCEVs. But then we invoke
`computeShiftCompareExitLimit` with the Values from which the SCEVs were derived;
those Values have not been modified, while `Cond` may have been.
One possible outcome of this is that we may falsely prove that an infinite loop terminates
within some finite number of iterations.
In this patch, we save the original `Cond` and pass it along with the original operands.
This logic may be removed in the future once `computeShiftCompareExitLimit` works
with SCEVs instead of value operands.
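A minimal sketch of the fix, assuming roughly this shape for the exit-limit computation (variable names and surrounding code are approximate, not the exact ScalarEvolution source):
```
// Approximate sketch only; not the exact ScalarEvolution code.
ICmpInst::Predicate Cond = ExitCond->getPredicate();
// Remember the predicate before SimplifyICmpOperands may rewrite it: it
// takes Cond, LHS and RHS by reference and can change all three.
const ICmpInst::Predicate OriginalCond = Cond;

const SCEV *LHS = getSCEV(ExitCond->getOperand(0));
const SCEV *RHS = getSCEV(ExitCond->getOperand(1));
SimplifyICmpOperands(Cond, LHS, RHS);

// ... exit-limit computation based on the (possibly simplified) Cond/LHS/RHS ...

// computeShiftCompareExitLimit is still handed the original IR operands, so
// it must also be handed the predicate that matches them.
ExitLimit ShiftEL = computeShiftCompareExitLimit(
    ExitCond->getOperand(0), ExitCond->getOperand(1), L, OriginalCond);
```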
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D40953
llvm-svn: 320142
Summary:
r319898 made it possible to override these variables via the
CROSS_TOOLCHAIN_FLAGS setting, but this only worked if one explicitly
specifies these variables there. If, instead, one uses
CROSS_TOOLCHAIN_FLAGS to specify a toolchain file (as our internal
builds do, to point cmake to a checked-in toolchain), the
CMAKE_C(XX)_COMPILER flags would still win over the ones specified by
the toolchain file.
The fix is to make the mere presence of these flags overridable. I do
this by making them the default value for the CROSS_TOOLCHAIN_FLAGS
setting, so they can be overridden at cmake configuration time.
Reviewers: hintonda, beanz
Subscribers: bogner, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D40947
llvm-svn: 320138
Updated the scheduling information for the Haswell subtarget with the following changes:
- Regrouped the instructions after adding appropriate load + store latencies.
- Added scheduling for missing instructions such as the GATHER instrs.
The changes were made after revisiting the latency impact of all memory uOps.
Reviewers: RKSimon, zvi, craig.topper, apilipenko
Differential Revision: https://reviews.llvm.org/D40021
Change-Id: Iaf6c1f5169add1552845a8a566af4e5a359217a7
llvm-svn: 320137
Previously we only allowed these through if the subvector came from a compare or test instruction, which we would again check for during isel.
With this change we only check for the compare and test instructions during isel and have fallback patterns that emit the shifts if needed.
I noticed that in a lot of cases we don't actually see the compare during lowering and rely on an odd legalization of concat_vectors with a zero vector as the second argument. This keeps the concat_vectors around long enough for a later dag combine to expose the compare; then we re-legalize the concat_vectors and catch the compare.
llvm-svn: 320134
Replace interleaved store instructions with equivalent and more efficient instructions based on a latency cost model.
https://reviews.llvm.org/D38196
llvm-svn: 320123
This reverts commit 959e37e669b0c3cfad4cb9f1f7c9261ce9f5e9ae.
That commit doesn't handle the case where main is declared rather than defined,
in particular the even-more special case where main is a prototypeless
declaration (which is of course the one actually used by musl currently).
llvm-svn: 320121
We previously only supported inserting into the LSB or MSB, where it is easy to zero the destination bit and perform an OR to insert the new value.
This change effectively extracts the old value and the new value, xors them together, and then xors that single bit into the correct location in the original vector. The old value cancels out against the result of the first xor, leaving the new value in that position.
The way I've implemented this uses 3 shifts and two xors, plus an additional register. We can avoid the additional register at the cost of another shift.
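A scalar analogue of the trick in plain C++ (not the actual mask-register lowering) shows why the xors leave the new bit in place:
```
#include <cassert>
#include <cstdint>

// Insert bit NewBit (0 or 1) at position Pos of Vec using shifts and xors:
// (old ^ NewBit) xor'd back into Vec cancels the old bit and leaves NewBit.
uint16_t insertBit(uint16_t Vec, uint16_t NewBit, unsigned Pos) {
  uint16_t Diff = ((Vec >> Pos) ^ NewBit) & 1; // old ^ new
  return Vec ^ (uint16_t)(Diff << Pos);        // old ^ (old ^ new) == new
}

int main() {
  assert(insertBit(0b0101, 1, 1) == 0b0111); // set bit 1
  assert(insertBit(0b0101, 0, 0) == 0b0100); // clear bit 0
}
```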
llvm-svn: 320120
Summary: Make LLVM_ENABLE_DUMP independent of LLVM_ENABLE_ASSERTIONS,
move it to llvm-config.h, and update the description.
Differential Revision: https://reviews.llvm.org/D38406
llvm-svn: 320111
In more recent Linux kernels with 47-bit VMAs, the layout of virtual memory
for powerpc64 changed, causing the address sanitizer to stop working properly. This
patch adds support for 47-bit VMA kernels for powerpc64 and fixes up test
cases.
https://reviews.llvm.org/D40907
There is an associated patch for compiler-rt.
Tested on several 4.x and 3.x kernel releases.
llvm-svn: 320109
Previously, when linking against libcmt from the MSVC runtime,
lld-link /verbose would show "Ignoring unknown symbol record
with kind 0x1006". It turns out this was because
TypeIndexDiscovery did not handle S_REGISTER records, so these
records were not getting properly remapped.
Patch by: Alexandre Ganea
Differential Revision: https://reviews.llvm.org/D40919
llvm-svn: 320108
Summary:
Make the ModRefInfo enum an enum class. Changes to ModRefInfo values should
be done using inline wrappers.
This should prevent future bit-wise operations from being added, which can be more error-prone.
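As a rough sketch of the pattern (the real enumerators and wrapper set live in llvm/include/llvm/Analysis/AliasAnalysis.h and may differ in detail):
```
// Illustrative sketch, not the exact LLVM definition.
enum class ModRefInfo {
  NoModRef = 0,
  Ref = 1,
  Mod = 2,
  ModRef = Ref | Mod,
};

// Inline wrappers replace raw bit-wise operations at call sites.
inline bool isModSet(ModRefInfo MRI) {
  return static_cast<int>(MRI) & static_cast<int>(ModRefInfo::Mod);
}
inline bool isRefSet(ModRefInfo MRI) {
  return static_cast<int>(MRI) & static_cast<int>(ModRefInfo::Ref);
}
inline ModRefInfo unionModRef(ModRefInfo A, ModRefInfo B) {
  return ModRefInfo(static_cast<int>(A) | static_cast<int>(B));
}
```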
Reviewers: sanjoy, dberlin, hfinkel, george.burgess.iv
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40933
llvm-svn: 320107
It is causing sanitizer failures on llvm tests in a bootstrapped compiler. No bot link since it's currently down, but following up to get the bot up.
This reverts commit r319218.
llvm-svn: 320106
The previous offset overflow check was flawed. It never gave an unsafe
answer, but it compared the SCALED potential fixed-up offset
against an UNSCALED minimum/maximum. As a result, the outliner was missing a
bunch of frame setup/destroy instructions that ought to have been safe to
outline. This fixes that, and adds an instruction to the .mir test that
failed under the old check.
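A hypothetical illustration of the units mismatch (not the actual outliner code), assuming an instruction whose immediate is the byte offset divided by Scale and whose encodable immediate range is [MinImm, MaxImm]:
```
#include <cstdint>

// Buggy shape: the fixed-up offset stays in bytes (scaled) but is compared
// against bounds expressed in immediate units (unscaled), so candidates that
// would actually fit are rejected and outlining opportunities are lost.
bool fitsBuggy(int64_t OffsetBytes, int64_t FixupBytes, int64_t Scale,
               int64_t MinImm, int64_t MaxImm) {
  (void)Scale; // never used: that is the bug
  int64_t NewOffsetBytes = OffsetBytes + FixupBytes;
  return NewOffsetBytes >= MinImm && NewOffsetBytes <= MaxImm;
}

// Fixed shape: convert to immediate units before comparing.
bool fitsFixed(int64_t OffsetBytes, int64_t FixupBytes, int64_t Scale,
               int64_t MinImm, int64_t MaxImm) {
  int64_t NewImm = (OffsetBytes + FixupBytes) / Scale;
  return NewImm >= MinImm && NewImm <= MaxImm;
}
```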
llvm-svn: 320090
-amdgpu-waitcnt-forcezero={1|0} Force all waitcnt instrs to be emitted as s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-amdgpu-waitcnt-forceexp=<n> Force emit a s_waitcnt expcnt(0) before the first <n> instrs
-amdgpu-waitcnt-forcelgkm=<n> Force emit a s_waitcnt lgkmcnt(0) before the first <n> instrs
-amdgpu-waitcnt-forcevm=<n> Force emit a s_waitcnt vmcnt(0) before the first <n> instrs
Differential Revision: https://reviews.llvm.org/D40091
llvm-svn: 320084
There's no v2i1 or v4i1 kshift, and v8i1 is only supported with AVX512DQ. Isel has fake patterns to extend these types to native shifts, but makes no guarantees about the value of any bits shifted in when shifting right.
This patch promotes the vector to a type that supports a native shift first and only allows inserting into the msb of a native-sized shift.
I've constructed this in a way that doesn't do the promotion if we're going to fall back to using an xmm/ymm/zmm shuffle. I think I have a plan to remove the shuffle fallback entirely, in which case this can be simplified, but I wanted to fix the correctness issue first.
llvm-svn: 320081
Tagged as IMUL instructions for a reasonable approximation (ALU tends to be a lot faster) - POPCNT is currently tagged as FAdd, which I think should be replaced with IMUL as well.
llvm-svn: 320051
I noticed this pattern in D38316 / D38388. We failed to combine a shuffle that is either
repeating a scalar insertion at the same position in a vector or translating it to a different
element index.
Like the earlier patch, this could be an instcombine too, but since we opted to make this
a DAG transform earlier, I've made this one a DAG patch too.
We do not need any legality checking because the new insert is identical to the existing
insert except that it may have a different constant insertion operand.
The constant insertion test in test/CodeGen/X86/vector-shuffle-combining.ll was the
motivation for D38756.
Differential Revision: https://reviews.llvm.org/D40209
llvm-svn: 320050
WebAssembly requires caller and callee signatures to match, so the usual
C runtime trick of calling main and having it just work regardless of
whether main is defined as '()' or '(int argc, char *argv[])' doesn't
work. Extend the FixFunctionBitcasts pass to rewrite main to use the
latter form.
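Conceptually, the shape that satisfies WebAssembly's signature requirement looks like this in plain C++ (an illustration only; the pass rewrites LLVM IR and the real symbol handling differs):
```
// The zero-argument main the user wrote, renamed here for illustration,
// plus a main with the signature the C runtime actually calls. On
// WebAssembly the runtime's call site and the callee must agree on this
// signature, so the zero-argument form cannot be called directly.
int zero_arg_main() { return 0; }

int main(int argc, char *argv[]) {
  (void)argc;
  (void)argv;
  return zero_arg_main();
}
```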
llvm-svn: 320041
Simply checking for register class equality will break once additional
register classes are added (as is done for the RVC instruction set extension).
llvm-svn: 320036
This patch includes all the missing functionality needed for a first
compilation of a simple program that just returns from a function.
I've added a test case that checks for the "ret" instruction printed in the assembly
output.
Patch by Andrei Grischenko (andrei.l.grischenko@intel.com)
Differential revision: https://reviews.llvm.org/D39688
llvm-svn: 320035
This patch adds support for running the DWARF verifier on the linked
debug info files. If the -verify option is specified and verification
fails, dsymutil aborts with a non-zero exit code. This behavior
is *not* enabled by default.
Differential revision: https://reviews.llvm.org/D40777
llvm-svn: 320033
Summary:
This did not work because the ExpectedHolder was trying to hold the
value in an Optional<T*>. Instead of trying to mimic the behavior of
Expected and try to make ExpectedHolder work with references and
non-references, I simply store the reference to the Expected object in
the holder.
I also add a bunch of tests for these matchers, which helped me
flush out some problems in my initial implementation of this patch, and
uncovered the fact that we are not consistent in quoting our values in
the matcher output (which I also fix).
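For example, with the matchers from llvm/Testing/Support/Error.h a test can now be written along these lines (a sketch; parseNumber is a hypothetical function used only to exercise the matchers):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
#include "llvm/Testing/Support/Error.h"
#include "gtest/gtest.h"

// Hypothetical function returning Expected<int>, for illustration only.
llvm::Expected<int> parseNumber(llvm::StringRef S) {
  int Value;
  if (S.getAsInteger(/*Radix=*/10, Value))
    return llvm::make_error<llvm::StringError>("not a number",
                                               llvm::inconvertibleErrorCode());
  return Value;
}

TEST(ExpectedMatchers, Basic) {
  // HasValue wraps an inner matcher (here, equality with 42).
  EXPECT_THAT_EXPECTED(parseNumber("42"), llvm::HasValue(42));
  // Failed() matches an Expected in the error state.
  EXPECT_THAT_EXPECTED(parseNumber("bogus"), llvm::Failed());
}
```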
Reviewers: zturner, chandlerc
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D40904
llvm-svn: 320025
As the FPR32 and FPR64 registers have the same names, use
validateTargetOperandClass in RISCVAsmParser to coerce a parsed FPR32 to an
FPR64 when necessary. The rest of this patch is very similar to the RV32F
patch.
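A rough sketch of how such a coercion hook can look (simplified; convertFPR32ToFPR64 and replaceRegister are hypothetical helpers, and the real RISCVAsmParser code differs):
```
// Simplified sketch only, not the actual implementation.
unsigned RISCVAsmParser::validateTargetOperandClass(MCParsedAsmOperand &AsmOp,
                                                    unsigned Kind) {
  RISCVOperand &Op = static_cast<RISCVOperand &>(AsmOp);
  if (Kind == MCK_FPR64 && Op.isReg()) {
    // f0-f31 parse as FPR32; switch to the FPR64 register of the same name.
    if (unsigned FPR64Reg = convertFPR32ToFPR64(Op.getReg())) {
      replaceRegister(Op, FPR64Reg);
      return Match_Success;
    }
  }
  return Match_InvalidOperand;
}
```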
Differential Revision: https://reviews.llvm.org/D39895
llvm-svn: 320023
The most interesting part of this patch is probably the handling of
rounding mode arguments. Sadly, the RISC-V assembler handles floating point
rounding modes as a special "argument" when it would be more consistent to
handle them as opcode suffixes, as is done for the atomics. This patch supports parsing
this optional parameter, using InstAlias to allow parsing these floating point
instructions when no rounding mode is specified.
Differential Revision: https://reviews.llvm.org/D39893
llvm-svn: 320020
A number of architectures re-use the same register names (e.g. for both 32-bit
FPRs and 64-bit FPRs). They are currently unable to use the tablegen'erated
MatchRegisterName and MatchRegisterAltName, as tablegen (when built with
asserts enabled) will fail.
When AllowDuplicateRegisterNames is set on the AsmParser, duplicated register
names will be tolerated. A backend can then coerce registers to the desired
register class by (for instance) implementing validateTargetOperandClass.
At least the in-tree Sparc backend could benefit from this, as does RISC-V
(single and double precision floating point registers).
Differential Revision: https://reviews.llvm.org/D39845
llvm-svn: 320018
NFC.
Adding MC regression tests to cover the FMA and FMA4 ISA sets.
This patch is part of a larger task to cover MC encoding of all X86 ISA sets, started in revision https://reviews.llvm.org/D39952.
Reviewers: craig.topper, RKSimon, zvi
Differential Revision: https://reviews.llvm.org/D40880
Change-Id: Ie39c0edce69ad647076b3d4e816948b2b6e1a9e4
llvm-svn: 320016
NFC.
Currently, not all of the X86 ISA sets are covered by the MC regression tests for X86.
Full coverage needs to be added for each ISA set, for both 32-bit and 64-bit instructions and registers.
This patch includes MC assembly tests for X87 in 32-bit and 64-bit mode.
Reviewers: craigt, RKSimon, zvi
Differential Revision: https://reviews.llvm.org/D39952
Change-Id: I55e1719c09a70644a6a4073c720cb5341c80fee9
llvm-svn: 320015
Summary:
Changed use_instructions() to use_nodbg_instructions() when
building an instruction set.
We don't want the presence of debug info to affect the code
we generate.
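A sketch of the idea in terms of the MachineRegisterInfo use iterators, which expose this use_instructions / use_nodbg_instructions pair (the component that was actually changed may differ):
```
// Sketch only: collect the instructions that use Reg, skipping DBG_VALUE
// users so that debug and non-debug builds gather the same set.
llvm::SmallPtrSet<llvm::MachineInstr *, 16> InstrSet;
for (llvm::MachineInstr &UseMI : MRI.use_nodbg_instructions(Reg))
  InstrSet.insert(&UseMI);
```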
Reviewers: dblaikie, Eugene.Zelenko, chandlerc, aprantl
Reviewed By: aprantl
Subscribers: aprantl, llvm-commits
Differential Revision: https://reviews.llvm.org/D40882
llvm-svn: 320010
Currently, when creating a named section, the Wasm
frontend forces it to use `SectionKind::Data`, whereas
in fact C++ does generate code sections with custom
names.
Patch by Nicholas Wilson
Differential Revision: https://reviews.llvm.org/D40906
llvm-svn: 320002
As we emit a different line-table format on different operating
systems, this currently fails on Linux. Speculative commit
to fix the bots.
llvm-svn: 319997
dsymutil doesn't yet understand the new format, and the change,
among other things, breaks a large fraction of the debugger tests on
macOS.
rdar://problem/35856354
llvm-svn: 319995