Commit Graph

70967 Commits

Author SHA1 Message Date
Sanjay Patel a954b8a363 [ValueTracking] fix CannotBeNegativeZero() to disregard 'nsz' FMF
The 'nsz' flag is different from 'nnan' or 'ninf' in that it does not create poison.
Make that explicit in the LangRef and fix the ValueTracking analysis that
misinterpreted the definition.

This manifests as bugs in InstSimplify shown in the test diffs and as discussed in
PR45778:
https://bugs.llvm.org/show_bug.cgi?id=45778

Differential Revision: https://reviews.llvm.org/D79422
2020-05-05 16:04:59 -04:00
Christudasan Devadasan b8a616ec59 [AMDGPU] Fixed the test by adding the triple. 2020-05-06 00:14:01 +05:30
Momchil Velikov fb18dffaeb Revert "[ARM] CMSE code generation"
This reverts commit 7cbbf89d23.

The regression tests fail with the expensive checks.
2020-05-05 19:05:40 +01:00
Christudasan Devadasan 375cec4b6c [AMDGPU] Introduce more scratch registers in the ABI.
The AMDGPU target has a convention that defines all VGPRs
(except the initial 32 argument registers) as callee-saved.
This convention is not always efficient, especially when the callee
requires more registers and ends up emitting a large number of
spills, even though its caller requires only a few.

This patch revises the ABI by introducing more scratch registers
that a callee can freely use.
The 256 VGPRs now become:
  32 argument registers,
  112 scratch registers, and
  112 callee-saved registers.
The scratch registers and the CSRs are intermixed at regular
intervals (a split boundary of 8) to obtain better occupancy.

Reviewers: arsenm, t-tye, rampitec, b-sumner, mjbedy, tpr

Reviewed By: arsenm, t-tye

Differential Revision: https://reviews.llvm.org/D76356
2020-05-05 23:02:58 +05:30
Momchil Velikov 7cbbf89d23 [ARM] CMSE code generation
This patch implements the final bits of CMSE code generation:

* emit special linker symbols

* restrict parameter passing to not use memory

* emit BXNS and BLXNS instructions for returns from non-secure entry
  functions, and non-secure function calls, respectively

* emit code to save/restore secure floating-point state around calls
  to non-secure functions

* emit code to save/restore non-secure floating-point state upon
  entry to a non-secure entry function, and return to non-secure state

* emit code to clobber registers not used for arguments and returns
  when switching to non-secure state

Patch by Momchil Velikov, Bradley Smith, Javed Absar, David Green,
possibly others.

Differential Revision: https://reviews.llvm.org/D76518
2020-05-05 18:23:28 +01:00
Stanislav Mekhanoshin 9ef166e657 [AMDGPU] Fix FoldImmediate for 16 bit operand
Differential Revision: https://reviews.llvm.org/D79362
2020-05-05 10:19:14 -07:00
Sanjay Patel 86dfbc676e [SLP] add another bailout for load-combine patterns
This builds on the or-reduction bailout that was added with D67841.
We still do not have IR-level load combining, although that could
be a target-specific enhancement for -vector-combiner.

The heuristic is narrowly defined to catch the motivating case from
PR39538:
https://bugs.llvm.org/show_bug.cgi?id=39538
...while preserving existing functionality.

That is, there's a test of pure load/zext/store at
llvm/test/Transforms/SLPVectorizer/X86/cast.ll that is unmodified by this patch.
That's the reason the logic differs by requiring the 'or'
instructions. The chances that vectorization would actually help a
memory-bound sequence like that seem small, but it looks nicer with:

  vpmovzxwd	(%rsi), %xmm0
  vmovdqu	%xmm0, (%rdi)

rather than:

  movzwl	(%rsi), %eax
  movl	%eax, (%rdi)
  ...

In the motivating test, we avoid creating a vector mess that is
unrecoverable in the backend, and SDAG forms the expected bswap
instructions after load combining:

  movzbl (%rdi), %eax
  vmovd %eax, %xmm0
  movzbl 1(%rdi), %eax
  vmovd %eax, %xmm1
  movzbl 2(%rdi), %eax
  vpinsrb $4, 4(%rdi), %xmm0, %xmm0
  vpinsrb $8, 8(%rdi), %xmm0, %xmm0
  vpinsrb $12, 12(%rdi), %xmm0, %xmm0
  vmovd %eax, %xmm2
  movzbl 3(%rdi), %eax
  vpinsrb $1, 5(%rdi), %xmm1, %xmm1
  vpinsrb $2, 9(%rdi), %xmm1, %xmm1
  vpinsrb $3, 13(%rdi), %xmm1, %xmm1
  vpslld $24, %xmm0, %xmm0
  vpmovzxbd %xmm1, %xmm1 # xmm1 = xmm1[0],zero,zero,zero,xmm1[1],zero,zero,zero,xmm1[2],zero,zero,zero,xmm1[3],zero,zero,zero
  vpslld $16, %xmm1, %xmm1
  vpor %xmm0, %xmm1, %xmm0
  vpinsrb $1, 6(%rdi), %xmm2, %xmm1
  vmovd %eax, %xmm2
  vpinsrb $2, 10(%rdi), %xmm1, %xmm1
  vpinsrb $3, 14(%rdi), %xmm1, %xmm1
  vpinsrb $1, 7(%rdi), %xmm2, %xmm2
  vpinsrb $2, 11(%rdi), %xmm2, %xmm2
  vpmovzxbd %xmm1, %xmm1 # xmm1 = xmm1[0],zero,zero,zero,xmm1[1],zero,zero,zero,xmm1[2],zero,zero,zero,xmm1[3],zero,zero,zero
  vpinsrb $3, 15(%rdi), %xmm2, %xmm2
  vpslld $8, %xmm1, %xmm1
  vpmovzxbd %xmm2, %xmm2 # xmm2 = xmm2[0],zero,zero,zero,xmm2[1],zero,zero,zero,xmm2[2],zero,zero,zero,xmm2[3],zero,zero,zero
  vpor %xmm2, %xmm1, %xmm1
  vpor %xmm1, %xmm0, %xmm0
  vmovdqu %xmm0, (%rsi)

  movl	(%rdi), %eax
  movl	4(%rdi), %ecx
  movl	8(%rdi), %edx
  movbel	%eax, (%rsi)
  movbel	%ecx, 4(%rsi)
  movl	12(%rdi), %ecx
  movbel	%edx, 8(%rsi)
  movbel	%ecx, 12(%rsi)

Differential Revision: https://reviews.llvm.org/D78997
2020-05-05 12:44:38 -04:00
Jinsong Ji 80b78a47e5 [MachinePipeliner] Add ORE for MachinePipeliner
This patch adds ORE for MachinePipeliner, so that people can analyze
their code using opt-viewer or other tools, then optimize the code to
catch more pipelining opportunities.

Reviewed By: bcahoon

Differential Revision: https://reviews.llvm.org/D79368
2020-05-05 16:04:53 +00:00
Pengxuan Zheng 85aff8a4e4 [RISCV] Update debug scratch register names
Summary:
The RISC-V debug scratch register was named dscratch in a previous draft of the
RISC-V debug mode spec. The latest ratified version of the spec increases the
number of registers to two and names them dscratch0 and dscratch1. We still
support the old register name "dscratch", but with this change it is
disassembled as "dscratch0".

Reviewers: apazos, asb, lenary, luismarques

Reviewed By: asb

Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, sameer.abuasal, evandro, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78764
2020-05-05 08:46:07 -07:00
Fangrui Song 32b19334da [llvm-objcopy][ELF] Allow --dump-section to dump an empty non-SHT_NOBITS section
This is the ELF part of D75949.

GNU objcopy from binutils 2.35 onwards will support an empty non-SHT_NOBITS section as
well
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=e052e2ba295a65b6ea80cbc3f90495beca299c42

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D79339
2020-05-05 08:26:34 -07:00
Fangrui Song a11e90a6b9 [llvm-objcopy][test] ELF/dump-section.test: change #CHECK to # CHECK
The latter is more common and thus the preferred way to use CHECK lines.
2020-05-05 08:26:34 -07:00
Jay Foad 22829ab5fa [InstCombine] Allow denormal C in pow(C,y) -> exp2(log2(C)*y)
We check that C is finite and strictly positive, but there's no need to
check that it's normal too. exp2 should be just as accurate on denormals
as pow is.
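
As a rough numeric illustration of the identity involved (a hedged C++ sketch,
not the InstCombine code; the constants are arbitrary):

```
#include <cmath>
#include <cstdio>

// For a finite, strictly positive constant C -- including denormals such as
// 5e-324 -- pow(C, y) equals exp2(log2(C) * y), since C^y == 2^(log2(C) * y).
int main() {
  const double C = 5e-324;   // smallest positive (denormal) double
  const double y = 0.5;
  std::printf("pow form : %g\n", std::pow(C, y));
  std::printf("exp2 form: %g\n", std::exp2(std::log2(C) * y));
  return 0;
}
```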

Differential Revision: https://reviews.llvm.org/D79413
2020-05-05 16:25:48 +01:00
Jay Foad 47f5066553 Precommit new test cases for D79413 [InstCombine] Allow denormal C in pow(C,y) -> exp2(log2(C)*y) 2020-05-05 16:08:09 +01:00
David Green 146d44c251 [LSR] Don't require register reuse under postinc
LSR has some logic that tries to aggressively reuse registers in
formulae. This can lead to sub-optimal decisions in complex loops where
the backend is trying to use shouldFavorPostInc. This disables the
re-use in those situations.

Differential Revision: https://reviews.llvm.org/D79301
2020-05-05 16:04:50 +01:00
Jay Foad 3d76824b7f [AMDGPU] Better support for VMEM soft clauses in GCNHazardRecognizer
VMEM soft clauses only contain VMEM and FLAT instructions. Teaching
GCNHazardRecognizer::checkSoftClauseHazards that other kinds of
instructions will naturally break the clause means there are far fewer
cases where it has to insert an s_nop instruction to forcibly break the
clause.

Differential Revision: https://reviews.llvm.org/D79353
2020-05-05 15:49:09 +01:00
Sam Parker f35ccfa2af [NFC] Update tests
Run the update script on a couple of tests.
2020-05-05 15:28:40 +01:00
Jay Foad fa2783d79a [InstCombine] Remove hasOneUse check for pow(C,x) -> exp2(log2(C)*x)
I don't think there's any good reason not to do this transformation when
the pow has multiple uses.

Differential Revision: https://reviews.llvm.org/D79407
2020-05-05 14:46:08 +01:00
Sebastian Neubauer 1de4e56933 [AMDGPU] Don't mark the .note section as ALLOC
Marking a section as ALLOC tells the ELF loader to load the section into memory.
As we do not want to load the notes into VRAM, the flag should not be there.

On AMDHSA, .note is still marked as ALLOC; apparently this is currently
needed for OpenCL (see https://reviews.llvm.org/D74995).

Differential Revision: https://reviews.llvm.org/D76278
2020-05-05 14:21:45 +02:00
Sander de Smalen 8cb5663abd [AArch64][SVE] Guard bitcast patterns under IsLE predicate
Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79352
2020-05-05 13:18:35 +01:00
David Green f85acb1915 [ARM] Correct the type on a predicate cast
A PREDICATE_CAST(PREDICATE_CAST(X)) can be converted to a
PREDICATE_CAST(X) as the operation can convert between any forms of
predicates (v4i1/v8i1/v16i1/i32). Unfortunately I got the type wrong on
one of the rarer converts, which would lead to invalid nodes during
isel. This fixes it up to use the correct type.

Differential Revision: https://reviews.llvm.org/D79402
2020-05-05 13:15:10 +01:00
Simon Pilgrim 5c91aa6603 [InstCombine] Fold or(zext(bswap(x)),shl(zext(bswap(y)),bw/2)) -> bswap(or(zext(x),shl(zext(y), bw/2))
This adds a general combine that can be used to fold:

  or(zext(OP(x)), shl(zext(OP(y)),bw/2))
-->
  OP(or(zext(x), shl(zext(y),bw/2)))

This allows us to widen 'concat-able' style or+zext patterns - I've just set this up for BSWAP, but we could use this for other similar ops (BITREVERSE, for instance).

We already do something similar for bitop(bswap(x),bswap(y)) --> bswap(bitop(x,y))

Fixes PR45715

Reviewed By: @lebedev.ri

Differential Revision: https://reviews.llvm.org/D79041
2020-05-05 12:30:10 +01:00
Simon Pilgrim e53d4869a0 [X86][AVX] combineVectorSignBitsTruncation - avoid complex vXi64->vXi32 PACKSS truncations (PR45794)
Unless we're truncating an 'all-bits' result, using PACKSS for vXi64->vXi32 truncation causes problems with later combines as ComputeNumSignBits struggles to see through BITCASTs to smaller types. If we don't use PACKSS in these cases then we fallback to shuffles which are usually just as good.
2020-05-05 11:57:25 +01:00
Simon Pilgrim 371a69ac9a [X86][AVX] Add PR45794 sitofp v4i64-v4f32 test case 2020-05-05 11:32:48 +01:00
Andrea Di Biagio 5bb5fa3c0a Forgot to add a -mtriple to a test. NFC
This should unbreak the clang-ppc64be-linux buildbot.
2020-05-05 10:48:00 +01:00
Andrea Di Biagio 5578ec32f9 [MCA] Fixed a bug where loads and stores were sometimes incorrectly marked as dependent. Fixes PR45793.
This fixes a regression introduced by a very old commit 280ac1fd1d (was
llvm-svn 361950).

Commit 280ac1fd1d redesigned the logic in the LSUnit with the goal of
speeding up isReady() queries, and stabilising the LSUnit API (while also making
the load store unit more customisable).

The concept of MemoryGroup (effectively an alias set) was added by that commit
to better describe and track dependencies between memory operations.  However,
that concept was not just used for alias dependencies, but it was also used for
describing memory "order" dependencies (enforced by the memory consistency
model).

Instructions in the same memory group were considered "equivalent", i.e.
independent operations that can potentially execute in parallel. The problem
was that the cost of a dependency (in terms of number of cycles) should have
been different for an "order" dependency. Instructions in an order dependency
simply have to wait until their predecessors are "issued" to an underlying
pipeline (rather than having to wait until their predecessors have been fully
executed). For simple "order" dependencies, this was effectively introducing
an artificial delay on the "issue" of independent loads and stores.

This patch fixes the issue and adds a new test named 'independent-load-stores.s'
to a bunch of x86 targets. That test contains the reproducer posted by Fabian
Ritter on PR45793.

I had to rerun the update-mca-tests script on several files. To avoid expected
regressions on some Exynos tests, I have added a -noalias=false flag (to match
the old strict behavior on latencies).

Some tests for processor Barcelona are improved/fixed by this change and they
now show better results.  In a few tests we were incorrectly counting the time
spent by instructions in a scheduler queue.  In one case in particular we now
correctly see a store executed out of order.  That test was affected by the same
underlying issue reported as PR45793.

Reviewers: mattd

Differential Revision: https://reviews.llvm.org/D79351
2020-05-05 10:25:36 +01:00
Pratyai Mazumder 08032e7192 [SanitizerCoverage] Replace the unconditional store with a load, then a conditional store.
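
Roughly, the guard update changes as in this hedged C++ sketch (illustrative
only; the names are hypothetical, not SanitizerCoverage's actual
instrumentation):

```
#include <cstdint>

static uint8_t guard;  // hypothetical per-edge coverage flag

void hit_edge_old() {
  guard = 1;           // before: unconditional store on every hit
}

void hit_edge_new() {
  if (!guard)          // after: load first, store only if not yet set
    guard = 1;
}

int main() {
  hit_edge_old();
  hit_edge_new();
  return guard;
}
```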
Reviewers: vitalybuka, kcc

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79392
2020-05-05 02:25:05 -07:00
Heejin Ahn 834debfffd [WebAssembly] Fix block marker placing after fixUnwindMismatches
Summary:
This fixes a few things that are connected. It is very hard to provide
an independent test case for each of those fixes, because they are
interconnected and sometimes one masks another. The provided test case
triggers some of those bugs below but not all.

---

1. Background:
`placeBlockMarker` takes a BB, and if the BB is a destination of some
branch, it places `end_block` marker there, and computes the nearest
common dominator of all predecessors (what we call 'header') and places
a `block` marker there.

When we first place markers, we traverse BBs from top to bottom. For
example, when there are 5 BBs A, B, C, D, and E and B, D, and E are
branch destinations, if mark the BB given to `placeBlockMarker` with `*`
and draw a rectangle representing the border of `block` and `end_block`
markers, the process is going to look like
```
                       -------
           -----       |-----|
 ---       |---|       ||---||
 |A|       ||A||       |||A|||
 ---  -->  |---|  -->  ||---||
 *B        | B |       || B ||
  C        | C |       || C ||
  D        -----       |-----|
  E         *D         |  D  |
             E         -------
                         *E
```
which means when we first place markers, we go from inner to outer
scopes. So when we place a `block` marker, if the header already
contains other `block` or `try` marker, it has to belong to an inner
scope, so the existing `block`/`try` markers should go _after_ the new
marker. This was the assumption we had.

But after placing all markers we run `fixUnwindMismatches` function.
There we do some control flow transformation and create some branches,
and we call `placeBlockMarker` again to place `block`/`end_block`
markers for those newly created branches. We can't assume that we are
traversing branch destination BBs from top to bottom now because we are
basically inserting some new markers in the middle of existing markers.

Fix:
In `placeBlockMarker`, we no longer assume that the BBs are given in
top-to-bottom order, and when placing `block` markers, we calculate
whether existing `block` or `try` markers belong to inner or outer
scopes with respect to the current scope.

---

2. Background:
In `fixUnwindMismatches`, when there is a call whose correct unwind
destination mismatches the current destination after initially placing
`try` markers, we wrap that with a new nested `try`/`catch`/`end` and
jump to the correct handler within the new `catch`. The correct handler
code is split as a separate BB from its original EH pad so it can be
branched to. Here's an example:

- Before
```
mbb:
  call @foo       <- Unwind destination mismatch!
wrong-ehpad:
  catch
  ...
cont:
  end_try
  ...
correct-ehpad:
  catch
  [handler code]
```

- After
```
mbb:
  try                (new)
  call @foo
nested-ehpad:        (new)
  catch              (new)
  local.set n / drop (new)
  br %handleri       (new)
nested-end:          (new)
  end_try            (new)
wrong-ehpad:
  catch
  ...
cont:
  end_try
  ...
correct-ehpad:
  catch
  local.set n / drop (new)
handler:             (new)
  end_try
  [handler code]
```

Note that after this transformation, it is possible that no calls
actually unwind to `correct-ehpad` here. `call @foo` now
branches to `handler`, and there may be no other calls that unwind to
`correct-ehpad`. In this case `correct-ehpad` no longer has any
predecessors.

This can cause a bug in `placeBlockMarker`, because we may need to place
an `end_block` marker in `handler`, and `placeBlockMarker` computes the
nearest common dominator of all predecessors. If one of `handler`'s
predecessors (here `correct-ehpad`) does not have any predecessors, i.e.,
there is no way of reaching it, we cannot correctly compute the common
dominator of `handler`'s predecessors, and end up placing no `block`/`end`
markers. This bug actually sometimes masks bug 1.

Fix:
When we have an EH pad that does not have any predecessors after this
transformation, we delete all its successors, so that its successors don't
have any dangling predecessors.

---

3. Background:
Actually the `handler` BB in the example shown in bug 2 doesn't need an
`end_block` marker, despite being a new branch destination, because it
already has an `end_try` marker which can serve the same purpose. I just
put that example there for illustration purposes. There is a case where we
actually need to place an `end_block` marker: when the branch destination
is the appendix BB. The appendix BB is created when a call that is
supposed to unwind to the caller ends up unwinding to a wrong EH pad. In
this case we also wrap the call with a nested `try`/`catch`/`end`,
create an 'appendix' BB at the very end of the function, and branch to
that BB, where we rethrow the exception to the caller.

Fix:
When we don't actually need to place block markers, we don't.

---

4. In case we fall through to the continuation BB after the catch block,
after extracting handler code in `fixUnwindMismatches` (refer to bug 2
for an example), we now have to add a branch to it to bypass the
handler.
- Before
```
try
  ...
  (falls through to 'cont')
catch
  handler body
end
              <-- cont
```

- After
```
try
  ...
  br %cont    (new)
catch
end
handler body
              <-- cont
```

The problem is, we haven't been placing a new `end_block` marker in the
`cont` BB in this case. We should, and this fixes it. But it is hard to
provide a test case that triggers this bug, because the current
compilation pipeline from .ll to .s does not generate this kind of code;
we always have a `br` after `invoke`. But code without `br` is still
valid, and we can have that kind of code if we have some pipeline
changes or optimizations later. Even mir test cases cannot trigger this
part for now, because we don't encode auxiliary EH-related data
structures (such as `WasmEHFuncInfo`) in mir now. Those functionalities
can be added later, but I don't think we should block this fix on that.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79324
2020-05-05 02:06:47 -07:00
Pierre-vh d5eb7ffa33 [Target][ARM] Fold or(A, B) more aggressively for I1 vectors
This patch makes the folding of or(A, B) into not(and(not(A), not(B)))
more aggressive for i1 vectors. This only affects Thumb2 MVE and improves
codegen, because it removes a lot of msr/mrs instructions on VPR.P0.
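
The rewrite relies on the usual De Morgan identity; a minimal sanity check
(plain C++ sketch, not the DAG combine itself):

```
#include <cstdio>

// Truth-table check of the identity used above:
// or(A, B) == not(and(not(A), not(B))).
int main() {
  for (int A = 0; A <= 1; ++A)
    for (int B = 0; B <= 1; ++B)
      std::printf("A=%d B=%d  or=%d  demorgan=%d\n", A, B, A | B, !(!A & !B));
  return 0;
}
```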

This patch also adds a xor(vcmp) -> !vcmp fold for MVE.

Differential Revision: https://reviews.llvm.org/D77202
2020-05-05 10:03:02 +01:00
Pierre-vh ffdda495f7 [Target][ARM] Add PerformVSELECTCombine for MVE Integer Ops
This patch adds an implementation of PerformVSELECTCombine in the
ARM DAG Combiner that transforms vselect(not(cond), lhs, rhs) into
vselect(cond, rhs, lhs).

Normally, this should be done by the target-independent DAG Combiner,
but it doesn't handle the kind of constants that we generate, so we
have to reimplement it here.

Differential Revision: https://reviews.llvm.org/D77712
2020-05-05 10:03:02 +01:00
David Green 09767af848 [ARM] MVE predcast with const test. NFC 2020-05-05 09:53:42 +01:00
Sergey Dmitriev f637334df9 [CallGraphUpdater] Removed references to callees when deleting function
Summary: Otherwise we can get unaccounted references to call graph nodes.

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79382
2020-05-04 18:59:47 -07:00
Zakk Chen ad5fad0ac5 [LTO] Suppress emission of empty combined module by default
Summary:
Unless the user requested an output object (--lto-obj-path), the unused
empty combined module is no longer emitted.

This change is helpful for some targets (e.g. RISC-V) which encode the
ABI info in IR module flags (target-abi). The empty unused module has no
ABI info, so the linker would report a linking error about merging
incompatible ABIs.

Reviewers: tejohnson, espindola, MaskRay

Subscribers: emaste, inglorion, arichardson, hiraditya, simoncook, MaskRay, steven_wu, dexonsmith, PkmX, dang, lenary, s.egerton, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78988
2020-05-04 18:31:09 -07:00
Krzysztof Parzyszek 156092bbcc [RegisterCoalescer] Extend a subrange if needed when filling range gap
Register live ranges may have had gaps that after coalescing should be
removed. This is done by adding a new segment to the range, and merging
it with neighboring segments. When doing so, do not assume that each
subrange of the register ended at the same index. If a subrange ended
earlier, adding this segment could make the live range invalid.
Instead, if the subrange is not live at the start of the segment,
extend it first.
2020-05-04 16:49:59 -05:00
Sanjay Patel 58c1770b8f [x86] add test for shift+op+concat; NFC
D79360 could change this kind of sequence.
2020-05-04 17:31:45 -04:00
Vedant Kumar 8dfe819bcd [Verifier] Constrain where DILocations may be nested
Summary:
Constrain which metadata nodes are allowed to be, or contain,
DILocations. This ensures that logic for updating DILocations in a
Module is complete.

Currently, !llvm.loop metadata is the only odd duck which contains
nested DILocations. This has caused problems in the past: some passes
forgot to visit the nested locations, leading to subtly broken debug
info and late verification failures.

If there's a compelling reason for some future metadata to nest
DILocations, we'll need to introduce a generic API for updating the
locations attached to an Instruction before relaxing this check.

Reviewers: aprantl, dsanders

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79245
2020-05-04 14:02:43 -07:00
David Green 8430141578 [ARM] Complex LSR test showing inefficient codegen. NFC 2020-05-04 21:50:10 +01:00
Eli Friedman 1eb160fe8d [ARM] Fix tail call validity checking for varargs calls.
If a varargs function is calling a non-varargs function, or vice versa,
make sure we use the correct "varargs" bit for each.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45234

Differential Revision: https://reviews.llvm.org/D79199
2020-05-04 12:34:14 -07:00
Sanjay Patel f1d083ab45 [x86] add tests for concat of casts; NFC 2020-05-04 15:22:31 -04:00
Snehasish Kumar c8ac29ab1d Descriptive symbol names for machine basic block sections.
Today, symbol names generated for machine basic block sections use a
unary encoding to reduce bloat. This is essential when every basic block
in the binary is assigned a symbol. However, with basic block clusters
(rG05192e585ce175b55f2a26b83b4ed7882785c8e6), where we only need to
generate a few non-temporary symbols, we can assign more descriptive
names, making them more user friendly. With this change:

* The cold cluster section for function foo is named "foo.cold"
* The exception cluster section for function foo is named "foo.eh"
* Other cluster sections, identified by their IDs, are named "foo.ID"

Using this format works well with existing tools. It will demangle as
expected and works with existing symbolizers, profilers and debuggers
out of the box.

$ c++filt _Z3foov.cold
foo() [clone .cold]

$ c++filt _Z3foov.eh
foo() [clone .eh]

$ c++filt _Z3foov.1234
foo() [clone 1234]

Tests for basicblock-sections are updated with some cleanup where
appropriate.

Differential Revision: https://reviews.llvm.org/D79221
2020-05-04 19:06:43 +00:00
Fangrui Song ac9e8b3a7e [llvm-objdump][ARM] Print inline relocations when dumping ARM data
Fixes PR44357

For ARM ELF, regions covered by data mapping symbols `$d` are dumped as `.byte`, `.short` or `.word` but inline relocations are not printed. This patch merges its loop into the normal instruction printing loop so that inline relocations are printed.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D79284
2020-05-04 11:51:39 -07:00
Sean Fertile 47f9e71ac7 [PowerPC][AIX][NFC] Remove spills and reloads from arg passing test. 2020-05-04 14:26:33 -04:00
Alexandre Ganea 721ea5b380 [DebugInfo][CodeView] Include namespace into emitted globals
Before this patch, global variables didn't have their namespace prepended in the CodeView debug symbol stream. This prevented Visual Studio from displaying them in the debugger (they appeared as 'unspecified error').

Differential Revision: https://reviews.llvm.org/D79028
2020-05-04 13:59:36 -04:00
Stanislav Mekhanoshin c85eda74b8 [AMDGPU] fix copies between 32 and 16 bit
This is a hack to fix illegal 32- to 16-bit copies.
The problem is that when we make 16-bit subregs legal, it creates
a huge number of failures which can only be resolved all at once
without a temporary hack like this.

The next step is to change operands, instruction definitions
and patterns until this hack is not needed.

Differential Revision: https://reviews.llvm.org/D79119
2020-05-04 08:54:22 -07:00
Simon Pilgrim 940061438e [InstCombine] Fold (mul(abs(x),abs(x))) -> (mul(x,x)) (PR39476)
This patch adds support for discarding integer absolutes (abs + nabs variants) from self-multiplications.

ABS Alive2: http://volta.cs.utah.edu:8080/z/rwcc8W
NABS Alive2: http://volta.cs.utah.edu:8080/z/jZXUwQ
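
A quick numeric sanity check of the identity (a plain C++ sketch, not the
InstCombine code):

```
#include <cstdio>
#include <cstdlib>

// |x| * |x| == x * x (and likewise for nabs, since (-|x|) * (-|x|) == x * x).
int main() {
  const int xs[] = {-7, 0, 42};
  for (int x : xs) {
    int a = std::abs(x);
    std::printf("x=%3d  abs*abs=%4d  x*x=%4d\n", x, a * a, x * x);
  }
  return 0;
}
```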

This is an InstCombine version of D79304 - I'm not sure yet if we'll need that after this.

Reviewed By: @lebedev.ri and @xbolva00

Differential Revision: https://reviews.llvm.org/D79319
2020-05-04 15:21:52 +01:00
Alex Richardson d1ff003fbb [SelectionDAGBuilder] Stop setting alignment to one for hidden sret values
We allocated a suitably aligned frame index so we know that all the values
have ABI alignment.
For MIPS this avoids using a pair of lwl + lwr instructions instead of a
single lw. I found this when compiling CHERI pure-capability code, where
we can't use the lwl/lwr unaligned loads/stores and were falling back to
a byte load + shift + or sequence.

This should save a few instructions for MIPS and possibly other backends
that don't have fast unaligned loads/stores.
It also improves code generation for CodeGen/X86/pr34653.ll and
CodeGen/WebAssembly/offset.ll since they can now use aligned loads.
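
For reference, a hedged C++ illustration of what a hidden sret call looks like
at the source level (not code from this patch):

```
#include <cstdint>

// Big is too large to be returned in registers, so the caller allocates an
// aligned stack slot and passes its address as an implicit first argument
// (the "hidden sret" pointer); accesses through that pointer can use the
// struct's ABI alignment rather than align 1.
struct Big { int64_t a, b, c, d; };

Big make_big() { return Big{1, 2, 3, 4}; }  // lowered as void make_big(Big *sret)

int main() {
  Big tmp = make_big();   // tmp: suitably aligned frame slot used as sret
  return static_cast<int>(tmp.a + tmp.d);
}
```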

Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D78999
2020-05-04 14:44:39 +01:00
Alex Richardson 3fc738846e [MIPS] Add a baseline test showing current inefficient hidden sret lowering
SelectionDAGBuilder currently doesn't propagate the known alignment of
the sret parameter. This is inefficient for MIPS and highly inefficient for
our out-of-tree CHERI-extended MIPS, since we don't have lwl/lwr and so fall
back to byte loads for align == 1.
2020-05-04 14:44:39 +01:00
alex-t 5b898bddff [AMDGPU] Enable carry out ADD/SUB operations divergence driven instruction selection.
Summary: This change enables all kinds of carry-out ISD opcodes to be selected according to node divergence.

Reviewers: rampitec, arsenm, vpykhtin

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78091
2020-05-04 16:42:25 +03:00
Raul Tambre 0863e94ebd [AArch64] Add NVIDIA Carmel support
Summary:
NVIDIA's Carmel ARM64 cores are used in Tegra194 chips found in Jetson AGX Xavier, DRIVE AGX Xavier and DRIVE AGX Pegasus.

References:
* https://devblogs.nvidia.com/nvidia-jetson-agx-xavier-32-teraops-ai-robotics/#h.huq9xtg75a5e
* NVIDIA Xavier Series System-on-Chip Technical Reference Manual 1.3 (https://developer.nvidia.com/embedded/downloads#?search=Xavier%20Series%20SoC%20Technical%20Reference%20Manual)

Reviewers: sdesmalen, paquette

Reviewed By: sdesmalen

Subscribers: llvm-commits, ianshmean, kristof.beyls, hiraditya, jfb, danielkiss, cfe-commits, t.p.northover

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D77940
2020-05-04 13:52:30 +01:00
Kerry McLaughlin 19f5da9c1d [SVE][Codegen] Lower legal min & max operations
Summary:
This patch adds AArch64ISD nodes for [S|U]MIN_PRED
and [S|U]MAX_PRED, and lowers both SVE intrinsics and
IR operations for min and max to these nodes.

There are two forms of these instructions for SVE: a predicated
form and an immediate (unpredicated) form. The patterns
which existed for the latter have been updated to match a
predicated node with an immediate and map this
to the immediate instruction.

Reviewers: sdesmalen, efriedma, dancgr, rengolin

Reviewed By: efriedma

Subscribers: huihuiz, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79087
2020-05-04 11:19:19 +01:00
Jay Foad e737847b8f [SLC] Allow llvm.pow(x,2.0) -> x*x etc even if no pow() lib func
optimizePow does not create any new calls to pow, so it should work
regardless of whether the pow library function is available. This allows
it to optimize the llvm.pow intrinsic on targets with no math library.
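
A minimal illustration of the kind of rewrite involved (hedged C++ sketch; the
real transform operates on the llvm.pow intrinsic):

```
#include <cmath>
#include <cstdio>

// pow(x, 2.0) can be strength-reduced to x * x without calling pow at all,
// which is why no pow library function needs to be available.
int main() {
  double x = 3.7;
  std::printf("pow(x,2.0) = %g\n", std::pow(x, 2.0));
  std::printf("x*x        = %g\n", x * x);
  return 0;
}
```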

Based on a patch by Tim Renouf.

Differential Revision: https://reviews.llvm.org/D68231
2020-05-04 10:54:07 +01:00