Commit Graph

29675 Commits

Author SHA1 Message Date
Craig Topper ab5a30ac9d [X86] Add tests for an alternative sequence for _mm_storel_pi/_mm_storeh_pi intrinsics. NFC
llvm-svn: 365667
2019-07-10 17:11:18 +00:00
Nick Desaulniers 8728e45706 [TargetLowering] support BlockAddress as "i" inline asm constraint
Summary:
This allows passing address of labels to inline assembly "i" input
constraints.

Fixes pr/42502.
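
As a rough illustration, this is the kind of code the constraint now accepts
(GNU address-of-label extension; the section name and the `%c0` operand
modifier are illustrative, not taken from the motivating bug):

  void record_label_address() {
    asm volatile(".pushsection .discard.label_addrs,\"aw\"\n\t"
                 ".quad %c0\n\t"
                 ".popsection"
                 :
                 : "i"(&&resume));  // blockaddress passed via the "i" constraint
  resume:
    return;
  }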

Reviewers: ostannard

Reviewed By: ostannard

Subscribers: void, echristo, nathanchance, ostannard, javed.absar, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64167

llvm-svn: 365664
2019-07-10 17:08:25 +00:00
Matt Arsenault 6ce1b4fec5 GlobalISel: Legalization for G_FMINNUM/G_FMAXNUM
llvm-svn: 365658
2019-07-10 16:31:19 +00:00
Matt Arsenault e595a2c964 GlobalISel: Define the full family of FP min/max instructions
llvm-svn: 365657
2019-07-10 16:31:15 +00:00
Matt Arsenault 58426a3707 AMDGPU: Serialize mode from MachineFunctionInfo
llvm-svn: 365653
2019-07-10 16:09:26 +00:00
Jay Foad bba37e89a5 [AMDGPU] Allow abs/neg source modifiers on v_cndmask_b32
Summary:
D59191 added support for these modifiers in the assembler and
disassembler. This patch just teaches instruction selection that it can
use them.

Reviewers: arsenm, tstellar

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64497

llvm-svn: 365640
2019-07-10 14:53:47 +00:00
Petar Avramovic 7d0778ea6b [MIPS GlobalISel] Select float and double phi
Select float and double phi for MIPS32.

Differential Revision: https://reviews.llvm.org/D64420

llvm-svn: 365627
2019-07-10 13:18:13 +00:00
Petar Avramovic 7b31491ae2 [MIPS GlobalISel] Select float and double load and store
Select float and double load and store for MIPS32.

Differential Revision: https://reviews.llvm.org/D64419

llvm-svn: 365626
2019-07-10 12:55:21 +00:00
Simon Pilgrim 6a58583951 [X86][SSE] EltsFromConsecutiveLoads - add basic dereferenceable support
This patch checks whether the vector element loads are based on a dereferenceable pointer that covers the entire vector width, in which case we don't need element loads at both extremes of the vector width - just at the start (the base pointer).

Another step towards partial vector loads......
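
As a hypothetical illustration (whether this exact form triggers the combine is
an assumption), consider building a vector from scalar element loads; if the
pointer is known dereferenceable for the full 16 bytes, a single vector load at
the base pointer suffices and no load at the far end of the vector is needed:

  #include <immintrin.h>

  __m128 gather4(const float *p) {
    // Element 0 ends up in the low lane; only p[0]..p[3] are read.
    return _mm_set_ps(p[3], p[2], p[1], p[0]);
  }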

Differential Revision: https://reviews.llvm.org/D64205

llvm-svn: 365614
2019-07-10 10:46:36 +00:00
Tom Stellard d0ba79fe7b AMDGPU/GlobalISel: Add support for wide loads >= 256-bits
Summary:
This adds support for the most commonly used wide load types:
<8xi32>, <16xi32>, <4xi64>, and <8xi64>

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: hiraditya, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57399

llvm-svn: 365586
2019-07-10 00:22:41 +00:00
Matt Arsenault b1843e130a GlobalISel: Implement lower for G_FCOPYSIGN
In SelectionDAG AMDGPU treated these as legal, but this was mostly
because the bitcasts required for FP types were painful. Theoretically
the bitpattern should eventually match to bfi, so don't bother trying
to get the patterns to import.
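
A minimal sketch of the generic lowering idea (copysign expressed as bit
operations; the helper below is illustrative, not the GlobalISel code):

  #include <cstdint>
  #include <cstring>

  static float fcopysign_bits(float mag, float sgn) {
    uint32_t m, s, r;
    std::memcpy(&m, &mag, sizeof m);
    std::memcpy(&s, &sgn, sizeof s);
    r = (m & 0x7fffffffu) | (s & 0x80000000u);  // keep magnitude, take sign bit
    float out;
    std::memcpy(&out, &r, sizeof out);
    return out;
  }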

llvm-svn: 365583
2019-07-09 23:34:29 +00:00
Matt Arsenault 3f1a34546c AMDGPU/GlobalISel: Fix legality for G_BUILD_VECTOR
llvm-svn: 365575
2019-07-09 22:48:04 +00:00
Matt Arsenault 14a4495155 GlobalISel: Combine unmerge of merge with intermediate cast
This eliminates some illegal intermediate vectors when operations are
scalarized.

llvm-svn: 365566
2019-07-09 22:19:13 +00:00
Peter Collingbourne 1366262b74 hwasan: Improve precision of checks using short granule tags.
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:

* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
  is in bounds of the granule and the pointer tag is equal to the last byte of
  the granule.

Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).

When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
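
A minimal sketch of the two-way check described above (the granule size TG = 16
and all names are illustrative, not the compiler-rt implementation):

  #include <cstdint>

  constexpr unsigned kTG = 16;  // granule size (TG) in bytes

  // True if an access of accessSize bytes at offsetInGranule is allowed for a
  // pointer carrying ptrTag, given the granule's shadow byte shadowTag.
  bool tagsMatch(uint8_t ptrTag, uint8_t shadowTag, const uint8_t *granule,
                 unsigned offsetInGranule, unsigned accessSize) {
    if (ptrTag == shadowTag)
      return true;                              // ordinary full-granule match
    if (shadowTag >= 1 && shadowTag < kTG) {    // short granule: shadowTag is its size
      bool inBounds = offsetInGranule + accessSize <= shadowTag;
      return inBounds && ptrTag == granule[kTG - 1];  // real tag is the last byte
    }
    return false;
  }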

Because this functionality obsoletes the right aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), the feature is removed.

Also update the documentation to cover both short granule tags and
outlined checks.

Differential Revision: https://reviews.llvm.org/D63908

llvm-svn: 365551
2019-07-09 20:22:36 +00:00
Craig Topper 84a1f07363 [X86][AMDGPU][DAGCombiner] Move call to allowsMemoryAccess into isLoadBitCastBeneficial/isStoreBitCastBeneficial to allow X86 to bypass it
Basically the problem is that X86 doesn't set the Fast flag from
allowsMemoryAccess on certain CPUs due to slow unaligned memory
subtarget features. This prevents bitcasts from being folded into
loads and stores. But all vector loads and stores of the same width
are the same cost on X86.

This patch merges the allowsMemoryAccess call into isLoadBitCastBeneficial to allow X86 to skip it.

Differential Revision: https://reviews.llvm.org/D64295

llvm-svn: 365549
2019-07-09 19:55:28 +00:00
Stanislav Mekhanoshin 9e77d0c6df [AMDGPU] gfx908 register file changes
Differential Revision: https://reviews.llvm.org/D64438

llvm-svn: 365546
2019-07-09 19:41:51 +00:00
Sean Fertile f09d54ed2a Boilerplate for producing XCOFF object files from the PowerPC backend.
Stubs out a number of the classes needed to produce a new object file format
(XCOFF) for the powerpc-aix target. For testing, the input is an empty module,
which produces an object file with just a file header.

Differential Revision: https://reviews.llvm.org/D61694

llvm-svn: 365541
2019-07-09 19:21:01 +00:00
Stanislav Mekhanoshin 22b2c3d651 [AMDGPU] gfx908 target
Differential Revision: https://reviews.llvm.org/D64429

llvm-svn: 365525
2019-07-09 18:10:06 +00:00
Matt Arsenault 077df01918 AMDGPU: Fix test failing since r365512
llvm-svn: 365521
2019-07-09 17:54:34 +00:00
Christudasan Devadasan b2d24bd540 [AMDGPU] Created a sub-register class for the return address operand in the return instruction.
Function return instruction lowering currently uses the fixed register pair s[30:31] to hold
the return address, but it can be any SGPR pair other than the CSRs. Created an SGPR-pair sub-register class
that excludes the CSRs, and used this regclass while lowering the return instruction.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D63924

llvm-svn: 365512
2019-07-09 16:48:42 +00:00
Sam Elliott 114d2db49b [RISCV] Fix ICE in isDesirableToCommuteWithShift
Summary:
There was an error being thrown from isDesirableToCommuteWithShift in
some tests. This was tracked down to the method being called before
legalisation, with an extended value type, not a machine value type.

In the case I diagnosed, the error was only hit with an instruction sequence
involving `i24`s in the add and shift. `i24` is not a Machine ValueType; it is
instead an Extended ValueType, which was causing the issue.

I have added a test to cover this case, and fixed the error in the callback.
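
A minimal sketch of the kind of guard involved, assuming LLVM's EVT/MVT API
(the helper name and surrounding logic are illustrative, not the actual hook):

  #include "llvm/CodeGen/ValueTypes.h"

  static bool isCandidateTypeForCommute(llvm::EVT VT) {
    if (!VT.isSimple())   // extended types such as i24 have no simple MVT
      return false;
    llvm::MVT SimpleVT = VT.getSimpleVT();
    return SimpleVT == llvm::MVT::i32 || SimpleVT == llvm::MVT::i64;
  }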

Reviewers: asb, luismarques

Reviewed By: asb

Subscribers: hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64425

llvm-svn: 365511
2019-07-09 16:24:16 +00:00
Amara Emerson 6616e269a6 [AArch64][GlobalISel] Optimize conditional branches followed by unconditional branches
If we have an icmp->brcond->br sequence where the brcond just branches to the
next block jumping over the br, while the br takes the false edge, then we can
modify the conditional branch to jump to the br's target while inverting the
condition of the incoming icmp. The br then becomes an unconditional branch to
the fallthrough block and can be eliminated.

Differential Revision: https://reviews.llvm.org/D64354

llvm-svn: 365510
2019-07-09 16:05:59 +00:00
Simon Atanasyan e3892d84e0 [mips] Show error in case of using FP64 mode on pre MIPS32R2 CPU
llvm-svn: 365508
2019-07-09 15:48:16 +00:00
Simon Atanasyan 623282f0dd [mips] Explicitly select `mips32r2` CPU for test cases require 64-bit FPU. NFC
Support for 64-bit coprocessors on a 32-bit architecture
was added in `MIPS32 R2`.

llvm-svn: 365507
2019-07-09 15:48:05 +00:00
Yonghong Song d3d88d08b5 [BPF] Support for compile once and run everywhere
Introduction
============

This patch added initial support for bpf program compile once
and run everywhere (CO-RE).

The main motivation is bpf programs which depend on
kernel headers that may vary between different kernel versions.
The initial discussion can be found at https://lwn.net/Articles/773198/.

Currently, a bpf program accesses kernel internal data structures
through the bpf_probe_read() helper. The idea is to capture the
kernel data structure accesses made through bpf_probe_read()
and relocate them on different kernel versions.

On each host, right before bpf program load, the bpf loader
will look at the types of the native Linux kernel through vmlinux BTF,
calculate the proper access offsets and patch the instructions.

To accommodate this, three intrinsic functions
   preserve_{array,union,struct}_access_index
are introduced; in clang they preserve the base pointer, the
struct/union/array access_index and the struct/union debuginfo type
information. Later, a bpf IR pass can reconstruct the whole gep
access chain without looking at the gep itself.

This patch did the following:
  . An IR pass is added to convert preserve_*_access_index to
    a global variable whose name encodes the getelementptr
    access pattern. The global variable has metadata
    attached to describe the corresponding struct/union
    debuginfo type.
  . A SimplifyPatchable MachineInstruction pass is added
    to remove unnecessary loads.
  . The BTF output pass is enhanced to generate relocation
    records located in the .BTF.ext section.

Typical CO-RE also needs support for global variables which can
be assigned different values on different hosts. For example, the
kernel version can be used to guard different versions of code.
This patch added support for patchable externals as well.

Example
=======

The following is an example.

  struct pt_regs {
    long arg1;
    long arg2;
  };
  struct sk_buff {
    int i;
    struct net_device *dev;
  };

  #define _(x) (__builtin_preserve_access_index(x))
  static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) =
          (void *) 4;
  extern __attribute__((section(".BPF.patchable_externs"))) unsigned __kernel_version;
  int bpf_prog(struct pt_regs *ctx) {
    struct net_device *dev = 0;

    // ctx->arg* does not need bpf_probe_read
    if (__kernel_version >= 41608)
      bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg1)->dev));
    else
      bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg2)->dev));
    return dev != 0;
  }

In the above, we want to translate the third argument of
bpf_probe_read() into relocations.

  -bash-4.4$ clang -target bpf -O2 -g -S trace.c

The compiler will generate two new subsections in .BTF.ext,
OffsetReloc and ExternReloc.
OffsetReloc records the structure member offset operations,
and ExternReloc records the external globals, where
only u8, u16, u32 and u64 are supported.

   BPFOffsetReloc Size
   struct SecOffsetReloc for ELF section #1
   A number of struct BPFOffsetReloc for ELF section #1
   struct SecOffsetReloc for ELF section #2
   A number of struct BPFOffsetReloc for ELF section #2
   ...
   BPFExternReloc Size
   struct SecExternReloc for ELF section #1
   A number of struct BPFExternReloc for ELF section #1
   struct SecExternReloc for ELF section #2
   A number of struct BPFExternReloc for ELF section #2

  struct BPFOffsetReloc {
    uint32_t InsnOffset;    ///< Byte offset in this section
    uint32_t TypeID;        ///< TypeID for the relocation
    uint32_t OffsetNameOff; ///< The string to traverse types
  };

  struct BPFExternReloc {
    uint32_t InsnOffset;    ///< Byte offset in this section
    uint32_t ExternNameOff; ///< The string for external variable
  };

Note that only externs with the attribute section ".BPF.patchable_externs"
are considered for extern relocation, which will be patched by the bpf loader
right before the load.

For the above test case, two offset records and one extern record
will be generated:
  OffsetReloc records:
        .long   .Ltmp12                 # Insn Offset
        .long   7                       # TypeId
        .long   242                     # Type Decode String
        .long   .Ltmp18                 # Insn Offset
        .long   7                       # TypeId
        .long   242                     # Type Decode String

  ExternReloc record:
        .long   .Ltmp5                  # Insn Offset
        .long   165                     # External Variable

  In string table:
        .ascii  "0:1"                   # string offset=242
        .ascii  "__kernel_version"      # string offset=165

The default member offset can be calculated as
    the 2nd member offset (0 representing the 1st member) of struct "sk_buff".

The asm code:
    .Ltmp5:
    .Ltmp6:
            r2 = 0
            r3 = 41608
    .Ltmp7:
    .Ltmp8:
            .loc    1 18 9 is_stmt 0        # t.c:18:9
    .Ltmp9:
            if r3 > r2 goto LBB0_2
    .Ltmp10:
    .Ltmp11:
            .loc    1 0 9                   # t.c:0:9
    .Ltmp12:
            r2 = 8
    .Ltmp13:
            .loc    1 19 66 is_stmt 1       # t.c:19:66
    .Ltmp14:
    .Ltmp15:
            r3 = *(u64 *)(r1 + 0)
            goto LBB0_3
    .Ltmp16:
    .Ltmp17:
    LBB0_2:
            .loc    1 0 66 is_stmt 0        # t.c:0:66
    .Ltmp18:
            r2 = 8
            .loc    1 21 66 is_stmt 1       # t.c:21:66
    .Ltmp19:
            r3 = *(u64 *)(r1 + 8)
    .Ltmp20:
    .Ltmp21:
    LBB0_3:
            .loc    1 0 66 is_stmt 0        # t.c:0:66
            r3 += r2
            r1 = r10
    .Ltmp22:
    .Ltmp23:
    .Ltmp24:
            r1 += -8
            r2 = 8
            call 4

For instructions .Ltmp12 and .Ltmp18, "r2 = 8", the number
8 is the structure offset based on the current BTF.
The loader needs to adjust it if it changes on the host.

For instruction .Ltmp5, "r2 = 0", the external variable
gets a default value of 0; the loader needs to supply an appropriate
value for the particular host.

Compiling to generate object code and disassemble:
   0000000000000000 bpf_prog:
           0:       b7 02 00 00 00 00 00 00         r2 = 0
           1:       7b 2a f8 ff 00 00 00 00         *(u64 *)(r10 - 8) = r2
           2:       b7 02 00 00 00 00 00 00         r2 = 0
           3:       b7 03 00 00 88 a2 00 00         r3 = 41608
           4:       2d 23 03 00 00 00 00 00         if r3 > r2 goto +3 <LBB0_2>
           5:       b7 02 00 00 08 00 00 00         r2 = 8
           6:       79 13 00 00 00 00 00 00         r3 = *(u64 *)(r1 + 0)
           7:       05 00 02 00 00 00 00 00         goto +2 <LBB0_3>

    0000000000000040 LBB0_2:
           8:       b7 02 00 00 08 00 00 00         r2 = 8
           9:       79 13 08 00 00 00 00 00         r3 = *(u64 *)(r1 + 8)

    0000000000000050 LBB0_3:
          10:       0f 23 00 00 00 00 00 00         r3 += r2
          11:       bf a1 00 00 00 00 00 00         r1 = r10
          12:       07 01 00 00 f8 ff ff ff         r1 += -8
          13:       b7 02 00 00 08 00 00 00         r2 = 8
          14:       85 00 00 00 04 00 00 00         call 4

Instructions #2, #5 and #8 need relocation resolutions from the loader.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D61524

llvm-svn: 365503
2019-07-09 15:28:41 +00:00
David Green 781e3aff8c [ARM] Add test for MVE and no floats. NFC
Adds a simple test that MVE with no floating point will be promoted correctly
to software float calls.

llvm-svn: 365496
2019-07-09 14:43:17 +00:00
Petar Avramovic be20e36107 [MIPS GlobalISel] Register bank select for G_PHI. Select i64 phi
Select gprb or fprb when def/use register operand of G_PHI is
used/defined by either:
 copy to/from physical register or
 instruction with only one mapping available for that use/def operand.

Integer s64 phi is handled with narrowScalar when the mapping is applied;
the produced artifacts are combined away. Manually set gprb on all register
operands of instructions created during narrowScalar.

Differential Revision: https://reviews.llvm.org/D64351

llvm-svn: 365494
2019-07-09 14:36:17 +00:00
Matt Arsenault fdd761af15 AMDGPU/GlobalISel: Prepare some tests for store selection
Mostly these would fail due to trying to use SI with a flat
operation. Implementing global loads with MUBUF is more work than
flat, so these won't be handled in the initial load selection.

Others fail because store of s64 won't initially work, as the current
set of patterns expects everything to be turned into v2i32.

llvm-svn: 365493
2019-07-09 14:30:57 +00:00
Petar Avramovic dbb6d01d34 [MIPS GlobalISel] Regbanks for G_SELECT. Select i64, f32 and f64 select
Select gprb or fprb when def/use register operand of G_SELECT is
used/defined by either:
 copy to/from physical register or
 instruction with only one mapping available for that use/def operand.

Integer s64 select is handled with narrowScalar when the mapping is applied;
the produced artifacts are combined away. Manually set gprb on all register
operands of instructions created during narrowScalar.

For selection of a floating point s32 or s64 select, it is enough to set
fprb of the appropriate size and selectImpl will do the rest.

Differential Revision: https://reviews.llvm.org/D64350

llvm-svn: 365492
2019-07-09 14:30:29 +00:00
Matt Arsenault 85ad662dfd AMDGPU/GlobalISel: Fix test
llvm-svn: 365491
2019-07-09 14:30:02 +00:00
Matt Arsenault 4dd5755d01 AMDGPU/GlobalISel: Legalize more concat_vectors
llvm-svn: 365488
2019-07-09 14:17:31 +00:00
Matt Arsenault 6bdb92d833 AMDGPU/GlobalISel: Improve regbankselect for icmp s16
Account for 64-bit scalar eq/ne when available.

llvm-svn: 365487
2019-07-09 14:13:09 +00:00
Matt Arsenault 8b8eee5904 AMDGPU/GlobalISel: Make s16 G_ICMP legal
llvm-svn: 365486
2019-07-09 14:10:43 +00:00
Matt Arsenault e6d10f97dd AMDGPU/GlobalISel: Select G_SUB
llvm-svn: 365484
2019-07-09 14:05:11 +00:00
Matt Arsenault 872f38be7e AMDGPU/GlobalISel: Select G_UNMERGE_VALUES
llvm-svn: 365483
2019-07-09 14:02:26 +00:00
Matt Arsenault 9b7ffc4e55 AMDGPU/GlobalISel: Select G_MERGE_VALUES
llvm-svn: 365482
2019-07-09 14:02:20 +00:00
Bjorn Pettersson 59029017a6 [LegalizeTypes] Fix saturation bug for smul.fix.sat
Summary:
Make sure we use SETGE instead of SETGT when checking
if the sign bit is zero at SMULFIXSAT expansion.

The faulty expansion occured when doing "expand" of
SMULFIXSAT and the scale was exactly matching the
size of the smaller type. For example doing
  i64 Z = SMULFIXSAT X, Y, 32
and expanding X/Y/Z into using two i32 values.

The problem was that we sometimes did not saturate
to min when overflowing.

Here is an example using Q3.4 numbers:

Consider that we are multiplying X and Y.
  X = 0x80 (-8.0 as Q3.4)
  Y = 0x20 (2.0 as Q3.4)
To avoid loss of precision we do a widening
multiplication, getting a 16 bit result
  Z = 0xF000 (-16.0 as Q7.8)

To detect negative overflow we should check if
the five most significant bits in Z are less than -1.
Assume that we name the 4 most significant bits
as HH and the next 4 bits as HL. Then we can do the
check by examining if
 (HH < -1) or (HH == -1 && "sign bit in HL is zero").

The fault was that we have been doing the check as
 (HH < -1) or (HH == -1 && HL > 0)
instead of
 (HH < -1) or (HH == -1 && HL >= 0).

In our example HH is -1 and HL is 0, so the old
code did not trigger saturation and simply truncated
the result to 0x00 (0.0). With the bugfix we instead
detect that we should saturate to min, and the result
will be set to 0x80 (-8.0).
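
A small worked check of the Q3.4 example above (illustrative only, not the
LegalizeTypes code):

  #include <cstdint>
  #include <cstdio>

  int main() {
    int16_t z = (int16_t)((int8_t)0x80 * 0x20); // -8.0 * 2.0 -> 0xF000 (-16.0 in Q7.8)
    int8_t hh = (int8_t)(z >> 12);              // top 4 bits, sign-extended: -1
    uint8_t hl = (uint8_t)((z >> 8) & 0xF);     // next 4 bits: 0
    bool old_check = hh < -1 || (hh == -1 && hl > 0);          // misses this case
    bool new_check = hh < -1 || (hh == -1 && (hl & 0x8) == 0); // detects overflow
    std::printf("old=%d new=%d\n", old_check, new_check);      // prints old=0 new=1
    return 0;
  }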

Reviewers: leonardchan, bevinh

Reviewed By: leonardchan

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64331

llvm-svn: 365455
2019-07-09 10:24:50 +00:00
Guillaume Chatelet 336f3e1601 Fixing @llvm.memcpy not honoring volatile.
This is explicitly not addressing target-specific code, or calls to memcpy.

Summary: https://bugs.llvm.org/show_bug.cgi?id=42254

Reviewers: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63215

llvm-svn: 365449
2019-07-09 09:53:36 +00:00
Kai Luo 09329ce6c4 [NFC][PowerPC] Added a test to show current codegen of MachinePRE
llvm-svn: 365447
2019-07-09 09:12:17 +00:00
Stanislav Mekhanoshin 818d748a45 [AMDGPU] Always use s_memtime for readcyclecounter
Differential Revision: https://reviews.llvm.org/D64369

llvm-svn: 365431
2019-07-09 03:10:18 +00:00
Kai Luo 1931ed73c3 [PowerPC][Peephole] Combine extsw and sldi after instruction selection
Summary:
`extsw` and `sldi` are supposed to be combined if they are in the same
BB in the instruction selection phase. This patch handles the case where
extsw and sldi are not in the same BB.
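
A hedged illustration of the source pattern involved (the mapping to extsw +
sldi, and to a merged extswsli on CPUs that provide it, is an assumption about
typical codegen rather than part of this patch):

  long scale_index(int i) {
    return static_cast<long>(i) << 3;  // sign-extend word, then shift left
  }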

Differential Revision: https://reviews.llvm.org/D63806

llvm-svn: 365430
2019-07-09 02:55:08 +00:00
Jinsong Ji cbd64f7648 [MachinePipeliner] Fix Phi refers to Phi in same stage in 1st epilogue
Summary:
This is exposed by functional testing on PowerPC.
In some pipelined loops, a Phi referring to another Phi did not get the value
defined by that Phi, and hence picked up a wrong value later.

As the comment mentions, we should "use the value defined by the Phi,
unless we're generating the first epilog and the Phi refers to a Phi
in a different stage", so a Phi referring to a same-stage Phi should use
the value defined by that Phi here.

Reviewers: bcahoon, hfinkel

Reviewed By: hfinkel

Subscribers: MaskRay, wuzish, nemanjai, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64035

llvm-svn: 365428
2019-07-09 02:27:35 +00:00
Jinsong Ji 18301fa82b [PowerPC][MachinePipeliner][NFC] Add a testcase for Phi bug.
llvm-svn: 365427
2019-07-09 02:27:29 +00:00
Heejin Ahn 947bfe73fc [WebAssembly] Make sret parameter work with AddMissingPrototypes
Summary:
Even functions with the `no-prototype` attribute can have an argument with the
`sret` (structure return) attribute, which is an optimization used
when a function's return type is a struct. Fixes PR42420.
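
A hedged illustration of sret itself: a by-value struct return is lowered to a
hidden pointer argument through which the callee writes its result, and the fix
keeps that hidden argument intact when such a function also ends up with the
no-prototype attribute (the example types are illustrative):

  struct Big { long a, b, c, d; };

  Big makeBig() { return {1, 2, 3, 4}; }   // result written through the sret pointer

  long useBig() { return makeBig().a; }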

Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64318

llvm-svn: 365426
2019-07-09 02:10:33 +00:00
Heejin Ahn 8f9a4b2af0 [WebAssembly] Fix a typo in a test file name
Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64324

llvm-svn: 365418
2019-07-09 01:21:04 +00:00
Jessica Paquette 55d19247ef [AArch64][GlobalISel] Use TST for comparisons when possible
Porting over the part of `emitComparison` in AArch64ISelLowering where we use
TST to represent a compare.

- Rename `tryOptCMN` to `tryFoldIntegerCompare`, since it now also emits TSTs
  when possible.

- Add a utility function for emitting a TST with register operands.

- Rename opt-fold-cmn.mir to opt-fold-compare.mir, since it now also tests the
  TST fold as well.
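
A hedged illustration of the pattern the fold targets: a compare of an AND
against zero, which AArch64 can emit as TST (an ANDS against the zero register)
plus a conditional set instead of a separate AND and CMP (the exact emitted
sequence is an assumption about typical codegen):

  bool and_is_zero(unsigned x, unsigned y) {
    return (x & y) == 0;   // expected: tst w0, w1 ; cset w0, eq
  }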

Differential Revision: https://reviews.llvm.org/D64371

llvm-svn: 365404
2019-07-08 22:58:36 +00:00
Reid Kleckner 2f07c2e9d9 Standardize on MSVC behavior for triples with no environment
Summary:
This makes it so that IR files using triples without an environment work
out of the box, without normalizing them.

Typically, the MSVC behavior is more desirable. For example, it tends to
enable things like constant merging, use of associative comdats, etc.

Addresses PR42491

Reviewers: compnerd

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64109

llvm-svn: 365387
2019-07-08 21:05:20 +00:00
Matt Arsenault 71dfb7ec5c AMDGPU: Make s34 the FP register
Make the FP register callee saved.

This is tricky because now the FP needs to be spilled in the prolog
relative to the incoming SP register, rather than the frame register
used throughout the rest of the function. I don't like how this
bypasses the standard mechanism for CSR spills just to get the
correct insert point. I may look for a better solution, since all CSR
VGPRs may also need to have all lanes activated. Another option might
be to make getFrameIndexReference change the base register if the
frame index is a CSR, and then try to figure out the right insertion
point in emitProlog.

If there is a free VGPR lane available for SGPR spilling, try to use
it for the FP. If that would require introducing a new VGPR spill,
try to use a free call-clobbered SGPR. Only fall back to introducing a
new VGPR spill as a last resort.

This also doesn't attempt to handle SGPR spilling with scalar stores.

llvm-svn: 365372
2019-07-08 19:03:38 +00:00
Matt Arsenault 5630e3a1c7 RegUsageInfoCollector: Don't iterate all regs for every reg class
This is extremely slow on AMDGPU, which has a lot of physical registers
and a lot of register classes.

determineCalleeSaves, via MachineRegisterInfo::isPhysRegUsed, already
added all of the super registers to the saved set.

llvm-svn: 365370
2019-07-08 18:48:42 +00:00
Brian Homerding b4b21d807e Add, and infer, a nofree function attribute
This patch adds a function attribute, nofree, to indicate that a function does
not, directly or indirectly, call a memory-deallocation function (e.g., free,
C++'s operator delete).
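
A hedged example of a function the new inference could mark nofree: it neither
frees memory itself nor calls anything that might, directly or indirectly
(whether the inference fires on this exact function is an assumption):

  int sum(const int *p, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
      s += p[i];   // pure reads; no deallocation anywhere in the body
    return s;
  }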

Reviewers: jdoerfert

Differential Revision: https://reviews.llvm.org/D49165

llvm-svn: 365336
2019-07-08 15:57:56 +00:00