Commit Graph

2863 Commits

Author SHA1 Message Date
Matt Arsenault 9cf980d4a7 GlobalISel: Add G_ATOMICRMW_{FADD|FSUB}
llvm-svn: 367369
2019-07-30 23:56:30 +00:00
Roman Lebedev c197732e39 [NFC][X86][AArch64] Revisit test coverage for X s% C == 0 fold - add tests for negative divisors, INT_MIN divisors
As discussed in the review, that fold is only valid for positive
divisors, so while we can negate negative divisors,
we have to special-case INT_MIN.

llvm-svn: 367294
2019-07-30 08:00:49 +00:00
Simon Pilgrim 8a52671782 [SelectionDAG] Check for any recursion depth greater than or equal to limit instead of just equal the limit.
If anything called the recursive isKnownNeverNaN/computeKnownBits/ComputeNumSignBits/SimplifyDemandedBits/SimplifyMultipleUseDemandedBits with an incorrect depth, then we could continue to recurse if we'd already exceeded the depth limit.

This replaces the limit check (Depth == 6) with (Depth >= 6) to make sure that we don't circumvent it.

This causes a couple of regressions, as a mixture of calls (SimplifyMultipleUseDemandedBits + combineX86ShufflesRecursively) were calling with depths that were already over the limit. I've fixed SimplifyMultipleUseDemandedBits to not do this. combineX86ShufflesRecursively is trickier, as we get a lot of regressions if we reduce its own limit from 8 to 6 (it also starts at Depth == 1 instead of Depth == 0 like the others...) - I'll see what I can do in future patches.
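To make the failure mode concrete, here is a tiny standalone sketch (hypothetical code, not the LLVM helpers themselves) of why the guard has to be `>=`:

```
#include <cstdio>

static constexpr unsigned MaxDepth = 6;

unsigned recurse(unsigned Depth) {
  if (Depth >= MaxDepth) // was: Depth == MaxDepth
    return Depth;
  return recurse(Depth + 1);
}

int main() {
  // Returns 7 immediately; with the == check, a caller starting above
  // the limit would blow straight past it and keep recursing.
  std::printf("%u\n", recurse(7));
}
```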

llvm-svn: 367171
2019-07-27 12:48:46 +00:00
Amara Emerson 7bc4fad0fb [AArch64][GlobalISel] Implement narrowing of G_SEXT.
We need this to narrow a sext to s128.

Differential Revision: https://reviews.llvm.org/D65357

llvm-svn: 367164
2019-07-26 23:46:38 +00:00
Jessica Paquette aa8b9993c2 [AArch64][GlobalISel] Select @llvm.aarch64.stlxr for 32-bit pointers
Add partial instruction selection for intrinsics like this:

```
declare i32 @llvm.aarch64.stlxr(i64, i32*)
```

(This only handles the case where a G_ZEXT is feeding the intrinsic.)

Also make sure that the added store instruction actually has the memory op from
the original G_STORE.

Update select-stlxr-intrin.mir and arm64-ldxr-stxr.ll.

Differential Revision: https://reviews.llvm.org/D65355

llvm-svn: 367163
2019-07-26 23:28:53 +00:00
Simon Pilgrim 9758407bf1 [TargetLowering] SimplifyMultipleUseDemandedBits - add SIGN_EXTEND_INREG support.
llvm-svn: 367096
2019-07-26 09:41:08 +00:00
Francis Visoiu Mistrih 0503add6da [CodeGen] Don't resolve the stack protector frame accesses until PEI
Currently, stack protector loads and stores are resolved during
LocalStackSlotAllocation (if the pass needs to run). When this is the
case, the base register assigned to the frame access is going to be one
of the vregs created during LocalStackSlotAllocation. This means that we
are keeping a pointer to the stack protector slot, and we're using this
pointer to load and store to it.

In case register pressure goes up, we may end up spilling this pointer
to the stack, which can be a security concern.

Instead, leave it to PEI to resolve the frame accesses. In order to do
that, we make all stack protector accesses go through frame index
operands, then PEI will resolve this using an offset from sp/fp/bp.

Differential Revision: https://reviews.llvm.org/D64759

llvm-svn: 367068
2019-07-25 22:23:48 +00:00
Amara Emerson e54dc6b8b5 [AArch64][GlobalISel] Fix G_SELECT legalization fallback after r366943.
Changes the order of legalization of G_ICMP, as suggested by Petar in D65079.

llvm-svn: 367060
2019-07-25 21:44:52 +00:00
Roman Lebedev 8b65c4e331 [NFC][CodeGen][X86][AArch64] div-rem pair reconstruction tests (PR42673)
As discussed in https://bugs.llvm.org/show_bug.cgi?id=42673
there is a TTI hook hasDivRemOp() that matters here.
While -div-rem-pairs will decompose 'rem' if that hook returns false,
nothing does the opposite transform.

We can't do this in InstCombine because it does not currently
access TTI, and I'm not sure we should change that.

We may be able to teach DivRemPairs to do this, but this really is a
per-target perf optimization, and we seem to do the opposite transform
in the backend if hasDivRemOp() returns false: https://godbolt.org/z/ttt4HZ
I think it makes sense to be consistent.
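As a rough illustration (the names here are hypothetical, not the pass's code), the rem decomposition that -div-rem-pairs performs when hasDivRemOp() returns false pays for one real division and derives the remainder from it:

```
#include <cassert>

struct DivRem { int Quot, Rem; };

DivRem expandSRem(int X, int Y) {
  int Q = X / Y;     // the one real division
  int R = X - Q * Y; // srem recomposed as X - (X / Y) * Y
  return {Q, R};
}

int main() {
  DivRem DR = expandSRem(17, 5);
  assert(DR.Quot == 3 && DR.Rem == 2);
}
```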

https://bugs.llvm.org/show_bug.cgi?id=42673

llvm-svn: 367034
2019-07-25 16:39:57 +00:00
Pablo Barrio 275954539d [ARM][AArch64] Support for Cortex-A65 & A65AE, Neoverse E1 & N1
Summary:
Add support for Cortex-A65, Cortex-A65AE, Neoverse E1 and Neoverse N1.
Neoverse E1 and Cortex-A65(&AE) only implement the AArch64 state of the
Arm architecture. Neoverse N1 implements both AArch32 and AArch64.

Cortex-A65:
https://developer.arm.com/ip-products/processors/cortex-a/cortex-a65

Cortex-A65AE:
https://developer.arm.com/ip-products/processors/cortex-a/cortex-a65ae

Neoverse E1:
https://developer.arm.com/ip-products/processors/neoverse/neoverse-e1

Neoverse N1:
https://developer.arm.com/ip-products/processors/neoverse/neoverse-n1

Patch by Diogo Sampaio and Pablo Barrio

Reviewers: samparker, LukeCheeseman, sbaranga, ostannard

Reviewed By: ostannard

Subscribers: ostannard, javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64406

llvm-svn: 367007
2019-07-25 10:59:45 +00:00
Jessica Paquette 728b18f29f [AArch64][GlobalISel] Select immediate modes for ADD when selecting G_GEP
Before, we weren't able to select things like this for G_GEP:

add	x0, x8, #8

And instead we'd materialize the 8.

This teaches GISel to do that. It gives considerable code size savings
on 252.eon: about 4%!

Differential Revision: https://reviews.llvm.org/D65248

llvm-svn: 366959
2019-07-24 23:11:01 +00:00
Amara Emerson de81bd0faa [AArch64][GlobalISel] Don't try to use GISel if subtarget doesn't have neon or fp.
Throughout the LegalizerInfo we currently assume that the target has the
NEON and FP target features available. Fixing this will require a refactor of
the whole thing, so until then, make sure we fall back.

Works around PR42734

Differential Revision: https://reviews.llvm.org/D65244

llvm-svn: 366957
2019-07-24 23:00:04 +00:00
Roman Lebedev 017e272c3a [Codegen] (X & (C l>>/<< Y)) ==/!= 0 --> ((X <</l>> Y) & C) ==/!= 0 fold
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH

InstCombine does the opposite fold, in the hope that the `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as the diffs here show, if it didn't get hoisted,
the produced assembly is almost universally worse.

Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant:
there is almost always an "and/test by imm" instruction,
but "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by a constant but by something else,
the live range of that something else may shrink.

Special care needs to be taken not to disturb the x86 `BT` / hexagon `tstbit`
instruction patterns, and not to get into an endless combine loop.
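A standalone sanity check of the underlying identity (illustrative only, not the combine itself); the `C l>> Y` / `X << Y` direction is symmetric:

```
#include <cassert>

int main() {
  const unsigned C = 0x16; // any fixed 8-bit constant
  for (unsigned X = 0; X < 256; ++X)
    for (unsigned Y = 0; Y < 8; ++Y) {
      bool Lhs = (X & ((C << Y) & 0xFF)) == 0; // (X & (C << Y)) == 0
      bool Rhs = (((X >> Y) & C) & 0xFF) == 0; // ((X l>> Y) & C) == 0
      assert(Lhs == Rhs); // equal for all X and all in-range Y
    }
}
```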

Reviewers: RKSimon, efriedma, t.p.northover, craig.topper, spatel, arsenm

Reviewed By: spatel

Subscribers: hiraditya, MaskRay, wuzish, xbolva00, nikic, nemanjai, jvesely, wdng, nhaehnle, javed.absar, tpr, kristof.beyls, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62871

llvm-svn: 366955
2019-07-24 22:57:22 +00:00
Jessica Paquette 68499112cf [AArch64][GlobalISel] Fold G_MUL into XRO load addressing mode when possible
If we have a G_MUL, and either the LHS or the RHS of that mul is the legal
shift value for a load addressing mode, we can fold it into the load.

This gives some code size savings on some SPEC tests. The best are around 2%
on 300.twolf and 3% on 254.gap.
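The fold rests on the plain scale-to-shift equivalence; a trivial standalone check (assuming 8-byte elements, not the selector code itself):

```
#include <cassert>
#include <cstdint>

int main() {
  // A multiply by the access size is the LSL that the XRO addressing
  // mode encodes, so a G_MUL by 8 feeding a 64-bit load can fold into
  // something like: ldr x0, [base, x, lsl #3]
  for (uint64_t X = 0; X < 1024; ++X)
    assert(X * 8 == (X << 3)); // scale 8 == shift 3 for 8-byte elements
}
```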

Differential Revision: https://reviews.llvm.org/D65173

llvm-svn: 366954
2019-07-24 22:49:42 +00:00
Amara Emerson 13af1ed8e3 [GlobalISel] Support for inlining memcpy, memset and memmove calls.
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.

The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.

Differential Revision: https://reviews.llvm.org/D65167

llvm-svn: 366951
2019-07-24 22:17:31 +00:00
Amara Emerson a1997ce2e5 [AArch64][GlobalISel] Fix a crash during s128 G_ICMP legalization due to r366317.
r366317 added a legalization for s128 G_ICMP narrow scalar which tried to hard
code the result type of the new legalized G_SELECT. Change this to instead use
the type of the original G_ICMP result and allow the target to legalize it later
if necessary.

llvm-svn: 366943
2019-07-24 20:46:42 +00:00
Jessica Paquette c19c30776a [AArch64][GlobalISel] Make vector dup optimization look at last elt of ZeroVec
Fix an off-by-one error which made us not look at the last element of the
zero vector. This caused a miscompile in 188.ammp.

Differential Revision: https://reviews.llvm.org/D65168

llvm-svn: 366930
2019-07-24 17:18:51 +00:00
Amara Emerson 511f7f5785 [AArch64][GlobalISel] Add support for s128 loads, stores, extracts, truncs.
We need to be able to load and store s128 for memcpy inlining, where we want to
generate Q register mem ops. Making these legal also requires that we add some
support in other instructions. Regbankselect should also know about these since
they have no GPR register class that can hold them, so they need special
handling to live on the FPR bank.

Differential Revision: https://reviews.llvm.org/D65166

llvm-svn: 366857
2019-07-23 22:05:13 +00:00
Jessica Paquette a2fae1e3e9 [GlobalISel][AArch64] Save a copy on G_SELECT by fixing condition to GPR
The condition can never be fed by FPRs, so it should always be on a GPR.

Differential Revision: https://reviews.llvm.org/D65157

llvm-svn: 366854
2019-07-23 21:39:50 +00:00
Jessica Paquette 2b404d01e8 [GlobalISel][AArch64] Teach GISel to handle shifts in load addressing modes
When we select the XRO variants of loads, we can pull in very specific shifts
(of the size of an element). E.g.

```
ldr x1, [x2, x3, lsl #3]
```

This teaches GISel to handle these when they're coming from shifts
specifically.

This adds a new addressing mode function, `selectAddrModeShiftedExtendXReg`
which recognizes this pattern.

This also packs this up with `selectAddrModeRegisterOffset` into
`selectAddrModeXRO`. This is intended to be equivalent to `selectAddrModeXRO`
in AArch64ISelDAGToDAG.

Also update load-addressing-modes to show that all of the cases here work.

Differential Revision: https://reviews.llvm.org/D65119

llvm-svn: 366819
2019-07-23 16:09:42 +00:00
Simon Pilgrim 743d45ee25 [TargetLowering] Add SimplifyMultipleUseDemandedBits
This patch introduces the DAG version of SimplifyMultipleUseDemandedBits, which attempts to peek through ops (mainly and/or/xor so far) that don't contribute to the demandedbits/elts of a node - which means we can do this even in cases where we have multiple uses of an op, which normally requires us to demand all bits/elts. The intention is to remove a similar function - SelectionDAG::GetDemandedBits - once SimplifyMultipleUseDemandedBits has matured.

The InstCombine version of SimplifyMultipleUseDemandedBits can constant fold, which I haven't added here yet, and so far I've only wired this up to some basic binops (and/or/xor/add/sub/mul) to demonstrate its use.

We do see a couple of regressions that need to be addressed:

- AMDGPU unsigned dot product codegen retains an AND mask (for ZERO_EXTEND) that it previously removed (but otherwise the dot product codegen is a lot better).

- X86/AVX2 has poor handling of vector ANY_EXTEND/ANY_EXTEND_VECTOR_INREG - it prematurely gets converted to ZERO_EXTEND_VECTOR_INREG.

The code owners have confirmed it's OK for these cases to be fixed up in future patches.

Differential Revision: https://reviews.llvm.org/D63281

llvm-svn: 366799
2019-07-23 12:39:08 +00:00
Aditya Nandakumar d7504a1569 [GISel]: Attach missing range metadata while translating G_LOADs
https://reviews.llvm.org/D65048

Attach range information to G_LOAD when only defining one register.

reviewed by: arsenm

llvm-svn: 366656
2019-07-21 14:07:54 +00:00
Roman Lebedev 7f0c23576f [NFC][Codegen][X86][AArch64] Add "(x s% C) == 0" tests
Much like with `urem`, the same optimization (albeit with a slightly
different algorithm) applies to the signed case, too.

I'm simply copying the test coverage from the `urem` case for now;
I believe it should be (close to?) sufficient.

llvm-svn: 366640
2019-07-20 19:25:44 +00:00
Roman Lebedev cd9b19484b [Codegen][SelectionDAG] X u% C == 0 fold: non-splat vector improvements
Summary:
Four things here:
1. Generalize the fold to handle non-splat divisors. Reasonably trivial.
2. Unban power-of-two divisors. I don't see any reason why they should
   be illegal.
   * There is no ban in Hacker's Delight
   * I think the ban came from the same bug that caused the miscompile
      in the base patch - in `floor((2^W - 1) / D)` we were dividing by
      `D0` instead of `D`, and we **were** ensuring that `D0` is not `1`,
      which made sense.
3. Unban `1` divisors. I no longer believe Hacker's Delight actually says
   that the fold is invalid for `D = 1`. Further considerations:
   * We know that
     * `(X u% 1) == 0`  can be constant-folded to `1`,
     * `(X u% 1) != 0`  can be constant-folded to `0`,
   * Also, we know that
     * `X u<= -1` can be constant-folded to `1`,
     * `X u>  -1` can be constant-folded to `0`,
   * https://godbolt.org/z/7jnZJX https://rise4fun.com/Alive/oF6p
   * We know we will end up with the following:
       `(setule/setugt (rotr (mul N, P), K), Q)`
   * Therefore, for given new DAG nodes and comparison predicates
     (`ule`/`ugt`), we will still produce the correct answer if:
     `Q` is an all-ones constant; and both `P` and `K` are *anything*
     other than `undef`.
   * The fold will indeed produce `Q = all-ones`.
4. Try to re-splat the `P` and `K` vectors - we don't care about
   their values for the lanes where the divisor was `1`. (A standalone
   check of the underlying identity follows below.)
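For reference, a standalone 16-bit check of the Hacker's Delight identity the fold relies on (`inv16` and `rotr16` are helpers for this sketch, not the DAG code): with `D = D0 * 2^K`, `D0` odd, `P = D0^-1 (mod 2^16)` and `Q = floor((2^16 - 1) / D)`, we have `X u% D == 0  <=>  rotr(X * P, K) u<= Q`.

```
#include <cassert>
#include <cstdint>

// Multiplicative inverse of odd d modulo 2^16 via Newton-Raphson.
uint16_t inv16(uint16_t d) {
  unsigned x = d; // d*d == 1 (mod 8), so we start with 3 correct bits
  for (int i = 0; i < 4; ++i)
    x = (x * (2u - d * x)) & 0xFFFFu; // each step doubles correct bits
  return (uint16_t)x;
}

uint16_t rotr16(uint16_t v, unsigned k) {
  return k ? (uint16_t)((v >> k) | (v << (16 - k))) : v;
}

int main() {
  for (uint32_t D : {3u, 5u, 6u, 10u, 12u, 25u}) {
    unsigned K = 0;
    uint32_t D0 = D;
    while ((D0 & 1) == 0) { D0 >>= 1; ++K; } // D = D0 * 2^K, D0 odd
    uint16_t P = inv16((uint16_t)D0);
    uint16_t Q = (uint16_t)(0xFFFFu / D);
    for (uint32_t X = 0; X <= 0xFFFF; ++X)
      assert((rotr16((uint16_t)(X * P), K) <= Q) == (X % D == 0));
  }
}
```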

Reviewers: RKSimon, hermord, craig.topper, spatel, xbolva00

Reviewed By: RKSimon

Subscribers: hiraditya, javed.absar, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63963

llvm-svn: 366637
2019-07-20 16:33:15 +00:00
Jessica Paquette 41affad967 [GlobalISel][AArch64] Contract trivial same-size cross-bank copies into G_STOREs
Sometimes, you can end up with cross-bank copies between same-sized GPRs and
FPRs, which feed into G_STOREs. When these copies feed only into stores, they
aren't necessary; we can just store using the original register bank.

This provides some minor code size savings for some floating point SPEC
benchmarks. (Around 0.2% for 453.povray and 450.soplex)

This issue doesn't seem to show up due to regbankselect or anything similar. So,
this patch introduces an early select function, `contractCrossBankCopyIntoStore`
which performs the contraction when possible. The selector then continues
normally and selects the correct store opcode, eliminating needless copies
along the way.

Differential Revision: https://reviews.llvm.org/D65024

llvm-svn: 366625
2019-07-20 01:55:35 +00:00
Kai Luo dec624682e [MachineCSE][MachinePRE] Avoid hoisting code from code regions into hot BBs.
Summary:
The current PRE hoists common computations into
CMBB = DT->findNearestCommonDominator(MBB, MBB1).
However, if CMBB is in a hot loop body, we might get performance
degradation.

Differential Revision: https://reviews.llvm.org/D64394

llvm-svn: 366570
2019-07-19 12:58:16 +00:00
Amara Emerson cf12c7815f [GlobalISel] Translate calls to memcpy et al to G_INTRINSIC_W_SIDE_EFFECTs and legalize later.
I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't
do that unless we delay lowering to actual function calls. This patch changes
the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and
then has each target specify, using the new custom legalizer hook for
intrinsics, that it wants them expanded into libcalls.

Differential Revision: https://reviews.llvm.org/D64895

llvm-svn: 366516
2019-07-19 00:24:45 +00:00
Jessica Paquette 7a1dcc5ff1 [GlobalISel][AArch64] Add support for base register + offset register loads
Add support for folding G_GEPs into loads of the form

```
ldr reg, [base, off]
```

when possible. This can save an add before the load. Currently, this is only
supported for loads of 64 bits into 64 bit registers.

Add a new addressing mode function, `selectAddrModeRegisterOffset` which
performs this folding when it is profitable.

Also add a test for addressing modes for G_LOAD.

Differential Revision: https://reviews.llvm.org/D64944

llvm-svn: 366503
2019-07-18 21:50:11 +00:00
Francis Visoiu Mistrih 9f2b290add [PEI] Don't re-allocate a pre-allocated stack protector slot
The LocalStackSlotPass pre-allocates a stack protector and makes sure
that it comes before the local variables on the stack.

We need to make sure that later during PEI we don't re-allocate a new
stack protector slot. If that happens, the new stack protector slot will
end up being **after** the local variables that it should be protecting.

Therefore, we would have two slots assigned for two different stack
protectors, one at the top of the stack, and one at the bottom. Since
PEI will overwrite the assigned slot for the stack protector, the load
that is used to compare the value of the stack protector will use the
slot assigned by PEI, which is wrong.

To fix this, we need to check if the object is pre-allocated, and re-use
that pre-allocated slot.

Differential Revision: https://reviews.llvm.org/D64757

llvm-svn: 366371
2019-07-17 20:46:19 +00:00
Francis Visoiu Mistrih 39fc2843e4 [CodeGen] Add stack protector tests where the guard gets re-assigned
In preparation of a fix, add tests for multiple backends.

llvm-svn: 366370
2019-07-17 20:46:16 +00:00
Evgeniy Stepanov 851339fb29 Basic MTE stack tagging instrumentation.
Summary:
Use MTE intrinsics to tag stack variables in functions with
sanitize_memtag attribute.

Reviewers: pcc, vitalybuka, hctim, ostannard

Subscribers: srhines, mgorny, javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64173

llvm-svn: 366361
2019-07-17 19:24:12 +00:00
Evgeniy Stepanov d752f5e953 Basic codegen for MTE stack tagging.
Implement IR intrinsics for stack tagging. Generated code is very
unoptimized for now.

Two special intrinsics, llvm.aarch64.irg.sp and llvm.aarch64.tagp are
used to implement a tagged stack frame pointer in a virtual register.

Differential Revision: https://reviews.llvm.org/D64172

llvm-svn: 366360
2019-07-17 19:24:02 +00:00
Momchil Velikov 0e2b74a2b0 Revert [AArch64] Add support for Transactional Memory Extension (TME)
This reverts r366322 (git commit 4b8da3a503)

llvm-svn: 366355
2019-07-17 17:43:32 +00:00
Momchil Velikov 4b8da3a503 [AArch64] Add support for Transactional Memory Extension (TME)
TME is a future architecture technology, documented in

https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools
https://developer.arm.com/docs/ddi0601/a

More about the future architectures:

https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/new-technologies-for-the-arm-a-profile-architecture

This patch adds support for the TME instructions TSTART, TTEST, TCOMMIT, and
TCANCEL and the target feature/arch extension "tme".

It also implements TME builtin functions, defined in ACLE Q2 2019
(https://developer.arm.com/docs/101028/latest)

Patch by Javed Absar and Momchil Velikov

Differential Revision: https://reviews.llvm.org/D64416

llvm-svn: 366322
2019-07-17 13:23:27 +00:00
Kyrylo Tkachov eb72138340 [AArch64] Implement __jcvt intrinsic from Armv8.3-A
The jcvt intrinsic defined in ACLE [1] is available when __ARM_FEATURE_JCVT is defined.

This change introduces the AArch64 intrinsic, wires it up to the instruction and a new clang builtin function.
The __ARM_FEATURE_JCVT macro is now defined when an Armv8.3-A or higher target is used.
I've implemented the target detection logic in Clang so that this feature is enabled for architectures from armv8.3-a onwards (so -march=armv8.4-a also enables this, for example).

make check-all didn't show any new failures.

[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics

Differential Revision: https://reviews.llvm.org/D64495

llvm-svn: 366197
2019-07-16 09:27:39 +00:00
Douglas Yung 4d95c3684b [test][AArch64] Relax the opcode tests for FP min/max instructions.
llvm-svn: 365961
2019-07-12 21:39:45 +00:00
Sanjay Patel 138328e45c [SDAG] commute setcc operands to match a subtract
If we have:

R = sub X, Y
P = cmp Y, X

...then flipping the operands in the compare instruction can allow using a subtract that sets compare flags.

Motivated by diffs in D58875 - not sure if this changes anything there,
but this seems like a good thing independent of that.

There's a more involved version of this transform already in IR (in InstCombine,
although that seems misplaced to me) - see "swapMayExposeCSEOpportunities()".
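A small standalone check of the commutation (illustrative only, not the DAG code): flipping `cmp Y, X` to `cmp X, Y` asks exactly the question the flags of `sub X, Y` already answer, in the overflow-free range used here:

```
#include <cassert>

int main() {
  for (int X = -64; X <= 64; ++X)
    for (int Y = -64; Y <= 64; ++Y) {
      int R = X - Y;              // R = sub X, Y (no overflow here)
      assert((Y < X) == (X > Y)); // P = cmp Y, X, operands flipped
      assert((X > Y) == (R > 0)); // the subtract's result/flags suffice
    }
}
```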

Differential Revision: https://reviews.llvm.org/D63958

llvm-svn: 365711
2019-07-10 23:23:54 +00:00
Amara Emerson 7a4d2df04a [AArch64][GlobalISel] Optimize compare and branch cases with G_INTTOPTR and unknown values.
Since we have distinct types for pointers and scalars, G_INTTOPTRs can sometimes
obstruct attempts to find constant source values. These usually come about when
we try to do some kind of null pointer check. Teaching getConstantVRegValWithLookThrough
about this operation allows the CBZ/CBNZ optimization to catch more cases.

This change also improves the case where we can't find a constant source at all.
Previously we would emit a cmp, cset and tbnz for that. Now we try to just emit
a cmp and conditional branch, saving an instruction.

The cumulative code size improvement of this change plus D64354 is 5.5% geomean
on arm64 CTMark -O0.

Differential Revision: https://reviews.llvm.org/D64377

llvm-svn: 365690
2019-07-10 19:21:43 +00:00
Jessica Paquette 7c95925b13 [GlobalISel][AArch64] Use getOpcodeDef instead of findMIFromReg
Some minor cleanup.

This function in Utils does the same thing as `findMIFromReg`. It also looks
through copies, which `findMIFromReg` didn't.

Delete `findMIFromReg` and use `getOpcodeDef` instead. This only happens in
`tryOptVectorDup` right now.

Update opt-shuffle-splat to show that we can look through the copies now, too.

Differential Revision: https://reviews.llvm.org/D64520

llvm-svn: 365684
2019-07-10 18:46:56 +00:00
Jessica Paquette 3132968ae9 [GlobalISel][AArch64][NFC] Use getDefIgnoringCopies from Utils where we can
There are a few places where we walk over copies throughout
AArch64InstructionSelector.cpp. In Utils, there's a function that does exactly
this which we can use instead.

Note that the utility function works with the case where we run into a COPY
from a physical register. We've run into bugs with this a couple of times, so
using it should defend us from similar future bugs.

Also update opt-fold-compare.mir to show that we still handle physical registers
properly.

Differential Revision: https://reviews.llvm.org/D64513

llvm-svn: 365683
2019-07-10 18:44:57 +00:00
Nick Desaulniers 8728e45706 [TargetLowering] support BlockAddress as "i" inline asm constraint
Summary:
This allows passing address of labels to inline assembly "i" input
constraints.

Fixes pr/42502.

Reviewers: ostannard

Reviewed By: ostannard

Subscribers: void, echristo, nathanchance, ostannard, javed.absar, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64167

llvm-svn: 365664
2019-07-10 17:08:25 +00:00
Matt Arsenault e595a2c964 GlobalISel: Define the full family of FP min/max instructions
llvm-svn: 365657
2019-07-10 16:31:15 +00:00
Peter Collingbourne 1366262b74 hwasan: Improve precision of checks using short granule tags.
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:

* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
  is in bounds of the granule and the pointer tag is equal to the last byte of
  the granule.

Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).

When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.

Because this functionality obsoletes the right-aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), the feature is removed.

Also update the documentation to cover both short granule tags and
outlined checks.
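A minimal sketch of the two-way check described above, assuming TG = 16; the helper shape and constants are illustrative, not the production HWASAN runtime:

```
#include <cassert>
#include <cstdint>

constexpr unsigned TG = 16; // granule size in bytes

// ShadowTag: byte in shadow memory for this granule.
// PtrTag:    tag carried in the pointer's top byte.
// Offset:    byte offset of the access within the granule.
// LastByte:  last byte of the granule itself (holds the real tag when
//            the granule is short).
bool tagMatches(uint8_t ShadowTag, uint8_t PtrTag, unsigned Offset,
                unsigned Size, uint8_t LastByte) {
  if (PtrTag == ShadowTag)
    return true; // ordinary whole-granule match
  // Short granule: shadow holds the size (1..TG-1); the access must stay
  // in bounds and the pointer tag must match the granule's last byte.
  return ShadowTag >= 1 && ShadowTag < TG &&
         Offset + Size <= ShadowTag && PtrTag == LastByte;
}

int main() {
  // A 5-byte object in a 16-byte granule tagged 0x2A: shadow stores 5,
  // and the real tag 0x2A lives in the granule's last byte.
  assert(tagMatches(5, 0x2A, 0, 4, 0x2A));  // in-bounds access
  assert(!tagMatches(5, 0x2A, 4, 2, 0x2A)); // runs past the 5 bytes
}
```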

Differential Revision: https://reviews.llvm.org/D63908

llvm-svn: 365551
2019-07-09 20:22:36 +00:00
Amara Emerson 6616e269a6 [AArch64][GlobalISel] Optimize conditional branches followed by unconditional branches
If we have an icmp->brcond->br sequence where the brcond just branches to the
next block jumping over the br, while the br takes the false edge, then we can
modify the conditional branch to jump to the br's target while inverting the
condition of the incoming icmp. This means we can eliminate the br as an
unconditional branch to the fallthrough block.

Differential Revision: https://reviews.llvm.org/D64354

llvm-svn: 365510
2019-07-09 16:05:59 +00:00
Jessica Paquette 55d19247ef [AArch64][GlobalISel] Use TST for comparisons when possible
Porting over the part of `emitComparison` in AArch64ISelLowering where we use
TST to represent a compare.

- Rename `tryOptCMN` to `tryFoldIntegerCompare`, since it now also emits TSTs
  when possible.

- Add a utility function for emitting a TST with register operands.

- Rename opt-fold-cmn.mir to opt-fold-compare.mir, since it now also tests the
  TST fold as well.

Differential Revision: https://reviews.llvm.org/D64371

llvm-svn: 365404
2019-07-08 22:58:36 +00:00
Jessica Paquette a99cfeea44 [GlobalISel][AArch64] Use getConstantVRegValWithLookThrough for selectArithImmed
Instead of just stopping to see if we have a G_CONSTANT, look through
G_TRUNCs, G_SEXTs, and G_ZEXTs.

This gives an average ~1.3% code size improvement on CINT2000 at -O3.

Differential Revision: https://reviews.llvm.org/D64108

llvm-svn: 365063
2019-07-03 17:46:23 +00:00
Roman Lebedev c4b83a6054 [Codegen][X86][AArch64][ARM][PowerPC] Inc-of-add vs sub-of-not (PR42457)
Summary:
This is the backend part of [[ https://bugs.llvm.org/show_bug.cgi?id=42457 | PR42457 ]].
In middle-end, we'd want to prefer the form with two adds - D63992,
but as this diff shows, not every target will prefer that pattern.

Out of the 4 targets for which I added tests, all seem to be OK with inc-of-add
for scalars, but only X86 prefers that same pattern for vectors.

Here I'm adding a new TLI hook, always defaulting to the inc-of-add,
but adding AArch64, ARM, and PowerPC overrides to prefer inc-of-add only for scalars.
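The two forms are interchangeable because `~Y == -Y - 1` in two's complement, so `X - ~Y == (X + Y) + 1`; a quick exhaustive 8-bit check (illustrative only):

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned X = 0; X < 256; ++X)
    for (unsigned Y = 0; Y < 256; ++Y) {
      uint8_t SubOfNot = (uint8_t)(X - (uint8_t)~Y); // sub-of-not
      uint8_t IncOfAdd = (uint8_t)(X + Y + 1);       // inc-of-add
      assert(SubOfNot == IncOfAdd); // always agree, wrapping included
    }
}
```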

Reviewers: spatel, RKSimon, efriedma, t.p.northover, hfinkel

Reviewed By: efriedma

Subscribers: nemanjai, javed.absar, kristof.beyls, kbarton, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64090

llvm-svn: 365010
2019-07-03 09:41:35 +00:00
Amara Emerson cac1151845 [AArch64][GlobalISel] Overhaul legalization & isel of shifts to select immediate forms.
There are two main issues preventing us from generating immediate form shifts:
1) We have partial SelectionDAG imported support for G_ASHR and G_LSHR shift
immediate forms, but they currently don't work because the amount type is
expected to be an s64 constant, while we only legalize them to have homogeneous
types.

To deal with this, we first introduce a custom legalizer to *only* custom
legalize s32 shifts which have a constant operand into an s64.

There is also an additional artifact combiner to fold zexts(g_constant) to a
larger G_CONSTANT if it's legal, a counterpart to the anyext version committed
in an earlier patch.

2) For G_SHL the importer can't cope with the pattern. For this I introduced an
early selection phase in the arm64 selector to select these forms manually
before the tablegen selector pessimizes them to a register-register variant.

Differential Revision: https://reviews.llvm.org/D63910

llvm-svn: 364994
2019-07-03 01:49:06 +00:00
Jessica Paquette 99316043bb [AArch64][GlobalISel] Teach tryOptSelect to handle G_ICMP
This teaches `tryOptSelect` to handle folding G_ICMP, and removes the
requirement that the G_SELECT we're dealing with is floating point.

Some refactoring to make this work nicely as well:

- Factor out the scalar case from the selection code for G_ICMP into
  `emitIntegerCompare`.
- Make `tryOptCMN` return a MachineInstr* instead of a bool.
- Make `tryOptCMN` not modify the instruction being selected.
- Factor out the CMN emission into `emitCMN` for readability.

By doing this this way, we can get all of the compare selection optimizations
in select emission.

Differential Revision: https://reviews.llvm.org/D64084

llvm-svn: 364961
2019-07-02 19:44:16 +00:00
Roman Lebedev 059f495831 [NFC][Codegen][X86][AArch64][ARM][PowerPC] Recommit: Add test coverage for "add-of-inc" vs "sub-of-not"
I initially committed it with --check-prefix instead of --check-prefixes
(again, shame on me, and utils/update_*.py for not complaining!)
and did not have a moment to understand the failure,
so I initially reverted it in rL64939.

llvm-svn: 364945
2019-07-02 16:48:49 +00:00