Commit Graph

2451 Commits

Author SHA1 Message Date
Anna Thomas 58d1192a22 Add invariant start call creation in IRBuilder. NFC
Differential Revision: https://reviews.llvm.org/D22700

llvm-svn: 276471
2016-07-22 20:57:23 +00:00
Anna Thomas 0be4a0e6a4 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address
space.

With this change, these intrinsics are overloaded for any address space
for memory objects, and we can use these LLVM invariant intrinsics in
non-default address spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant
memory in managed languages.

Reviewers: apilipenko, reames

Subscribers: llvm-commits
llvm-svn: 276447
2016-07-22 17:49:40 +00:00
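A minimal IR sketch of the overloaded form described in the commit above, assuming a 4-byte object in address space 1; the invariant.end name follows the same mangling scheme and, like the function name, is illustrative rather than taken from the patch:

  declare {}* @llvm.invariant.start.p1i8(i64, i8 addrspace(1)* nocapture)
  declare void @llvm.invariant.end.p1i8({}*, i64, i8 addrspace(1)* nocapture)

  define void @mark_invariant(i8 addrspace(1)* %ptr) {
    ; start of the invariant region covering the 4 bytes at %ptr
    %inv = call {}* @llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)
    ; ... loads from %ptr in this region may be treated as invariant ...
    call void @llvm.invariant.end.p1i8({}* %inv, i64 4, i8 addrspace(1)* %ptr)
    ret void
  }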
Simon Pilgrim ea0d4f9962 [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128 (reapplied)
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.

This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.

We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).

Reapplied with fix for PR28657 - removed intrinsic definitions (clang companion patch to be submitted shortly).

Differential Revision: https://reviews.llvm.org/D22460

llvm-svn: 276416
2016-07-22 13:58:44 +00:00
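A sketch of the kind of load+splat pattern the commit above can now lower to VBROADCASTF128, expressed as a subvector concatenation via shufflevector (function and value names are illustrative):

  define <8 x float> @splat_128bits(<4 x float>* %p) {
    ; load a 128-bit vector and repeat it in both lanes of a 256-bit vector
    %v = load <4 x float>, <4 x float>* %p
    %splat = shufflevector <4 x float> %v, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
    ret <8 x float> %splat
  }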
Benjamin Kramer 5ba0e20315 Revert "[X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128"
It caused PR28657.

This reverts commit r276281.

llvm-svn: 276405
2016-07-22 11:03:10 +00:00
Anna Thomas c858faa244 Revert "Invariant start/end intrinsics overloaded for address space"
This reverts commit r276316.

llvm-svn: 276320
2016-07-21 19:06:28 +00:00
Anna Thomas 29b24dfe44 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address space.

With this change, these intrinsics are overloaded for any address space for memory objects
and we can use these LLVM invariant intrinsics in non-default address spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant memory in managed languages.

Reviewers: tstellarAMD, reames, apilipenko

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22519

llvm-svn: 276316
2016-07-21 18:41:44 +00:00
Simon Pilgrim c8e20b1150 [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.

This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.

We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).

Differential Revision: https://reviews.llvm.org/D22460

llvm-svn: 276281
2016-07-21 14:10:54 +00:00
Benjamin Kramer eab3d36753 Rename StringMap::emplace_second to try_emplace.
Coincidentally this function maps to the C++17 try_emplace. Rename it
for consistency with C++17 std::map. NFC.

llvm-svn: 276276
2016-07-21 13:37:48 +00:00
Amaury Sechet ed7b7d7efe Add missing import to fix the build
llvm-svn: 276237
2016-07-21 04:31:38 +00:00
Amaury Sechet 17b67cd1ad Expose AttributeSetNode, use it to provide aggregate getter for attribute in the C API.
Summary: See D19181 for context.

Reviewers: whitequark, Wallbraker, jyknight, echristo, bkramer, void

Subscribers: mehdi_amini

Differential Revision: http://reviews.llvm.org/D21265

llvm-svn: 276236
2016-07-21 04:25:06 +00:00
Simon Pilgrim 0ea8d275cc [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NaN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (which converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
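A small IR sketch contrasting the two forms discussed above: the target builtin guarantees 0x80000000 for INF/NaN/out-of-range lanes, while the generic instruction leaves them undefined. Function and value names are illustrative:

  declare <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float>)

  define <4 x i32> @convert(<4 x float> %x) {
    ; target builtin: out-of-range/NaN lanes become 0x80000000
    %a = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %x)
    ; generic IR: out-of-range/NaN lanes are undefined, so constant folding
    ; may legally produce 0 or undef instead
    %b = fptosi <4 x float> %x to <4 x i32>
    ret <4 x i32> %a
  }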
Matt Arsenault 4ced16dd2e Fix -Wreturn-type with gcc 4.8 and libc++
llvm-svn: 275922
2016-07-18 22:12:46 +00:00
Adam Nemet aad816083e [OptRemark,LDist] RFC: Add hotness attribute
Summary:
This is the first set of changes implementing the RFC from
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/98334

This is a cross-sectional patch; rather than implementing the hotness
attribute for all optimization remarks and all passes in a patch set, it
implements it for the 'missed-optimization' remark for Loop
Distribution.  My goal is to shake out the design issues before scaling
it up to other types and passes.

Hotness is computed as an integer as the multiplication of the block
frequency with the function entry count.  It's only printed in opt
currently since clang prints the diagnostic fields directly.  E.g.:

  remark: /tmp/t.c:3:3: loop not distributed: use -Rpass-analysis=loop-distribute for more info (hotness: 300)

A new API added is similar to emitOptimizationRemarkMissed.  The
difference is that it additionally takes a code region that the
diagnostic corresponds to.  From this, hotness is computed using BFI.
The new API is exposed via an analysis pass so that it can be made
dependent on LazyBFI.  (Thanks to Hal for the analysis pass idea.)

This feature can all be enabled by setDiagnosticHotnessRequested in the
LLVM context.  If this is off, LazyBFI is not calculated (D22141) so
there should be no overhead.

A new command-line option is added to turn this on in opt.

My plan is to switch all users of emitOptimizationRemark* to use this
module instead.

Reviewers: hfinkel

Subscribers: rcox2, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D21771

llvm-svn: 275583
2016-07-15 17:23:20 +00:00
Justin Bogner 92a8c6112c IR: Sort generic intrinsics before target specific ones
This splits out the intrinsic table such that generic intrinsics come
first and target specific intrinsics are grouped by target. From here
we can find out which target an intrinsic is for or differentiate
between generic and target intrinsics.

The motivation here is to make it easier to move target specific
intrinsic handling out of generic code.

llvm-svn: 275575
2016-07-15 16:31:37 +00:00
David Majnemer 92f84ccf0f [IR] andIRFlags and copyIRFlags needs to handle GEP
We didn't consider the inbounds flag on GEPs, leading to downstream users
introducing UB.

This fixes PR28562.

llvm-svn: 275532
2016-07-15 05:02:31 +00:00
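For context, an IR sketch of the flag in question: when two otherwise-identical GEPs are merged, andIRFlags has to intersect inbounds, since only one of them promised it. Names are illustrative:

  define i32* @example(i32* %p, i64 %i) {
    %g1 = getelementptr inbounds i32, i32* %p, i64 %i
    ; the same address computed without the inbounds promise
    %g2 = getelementptr i32, i32* %p, i64 %i
    ; a merged GEP that kept 'inbounds' would let users assume UB-freedom
    ; that %g2 never guaranteed (PR28562)
    ret i32* %g1
  }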
David Majnemer 17bdf445e4 [IR] Make getIndexedOffsetInType return a signed result
A GEPed offset can go negative, so the result of getIndexedOffsetInType
should accordingly be a signed type.

llvm-svn: 275246
2016-07-13 03:42:38 +00:00
David Majnemer a7b6c973e5 [ConstantFold] Don't incorrectly infer inbounds on array GEP
The many levels of nesting inside the responsible code made it easy for
bugs to sneak in.  Flattening the logic makes it easier to see what's
going on.

llvm-svn: 275244
2016-07-13 03:24:41 +00:00
Craig Topper a6e6febe2c [AVX512] Remove masked logic op intrinsics and autoupgrade them to native IR.
llvm-svn: 275155
2016-07-12 05:27:53 +00:00
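Roughly the native IR the autoupgrade produces for a masked 512-bit logic op — a hedged sketch rather than the exact upgrade output; names are illustrative:

  define <16 x i32> @masked_and(<16 x i32> %a, <16 x i32> %b, <16 x i32> %passthru, i16 %mask) {
    ; the logic op itself, followed by a mask-driven select against the passthrough
    %and = and <16 x i32> %a, %b
    %m = bitcast i16 %mask to <16 x i1>
    %res = select <16 x i1> %m, <16 x i32> %and, <16 x i32> %passthru
    ret <16 x i32> %res
  }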
Craig Topper 46b34fe315 [X86,IR] Remove unnecessary or unused LLVMContext parameter from some of the X86 intrinsic upgrade functions.
llvm-svn: 275138
2016-07-12 01:42:33 +00:00
Dehao Chen 71021cdf47 Fix the assertion failure caused by http://reviews.llvm.org/D22118
Summary: http://reviews.llvm.org/D22118 uses metadata to store the call count, which makes it possible for branch weights to have only one element. Also fix the assertion failure in the inliner when checking the instruction type, by including the "invoke" instruction.

Reviewers: mkuper, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22228

llvm-svn: 275079
2016-07-11 17:36:02 +00:00
David Majnemer dd8f6bdd46 [IR] Stop a -Wsign-compare warning from firing
llvm-svn: 275077
2016-07-11 17:09:06 +00:00
Dehao Chen 9232f98279 Implement callsite-hotness based inline cost for Sample-based PGO
Summary:
For sample-based PGO, using BFI to calculate callsite count is sometimes not accurate. This is because with a sampling-based approach, if a callsite resides in a hot loop deeply nested in a bunch of cold branches, the callsite's BFI frequency would be inaccurately calculated due to lack of samples in the cold branches.

E.g.

if (A1 && A2 && A3 && ..... && A10) {
  for (i=0; i < 100000000; i++) {
    callsite();
  }
}

Assume that A1 to A10 are all 100% taken, and callsite has 1000 samples and thus is considered hot. Because the loop's trip count is huge, it's normal that all branches outside the loop have no samples at all. As a result, we can only use static branch probability to derive the frequency of the loop header. Assuming the static heuristic thinks each branch is 50% taken, the count calculated from BFI will be 1/(2^10) of the actual value.

In order to get more accurate callsite count, we directly annotate the weight on the call instruction, and directly use it when checking callsite hotness.

Note that this mechanism can also be shared by instrumentation based callsite hotness analysis. The side benefit is that it breaks the dependency from Inliner to BFI as call count is embedded in the IR.

Reviewers: davidxl, eraman, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22118

llvm-svn: 275073
2016-07-11 16:48:54 +00:00
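A hedged sketch of the direct callsite annotation described above, using !prof metadata on the call; the exact encoding is defined by D22118, and the names and count here are illustrative:

  define void @caller() {
    ; the callsite carries its own sample count, independent of BFI
    call void @callsite(), !prof !0
    ret void
  }
  declare void @callsite()
  !0 = !{!"branch_weights", i32 1000}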
Hal Finkel 5c12d8fe8f BasicAA should look through functions with returned arguments
Motivated by the work on the llvm.noalias intrinsic, teach BasicAA to look
through returned-argument functions when answering queries. This is essential
so that we don't loose all other AA information when supplementing with
llvm.noalias.

Differential Revision: http://reviews.llvm.org/D9383

llvm-svn: 275035
2016-07-11 01:32:20 +00:00
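An IR sketch of the pattern BasicAA can now look through: a call whose return value is guaranteed to be one of its arguments via the 'returned' attribute. The function names are illustrative:

  declare i8* @passthrough(i8* returned, i8* nocapture readonly)

  define i8 @f(i8* noalias %p, i8* %q) {
    ; %r is known to be %p itself, so aliasing facts about %p still apply to %r
    %r = call i8* @passthrough(i8* %p, i8* %q)
    store i8 1, i8* %r
    %v = load i8, i8* %p
    ret i8 %v
  }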
Hal Finkel e87ad547ef Add getReturnedArgOperand to Call/InvokeInst, CallSite
In order to make the optimizer smarter about using the 'returned' argument
attribute (generally, but motivated by my llvm.noalias intrinsic work), add a
utility function to Call/InvokeInst, and CallSite, to make it easy to get the
returned call argument (when one exists).

P.S. There is already an unfortunate amount of code duplication between
CallInst and InvokeInst, and this adds to it. We should probably clean that up
separately.

Differential Revision: http://reviews.llvm.org/D22204

llvm-svn: 275031
2016-07-10 23:01:32 +00:00
Craig Topper 70610cf7b6 [X86] Remove and autoupgrade 512-bit non-temporal store intrinsics.
llvm-svn: 274966
2016-07-09 04:38:27 +00:00
Eric Liu c9c6817724 Move setName after accessing Name
llvm-svn: 274862
2016-07-08 16:09:51 +00:00
Eric Liu 6ee4e81a32 Make a std::string copy of StringRef Name so that it remains valid when the original Name is overridden.
Summary: lib/IR/AutoUpgrade.cpp:348 and lib/IR/AutoUpgrade.cpp:350 upset the sanitizer.

Reviewers: bkramer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22140

llvm-svn: 274861
2016-07-08 16:09:48 +00:00
Craig Topper f7bf6de0af [AVX512] Remove and autoupgrade a duplicate set of 512-bit masked shift intrinsics.
I'm not sure if clang ever used these builtin names or not.

llvm-svn: 274827
2016-07-08 06:14:47 +00:00
Craig Topper d5d2a35013 [AVX512] Zero extend the result of vpcmpeq/vpcmpgt and similar intrinsics in the autoupgrade code. This currently results in worse codegen but is needed for correctness.
llvm-svn: 274736
2016-07-07 06:11:07 +00:00
Simon Pilgrim 4e96fbf3c1 [X86][AVX512] Autoupgrade the BROADCAST intrinsics
llvm-svn: 274550
2016-07-05 13:58:47 +00:00
Craig Topper 5aebb86ac1 [IR,X86] Remove some intrinsic prefixes earlier in the auto-upgrade code so we can shorten the length of the comparison strings and avoid repeatedly comparing the common prefix. No functional change intended.
llvm-svn: 274522
2016-07-04 20:56:38 +00:00
Simon Pilgrim 02d435d2f4 [X86][AVX512] Autoupgrade the VPERMPD/VPERMQ intrinsics
llvm-svn: 274506
2016-07-04 14:19:05 +00:00
Simon Pilgrim 9fca300cbe [X86][AVX512] Autoupgrade the VPERMILPD/VPERMILPS intrinsics
llvm-svn: 274498
2016-07-04 12:40:54 +00:00
Nicolai Haehnle 84c9f9919a Add writeonly IR attribute
Summary:
This complements the earlier addition of IntrWriteMem and IntrWriteArgMem
LLVM intrinsic properties, see D18291.

Also start using the attribute for memset, memcpy, and memmove intrinsics,
and remove their special-casing in BasicAliasAnalysis.

Reviewers: reames, joker.eph

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D18714

llvm-svn: 274485
2016-07-04 08:01:29 +00:00
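A small sketch of how the new attribute reads on a pointer argument; the function here is illustrative rather than one of the intrinsics mentioned above:

  ; the callee may write through %p but never reads from it or captures it
  declare void @fill(i8* nocapture writeonly, i64)

  define void @init(i8* %buf) {
    call void @fill(i8* %buf, i64 16)
    ret void
  }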
Simon Pilgrim 19adee9d84 [X86][AVX512] Autoupgrade the MOVDDUP/MOVSLDUP/MOVSHDUP intrinsics
llvm-svn: 274439
2016-07-02 14:42:35 +00:00
Benjamin Kramer 52a692d28d [DIBuilder] Remove dead code. NFC.
llvm-svn: 274438
2016-07-02 13:18:38 +00:00
Reid Kleckner b5af11dfa3 [codeview] Add DISubprogram::ThisAdjustment
Summary:
This represents the adjustment applied to the implicit 'this' parameter
in the prologue of a virtual method in the MS C++ ABI. The adjustment is
always zero unless multiple inheritance is involved.

This increases the size of DISubprogram by 8 bytes, unfortunately. The
adjustment really is a signed 32-bit integer. If this size increase is
too much, we could probably win it back by splitting out a subclass with
info specific to virtual methods (virtuality, vindex, thisadjustment,
containingType).

Reviewers: aprantl, dexonsmith

Subscribers: aaboud, amccarth, llvm-commits

Differential Revision: http://reviews.llvm.org/D21614

llvm-svn: 274325
2016-07-01 02:41:21 +00:00
David Majnemer 9319cbc045 [CodeView] Implement support for bitfields in LLVM
CodeView needs to know the offset of the storage allocation for a
bitfield.  Encode this via the "extraData" field in DIDerivedType and
introduce a new flag, DIFlagBitField, to indicate whether or not a
member is a bitfield.

This fixes PR28162.

Differential Revision: http://reviews.llvm.org/D21782

llvm-svn: 274200
2016-06-30 03:00:20 +00:00
Adam Nemet ad437fff53 [Diag] Add getter shouldAlwaysPrint. NFC
For the new hotness attribute, the API will take the pass rather than
the pass name so we can no longer play the trick of AlwaysPrint being a
special pass name. This adds a getter to help the transition.

There is also a corresponding clang patch.

llvm-svn: 274100
2016-06-29 04:55:19 +00:00
Artur Pilipenko 7ad95ec22d Support arbitrary addrspace pointers in masked load/store intrinsics
This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling (see the "LTO and intrinsics mangling" llvm-dev thread for details).

This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!" assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the pointer argument which has default addrspace.

The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.

Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D17270

llvm-svn: 274043
2016-06-28 18:27:25 +00:00
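A hedged sketch of a masked load from a non-default address space under the new mangling, where the pointer type becomes part of the intrinsic name; the exact suffix follows the scheme in this patch, and the values are illustrative:

  declare <4 x i32> @llvm.masked.load.v4i32.p1v4i32(<4 x i32> addrspace(1)*, i32, <4 x i1>, <4 x i32>)

  define <4 x i32> @load_masked(<4 x i32> addrspace(1)* %p, <4 x i1> %mask, <4 x i32> %passthru) {
    ; i32 4 is the alignment operand of the masked load
    %v = call <4 x i32> @llvm.masked.load.v4i32.p1v4i32(<4 x i32> addrspace(1)* %p, i32 4, <4 x i1> %mask, <4 x i32> %passthru)
    ret <4 x i32> %v
  }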
Matt Arsenault 82f41518ed Verifier: Reject non-float !fpmath
Code already assumes this is float. getFPAccuracy()
crashes on any other type.

llvm-svn: 273912
2016-06-27 19:43:15 +00:00
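For reference, a well-formed use of !fpmath — the accuracy operand must be a float, which is what the verifier now enforces; names are illustrative:

  define float @fast_div(float %a, float %b) {
    %r = fdiv float %a, %b, !fpmath !0
    ret float %r
  }
  ; accuracy in ULPs; a non-float operand such as !{i32 2} is now rejected
  !0 = !{float 2.500000e+00}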
Artur Pilipenko 72f76b8805 Revert -r273892 "Support arbitrary addrspace pointers in masked load/store intrinsics" since some of the clang tests don't expect to see the updated signatures.
llvm-svn: 273895
2016-06-27 16:54:33 +00:00
Artur Pilipenko a36aa41519 Support arbitrary addrspace pointers in masked load/store intrinsics
This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling (see the "LTO and intrinsics mangling" llvm-dev thread for details).

This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!" assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the pointer argument which has default addrspace.

The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.

Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D17270

llvm-svn: 273892
2016-06-27 16:29:26 +00:00
Benjamin Kramer af28e7d6fa Apply clang-tidy's modernize-loop-convert to most of lib/IR.
Only minor manual fixes. No functionality change intended.

llvm-svn: 273813
2016-06-26 14:10:56 +00:00
David Majnemer 0f45572761 The absence of noreturn doesn't ensure mayReturn
There are two separate issues:
- LLVM doesn't consider infinite loops to be side effects: we happily
  hoist/sink above/below loops whose bounds are unknown.
- The absence of the noreturn attribute is insufficient for us to know
  if a function will definitely return.  Relying on noreturn in the
  middle-end for any property is an accident waiting to happen.

llvm-svn: 273762
2016-06-25 00:55:12 +00:00
Peter Collingbourne 0312f614b1 IR: Introduce llvm.type.checked.load intrinsic.
This intrinsic safely loads a function pointer from a virtual table pointer
using type metadata. This intrinsic is used to implement control flow integrity
in conjunction with virtual call optimization. The virtual call optimization
pass will optimize away llvm.type.checked.load intrinsics associated with
devirtualized calls, thereby removing the type check in cases where it is
not needed to enforce the control flow integrity constraint.

This patch also introduces the capability to copy type metadata between
global variables, and teaches the virtual call optimization pass to do so.

Differential Revision: http://reviews.llvm.org/D21121

llvm-svn: 273756
2016-06-25 00:23:04 +00:00
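A hedged sketch of how a devirtualizable call site might use the new intrinsic; the type identifier, offset, and function names are illustrative:

  declare {i8*, i1} @llvm.type.checked.load(i8*, i32, metadata)
  declare void @llvm.trap()

  define i32 @call_virtual(i8* %vtable, i8* %obj) {
    ; load the function pointer at offset 0 and check it against the type id
    %pair = call {i8*, i1} @llvm.type.checked.load(i8* %vtable, i32 0, metadata !"typeid")
    %ok = extractvalue {i8*, i1} %pair, 1
    br i1 %ok, label %cont, label %trap

  cont:
    %fptr = extractvalue {i8*, i1} %pair, 0
    %f = bitcast i8* %fptr to i32 (i8*)*
    %r = call i32 %f(i8* %obj)
    ret i32 %r

  trap:
    call void @llvm.trap()
    unreachable
  }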
Peter Collingbourne 7efd750607 IR: New representation for CFI and virtual call optimization pass metadata.
The bitset metadata currently used in LLVM has a few problems:

1. It has the wrong name. The name "bitset" refers to an implementation
   detail of one use of the metadata (i.e. its original use case, CFI).
   This makes it harder to understand, as the name makes no sense in the
   context of virtual call optimization.

2. It is represented using a global named metadata node, rather than
   being directly associated with a global. This makes it harder to
   manipulate the metadata when rebuilding global variables, summarise it
   as part of ThinLTO and drop unused metadata when associated globals are
   dropped. For this reason, CFI does not currently work correctly when
   both CFI and vcall opt are enabled, as vcall opt needs to rebuild vtable
   globals, and fails to associate metadata with the rebuilt globals. As I
   understand it, the same problem could also affect ASan, which rebuilds
   globals with a red zone.

This patch solves both of those problems in the following way:

1. Rename the metadata to "type metadata". This new name reflects how
   the metadata is currently being used (i.e. to represent type information
   for CFI and vtable opt). The new name is reflected in the name for the
   associated intrinsic (llvm.type.test) and pass (LowerTypeTests).

2. Attach metadata directly to the globals that it pertains to, rather
   than using the "llvm.bitsets" global metadata node as we are doing now.
   This is done using the newly introduced capability to attach
   metadata to global variables (r271348 and r271358).

See also: http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html

Differential Revision: http://reviews.llvm.org/D21053

llvm-svn: 273729
2016-06-24 21:21:32 +00:00
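A brief sketch of the new representation: type metadata attached directly to a vtable-like global rather than listed in an llvm.bitsets named metadata node. The type identifier and vtable layout are illustrative:

  @vtable = constant [2 x i8*] [i8* null, i8* null], !type !0
  ; offset 0 within the global is associated with the type identifier
  !0 = !{i64 0, !"_ZTS1A"}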
Peter Collingbourne 4f7c16dd53 Linker: Copy metadata when linking declarations.
Differential Revision: http://reviews.llvm.org/D21624

llvm-svn: 273692
2016-06-24 17:42:21 +00:00
Artur Pilipenko 6c7a8abf5c Remangle intrinsics names when types are renamed
This is a resubmission of the previously reverted rL273568.

This is a fix for the problem mentioned in the "LTO and intrinsics mangling" llvm-dev mail thread:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098387.html

Reviewers: mehdi_amini, reames

Differential Revision: http://reviews.llvm.org/D19373

llvm-svn: 273686
2016-06-24 15:10:29 +00:00
Artur Pilipenko b68b82117a NFC. Move verifyIntrinsicIsVarArg from verifier to Intrinsic::matchIntrinsicVarArg since it will be reused for intrinsic remangling code
llvm-svn: 273685
2016-06-24 14:47:27 +00:00