Commit Graph

212 Commits

Mircea Trofin 0a753938db [llvm] Avoid div by 0 when updating profile weights.
Reviewers: davidxl

Reviewed By: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61661

llvm-svn: 360223
2019-05-08 03:57:25 +00:00
Wei Mi 01f8d556aa [PGO/SamplePGO][NFC] Move the function updateProfWeight from Instruction
to CallInst.

The issue was raised here: https://reviews.llvm.org/D60903#1472783

The function Instruction::updateProfWeight is only used for CallInst in
profile updates. From the current interface, it is very easy to think that
the function can also be used for branch instructions. However, branch
instructions don't need the scaling the function provides for
branch_weights and VP (value profile); in addition, scaling may introduce
inaccuracy in branch probability.

The patch moves the function updateProfWeight from Instruction class to
CallInst to remove the confusion. The patch also changes the scaling of
branch_weights from a loop to a block because we know that ProfileData
for branch_weights of CallInst will only have two operands at most.
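For illustration only, a minimal sketch of calling the relocated API from a profile-update pass; the count variable names are hypothetical, and the zero check mirrors the div-by-0 guard from the commit listed above:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Rescale a call's !prof metadata (branch_weights and value-profile data)
  // so it stays proportional to a new entry count. Skipping the update when
  // the old count is zero avoids dividing by it.
  void rescaleCallProfile(CallInst &CI, uint64_t NewCount, uint64_t OldCount) {
    if (OldCount != 0)
      CI.updateProfWeight(NewCount, OldCount);
  }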

Differential Revision: https://reviews.llvm.org/D60911

llvm-svn: 358900
2019-04-22 17:04:51 +00:00
Sanjay Patel b276dd195a [InstCombine] canonicalize select shuffles by commuting
In PR41304:
https://bugs.llvm.org/show_bug.cgi?id=41304
...we have a case where we want to fold a binop of select-shuffle (blended) values.

Rather than try to match commuted variants of the pattern, we can canonicalize the
shuffles and check for mask equality with commuted operands.

We don't produce arbitrary shuffle masks in instcombine, but select-shuffles are a
special case that the backend is required to handle because we already canonicalize
vector select to this shuffle form.

So there should be no codegen difference from this change. It's possible that this
improves CSE in IR though.
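A rough sketch of the mask-comparison idea (a hypothetical helper, not the InstCombine code): for select shuffles over NumElts-wide operands, each defined mask element is either i or i + NumElts, so commuting the operands flips which side every lane selects from.

  #include "llvm/ADT/ArrayRef.h"
  using namespace llvm;

  // Assumes both masks are already known to be select-shuffle masks;
  // undef lanes (negative entries) are ignored for simplicity.
  bool selectsCommutedOperands(ArrayRef<int> MaskA, ArrayRef<int> MaskB) {
    if (MaskA.size() != MaskB.size())
      return false;
    int NumElts = (int)MaskA.size();
    for (int i = 0; i != NumElts; ++i) {
      if (MaskA[i] < 0 || MaskB[i] < 0)
        continue;                              // undef lane
      bool AFromRHS = MaskA[i] >= NumElts;
      bool BFromRHS = MaskB[i] >= NumElts;
      if (AFromRHS == BFromRHS)                // both pick the same side
        return false;
    }
    return true;
  }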

Differential Revision: https://reviews.llvm.org/D60016

llvm-svn: 357366
2019-03-31 15:01:30 +00:00
Kristina Brooks 24659eb2e7 Remove large amount of empty lines mid-file. NFC
llvm-svn: 355286
2019-03-03 13:21:38 +00:00
Craig Topper 784929d045 Implementation of asm-goto support in LLVM
This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html

This patch adds a new CallBr IR instruction to support asm-goto
inline assembly like gcc as used by the linux kernel. This
instruction is both a call instruction and a terminator
instruction with multiple successors. Only inline assembly
usage is supported today.
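As a sketch of how a pass might inspect the new instruction (treat the exact accessor spellings as assumptions):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  // A callbr is both a call and a terminator: one fallthrough destination
  // plus the labels the inline asm may jump to.
  void printCallBrDests(const CallBrInst &CBI) {
    errs() << "fallthrough: " << CBI.getDefaultDest()->getName() << "\n";
    for (unsigned i = 0, e = CBI.getNumIndirectDests(); i != e; ++i)
      errs() << "indirect:    " << CBI.getIndirectDest(i)->getName() << "\n";
  }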

This also adds a new INLINEASM_BR opcode to SelectionDAG and
MachineIR to represent an INLINEASM block that is also
considered a terminator instruction.

There will likely be more bug fixes and optimizations to follow
this, but we felt it had reached a point where we would like to
switch to an incremental development model.

Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii

Differential Revision: https://reviews.llvm.org/D53765

llvm-svn: 353563
2019-02-08 20:48:56 +00:00
James Y Knight d9e85a0861 [opaque pointer types] Pass function types to InvokeInst creation.
This cleans up all InvokeInst creation in LLVM to explicitly pass a
function type rather than deriving it from the pointer's element-type.
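A minimal sketch of the explicitly-typed creation path (the local names are illustrative):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // The invoke's type is spelled out rather than read off the callee
  // pointer's pointee type.
  InvokeInst *makeInvoke(FunctionType *FTy, Value *Callee,
                         BasicBlock *NormalDest, BasicBlock *UnwindDest,
                         ArrayRef<Value *> Args, Instruction *InsertBefore) {
    return InvokeInst::Create(FTy, Callee, NormalDest, UnwindDest, Args,
                              "", InsertBefore);
  }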

Differential Revision: https://reviews.llvm.org/D57171

llvm-svn: 352910
2019-02-01 20:43:34 +00:00
James Y Knight 7976eb5838 [opaque pointer types] Pass function types to CallInst creation.
This cleans up all CallInst creation in LLVM to explicitly pass a
function type rather than deriving it from the pointer's element-type.
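The corresponding call-creation sketch, assuming the IRBuilder overload that takes the function type first:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Spell out the function type at the call site instead of deriving it
  // from the callee pointer's element type.
  CallInst *emitCall(IRBuilder<> &B, FunctionType *FTy, Value *Callee,
                     ArrayRef<Value *> Args) {
    return B.CreateCall(FTy, Callee, Args);
  }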

Differential Revision: https://reviews.llvm.org/D57170

llvm-svn: 352909
2019-02-01 20:43:25 +00:00
James Y Knight 13680223b9 [opaque pointer types] Add a FunctionCallee wrapper type, and use it.
Recommit r352791 after tweaking DerivedTypes.h slightly, so that gcc
doesn't choke on it, hopefully.

Original Message:
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.

Then:
- update the CallInst/InvokeInst instruction creation functions to
  take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.

One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.

However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)

Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
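A minimal sketch of the resulting idiom; "puts" is just an example declaration, not something this patch adds:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // getOrInsertFunction now hands back a {FunctionType*, Value*} pair that
  // can be fed straight into call creation without consulting pointee types.
  Value *emitPuts(Module &M, IRBuilder<> &B, Value *Str) {
    FunctionType *PutsTy = FunctionType::get(
        B.getInt32Ty(), {B.getInt8PtrTy()}, /*isVarArg=*/false);
    FunctionCallee Puts = M.getOrInsertFunction("puts", PutsTy);
    return B.CreateCall(Puts, {Str});
  }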

Differential Revision: https://reviews.llvm.org/D57315

llvm-svn: 352827
2019-02-01 02:28:03 +00:00
James Y Knight fadf25068e Revert "[opaque pointer types] Add a FunctionCallee wrapper type, and use it."
This reverts commit f47d6b38c7 (r352791).

Seems to run into compilation failures with GCC (but not clang, where
I tested it). Reverting while I investigate.

llvm-svn: 352800
2019-01-31 21:51:58 +00:00
James Y Knight f47d6b38c7 [opaque pointer types] Add a FunctionCallee wrapper type, and use it.
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.

Then:
- update the CallInst/InvokeInst instruction creation functions to
  take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.

One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.

However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)

Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.

Differential Revision: https://reviews.llvm.org/D57315

llvm-svn: 352791
2019-01-31 20:35:56 +00:00
Craig Topper c1892ec15a [CallSite removal] Remove CallSite uses from InstCombine.
Reviewers: chandlerc

Reviewed By: chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D57494

llvm-svn: 352771
2019-01-31 17:23:29 +00:00
Matt Arsenault 39508331ef Reapply "IR: Add fp operations to atomicrmw"
This reapplies commits r351778 and r351782 with
RISCV test fixes.

llvm-svn: 351850
2019-01-22 18:18:02 +00:00
Chandler Carruth 285fe716c5 Revert r351778: IR: Add fp operations to atomicrmw
This broke the RISCV build, and even with that fixed, one of the RISCV
tests behaves surprisingly differently with asserts than without,
leaving no clear test pattern to use. Generally it seems bad for
the IR to differ substantially due to asserts (as in, an alloca is used
with asserts that isn't needed without!), and nothing simple I did would
fix it, so I'm reverting back to green.

This also required reverting the RISCV build fix in r351782.

llvm-svn: 351796
2019-01-22 10:29:58 +00:00
Matt Arsenault bfdba5e4fc IR: Add fp operations to atomicrmw
Add just fadd/fsub for now.
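A minimal sketch of emitting one of the new operations; the CreateAtomicRMW signature has since grown an alignment parameter, so treat the exact argument list as version-dependent:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Atomically add Delta to the float/double stored at Ptr.
  Value *atomicFAdd(IRBuilder<> &B, Value *Ptr, Value *Delta) {
    return B.CreateAtomicRMW(AtomicRMWInst::FAdd, Ptr, Delta,
                             AtomicOrdering::SequentiallyConsistent);
  }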

llvm-svn: 351778
2019-01-22 03:32:36 +00:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
James Y Knight 84c1dbde08 [opaque pointer types] Update LoadInst creation APIs to consistently
accept a return-type argument.

Note: this also adds a new C API and soft-deprecates the old C API.
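A minimal sketch of the updated creation style:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // The loaded type is stated explicitly instead of being read from the
  // pointer's element type.
  Value *loadI32(IRBuilder<> &B, Value *Ptr) {
    return B.CreateLoad(B.getInt32Ty(), Ptr, "val");
  }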

Differential Revision: https://reviews.llvm.org/D56558

llvm-svn: 351123
2019-01-14 21:37:53 +00:00
James Y Knight f956390954 [opaque pointer types] Update CallInst creation APIs to consistently
accept a callee-type argument.

Note: this also adds a new C API and soft-deprecates the old C API.

Differential Revision: https://reviews.llvm.org/D56556

llvm-svn: 351121
2019-01-14 21:37:42 +00:00
Chandler Carruth 57578aaf96 [CallSite removal] Port `IndirectCallSiteVisitor` to use `CallBase` and
update client code.

Also rename it to use the more generic term `call` instead of something
that could be confused with a particular type.

Differential Revision: https://reviews.llvm.org/D56183

llvm-svn: 350508
2019-01-07 07:15:51 +00:00
Chandler Carruth 05b5bd8b85 [CallSite removal] Add and flesh out APIs on the new `CallBase` base class that previously were only available on the `CallSite` wrapper.
Summary:
This will make migrating code easier and generally seems like a good collection
of API improvements.

Some of these APIs seem like more consistent / better naming of existing
ones. I've retained the old names for migration simplicit and am just
adding the new ones in this commit. I'll try to garbage collect these
once CallSite is gone.

Subscribers: sanjoy, mcrosier, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D55638

llvm-svn: 350109
2018-12-27 23:40:17 +00:00
Chandler Carruth e429c7941d [TI removal] Leverage the fact that TerminatorInst is gone to create
a normal base class that provides all common "call" functionality.

This merges two complex CRTP mixins for the common "call" logic and
common operand bundle logic into a single, normal base class of
`CallInst` and `InvokeInst`. Going forward, users can typically
`dyn_cast<CallBase>` and use the resulting API. No more need for the
`CallSite` wrapper. I'm planning to migrate current usage of the wrapper
to directly use the base class and then it can be removed, but those are
simpler and much more incremental steps. The big change is to introduce
this abstraction into the type system.
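A small sketch of the resulting usage pattern (the accessor names follow the present-day CallBase interface and may differ slightly from what this commit introduced):

  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // One cast covers both calls and invokes; no CallSite wrapper needed.
  unsigned countPointerArgs(Instruction &I) {
    unsigned N = 0;
    if (auto *CB = dyn_cast<CallBase>(&I))
      for (Value *Arg : CB->args())
        if (Arg->getType()->isPointerTy())
          ++N;
    return N;
  }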

I've tried to do some basic simplifications of the APIs that I couldn't
really help but touch as part of this:
- I've tried to organize the attribute API and bundle API into groups to
  make understanding the API of `CallBase` easier. Without this,
  I wasn't able to navigate the API sanely for all of the ways I needed
  to modify it.
- I've added what seem like more clear and consistent APIs for getting
  at the called operand. These ended up being especially useful to
  consolidate the *numerous* duplicated code paths trying to do this.
- I've largely reworked the organization and implementation of the APIs
  for computing the argument operands as they needed to change to work
  with the new subclass approach.

To minimize any cost associated with this abstraction, I've moved the
operand layout in memory to store the called operand last. This makes
its position relative to the end of the operand array the same,
regardless of the subclass. It should make it much cheaper to reference
from the `CallBase` abstraction, and this is likely one of the most
frequent things to query.

We do still pay one abstraction penalty here: we have to branch to
determine whether there are 0 or 2 extra operands when computing the end
of the argument operand sequence. However, that seems both rare and
should optimize well. I've implemented this in a way specifically
designed to allow it to optimize fairly well. If this shows up in
profiles, we can add overrides of the relevant methods to the subclasses
that bypass this penalty. It seems very unlikely that this will be an
issue as the code was *already* dealing with an ever present abstraction
of whether or not there are operand bundles, so this isn't the first
branch to go into the computation.

I've tried to remove as much of the obvious vestigial API surface of the
old CRTP implementation as I could, but I suspect there is further
cleanup that should now be possible, especially around the operand
bundle APIs. I'm leaving all of that for future work in this patch as
enough things are changing here as-is.

One thing that made this harder for me to reason about and debug was the
pervasive use of unsigned values in subtraction and other arithmetic
computations. I had to debug more than one unintentional wrap. I've
switched a few of these to use `int` which seems substantially simpler,
but I've held back from doing this more broadly to avoid creating
confusing divergence within a single class's API.

I also worked to remove all of the magic numbers used to index into
operands, putting them behind named constants or putting them into
a single method with a comment and strictly using the method elsewhere.
This was necessary to be able to re-layout the operands as discussed
above.

Thanks to Ben for reviewing this (somewhat large and awkward) patch!

Differential Revision: https://reviews.llvm.org/D54788

llvm-svn: 347452
2018-11-22 10:31:35 +00:00
Fangrui Song 7570932977 Use llvm::copy. NFC
llvm-svn: 347126
2018-11-17 01:44:25 +00:00
Cameron McInally cbde0d9c7b [IR] Add a dedicated FNeg IR Instruction
The IEEE-754 Standard makes it clear that fneg(x) and
fsub(-0.0, x) are two different operations. The former is a bitwise
operation, while the latter is an arithmetic operation. This patch
creates a dedicated FNeg IR Instruction to model that behavior.
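A minimal sketch of the difference at the IRBuilder level:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Previously a negation had to be spelled as an arithmetic subtraction,
  // e.g. B.CreateFSub(ConstantFP::get(X->getType(), -0.0), X). The dedicated
  // unary operation is just a sign-bit flip:
  Value *negate(IRBuilder<> &B, Value *X) {
    return B.CreateFNeg(X);
  }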

Differential Revision: https://reviews.llvm.org/D53877

llvm-svn: 346774
2018-11-13 18:15:47 +00:00
Fangrui Song 4f2e66c018 Fix -Wsign-compare warning
llvm-svn: 346515
2018-11-09 16:45:37 +00:00
Simon Pilgrim d0c71609c5 [CostModel] Add SK_ExtractSubvector handling to getInstructionThroughput (PR39368)
Add ShuffleVectorInst::isExtractSubvectorMask helper to match shuffle masks.
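A small usage sketch, assuming the helper's (Mask, NumSrcElts, Index) shape:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // A mask of <2, 3> reading 4-element sources extracts the 2-element
  // subvector starting at index 2.
  bool extractsAtIndexTwo() {
    int Mask[] = {2, 3};
    int Index = 0;
    return ShuffleVectorInst::isExtractSubvectorMask(Mask, /*NumSrcElts=*/4,
                                                     Index) &&
           Index == 2;
  }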

llvm-svn: 346510
2018-11-09 16:28:19 +00:00
Sanjay Patel d1172a0c20 [IR] add optional parameter for copying IR flags to compare instructions
As shown, this is used to eliminate redundant code in InstCombine,
and there are more cases where we should be using this pattern, but
we're currently unintentionally dropping flags. 

llvm-svn: 346282
2018-11-07 00:00:42 +00:00
Sanjay Patel 0857df38bd [IR] remove fake binop query for fneg
We want to remove this fneg API because it would silently fail 
if we add an actual fneg instruction to IR (as proposed in 
D53877 ).

We have a newer 'match' API that makes checking for
these patterns simpler. It also works with vectors
that may include undef elements in constants.
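A minimal sketch of the replacement idiom:

  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Pattern-match the negation instead of asking for a "fake" fneg binop;
  // this also handles vector constants containing undef elements.
  bool isNegation(Value *V, Value *&X) {
    return match(V, m_FNeg(m_Value(X)));
  }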

If any out-of-tree users need updating, they can model
their code changes on this commit:
https://reviews.llvm.org/rL345295

llvm-svn: 345904
2018-11-01 22:56:15 +00:00
Fangrui Song 531e3d0cd9 [IR] Fix -Wunused-function after r345052
llvm-svn: 345057
2018-10-23 17:24:15 +00:00
Sanjay Patel 07076cfdf6 [IR] remove fake binop queries for not/neg
The initial motivation is that we want to remove the
fneg API because that would silently fail if we add
an actual fneg instruction to IR. The same would be
true for the integer ops, so we might as well get rid 
of these too.

We have a newer 'match' API that makes checking for
these patterns simpler. It also works with vectors
that may include undef elements in constants.
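A minimal sketch of the integer equivalents with the 'match' API:

  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Recognize "sub 0, X" and "xor X, -1" without the removed helpers.
  bool isIntegerNegation(Value *V, Value *&X) {
    return match(V, m_Neg(m_Value(X)));
  }
  bool isBitwiseNot(Value *V, Value *&X) {
    return match(V, m_Not(m_Value(X)));
  }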

If any out-of-tree users need updating, they can model
their code changes on these commits:
rL345050
rL345043
rL345042
rL345041
rL345036
rL345030

llvm-svn: 345052
2018-10-23 17:06:03 +00:00
Chandler Carruth 509e20e029 [TI removal] Remove `TerminatorInst` from the IR type system!
llvm-svn: 344769
2018-10-19 00:22:37 +00:00
Matt Arsenault b02ba99e91 IR: Move AtomicRMW string names into class
This will be used to improve error messages in a future commit.

llvm-svn: 343647
2018-10-02 23:44:11 +00:00
Sanjay Patel fd4976bd19 [IR] add shuffle query for vector concatenation
This can be used for combining and in the vectorizers/cost models.

llvm-svn: 342653
2018-09-20 15:21:52 +00:00
Sanjay Patel 8d39ed895f [IR] fix declaration of shuffle mask
An address sanitizer bot flagged this as a potential bug.

llvm-svn: 341084
2018-08-30 16:44:07 +00:00
Sanjay Patel ac619a09ec [IR] add shuffle queries for identity extend/extract
This was one of the potential follow-ups suggested in D48236, 
and these will be used to make matching the patterns in PR38691 cleaner:
https://bugs.llvm.org/show_bug.cgi?id=38691

About the vocabulary: in the DAG, these would be concat_vector with an 
undef operand or extract_subvector. Alternate names are discussed in the
review, but I think these are familiar/good enough to proceed. Once we
have uses of them in code, we might adjust if there are better options.
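A small sketch using member queries; the names below are the ones ShuffleVectorInst exposes today, which I am assuming are the ones this patch added:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // A length-changing identity either pads the source with undef lanes
  // (concat with undef) or takes a leading slice of it (extract_subvector):
  //   widen:  shufflevector <2 x i8> %x, <2 x i8> undef, <0, 1, undef, undef>
  //   narrow: shufflevector <4 x i8> %x, <4 x i8> undef, <0, 1>
  bool isLengthChangingIdentity(const ShuffleVectorInst &SVI) {
    return SVI.isIdentityWithPadding() || SVI.isIdentityWithExtract();
  }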

https://reviews.llvm.org/D51392

llvm-svn: 341075
2018-08-30 15:05:38 +00:00
Chandler Carruth 96fc1de77d [IR] Begin removal of TerminatorInst by removing successor manipulation.
The core get and set routines move to the `Instruction` class. These
routines are only valid to call on instructions which are terminators.

The iterator and *generic* range based access move to `CFG.h` where all
the other generic successor and predecessor access lives. While moving
the iterator here, simplify it using the iterator utilities LLVM
provides and updates coding style as much as reasonable. The APIs remain
pointer-heavy when they could better use references, and retain the odd
behavior of `operator*` and `operator->` that is common in LLVM
iterators. Adjusting this API, if desired, should be a follow-up step.

Non-generic range iteration is added for the two instructions where
there is an especially easy mechanism and where there was code
attempting to use the range accessor from a specific subclass:
`indirectbr` and `br`. In both cases, the successors are contiguous
operands and can be easily iterated via the operand list.
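A minimal sketch of the generic iteration that now lives in CFG.h:

  #include "llvm/IR/CFG.h"
  using namespace llvm;

  // Works for any terminator; no TerminatorInst type involved.
  bool hasSelfLoop(BasicBlock &BB) {
    for (BasicBlock *Succ : successors(&BB))
      if (Succ == &BB)
        return true;
    return false;
  }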

This is the first major patch in removing the `TerminatorInst` type from
the IR's instruction type hierarchy. This change was discussed in an RFC
here and was pretty clearly positive:
http://lists.llvm.org/pipermail/llvm-dev/2018-May/123407.html

There will be a series of much more mechanical changes following this
one to complete this move.

Differential Revision: https://reviews.llvm.org/D47467

llvm-svn: 340698
2018-08-26 08:41:15 +00:00
Serguei Katkov 09ab506798 [IR Verifier] Do not allow bitcast of pointer to vector of pointers and vice versa.
LangRef for BitCast requires that
"The bit sizes of value and the destination type, ty2, must be identical".
Currently the verifier allows a BitCast from a pointer to a vector of pointers
even though the sizes are different.

This change fixes that.
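A sketch of the size requirement being enforced (only the size half of the verifier's check, with illustrative naming):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  // Both sides of a bitcast must occupy the same number of bits; a lone
  // pointer and a vector of pointers generally do not.
  bool bitcastSizesMatch(const DataLayout &DL, Type *SrcTy, Type *DstTy) {
    return DL.getTypeSizeInBits(SrcTy) == DL.getTypeSizeInBits(DstTy);
  }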

Reviewers: arsenm
Reviewed By: arsenm
Subscribers: llvm-commits, wdng
Differential Revision: https://reviews.llvm.org/D50886

llvm-svn: 340249
2018-08-21 04:27:07 +00:00
Fangrui Song f78650a8de Remove trailing space
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}

llvm-svn: 338293
2018-07-30 19:41:25 +00:00
Bjorn Pettersson 550517bcab Improve ConvertDebugDeclareToDebugValue
Summary:
This is a follow-up to r334830 and r335031.

In the valueCoversEntireFragment check we now also handle
the situation when there is a variable length array (VLA)
involved, and the length of the array has been reduced to
a constant.

The ConvertDebugDeclareToDebugValue functions that are related
to PHI nodes and load instructions now avoid inserting dbg.value
intrinsics when the value does not, for certain, cover the
variable/fragment that should be described.
In r334830 we assumed that the value always covered the entire
var/fragment and we had assertions in the code to show that
assumption. However, those asserts failed when compiling code
with VLAs, so we removed the asserts in r335031. Now when we
know that the valueCoversEntireFragment check can also fail for
PHI/Load instructions, we avoid inserting the faulty dbg.value
intrinsic in such situations. Compared to the Store instruction
scenario we simply drop the dbg.value here (as the variable does
not change its value due to PHI/Load, so an earlier dbg.value
describing the variable should still be valid).

Reviewers: aprantl, vsk, efriedma

Reviewed By: aprantl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D48547

llvm-svn: 335580
2018-06-26 06:17:00 +00:00
Sanjay Patel 2ca3360b11 [IR] move shuffle mask queries from TTI to ShuffleVectorInst
The optimizer is getting smarter (eg, D47986) about differentiating shuffles 
based on their mask values, so we should make queries on the mask constant 
operand generally available to avoid code duplication.

We'll probably use this soon in the vectorizers and instcombine (D48023 and 
https://bugs.llvm.org/show_bug.cgi?id=37806).

We might clean up TTI a bit more once all of its current 'SK_*' options are 
covered.
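A small usage sketch; the static ArrayRef overloads of these queries have grown extra parameters in later LLVM versions, so only the member forms are shown:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Classify a shuffle directly on the instruction, with no TTI round trip.
  bool isReverseOrSplat(const ShuffleVectorInst &SVI) {
    return SVI.isReverse() || SVI.isZeroEltSplat();
  }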

Differential Revision: https://reviews.llvm.org/D48236

llvm-svn: 335067
2018-06-19 18:44:00 +00:00
Adrian Prantl 4dfcc4a788 Remove @brief commands from doxygen comments, too.
This is a follow-up to r331272.

We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by
  for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done

https://reviews.llvm.org/D46290

llvm-svn: 331275
2018-05-01 16:10:38 +00:00
Craig Topper 18799f4c07 [InstCombine] Allow fptrunc (fpext X) to be reduced to a single fpext/fptrunc
If we are only truncating bits from the extend we should be able to just use a smaller extend.

If we are truncating more than the extend, we should be able to just use an fptrunc, since the presence of the fpext shouldn't affect rounding.
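A sketch of the idea rather than the InstCombine implementation (the helper name and structure are illustrative):

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // fptrunc (fpext X): the intermediate widening adds no information, so the
  // pair collapses to X itself, one narrower fptrunc, or one smaller fpext.
  Value *simplifyTruncOfExt(IRBuilder<> &B, Value *TruncSrc, Type *DstTy) {
    Value *X;
    if (!match(TruncSrc, m_FPExt(m_Value(X))))
      return nullptr;
    if (X->getType() == DstTy)
      return X;                            // truncating straight back
    int DstBits = DstTy->getFPMantissaWidth();
    int SrcBits = X->getType()->getFPMantissaWidth();
    if (DstBits < SrcBits)
      return B.CreateFPTrunc(X, DstTy);    // one rounding step, same result
    if (DstBits > SrcBits)
      return B.CreateFPExt(X, DstTy);      // only extending, no rounding at all
    return nullptr;
  }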

Differential Revision: https://reviews.llvm.org/D43970

llvm-svn: 326595
2018-03-02 18:16:51 +00:00
Serge Guelton 1fb81bcb9b Syndicate duplicate code between CallInst and InvokeInst
NFC intended: syndicate common code into a parametric base class. Part of the original problem is that InvokeInst is a TerminatorInst, unlike CallInst. The problem is solved by introducing a class parameterized by its base.

Differential Revision: https://reviews.llvm.org/D40727

llvm-svn: 325778
2018-02-22 13:30:32 +00:00
Serguei Katkov 3cb4c34a4e Rename and move utility function getLatchPredicateForGuard. NFC.
Rename getLatchPredicateForGuard to more common name
getFlippedStrictnessPredicate and move it to ICmpInst class.

llvm-svn: 324717
2018-02-09 07:59:07 +00:00
Max Kazantsev b299ade2c5 Re-enable "[SCEV] Make isLoopEntryGuardedByCond a bit smarter"
The failures happened because of assert which was overconfident about
SCEV's proving capabilities and is generally not valid.

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324473
2018-02-07 11:16:29 +00:00
Serguei Katkov 69246ca787 Revert [SCEV] Make isLoopEntryGuardedByCond a bit smarter
Revert rL324453 commit which causes buildbot failures.

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324462
2018-02-07 09:10:08 +00:00
Max Kazantsev dd5ee6f5d9 [SCEV] Make isLoopEntryGuardedByCond a bit smarter
Sometimes `isLoopEntryGuardedByCond` cannot prove predicate `a > b` directly.
But it is a common situation that `a >= b` is known from ranges and `a != b` is
known from a dominating condition. This patch teaches SCEV to combine these facts
and prove the strict comparison via the non-strict one.
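The deduction in miniature (purely illustrative):

  // a >= b together with a != b implies a > b; SCEV now applies this when
  // the strict comparison cannot be proven directly.
  bool provesStrictlyGreater(bool KnownGE, bool KnownNE) {
    return KnownGE && KnownNE;
  }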

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324453
2018-02-07 07:56:26 +00:00
Yichao Yu 92c11ee352 Fix invalid ptrtoint in InstCombine
Summary:
It's unclear if this is the only thing we can do, but at least this is consistent with the check
of address space agreement in `isBitCastable`.

The code is used at least in both instcombine and jumpthreading though
I could only find a way to trigger the invalid cast in instcombine.
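A sketch of the constraint being added, mirroring the address-space check the commit refers to (names illustrative):

  #include "llvm/IR/Type.h"
  using namespace llvm;

  // Only treat a pointer-to-pointer cast as foldable when both sides live
  // in the same address space.
  bool addressSpacesAgree(Type *SrcTy, Type *DstTy) {
    return SrcTy->isPointerTy() && DstTy->isPointerTy() &&
           SrcTy->getPointerAddressSpace() == DstTy->getPointerAddressSpace();
  }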

Reviewers: loladiro, sanjoy, majnemer

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34335

llvm-svn: 316302
2017-10-22 20:28:17 +00:00
Mikael Holmen 0ec1d25d33 Minor refactoring regarding Cast::isNoopCast(), NFC
Summary:
FastISel::hasTrivialKill() was the only user of the "IntPtrTy" version of
CastInst::isNoopCast(). According to review comments in D37894 we could instead
use the "DataLayout" version of the method, and thus get rid of the
"IntPtrTy" versions of isNoopCast() completely.

With the above done, the remaining isNoopCast() could then be simplified
a bit more.
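A minimal sketch of the surviving interface:

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // DataLayout supplies pointer sizes, so callers no longer pass an IntPtrTy.
  bool castChangesNoBits(const CastInst &CI, const DataLayout &DL) {
    return CI.isNoopCast(DL);
  }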

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D38497

llvm-svn: 314969
2017-10-05 07:07:09 +00:00
Mikael Holmen 6efe507e42 [Lint] Avoid failed assertion by fetching the proper pointer type
Summary:
When checking if a constant expression is a noop cast we fetched the
IntPtrType by doing DL->getIntPtrType(V->getType())). However, there can
be cases where V doesn't return a pointer, and then getIntPtrType()
triggers an assertion.

Now we pass DataLayout to isNoopCast so the method itself can determine
what the IntPtrType is.

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D37894

llvm-svn: 314763
2017-10-03 06:03:49 +00:00
Konstantin Zhuravlyov bb80d3e1d3 Enhance synchscope representation
OpenCL 2.0 introduces the notion of memory scopes in atomic operations to
  global and local memory. These scopes restrict how synchronization is
  achieved, which can result in improved performance.

  This change extends existing notion of synchronization scopes in LLVM to
  support arbitrary scopes expressed as target-specific strings, in addition to
  the already defined scopes (single thread, system).

  The LLVM IR and MIR syntax for expressing synchronization scopes has changed
  to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this
  replaces the *singlethread* keyword) or a target-specific name. As before, if
  the scope is not specified, it defaults to the CrossThread/System scope.

  Implementation details:
    - Mapping from synchronization scope name/string to synchronization scope id
      is stored in LLVM context;
    - CrossThread/System and SingleThread scopes are pre-defined to efficiently
      check for known scopes without comparing strings;
    - Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in
      the bitcode.
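A minimal sketch of using a target-specific scope; "agent" is only an illustrative name, not one this patch defines:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  // Unknown scope names are registered in the LLVMContext and printed back
  // as syncscope("agent") in the IR.
  FenceInst *emitAgentFence(IRBuilder<> &B, LLVMContext &Ctx) {
    SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
    return B.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);
  }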

Differential Revision: https://reviews.llvm.org/D21723

llvm-svn: 307722
2017-07-11 22:23:00 +00:00
Craig Topper 95d2347ae1 [IR] Make use of Type::isPtrOrPtrVectorTy/isIntOrIntVectorTy/isFPOrFPVectorTy to shorten code. NFC
llvm-svn: 307491
2017-07-09 07:04:00 +00:00