A ConstantVector can have ConstantExpr operands and vice versa.
However, the folder had no ability to fold ConstantVectors, which in
some cases was an optimization barrier.
Rephrase the folder in terms of Constants rather than ConstantExprs and
teach callers how to deal with failure.
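As a rough sketch of the new caller contract (the fold helper below is a
hypothetical stand-in), a caller now falls back to building the expression
when folding fails:

  #include "llvm/IR/Constants.h"
  using namespace llvm;

  // Stand-in declaration for the folder entry point; it may return null
  // when no simplification applies.
  Constant *tryConstantFoldBinOp(unsigned Opcode, Constant *LHS, Constant *RHS);

  static Constant *foldOrWrap(unsigned Opcode, Constant *LHS, Constant *RHS) {
    if (Constant *Folded = tryConstantFoldBinOp(Opcode, LHS, RHS))
      return Folded;                            // may be any Constant subclass
    return ConstantExpr::get(Opcode, LHS, RHS); // caller handles the failure
  }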
llvm-svn: 277099
Avoid unnecessary spills of byval arguments of device functions to
local space at the SASS level, and the pointer conversion to the generic
address space that follows. Instead, make a local copy in IR, provide
a way to access the arguments directly, and let LLVM optimize the copy away
when possible.
Differential Revision: https://reviews.llvm.org/D21421
llvm-svn: 276153
Taking the address of a byval variable in PTX is legal, but currently runs
into a miscompilation by ptxas on sm_50+ (NVIDIA issue 1789042).
Work around the issue by enforcing minimum alignment on byval arguments
of device functions.
The change is a no-op at the SASS level for sm_3x, where ptxas already aligns
the local copy by at least 4.
Differential Revision: https://reviews.llvm.org/D22428
llvm-svn: 275893
Summary:
Instead of the handful of bool parameters getLoad and friends used to
take, we now take a single flags arg (a bitset).
Also add a default 0 alignment, and change the order of arguments so the
alignment comes before the flags.
This greatly simplifies many callsites, and fixes a bug in
AMDGPUISelLowering, wherein the order of the args to getLoad was
inverted. It also greatly simplifies the process of adding another flag
to getLoad.
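A minimal sketch of the reshuffled callsite shape (the pointer info and flag
choice are purely illustrative):

  #include "llvm/CodeGen/MachineMemOperand.h"
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Alignment (defaulting to 0) now comes before the single flags bitset.
  static SDValue emitVolatileLoad(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                                  SDValue Chain, SDValue Ptr) {
    return DAG.getLoad(VT, DL, Chain, Ptr, MachinePointerInfo(),
                       /*Alignment=*/0, MachineMemOperand::MOVolatile);
  }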
Reviewers: chandlerc, tstellarAMD
Subscribers: jholewinski, arsenm, jyknight, dsanders, nemanjai, llvm-commits
Differential Revision: http://reviews.llvm.org/D22249
llvm-svn: 275592
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the NVPTX backend, mainly by preferring MachineInstr&
over MachineInstr* when a pointer isn't nullable and using range-based
for loops.
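For illustration, the preferred shape looks roughly like this (the helper
function is hypothetical):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Iterate with range-based for and bind MachineInstr& directly, so no
  // implicit iterator-to-pointer conversion is needed.
  static unsigned countCalls(MachineBasicBlock &MBB) {
    unsigned Calls = 0;
    for (MachineInstr &MI : MBB)
      if (MI.isCall())
        ++Calls;
    return Calls;
  }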
There was one piece of questionable code in
NVPTXInstrInfo::AnalyzeBranch, where a condition checked a pointer
converted from an iterator for nullptr. Since this case is impossible
(moreover, the code above guarantees that the iterator is valid), I
removed the check when I changed the pointer to a reference.
Aside from that case, there should be no functionality change here.
llvm-svn: 274931
- Rename the ptx.read.* intrinsics to nvvm.read.ptx.sreg.* - some but
not all of these registers were already accessible via the nvvm
name.
- Rename ptx.bar.sync to nvvm.bar.sync, to match nvvm.bar0.
There's a fair amount of code motion here, but it's all very
mechanical.
llvm-svn: 274769
Avoid unnecessary spills of such variables to local space at the SASS
level, and the pointer-space conversion that goes with them.
Instead, make a local copy with appropriate addrspacecasts and let
LLVM optimize them away when possible.
This allows loading the value of the argument using [symbol+offset]
instead of converting the argument to a generic-space pointer and using it
for indexing (which also implicitly converts the param-space pointer to a
local-space one at the SASS level and triggers copying of the argument into
local space in the process).
This reduces call overhead, uses fewer registers and reduces overall
SASS size by 2-4%.
Differential Revision: http://reviews.llvm.org/D21421
llvm-svn: 273313
This used to be free, but copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.
llvm-svn: 272512
Summary:
Currently clang emits these instructions via inline (volatile) asm in
the CUDA headers. Switching to intrinsics will let the optimizer reason
across calls to these intrinsics.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D21160
llvm-svn: 272298
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.
llvm-svn: 272126
NVVMIntrRange adds !range metadata to calls of NVVM intrinsics
that return values within a known, limited range.
This allows LLVM to generate optimal code for indexing arrays
based on tid/ctaid, which is a frequently used pattern in CUDA code.
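Roughly what the pass attaches, sketched with an illustrative bound (the real
bound comes from the launch limits for the given special register):

  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"
  using namespace llvm;

  // Mark a tid/ctaid read as lying in [0, Max), so later passes can reason
  // about the resulting index.
  static void annotateRange(CallInst *SRegRead, uint64_t Max) {
    MDBuilder MDB(SRegRead->getContext());
    SRegRead->setMetadata(LLVMContext::MD_range,
                          MDB.createRange(APInt(32, 0), APInt(32, Max)));
  }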
Differential Revision: http://reviews.llvm.org/D20644
llvm-svn: 270872
Having an enum member named Default is quite confusing: Is it distinct
from the others?
This patch removes that member and instead uses Optional<Reloc::Model> in
places where we have a user input that hasn't yet been mapped to the
default value, which now clearly has to be one of the remaining three
options.
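A minimal sketch of the new shape (the Static fallback is purely
illustrative; each target picks its own default):

  #include "llvm/ADT/Optional.h"
  #include "llvm/Support/CodeGen.h"
  using namespace llvm;

  // The user input stays Optional until the target maps it to a real model.
  static Reloc::Model resolveRelocModel(Optional<Reloc::Model> RM) {
    return RM ? *RM : Reloc::Static;
  }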
llvm-svn: 269988
- Where we were returning a node before, call ReplaceNode instead.
- Where we would return null to fall back to another selector, rename
the method to try* and return a bool for success.
Part of llvm.org/pr26808.
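The updated selector shape, roughly (the class and matcher below are
hypothetical):

  #include "llvm/CodeGen/SelectionDAGISel.h"
  using namespace llvm;

  // Hypothetical target ISel showing the new contract: try* selectors return
  // bool and call ReplaceNode on success instead of returning the new node.
  class MyDAGToDAGISel : public SelectionDAGISel {
    SDNode *matchFoo(SDNode *N);  // hypothetical matcher; null means no match

    bool tryFoo(SDNode *N) {
      SDNode *Folded = matchFoo(N);
      if (!Folded)
        return false;             // let another selector have a go
      ReplaceNode(N, Folded);     // previously: return Folded;
      return true;
    }
  };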
llvm-svn: 269483
Many files include Passes.h but only a fraction needs to know about the
TargetPassConfig class. Move it into its own header. Also rename
Passes.cpp to TargetPassConfig.cpp while we are at it.
llvm-svn: 269011
This is a step towards removing the rampant undefined behaviour in
SelectionDAG, which is a part of llvm.org/PR26808.
We rename SelectionDAGISel::Select to SelectImpl and update targets to
match, and then change Select to return void and consolidate the
sketchy behaviour we're trying to get away from there.
Next, we'll update backends to implement `void Select(...)` instead of
SelectImpl and eventually drop the base Select implementation.
llvm-svn: 268693
Summary:
We don't have sign-/zero-extending ldg/ldu instructions defined,
so we need to emulate them with explicit CVTs. We were originally
handling the i8 case, but not any other cases.
Fixes PR26185
Reviewers: jingyue, jlebar
Subscribers: jholewinski
Differential Revision: http://reviews.llvm.org/D19615
llvm-svn: 268272
Summary:
Currently the NVVMReflect pass is run at the beginning of our backend
passes. But really, it should be run as early as possible, as it's
simply resolving an "if" statement in code. So copy it into
TargetMachine::addEarlyAsPossiblePasses.
We still run it at the beginning of the backend passes, since it's
needed for correctness when lowering to nvptx.
(Specifically, NVVMReflect changes each call to the __nvvm_reflect
function or llvm.nvvm.reflect intrinsic into an integer constant, based
on the pass's configuration. Clearly we miss many optimization
opportunities if we perform this transformation at the beginning of
codegen.)
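For illustration, the kind of source pattern the pass folds looks roughly
like this (the helper declarations are hypothetical; in the real CUDA headers
the pattern appears in __device__ code):

  extern "C" int __nvvm_reflect(const char *);
  float ftz_variant(float x);      // hypothetical flush-to-zero path
  float precise_variant(float x);  // hypothetical default path

  float dispatch(float x) {
    // NVVMReflect turns the call below into an integer constant, so one
    // branch goes dead; folding it early lets later passes clean up after it.
    return __nvvm_reflect("__CUDA_FTZ") ? ftz_variant(x) : precise_variant(x);
  }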
Reviewers: rnk
Subscribers: tra, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D18616
llvm-svn: 267765
NVPTXLowerKernelArgs is required for correctness, so it should not be guarded
by CodeGenOpt::None.
NVPTXPeephole is optimization only, so it should be skipped when the
optimization level is CodeGenOpt::None.
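Roughly the intended guard, sketched (the hook name is illustrative and the
pass-creation calls are abridged from the real NVPTXPassConfig):

  void NVPTXPassConfig::addMyPasses() {  // illustrative hook
    // Required for correctness: never gated on the optimization level.
    addPass(createNVPTXLowerKernelArgsPass(&getNVPTXTargetMachine()));
    // Optimization only: honour CodeGenOpt::None.
    if (getOptLevel() != CodeGenOpt::None)
      addPass(createNVPTXPeephole());
  }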
llvm-svn: 267619
Removed some unused headers, replaced some headers with forward class declarations.
Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
Patch by Eugene Kosov <claprix@yandex.ru>
Differential Revision: http://reviews.llvm.org/D19219
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
Summary:
Calls on NVPTX are unusually expensive (for one thing, lots of state
needs to be saved to memory, which is slow), so make the inliner much
more aggressive.
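The message doesn't spell out the mechanism; one knob a target can turn for
this is TTI's inlining threshold multiplier, sketched below purely as an
assumption (the value shown is illustrative, not the patch's):

  #include "NVPTXTargetTransformInfo.h"
  using namespace llvm;

  // Scale the generic inline threshold up for NVPTX, where calls are costly.
  unsigned NVPTXTTIImpl::getInliningThresholdMultiplier() { return 5; }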
Reviewers: chandlerc
Subscribers: jholewinski, llvm-commits, tra
Differential Revision: http://reviews.llvm.org/D18561
llvm-svn: 266406
Add a common parent class for ConstantArray, ConstantVector, and
ConstantStruct called ConstantAggregate. These are the aggregate
subclasses of Constant that take operands.
This is mainly a cleanup, adding a common `isa` target and removing
duplicated code. However, it also simplifies caching which constants
point transitively at `GlobalValue` (a possible future direction).
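The common `isa` target in a nutshell (the helper name is hypothetical):

  #include "llvm/IR/Constants.h"
  using namespace llvm;

  // One check now covers ConstantArray, ConstantStruct, and ConstantVector.
  static bool isAggregateWithOperands(const Constant *C) {
    return isa<ConstantAggregate>(C);
  }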
llvm-svn: 265466
Summary:
Previously, we were running afoul of the assertion
  EVT(CLI.Ins[i].VT) == InVals[i].getValueType() &&
      "LowerCall emitted a value with the wrong type!"
in SelectionDAGBuilder.cpp when running the NVPTX/i8-param.ll test.
This is because our backend (for some reason) treats small return values
as i32, but it wasn't ever truncating the i32 back down to the expected
width in the DAG.
Unclear to me whether this fixes any actual bugs -- in this test, at
least, the generated code is unchanged.
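The missing step amounts to something like this sketch (names are
illustrative):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // The backend hands small return values back as i32; truncate to the width
  // the caller recorded (CLI.Ins[i].VT) before pushing onto InVals.
  static SDValue truncateToExpected(SelectionDAG &DAG, const SDLoc &dl,
                                    SDValue Widened, EVT ExpectedVT) {
    if (Widened.getValueType() == ExpectedVT)
      return Widened;
    return DAG.getNode(ISD::TRUNCATE, dl, ExpectedVT, Widened);
  }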
Reviewers: jingyue
Subscribers: llvm-commits, tra, jholewinski
Differential Revision: http://reviews.llvm.org/D17872
llvm-svn: 265091
Summary:
Previously the NVVMReflect pass would read its configuration from
command-line flags or a static configuration given to the pass at
instantiation time.
This doesn't quite work for clang's use-case. It needs to pass a value
for __CUDA_FTZ down on a per-module basis. We use a module flag for
this, so the NVVMReflect pass needs to be updated to read said flag.
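Roughly the producer side of that contract (treat the flag key and write
behavior as assumptions; the pass just needs to read the same key clang
writes):

  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Record the per-module __CUDA_FTZ setting where NVVMReflect can find it.
  static void requestFlushToZero(Module &M) {
    M.addModuleFlag(Module::Override, "nvvm-reflect-ftz", 1);
  }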
Reviewers: tra, rnk
Subscribers: cfe-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D18672
llvm-svn: 265090