This patch does three things:
- Map the /external:I flag to -isystem
- Add support for the /external:env:<var> flag which reads system
include paths from the <var> environment variable
- Pick up system include dirs from EXTERNAL_INCLUDE in addition to the old
INCLUDE environment variable.
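For example (illustrative paths; the flag spellings are the documented MSVC ones):

  clang-cl /external:I C:\third_party\include main.cpp
  set EXTERNAL_INCLUDE=C:\sdk\include
  clang-cl /external:env:EXTERNAL_INCLUDE main.cpp

Headers found through these paths are treated as system headers, so
warnings in them are suppressed just as with -isystem.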
Differential Revision: https://reviews.llvm.org/D104387
Conversion from a fixed-point number to a floating-point number is done by
multiplying the fixed-point number by 2^(-n) where n is the number of
fractional bits. Currently this is lowered to a vcvt
(integer to floating-point) then a vmul, but it can instead be lowered
directly to a vcvt (fixed-point to floating-point). This patch enables
such transformations as long as the multiplication factor is a power of 2.
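A C++ sketch of the conversion in question (Q16.16 chosen arbitrarily;
names are illustrative):

  #include <cstdint>

  float fixed_to_float(int32_t x) {
    // x is Q16.16: multiply by 2^(-16) to recover the real value.
    // Without this patch the backend emits a vcvt (integer -> float)
    // followed by a vmul; since the factor is a power of 2, the pair
    // can instead become a single vcvt (fixed-point -> float).
    return (float)x * (1.0f / 65536.0f);
  }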
Differential Revision: https://reviews.llvm.org/D103903
HostInfoBase has a deleted dtor/ctor so there is no need to do the same for
all the classes inheriting from it.
Reviewed By: DavidSpickett, JDevlieghere
Differential Revision: https://reviews.llvm.org/D104221
Redirect the copy ctor to the actual class instead of
overwriting it with the `TypeID`-based ctor.
This allows the final Pass classes to have extra fields and logic for their copies.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D104302
The intention is now that AppendError/SetError/AppendRawError only
be called with some message to show; this enforces that.
SetError with a Status and a fallback string first asserts that
the Status is a failure Status, then calls SetError(StringRef),
which checks that the message (either the fallback or the
Status' own) is valid.
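A minimal sketch of that SetError overload (the exact signatures here
are assumptions, not copied from lldb):

  void CommandReturnObject::SetError(const Status &error,
                                     const char *fallback_error_cstr) {
    // Only a failure Status may be turned into an error message.
    assert(error.Fail() && "SetError given a success Status");
    const char *msg = error.AsCString();
    if (!msg || msg[0] == '\0')
      msg = fallback_error_cstr;
    SetError(llvm::StringRef(msg)); // asserts the message is non-empty
  }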
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D104525
* Remove dependency: Standard --> MemRef
* Add dependencies: GPUToNVVMTransforms --> MemRef, Linalg --> MemRef, MemRef --> Tensor
* Note: The `subtensor_insert_propagate_dest_cast` test case in MemRef/canonicalize.mlir will be moved to Tensor/canonicalize.mlir in a subsequent commit, which moves over the remaining Tensor ops from the Standard dialect to the Tensor dialect.
Differential Revision: https://reviews.llvm.org/D104506
Use poison instead of undef for cases dealing with unreachable
code. This still leaves the more interesting case of "load from
uninitialized memory" as undef.
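In terms of the LLVM C++ API, the distinction is roughly (Ty is some
llvm::Type*):

  // Result of code that can never execute: it carries no constraints
  // at all, so use the most foldable constant.
  Value *Dead = PoisonValue::get(Ty);

  // Load from uninitialized memory: the memory has *some* contents,
  // just unknown ones, so this still folds to undef.
  Value *Uninit = UndefValue::get(Ty);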
getSpecializationCost was returning INT_MAX for cases where specialisation
shouldn't happen, but this wasn't checked when specialisation was
forced.
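A hedged sketch of the fix (names invented for illustration; INT_MAX is
the "don't specialise" sentinel mentioned above):

  #include <climits>

  bool shouldSpecialize(int Cost, int Bonus, bool Forced) {
    if (Cost == INT_MAX)  // specialisation is invalid outright
      return false;       // previously not checked when Forced was set
    if (Forced)
      return true;
    return Bonus > Cost;
  }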
Differential Revision: https://reviews.llvm.org/D104461
This pass aims to optimize VGPR live-range in a typical divergent if-else
control flow. For example:
  def(a)
  if (cond)
    use(a)
    ...  // A
  else
    use(a)
As AMDGPU accesses VGPRs with respect to the active mask, we can mark `a` as
dead in region A. For details, please refer to the comments in the
implementation file.
The pass is enabled by default; the frontend can disable it with
"-amdgpu-opt-vgpr-liverange=false".
Differential Revision: https://reviews.llvm.org/D102212
This revision adds a BufferizationAliasInfo which maintains and updates information about which tensors will alias once bufferized, which bufferized tensors are equivalent to others and how to handle clobbers.
Bufferization greedily tries to bufferize inplace by:
1. first trying to bufferize SubTensorInsertOp inplace, in reverse order (these are deemed the most expensive).
2. then trying to bufferize all ops that are neither SubTensorOp nor SubTensorInsertOp, in reverse order.
3. lastly trying to bufferize all SubTensorOp in reverse order.
Reverse order is a heuristic that seems to work nicely because structured tensor codegen very often proceeds by:
1. take a subset of a tensor
2. compute on that subset
3. insert the result subset into the full tensor and yield a new tensor.
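A rough C++ analogy of that subtensor/compute/subtensor_insert shape
(purely illustrative; the real ops are MLIR tensor ops):

  #include <algorithm>
  #include <vector>

  std::vector<float> step(std::vector<float> t) {
    std::vector<float> sub(t.begin(), t.begin() + 4);  // 1. take a subset
    for (float &v : sub) v *= 2.0f;                    // 2. compute on it
    std::copy(sub.begin(), sub.end(), t.begin());      // 3. insert it back
    return t;  // yield a new tensor; inplace bufferization elides the copies
  }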
BufferizationAliasInfo + equivalence sets + clobber analysis allows bufferizing nested
subtensor/compute/subtensor_insert sequences inplace to a certain extent.
To fully realize inplace bufferization, additional container-containee analysis will be necessary and is left for a subsequent commit.
Differential Revision: https://reviews.llvm.org/D104110
The main motivation behind pointer replacement of LDS use within non-kernel
functions is to avoid the subsequent LDS lowering pass directly packing
LDS (assume large LDS) into a struct type, which would otherwise allocate
huge memory for the struct instance within every kernel.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D103225
There does not seem to be any use of these functions. They just
store the value to a local that is never used again.
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D104512
This patch, as a follow-up to D95505, adds support for writing
long symbol names by implementing the StringTable. Only XCOFF32
is supported for now.
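For reference, the XCOFF32 naming convention being implemented, as a C++
sketch of the on-disk layout (not the actual writer code):

  #include <cstdint>

  // XCOFF32 symbol-table name field: names longer than 8 bytes are
  // spilled to the string table, and the entry stores a zero marker
  // plus a byte offset into that table instead of the name itself.
  union XCOFF32SymbolName {
    char Name[8];        // used when the name fits in 8 bytes
    struct {
      uint32_t Zeroes;   // 0 => consult the string table
      uint32_t Offset;   // offset of the name in the string table
    } TableRef;
  };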
Reviewed By: jhenderson, shchenz
Differential Revision: https://reviews.llvm.org/D103455
This patch lifts the requirement that Phis have only one incoming live
block. There can be multiple live blocks if the same value comes to the
Phi from all of them.
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D103959
Adjust the indentation of statements that span more than one line. In
TreeTransform<Derived>::TransformDependentScopeDeclRefExpr, the second
line of the statement

  NestedNameSpecifierLoc QualifierLoc
      = getDerived().TransformNestedNameSpecifierLoc(E->getQualifierLoc());

is not indented relative to the first line. There is a similar case in
TreeTransform<Derived>::TransformUnresolvedMemberExpr. Both functions
were fixed with clang-format.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D104145
The variable used to need the wider scope, but doesn't after the
reland. See LC_LINKER_OPTIONS-related discussion on
https://reviews.llvm.org/D104353 for background.
This patch updates InstCombine to use a poison constant to represent the resulting value of (either semantically or syntactically) unreachable instrs, or the don't-care value of an unreachable store instruction.
This allows more aggressive folding of unused results, as shown in llvm/test/Transforms/InstCombine/getelementptr.ll.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D104602
Real zerofill sections go after __thread_bss, since zerofill sections
must all be at the end of their segment and __thread_bss must be right
after __thread_data.
Works fine already, but wasn't tested as far as I can tell.
Also tweak comment about zerofill sections a bit.
No behavior change.
Differential Revision: https://reviews.llvm.org/D104609
We make it less than INT_MAX in order not to conflict with the ordering
of zerofill sections, which must always be placed at the end of their
segment.
This is the more structural fix for the issue addressed in {D104596}.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104607
TypePromotion is meant to be a generic pass and doesn't reference
any ARM intrinsics, so it shouldn't include IntrinsicsARM.h.
The other Intrinsic-related headers appear to be unneeded as well.
Add a new feature to process save-core on Darwin systems -- for
lldb to create a user process corefile with only the dirty (modified)
memory pages included. All of the binaries that were used in the
corefile are assumed to still exist on the system for the duration
of the use of the corefile. A new --style option to process save-core
is added, so a full corefile can be requested when it needs to be
portable across systems or across time.
debugserver can now identify the dirty pages in a memory region
when queried with qMemoryRegionInfo, and the size of vm pages is
given in qHostInfo.
Create a new "all image infos" LC_NOTE for Mach-O which allows us
to describe all of the binaries that were loaded in the process --
load address, UUID, file path, segment load addresses, and optionally
whether code from the binary was executing on any thread. The old
approach -- read dyld_all_image_infos and then the in-memory Mach-O
load commands to get segment load addresses -- no longer works when
we only have dirty memory.
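Usage looks roughly like this (the exact --style value names here are
assumptions, not checked against the final option table):

  (lldb) process save-core --style modified-memory /tmp/proc.core
  (lldb) process save-core --style full /tmp/proc-full.core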
rdar://69670807
Differential Revision: https://reviews.llvm.org/D88387
This is a more general alternative/extension to D102635. Rather than
handling the special case of "header exit with non-exiting latch",
this unrolls against the smallest exact trip count from any exit.
The latch exit is no longer treated as privileged when it comes to
full unrolling.
The motivating case is in full-unroll-one-unpredictable-exit.ll.
Here the header exit is an IV-based exit, while the latch exit is
a data comparison. This kind of loop does not get rotated, because
the latch is already exiting, and loop rotation doesn't try to
distinguish IV-based/analyzable latches.
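A C++ sketch of the motivating shape (invented names; see
full-unroll-one-unpredictable-exit.ll for the actual IR):

  int find(const int *data) {
    for (int i = 0; i < 8; ++i) {  // IV-based exit: exact trip count of 8
      if (data[i] == 42)           // data-dependent exit: unpredictable
        return i;
    }
    return -1;
  }

The exact trip count of 8 comes from the IV-based exit; with this patch,
full unrolling can use it even though that exit is not the latch exit.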
Differential Revision: https://reviews.llvm.org/D102982