In the changes Jonas made in https://reviews.llvm.org/D117340, a
small oversight was that PlatformMacOSX (despite the name) is active
for any native Darwin operating system, where lldb and the target
process are running on the same system. This patch uses compile-time
checks to return the appropriate OSType for the OS lldb is being
compiled to, so the "host" platform will correctly be selected when
lldb & the inferior are both running on that OS. It also makes a small
change to PlatformMacOSX::GetSupportedArchitectures, adding additional
recognized triples when running on macOS but not on other native Darwin
systems.
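As a rough illustration of the approach (a minimal sketch, not the actual lldb code; it assumes Darwin's TargetConditionals.h macros and llvm::Triple::OSType):
```
// Minimal sketch: pick the OSType for the OS this binary is compiled for,
// so the "host" platform is chosen when lldb and the inferior run on the
// same Darwin system. Not the exact in-tree code.
#include "llvm/ADT/Triple.h"
#include <TargetConditionals.h>

static llvm::Triple::OSType GetCompileTimeOSType() {
#if TARGET_OS_OSX
  return llvm::Triple::MacOSX;
#elif TARGET_OS_IOS
  return llvm::Triple::IOS;
#elif TARGET_OS_TV
  return llvm::Triple::TvOS;
#elif TARGET_OS_WATCH
  return llvm::Triple::WatchOS;
#else
  return llvm::Triple::UnknownOS;
#endif
}
```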
Differential Revision: https://reviews.llvm.org/D120517
rdar://89247060
Previously, OpDSL operations used hardcoded type conversion operations (cast or cast_unsigned). Supporting signed and unsigned casts thus meant implementing two different operations. Type function attributes allow us to define a single operation that has a cast type function attribute, which at operation instantiation time may be set to cast or cast_unsigned. We may, for example, define a matmul operation with a cast argument:
```
@linalg_structured_op
def matmul(A=TensorDef(T1, S.M, S.K),
           B=TensorDef(T2, S.K, S.N),
           C=TensorDef(U, S.M, S.N, output=True),
           cast=TypeFnAttrDef(default=TypeFn.cast)):
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
When instantiating the operation the attribute may be set to the desired cast function:
```
linalg.matmul(lhs, rhs, outs=[out], cast=TypeFn.cast_unsigned)
```
The revision introduces an enum in the Linalg dialect that maps one-to-one to the type functions defined by OpDSL.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D119718
Declaration and definition attributes must match,
otherwise it may cause issues at link time.
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D120493
SCEV's ExprValueMap currently tracks not only which IR Values
correspond to a given SCEV expression, but additionally stores that
it may be expanded in the form X+Offset. In theory, this allows
reusing existing IR Values in more cases.
In practice, this doesn't seem to be particularly useful (the test
changes are rather underwhelming) and adds a good bit of complexity.
Per https://github.com/llvm/llvm-project/issues/53905, we have an
invalidation issue with these offset expressions.
Differential Revision: https://reviews.llvm.org/D120311
This patch extends support for generating huge stack frames on 64-bit XPLINK by implementing the ABI-mandated call to the stack extension routine.
Reviewed By: uweigand
Differential Revision: https://reviews.llvm.org/D120450
This patch removes the `const` from `usingBigM` to enable the implicit copy assignment operator for Simplex.
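As a minimal illustration of why this matters (a sketch, not the Simplex code itself; the member names are borrowed for illustration only): a const non-static data member implicitly deletes the copy assignment operator, because the generated operator would have to assign to the const member.
```
// Sketch only: the struct and member names are illustrative.
struct HasConstMember {
  const bool usingBigM = false; // const member: copy assignment is deleted
  int rows = 0;
};

struct HasPlainMember {
  bool usingBigM = false;       // non-const: implicit copy assignment works
  int rows = 0;
};

int main() {
  HasPlainMember a, b;
  a = b;                        // OK

  HasConstMember c, d;
  // c = d;                     // error: implicitly deleted copy assignment
  return 0;
}
```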
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D120542
The checker missed a check for the case when the parameter is referenced by an lvalue, which could cause build breakages.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D117090
vcpop and vfirst are still useful when VL=0.
vcpop is equivalent to li 0 and vfirst is equivalent to li -1,
since no mask elements are active.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D120302
Clang computes the default ABI if -mabi is empty
and encodes it in an LLVM IR module flag since D105555.
For correctness, llc needs to use the same target-abi
(Options.MCOptions.ABIName) as the ABI encoded in the IR.
getSubtargetImpl already has a check for them, but only if
Options.MCOptions.ABIName is not empty.
To be more robust we could also check the explicit ABI, but
currently there are two different pieces of logic for
computing the default ABI.
The front end defaults to ilp32/ilp32e/lp64, and to
ilp32d/lp64d when there is hardware support for the D extension.
The backend defaults to ilp32/ilp32e/lp64.
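For reference, a hedged sketch of the front-end default described above (illustrative only; the in-tree logic and names differ):
```
// Sketch of the frontend's default-ABI rule as described above:
// ilp32/lp64 normally, ilp32e for RV32E, and ilp32d/lp64d when the
// D extension is available. Names and structure are illustrative.
#include "llvm/ADT/StringRef.h"

llvm::StringRef computeDefaultABI(bool Is64Bit, bool IsRVE, bool HasD) {
  if (IsRVE)
    return "ilp32e";            // RV32E only supports ilp32e
  if (HasD)
    return Is64Bit ? "lp64d" : "ilp32d";
  return Is64Bit ? "lp64" : "ilp32";
}
```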
Reviewed by: asb, jrtc27
Differential Revision: https://reviews.llvm.org/D118333
Expand `TruncInstCombine` to handle loops by adding `phi` nodes
to the expression graph.
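As a hedged example of the kind of source pattern this targets (an assumption based on the description, not a test case from the patch): a loop-carried value whose only demanded bits come from a truncated use can now have its whole expression graph, including the phi, evaluated in the narrower type.
```
// Illustrative only: the accumulator is only demanded at 8 bits, so the
// add/phi chain could be narrowed once phi nodes are part of the
// expression graph.
unsigned char sum_bytes(const unsigned char *p, int n) {
  unsigned int acc = 0;          // wide accumulator
  for (int i = 0; i < n; ++i)
    acc = acc + p[i];            // becomes a phi + add in the graph
  return (unsigned char)acc;     // only the low 8 bits are used
}
```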
Reviewed by: RKSimon, lebedev.ri
(recommit of fixed f84d732f, reverted by 8ad6d5e after sanitizer breakage)
Differential Revision: https://reviews.llvm.org/D109817
(resubmit https://reviews.llvm.org/D119207 after fixing the test for
some build settings)
This patch converts CUDA pointer kernel arguments with default address
space to CrossWorkGroup address space (__global in OpenCL). This is
because Generic or Function (OpenCL's private) is not supported as
a storage class for kernel pointer types.
Differential revision: https://reviews.llvm.org/D120366
GCC installations compiled with --enable-version-specific-runtime-libs change the paths where includes and libs are found.
This patch adds support for these cases.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D118700
ObjFile::parse combines symbol initialization and resolution. Many tasks
unrelated to symbol resolution can be postponed and parallelized. This patch
extracts local symbol initialization and parallelizes it.
Technically the new function initializeLocalSymbols can be merged into
ObjFile::postParse, but functions like getSrcMsg may access the
uninitialized (all nullptr) local part of InputFile::symbols.
Linking chrome: 1.02x as fast with glibc malloc, 1.04x as fast with mimalloc
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D119909
The min/max intrinsic cost is currently too low because in the cost calculation
we subtract the cost of the vector compare as we will not emit it.
For the cost of the vector compare we are currently passing BAD_ICMP_PREDICATE
which returns 3, the worst case cost.
I think we should be passing VecPred instead, since we know the predicates of
the compare instruction.
I think this is related to commit b3b993a7ad which introduced the predicate
argument to getCmpSelInstrCost().
https://reviews.llvm.org/rGb3b993a7ad817c3c5801341fa78f34332900eb83
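A hedged sketch of the intended change (the call shown simplifies the real getCmpSelInstrCost signature, and the surrounding function is illustrative, not a call site from the patch):
```
// Sketch only: pass the known predicate of the compare feeding the
// min/max instead of BAD_ICMP_PREDICATE (the worst-case cost) when
// subtracting the compare's cost. Signature details are assumptions.
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

InstructionCost replacedCmpCost(const TargetTransformInfo &TTI, Type *VecTy,
                                Type *CondTy, CmpInst::Predicate VecPred) {
  // Before: CmpInst::BAD_ICMP_PREDICATE was passed here, costing the
  // compare at its worst case and over-subtracting from the intrinsic cost.
  return TTI.getCmpSelInstrCost(Instruction::ICmp, VecTy, CondTy, VecPred,
                                TargetTransformInfo::TCK_RecipThroughput);
}
```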
Differential Revision: https://reviews.llvm.org/D120439
Extend pre-emit peephole for S_CBRANCH_VCC[N]Z to eliminate
redundant S_AND operations against EXEC for V_CMP results in VCC.
These occur after register allocation when VCC has been
selected as the comparison destination.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D120202
In _GLIBCXX_DEBUG builds, potentially implicitly enabled by
LLVM_ENABLE_EXPENSIVE_CHECKS, std::set<A, B>::iterator and
std::set<A, C>::iterator are distinct types that are not
interconvertible. This change aligns the iterator types with the set
types.
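A small standalone example of the underlying issue (illustrative; the comparator name is made up): sets with different comparator types are different class template instantiations, so their iterator types are distinct and, under _GLIBCXX_DEBUG, not interconvertible.
```
// Illustrative only: two std::set instantiations that differ only in the
// comparator type have distinct, non-interconvertible iterator types.
#include <set>

struct ReverseLess {
  bool operator()(int a, int b) const { return b < a; }
};

int main() {
  std::set<int> s1{1, 2, 3};              // std::set<int, std::less<int>>
  std::set<int, ReverseLess> s2{1, 2, 3};

  std::set<int>::iterator it1 = s1.begin();               // OK
  // std::set<int>::iterator bad = s2.begin();            // error: wrong type
  std::set<int, ReverseLess>::iterator it2 = s2.begin();  // OK: matching type
  (void)it1;
  (void)it2;
  return 0;
}
```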
Reviewed By: hoy
Differential Revision: https://reviews.llvm.org/D119798
This fixes instances of the newly added `-Wunqualified-std-cast-call`.
(Commit 7853371146 removed unqualified `move` from the tests,
but these unqualified `move`s remained undetected in the actual headers.)
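For reference, a small example of the pattern the warning flags (not taken from the headers themselves):
```
// Illustrative only: the unqualified call still compiles because ADL finds
// std::move through the std::string argument, which is exactly why
// -Wunqualified-std-cast-call exists.
#include <string>
#include <utility>

std::string consume(std::string s) { return s; }

std::string demo(std::string s) {
  return consume(move(s));         // warning: unqualified call to 'std::move'
  // return consume(std::move(s)); // qualified call: no warning
}
```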
Differential Revision: https://reviews.llvm.org/D120509
This adds a BUILD_ID prefix to the llvm-symbolizer stdin and argument
syntax. The prefix causes the given binary name to be interpreted as a
build ID instead of an object file path. The semantics are analogous to
the behavior of --obj and --build-id.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D119901
This patch changes the version of the V extension from 0.1 to 1.0 in RISCVInstrInfoVPseudos.td, RISCVInstrInfoVSDPatterns.td, RISCVInstrInfoVVLPatterns.td, and RISCVInstrInfoV.td.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D120525
This change adds a simple LRU cache to the Symbolize class to put a cap
on llvm-symbolizer memory usage. Previously, the Symbolizer's virtual
memory footprint would grow without bound as additional binaries were
referenced.
I'm putting this out there early for an informal review, since there may be
a dramatically different/better way to go about this. I still need to
figure out a good default constant for the memory cap and benchmark the
implementation against a large symbolization workload. Right now I've
pegged max memory usage at zero for testing purposes, which evicts the whole
cache every time.
Unfortunately, it looks like StringRefs in the returned DI objects can
directly refer to the contents of binaries. Accordingly, the cache
pruning must be explicitly requested by the caller, as the caller must
guarantee that none of the returned objects will be used afterwards.
For llvm-symbolizer this is a light burden; symbolization occurs
line-by-line, and the returned objects are discarded after each.
Implementation-wise, there are a number of nested caches that depend
on one another. I've implemented a simple Evictor callback system to
allow derived caches to register eviction actions to occur when the
underlying binaries are evicted.
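A minimal sketch of that callback arrangement (names and shapes are illustrative, not the actual llvm-symbolizer API):
```
// Sketch only: dependent caches register an eviction action with the owner
// of the underlying binaries; all actions run when those binaries are
// pruned, after the caller guarantees no returned objects are still live.
#include <functional>
#include <utility>
#include <vector>

class BinaryCache {
public:
  using Evictor = std::function<void()>;

  // Derived/dependent caches register what to clear when binaries go away.
  void addEvictionAction(Evictor E) { Evictors.push_back(std::move(E)); }

  // Must only be called once no returned DI objects are referenced anymore.
  void pruneLRU() {
    for (Evictor &E : Evictors)
      E();                       // clear dependent caches first
    // ...then drop the least-recently-used binaries themselves...
  }

private:
  std::vector<Evictor> Evictors;
};
```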
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D119784
Default the option introduced in D113372 to ON to match all(?) major Linux
distros. This matches GCC and improves consistency with Android and linux-musl
which always default to PIE.
Note: CLANG_DEFAULT_PIE_ON_LINUX will be removed in the future.
Reviewed By: thesamesam
Differential Revision: https://reviews.llvm.org/D120305
The `objc_precise_lifetime` attribute is applied to Objective-C pointers to ensure the optimizer does not prematurely release an object under Automatic Reference Counting (ARC). It is a common enough pattern to assign values to these variables but not reference them otherwise, and annotating them with `__unused` is not really correct as they are being used to ensure an object's lifetime.
Differential Revision: https://reviews.llvm.org/D120372
Adds two new parameters to control the size of data structures modeled in the environment: # of values and depth of the data structure. The environment already prevents creation of recursive data structures, but that was insufficient in practice. Very large structs still grind the analysis to a halt. These new parameters allow tuning the size more effectively.
In this patch, the parameters are set as internal constants. We leave to a future patch to make these proper model parameters.
Differential Revision: https://reviews.llvm.org/D120510
This was based off @thakis' draft in {D103517}. I employed templates to ensure
the support for `-why_live` wouldn't slow down the regular non-why-live code
path.
No stat sig perf difference on my 3.2 GHz 16-Core Intel Xeon W:
```
           base           diff           difference (95% CI)
sys_time   1.195 ± 0.015  1.199 ± 0.022  [ -0.4% .. +1.0%]
user_time  3.716 ± 0.022  3.701 ± 0.025  [ -0.7% .. -0.1%]
wall_time  4.606 ± 0.034  4.597 ± 0.046  [ -0.6% .. +0.2%]
samples    44             37
```
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D120377