The linker wrapper is used to perform linking and wrapping of embedded
device object files. Currently, its internals cannot be tested
easily. This patch adds the `--dry-run` and `--print-wrapped-module`
options to investigate the link jobs that will be run along with the
wrapped code that will be created to register the binaries.
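For example, a hypothetical invocation combining both options (the other wrapper arguments and the host linker arguments are elided and depend on the offloading build):
  clang-linker-wrapper --dry-run --print-wrapped-module <wrapper args> -- <host linker args>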
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D124039
The LLVM BPF pass SimplifyPatchable is used to do the necessary code conversion for CO-RE operations. While studying the bpf selftest 'exhandler', I found a corner case that is not handled properly. The following is the C code, modified from the original 'exhandler' code:
int g;
int test(struct t1 *p) {
  struct t2 *q = p->q;
  if (q)
    return 0;
  /* the compiler can prove q == 0 here */
  struct t3 *f = q->f;
  if (!f)
    g = 5;
  return 0;
}
For the code:
struct t3 *f = q->f;
if (!f) ...
The IR before the BPFMISimplifyPatchable pass looks like:
%5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
%6:gpr = LDD killed %5:gpr, 0
%7:gpr = LDD killed %6:gpr, 0
JNE_ri killed %7:gpr, 0, %bb.3
JMP %bb.2
Note that the compiler knows q = 0 based on dataflow and value analysis.
The correct generated code after the pass should be:
%5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
%7:gpr = LDD killed %5:gpr, 0
JNE_ri killed %7:gpr, 0, %bb.3
JMP %bb.2
But the current implementation does a further optimization on the above
code and generates:
%5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
JNE_ri killed %5:gpr, 0, %bb.3
JMP %bb.2
which is incorrect.
This patch adds a cache to remember those load insns that are not
associated with a CO-RE offset value, and skips them during the
transformation.
Differential Revision: https://reviews.llvm.org/D123883
Introduce a method on PyMlirContext (and plumb it through to Python) to
invalidate all of the operations in the live operations map and clear
it. Since Python has no notion of private data, an end-developer could
reach into some 3rd party API which uses the MLIR Python API (that is
behaving correctly with regard to holding references) and grab a
reference to an MLIR Python Operation, preventing it from being destructed and removed from the live operations map. This allows the API developer to clear the map when it calls C++ code which could delete operations, protecting itself from its users.
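A hedged sketch of the intended use from the C++ side (the helper, the include path, and the method name spelling are assumptions based on this description, not verbatim from the patch):
#include "IRModule.h"  // MLIR Python bindings internals (assumed path)
// Hypothetical helper in a library built on top of the MLIR Python
// API: before calling into C++ code that may erase operations,
// invalidate and clear the context's live operations map so stale
// Python handles held by users cannot be used afterwards.
void prepareForDestructiveTransform(mlir::python::PyMlirContext &ctx) {
  ctx.clearLiveOperations();
}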
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D123895
There's an existing generic combine that does this for legal types.
This patch adds a RISCV-specific combine for W instructions.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D123983
Reimplements MisExpect diagnostics from D66324 to reconstruct its
original checking methodology using only MD_prof branch_weights
metadata.
New checks rely on 2 invariants:
1) For frontend instrumentation, MD_prof branch_weights will always be
populated before llvm.expect intrinsics are lowered.
2) For IR and sample profiling, llvm.expect intrinsics will always be
lowered before branch_weights are populated from the IR profiles.
These invariants allow the checking to assume how the existing branch
weights are populated depending on the profiling method used, and emit
the correct diagnostics. If these invariants are ever invalidated, the
MisExpect related checks would need to be updated, potentially by
re-introducing MD_misexpect metadata, and ensuring it always will be
transformed the same way as branch_weights in other optimization passes.
Frontend-based profiling is now enabled without using LLVM args, by
introducing a new CodeGen option and checking whether the -Wmisexpect
flag has been passed on the command line.
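For illustration, a hedged sketch of the kind of annotation being checked (the functions are hypothetical; __builtin_expect lowers to llvm.expect):
extern void hot_path();
extern void cold_path();
void handle(int err) {
  // If the collected branch_weights say err != 0 is actually common,
  // compiling with profile data and -Wmisexpect diagnoses this hint.
  if (__builtin_expect(err != 0, 0))
    cold_path();
  else
    hot_path();
}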
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D115907
Remove constraint that an initializing expression of struct type must have an
associated `Value`. This invariant is not and will not be guaranteed by the
framework, because of potentially uninitialized fields.
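For instance, a hypothetical snippet where no single `Value` can soundly model the struct:
struct S { int x; };
S make(bool b) {
  S s;
  if (b)
    s.x = 1;
  return s;  // field `x` is potentially uninitialized
}
void use(bool b) {
  S s = make(b);  // initializing expression of struct type; the
                  // framework no longer guarantees an associated Value
}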
Differential Revision: https://reviews.llvm.org/D123961
This fixes a compile-time error recently introduced within the remote
offloading plugin. This patch also removes some unnecessary extra linker flags and adds an explicit abseil linker flag, without which we occasionally run into problems.
Differential Revision: https://reviews.llvm.org/D119984
Motivation: The intent here is for use in Swift.
When building a clang module for Swift consumption, Swift adds an
extension block to the module for name lookup purposes. Swift calls
this a SwiftLookupTable. One purpose that this serves is to handle
conflicting names between ObjC classes and ObjC protocols. They exist in
different namespaces in ObjC programs, but in Swift they would exist in
the same namespace. Swift handles this by appending a suffix to a
protocol name if it shares a name with a class. For example, if you have
an ObjC class named "Foo" and a protocol with the same name, the
protocol would be renamed to "FooProtocol" when imported into Swift.
When constructing the previously mentioned SwiftLookupTable, we use
Sema::LookupName to look up name conflicts for the previous problem.
By this time, the Parser has long finished its job so the call to
LookupName gets nullptr for its Scope (TUScope will be nullptr
by this point). The C/ObjC path does not have this problem because it
only uses the Scope in specific scenarios. The C++ codepath uses the
Scope quite extensively and will fail early on if the Scope it gets is
null. In our very specific case of looking up ObjC classes with a
specific name, we want to force Sema::LookupName to take the C/ObjC
codepath even if C++ or ObjC++ is enabled.
getUpperBound() is analogous to getLowerBound(), except that it computes
the upper bound; it is used in range analysis.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124020
Sequence is an important transform combination primitive that simply indicates
transform ops being applied in a row. The simplest version fails
immediately if any transformation in the sequence fails. Introducing this
operation allows one to start placing transform IR within other IR.
Depends On D123135
Reviewed By: Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123664
Handle casts for ranges similarly to the APSIntType::apply function, but for the whole range set. Support promotions, truncations, and conversions.
Example:
promotion:  char [0, 42] -> short [0, 42] -> int [0, 42] -> llong [0, 42]
truncation: llong [4295033088, 4295033130] -> int [65792, 65834] -> short [256, 298] -> char [0, 42]
conversion: char [-42, 42] -> uint [0, 42]U[4294967254, 4294967295] -> short [-42, 42]
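As a quick arithmetic check of the truncation chain above (plain C++, not the analyzer API; out-of-range signed conversions are implementation-defined before C++20 and modular from C++20 on):
#include <cassert>
#include <cstdint>
int main() {
  long long ll = 4295033088LL;             // the llong lower bound above
  int i = static_cast<int32_t>(ll);        // 4295033088 mod 2^32 == 65792
  short s = static_cast<int16_t>(i);       // 65792 mod 2^16 == 256
  signed char c = static_cast<int8_t>(s);  // 256 mod 2^8 == 0
  assert(i == 65792 && s == 256 && c == 0);
  return 0;
}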
Differential Revision: https://reviews.llvm.org/D103094
This patch adds a new function `mlirDenseElementsAttrBFloat16Get()`,
which accepts the shaped type, the number of BFloat16 values, and a
pointer to an array of BFloat16 values, each of which is a `uint16_t`
value.
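A minimal sketch of a call (the parameters follow the description above; `shapedType` is assumed to be a bf16-element shaped type with three elements, built elsewhere through the C API):
#include "mlir-c/BuiltinAttributes.h"
#include <stdint.h>
MlirAttribute makeBF16Attr(MlirType shapedType) {
  // bf16 bit patterns for 1.0, 2.0, 3.0 (the upper 16 bits of the
  // corresponding float32 encodings).
  uint16_t values[3] = {0x3F80, 0x4000, 0x4040};
  return mlirDenseElementsAttrBFloat16Get(shapedType, 3, values);
}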
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D123981
A new set of overloaded functions named getOrInsertLibFunc() should now be
used instead of getOrInsertFunction() when building a libcall from within
an LLVM optimizer. The idea is that this new function also makes sure that
any mandatory argument attributes are added to the function prototype
(after calling getOrInsertFunction()).
inferLibFuncAttributes() is renamed to inferNonMandatoryLibFuncAttrs(), as
it only adds attributes that are not necessary for correctness but merely
help with later optimizations.
Generally, the front end is responsible for building a correct function
prototype with the needed argument attributes. If, however, the middle end
is the one creating the call, e.g. when replacing one libcall with another,
then it must take on this responsibility.
This continues the work of properly handling argument extension if required
by the target ABI when building a lib call. getOrInsertLibFunc() now does
this for all libcalls currently built by any LLVM optimizer. It is expected
that when, in the future, a new optimization builds a new libcall with an
integer argument, it will be added to getOrInsertLibFunc() with the proper
handling. Note that not all target ABIs require sign/zero extension of
integer arguments to the full register width, but this will be done
selectively as determined by getExtAttrForI32Param().
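A hedged sketch of the intended call pattern (overloads as declared in BuildLibCalls.h; the helper and the 64-bit size_t are illustrative assumptions):
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/BuildLibCalls.h"
using namespace llvm;
// Illustrative only: materialize a strlen prototype from a middle-end
// pass. Unlike plain getOrInsertFunction(), getOrInsertLibFunc() also
// attaches any mandatory ABI argument attributes (e.g. the extension
// attribute chosen by getExtAttrForI32Param()).
static FunctionCallee getStrlen(Module *M, const TargetLibraryInfo &TLI) {
  LLVMContext &Ctx = M->getContext();
  FunctionType *StrlenTy = FunctionType::get(
      Type::getInt64Ty(Ctx), {Type::getInt8PtrTy(Ctx)}, /*isVarArg=*/false);
  return getOrInsertLibFunc(M, TLI, LibFunc_strlen, StrlenTy);
}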
Review: Eli Friedman, Nikita Popov, Dávid Bolvanský
Differential Revision: https://reviews.llvm.org/D123198
With 'nuw' we can convert the increment of the shift amount
into a pre-shift (constant fold) of the shifted constant:
https://alive2.llvm.org/ce/z/FkTyR2
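A worked instance of the fold (a plain C++ identity check, not the pass code): with C = 3, the increment folds into the constant, giving 6 << x:
#include <cassert>
// When x + 1 cannot wrap, C << (x + 1) == (C << 1) << x.
unsigned before(unsigned x) { return 3u << (x + 1); }
unsigned after(unsigned x) { return 6u << x; }
int main() {
  for (unsigned x = 0; x + 1 < 32; ++x)
    assert(before(x) == after(x));
  return 0;
}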
Fixes issue #41976
When constructing canonical relationships between two regions, the first instruction of a basic block from the first region is used to find the corresponding basic block from the second region. However, debug instructions are not included in similarity matching, and therefore do not have a canonical numbering. This patch makes sure to ignore the debug instructions when finding the first instruction in a basic block.
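A hedged sketch of the adjustment (the helper is illustrative, not the actual IROutliner code):
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;
// Find the first instruction that participates in similarity matching:
// debug intrinsics have no canonical numbering and must be skipped.
static Instruction *firstSignificantInst(BasicBlock &BB) {
  for (Instruction &I : BB)
    if (!isa<DbgInfoIntrinsic>(I))
      return &I;
  return nullptr;
}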
Reviewer: paquette
Differential Revision: https://reviews.llvm.org/D123903
Makes
bin/llvm-lit \
projects/compiler-rt/test/profile/Profile-arm64/instrprof-darwin-dead-strip.c
pass on my machine.
Without this change, ld64 complains that the bitcode was generated by LLVM 15
while the reader is 13.1 -- the version of Xcode on my machine. Looks like the
DYLD_LIBRARY_PATH technique isn't working.
-lto_library was added back in ld64-136, which was in Xcode 4.6, which was
released over 10 years ago. So relying on it should be safe by now.
Differential Revision: https://reviews.llvm.org/D124018
The printer is now resilient to invalid IR and automatically falls back
to the generic form when it encounters it. Printing in the generic form
on pass failure was a conservative option from before the printer was
made fail-safe.
Reviewed By: lattner, rriddle, jpienaar, bondhugula
Differential Revision: https://reviews.llvm.org/D123915
The version number increases the cardinality of span latency metrics. It was
also being shown to the user via file status updates as `Running Update (x)`;
after this change we'll only display `Running Update`. This also affects logs
in case of a crash, but the contents and version numbers for inputs are
already printed separately in that case.
Differential Revision: https://reviews.llvm.org/D124013
Two tests used the term "full archive" rather than "regular"; these have
been updated, including the test names. They now also use --thin rather
than the deprecated T option. This change was made in preparation for D123142.
Differential Revision: https://reviews.llvm.org/D123778
The goal of this patch is to improve a distribution build's flexibility to include only applicable header files.
Currently, the clang-resource-headers target contains nearly all the files in clang/lib/Headers. Most of these files are platform specific (e.g. immintrin.h is x86 specific). A distribution build has to either include all the headers for all the platforms, or not include any headers at all. For example, if a distribution build for powerpc includes the clang-resource-headers target, it will include all the x86-specific headers, even though the x86-specific headers cannot be used.
This patch breaks up the clang-resource-headers list into a core list and platform-specific lists. With the patch, a distribution build can now include the ppc-resource-headers target to include the headers applicable to the powerpc platform.
Specifically, one can now have
cmake ... LLVM_DISTRIBUTION_COMPONENTS="clang;ppc-resource-headers" ... ../llvm
ninja install-distribution then installs the powerpc headers.
Similarly, one can do
cmake ... LLVM_DISTRIBUTION_COMPONENTS="clang;x86-resource-headers" ... ../llvm
to include headers applicable to the x86 platform in a distribution installation.
To implement this behaviour, the patch does two things:
* It breaks up the long header file list into a core list and platform-specific lists.
* It adds numerous platform-specific installation targets.
Differential Revision: https://reviews.llvm.org/D123498
This teaches the perfect shuffle tables about lane inserts, which can
help reduce the cost of many entries. Many of the shuffle masks are
one-away from being correct, and a simple lane move can be a lot simpler
than trying to use ext/zip/etc. Because they are not exactly like the
other masks handled in the perfect shuffle tables, they require special
casing to generate them, with a special InsOp Operator.
The lane to insert into is encoded as the RHSID, and the lane to move
from is taken from the original mask. This helps reduce the maximum perfect
shuffle entry cost to 3, with many more shuffles being generatable in a
single instruction.
Differential Revision: https://reviews.llvm.org/D123386
This introduces filtering out inclusions based on the resolved path. This
mechanism will be important for disabling warnings for headers that we cannot
diagnose correctly yet.
Reviewed By: sammccall
Differential Revision: https://reviews.llvm.org/D123488