It's full-featured now and we can use it for the runtimes build instead
of relying on an external libtool, which means the CMAKE_HOST_APPLE
restriction no longer serves any purpose either. Restrict llvm-lipo to Darwin
targets while I'm here, since it's only needed there.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D86367
{builtin,runtime}_register_target passes a TOOLCHAIN_TOOLS list, whereas
{builtin,runtime}_default_target does not. The explicit TOOLCHAIN_TOOLS
list matches what LLVMExternalProjectUtils would have set anyway,
barring some target-specific adjustments, and those target-specific
adjustments seem valuable, so let's drop the explicit TOOLCHAIN_TOOLS
list and let LLVMExternalProjectUtils take care of it.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D86366
libtool already produces a table of contents, and ranlib just gives
spurious errors because it doesn't understand universal binaries.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D86365
The -V option in cctools' libtool prints out the version number and
performs any specified operation. Add this option to LLVM's version.
cctools is more forgiving of invalid command lines when -V is specified,
but I think it's better to give errors instead of silently producing no
output.
Unfortunately, when -V is present, options that would otherwise be
required aren't anymore, so we need to perform some manual argument
validation.
Reviewed By: alexshap
Differential Revision: https://reviews.llvm.org/D86359
Instead of copying and pasting the same `#ifdef` expressions in multiple
places, define a type and a pair of macros in `kmp_os.h`, to handle
whether `va_list` is pointer-like or not:
* `kmp_va_list` is the type to use for `__kmp_fork_call()`
* `kmp_va_deref()` dereferences a `va_list`, if necessary
* `kmp_va_addr_of()` takes the address of a `va_list`, if necessary
Also add FreeBSD to the list of OSes that have a non-pointer-like
va_list. This can now easily be extended to other OSes too.
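A rough sketch of the scheme (simplified OS macros stand in here for the real OS/architecture predicate in `kmp_os.h`; this is not the exact upstream code):
```
#include <cstdarg>

// On ABIs where va_list is an array/struct type (e.g. the x86-64 SysV ABI used
// by Linux and FreeBSD), pass it around by address; where it is already
// pointer-like, pass it by value.
#if defined(__linux__) || defined(__FreeBSD__)
typedef va_list *kmp_va_list;
#define kmp_va_deref(ap) (*(ap))   // recover the underlying va_list
#define kmp_va_addr_of(ap) (&(ap)) // wrap a va_list for passing around
#else
typedef va_list kmp_va_list;
#define kmp_va_deref(ap) (ap)
#define kmp_va_addr_of(ap) (ap)
#endif

// Callers wrap their va_list with kmp_va_addr_of() when invoking
// __kmp_fork_call(), and callees unwrap it with kmp_va_deref().
```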
Reviewed By: AndreyChurbanov
Differential Revision: https://reviews.llvm.org/D86397
The "takeName" logic at the end of ScalarizerVisitor::finish
could end up renaming global variables when having simplified
and extractelement instruction to simply pick a single vector
element. If the input vector to the extractelement instruction
held pointers to global variables we ended up renaming the global
variable.
The patch make sure we only take the name of the replaced Op when
we have added new instructions that might need a useful name.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D86472
A specification expression can reference an implicitly declared variable
in the host procedure. Because we have to process specification parts
before execution parts, this may be the first time we encounter the
variable. We were assuming the variable was implicitly declared in the
scope where it was encountered, leading to an error because local
variables may not be referenced in specification expressions.
The fix is to tentatively create the implicit variable in the host
procedure because that is the only way the specification expression can
be valid. We mark it with the flag `ImplicitOrError` to indicate that
either it must be implicitly defined in the host (by being mentioned in
the execution part) or else its use turned out to be an error.
We need to apply the implicit type rules of the host, which requires
some changes to implicit typing.
Variables in common blocks are allowed to appear in specification expressions
(because they are not locals) but the common block definition may not appear
until after their use. To handle this we create common block symbols and object
entities for each common block object during the `PreSpecificationConstruct`
pass. This allows us to remove the corresponding code in the main visitor and
`commonBlockInfo_.curr`. The change in order of processing causes some
different error messages to be emitted.
Some cleanup is included with this change:
- In `ExpressionAnalyzer`, if an unresolved name is encountered but
no error has been reported, emit an internal error.
- Change `ImplicitRulesVisitor` to hide the `ImplicitRules` object
that implements it. Change the interface to pass in names rather
than having to get the first character of the name.
- Change `DeclareObjectEntity` to have the `attrs` argument default
to an empty set; that is the typical case.
- In `Pre(parser::SpecificationPart)` use "structured bindings" to
give names to the pieces that make up a specification-part.
- Enhance `parser::Unwrap` to unwrap `Statement` and `UnlabeledStatement`
and make use of that in PreSpecificationConstruct.
Differential Revision: https://reviews.llvm.org/D86322
The current custom lowering of vector truncates handles sources of up to 128 bits, but only uses one of the two shuffle vector operands. Extend it to use both operands so that 256-bit sources can be handled as well.
Differential Revision: https://reviews.llvm.org/D68035
Based on the PyType and PyConcreteType classes, this patch implements the bindings for the Index, Floating Point, and None type subclasses.
These three subclasses share the same binding strategy (see the sketch after this list):
- The function pointer `isaFunction` points to `mlirTypeIsA***`.
- The `mlir***TypeGet` C API is bound to the `***Type` constructor on the Python side.
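For illustration, a minimal sketch in terms of the MLIR C API that these constructors and checks wrap (assumption: the builtin type C API lives in `mlir-c/BuiltinTypes.h` in newer trees, `mlir-c/StandardTypes.h` in older ones; this is not the binding code itself):
```
#include "mlir-c/IR.h"
#include "mlir-c/BuiltinTypes.h" // assumption: home of the builtin type C API

int main() {
  MlirContext ctx = mlirContextCreate();
  // The Python IndexType/NoneType constructors wrap C API getters like these...
  MlirType idx = mlirIndexTypeGet(ctx);
  MlirType none = mlirNoneTypeGet(ctx);
  // ...and the isa checks dispatch through the mlirTypeIsA*** predicates.
  bool ok = mlirTypeIsAIndex(idx) && mlirTypeIsANone(none);
  (void)ok;
  mlirContextDestroy(ctx);
  return 0;
}
```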
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86466
Breakpad will always have a UUID for binaries when it creates minidump files. If an ELF file has a GNU build ID, it will use that. If it doesn't, it will create one by hashing up to the first 4096 bytes of the .text section. LLDB was not able to load these binaries even when we had the right binary because the UUID didn't match. LLDB uses the GNU build ID first as the main UUID for a binary and falls back to an 8-byte CRC if a binary doesn't have one. With this fix, we will check for the Breakpad hash or the Facebook hash (a modified version of the Breakpad hash that collides a bit less) and accept binaries when these hashes match.
Differential Revision: https://reviews.llvm.org/D86261
This reverts commit 2e43acfed8.
LLVMCoroutines (the library which contains Coroutines.h) depends on LLVMipo (the
library which contains SampleProfile.cpp). It is inappropriate for
SampleProfile.cpp to depend on Coroutines.h (circular dependency).
The test inverted dependencies as well:
llvm/test/Transforms/Coroutines/coro-inline.ll uses -sample-profile.
This interferes with GlobalISel's much better handling of the
situation.
This should really be disabled for GlobalISel. However, the fallback
only re-runs the selection passes, and doesn't go back and rerun any
codegen IR passes. I haven't come up with a good solution to this
problem.
Handle NULL address argument in the `mach_vm_[de]allocate()`
interceptors and fix test: `Assignment 2` is not valid if we weren't
able to re-allocate memory.
rdar://67680613
1. Added a new common completion TypeCategoryNames to provide a list of category names for completion;
2. Applied the completion to these commands: type category delete/enable/disable/list/define;
3. Added a related test case;
4. Bound the completion to the arguments of the type 'eArgTypeName'.
Reviewed By: teemperor, JDevlieghere
Differential Revision: https://reviews.llvm.org/D84124
This is to initially handle immAllOnesV, which should match
G_BUILD_VECTOR or G_BUILD_VECTOR_TRUNC. In the future, it could be
used for other pattern cases that map to multiple G_* instructions,
such as G_ADD and G_PTR_ADD.
D77152 tried to do this but got it wrong in the shift-by-zero case.
D86430 reverted the wrong code. Reimplement the optimization with
different code depending on whether the shift amount is known to be
non-zero (modulo bitwidth).
This improves code quality for fshl tests on AMDGPU, which only has an
fshr instruction.
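As a standalone sanity check of the identity being exploited (a sketch, not the SelectionDAG lowering code):
```
#include <cassert>
#include <cstdint>

// Reference semantics of the 32-bit funnel shifts (shift amount taken mod 32).
static uint32_t fshl(uint32_t a, uint32_t b, uint32_t c) {
  c %= 32;
  return c ? (a << c) | (b >> (32 - c)) : a;
}
static uint32_t fshr(uint32_t a, uint32_t b, uint32_t c) {
  c %= 32;
  return c ? (a << (32 - c)) | (b >> c) : b;
}

int main() {
  // For any shift amount known to be non-zero modulo the bit width,
  // fshl(a, b, c) == fshr(a, b, 32 - c), which is what lets a target that
  // only has an fshr instruction lower fshl cheaply.
  for (uint32_t c = 1; c < 32; ++c)
    assert(fshl(0x12345678u, 0x9abcdef0u, c) ==
           fshr(0x12345678u, 0x9abcdef0u, 32 - c));
  return 0;
}
```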
Differential Revision: https://reviews.llvm.org/D86438
The Sphinx docs need to be built with the following CMake flags:
`-DLLVM_ENABLE_SPHINX=ON -DSPHINX_WARNINGS_AS_ERRORS=OFF`.
Generate the HTML docs using the CMake target
`docs-flang-html`.
The generated HTML files should be at `build/tools/flang/docs/html`.
Patch in series from the discussion on review
https://reviews.llvm.org/D85828
After this patch the markdown documentation must be written using the guide in
`llvm/docs/MarkdownQuickstartTemplate.md`.
Reviewed By: sscalpone
Differential Revision: https://reviews.llvm.org/D86131
1. Extended the gdb-remote communication related classes with disk file/directory
completion functions;
2. Added two common completion functions RemoteDiskFiles and
RemoteDiskDirectories based on the functions above;
3. Added completion for these commands:
A. platform get-file <remote-file> <local-file>;
B. platform put-file <local-file> <remote-file>;
C. platform get-size <remote-file>;
D. platform settings -w <remote-dir>;
E. platform open file <remote-file>.
4. Added related tests for client and server;
5. Updated docs/lldb-platform-packets.txt.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D85284
1. Added two common completions: `ProcessIDs` and `ProcessNames`, which are
refactored from their original dedicated option completions;
2. Removed the dedicated option completion functions of `process attach` and
`platform process attach`, so that they can use arg-type-bound common
completions instead;
3. Bound `eArgTypePid` to the pid completion, `eArgTypeProcessName` to the
process name completion in `CommandObject.cpp`;
4. Added a related test case.
Reviewed By: teemperor
Differential Revision: https://reviews.llvm.org/D80700
Using callCapturesBefore potentially improves the precision and the
number of stores we can remove. But in practice, it seems to have very
little impact in terms of stores removed. For example, for
SPEC2000/SPEC2006/MultiSource with -O3 -flto, only ~50 more stores are
removed (out of ~26900 stores removed). In terms of compile time, however, it
is very expensive, and the patch gives substantial compile-time
improvements: geomean O3 -0.24%, ReleaseThinLTO -0.47%, ReleaseLTO-g
-0.39%.
http://llvm-compile-time-tracker.com/compare.php?from=612a0bff88ed906c83b82f079d4c49e5fecfb9d0&to=e6c86b96d20d97dd88e903a409bd8d39b6114312&stat=instructions
Currently SimpleCmpTest passes after 9,831,994 trials on x86_64/Linux
when the number of given trials is 10,000,000, only slightly below
that limit. This patch modifies SimpleCmpTest.cpp so that the test passes with fewer
trials, reducing its chances of future failures as libFuzzer evolves. More
specifically, it changes a 32-bit equality check to an 8-bit equality
check, making the test pass at 4,635,303 trials.
Differential Revision: https://reviews.llvm.org/D86382
This commit makes FileCheck print all CHECK-NOT directive failures in a
CHECK-NOT block even if one fails. Prior to this, it would stop trying
to match CHECK-NOT directives as soon as one in the block failed.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D86315
This patch adds frontend and backend options to enable and disable
the PowerPC MMA operations added in ISA 3.1. Instructions using these
options will be added in subsequent patches.
Differential Revision: https://reviews.llvm.org/D81442
Summary:
When a callee coroutine function is inlined into a caller coroutine
function before the coro-split pass, llvm emits "coroutine should
have exactly one defining @llvm.coro.begin". It seems that the coro-early
pass cannot handle this quite well.
So we believe that unsplit coroutine functions should not be inlined.
This patch fixes the issue by not inlining a function if it has the
attribute "coroutine.presplit" (meaning the function has not yet been split).
TestPlan: check-llvm
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D85812
OpenMP 5.0 supports nested declare target regions. So, in general, it is
allowed to mark a declaration as declare target with a different device_type
or link type. The patch adds support for this kind of nesting.
Differential Revision: https://reviews.llvm.org/D86239
Changes:
* Change `ToVectorTy` to deal directly with `ElementCount` instances.
* `VF == 1` replaced with `VF.isScalar()`.
* `VF > 1` and `VF >=2` replaced with `VF.isVector()`.
* `VF <=1` is replaced with `VF.isZero() || VF.isScalar()`.
* Replaced the uses of `llvm::SmallSet<ElementCount, ...>` with
`llvm::SmallSetVector<ElementCount, ...>`. This avoids the need for an
ordering function for the `ElementCount` class.
* Bits and pieces around printing the `ElementCount` to string streams.
To guarantee that this change is an NFC, `VF.Min` and asserts are used
in the following places:
1. When it doesn't make sense to deal with the scalable property, for
example:
a. When computing unrolling factors.
b. When shuffle masks are built for fixed width vector types
In these cases, an
assert(!VF.Scalable && "<msg>") has been added to make sure we don't
enter codepaths that don't make sense for scalable vectors.
2. When there is a conscious decision to use `FixedVectorType`. These
uses of `FixedVectorType` will likely be removed in favour of
`VectorType` once the vectorizer is generic enough to deal with both
fixed vector types and scalable vector types.
3. When dealing with building constants out of the value of VF, for
example when computing the vectorization `step`, or building vectors
of indices. These operations _make sense_ for scalable vectors too,
but changing the code in these places to be generic and make it work
for scalable vectors is to be submitted in a separate patch, as it is
a functional change.
4. When building the potential VFs in VPlan. Making the VPlan generic
enough to handle scalable vectorization factors is a functional change
that needs a separate patch. See for example `void
LoopVectorizationPlanner::buildVPlans(unsigned MinVF, unsigned
MaxVF)`.
5. The class `IntrinsicCostAttribute`: this class still uses `unsigned
VF`, as updating the field to use `ElementCount` would require changes
that could result in changing the behavior of the compiler. Will be done
in a separate patch.
6. When dealing with user input for forcing the vectorization
factor. In this case, adding support for scalable vectorization is a
functional change that might require changes at the command line.
Note that in some places the idiom
```
unsigned VF = ...
auto VTy = FixedVectorType::get(ScalarTy, VF)
```
has been replaced with
```
ElementCount VF = ...
assert(!VF.Scalable && ...);
auto VTy = VectorType::get(ScalarTy, VF)
```
The assertion guarantees that the new code is (at least in debug mode)
functionally equivalent to the old version. Notice that this change was
possible because none of the methods that are specific to `FixedVectorType`
were used after the instantiation of `VTy`.
Reviewed By: rengolin, ctetreau
Differential Revision: https://reviews.llvm.org/D85794
Handle workitem intrinsics. There isn't really a way to adequately test
this right now, since none of the known bits users are fine grained
enough to test the edge conditions. This triggers a number of
instances of the new 64-bit to 32-bit shift combine in the existing
tests.
shl ([sza]ext x, y) => zext (shl x, y).
Turns expensive 64-bit shifts into 32-bit ones if the shift does not overflow the
source type.
This is a port of an AMDGPU DAG combine added in
5fa289f0d8. InstCombine does this
already, but we need to do it again here to apply it to shifts
introduced for lowered getelementptrs. This will help matching
addressing modes that use 32-bit offsets in a future patch.
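A minimal standalone illustration of the equivalence being used (assuming a 32-bit source value and a shift known not to overflow it; not the AMDGPU combine code itself):
```
#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0x00001234u;
  unsigned y = 4; // known (e.g. via known bits) not to shift set bits past bit 31
  uint64_t wide = (uint64_t)x << y;     // shl (zext x to i64), y
  uint64_t narrow = (uint64_t)(x << y); // zext (shl x, y) to i64
  // The 64-bit shift can be replaced by a 32-bit shift followed by the extend.
  assert(wide == narrow);
  return 0;
}
```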
TableGen annoyingly assumes only a single match data operand, so
introduce a reusable struct. However, this still requires defining a
separate GIMatchData for every combine which is still annoying.
Adds a morally equivalent function to the existing
getShiftAmountTy. Without this, we would have to repeatedly
query the legalizer info and guess at what type to use for the shift.
Changes:
* Change `ToVectorTy` to deal directly with `ElementCount` instances.
* `VF == 1` replaced with `VF.isScalar()`.
* `VF > 1` and `VF >=2` replaced with `VF.isVector()`.
* `VF <=1` is replaced with `VF.isZero() || VF.isScalar()`.
* Add `<` operator to `ElementCount` to be able to use
`llvm::SmallSetVector<ElementCount, ...>`.
* Bits and pieces around printing the ElementCount to string streams.
* Added a static method to `ElementCount` to represent a scalar.
To guarantee that this change is an NFC, `VF.Min` and asserts are used
in the following places:
1. When it doesn't make sense to deal with the scalable property, for
example:
a. When computing unrolling factors.
b. When shuffle masks are built for fixed width vector types
In these cases, an
assert(!VF.Scalable && "<msg>") has been added to make sure we don't
enter codepaths that don't make sense for scalable vectors.
2. When there is a conscious decision to use `FixedVectorType`. These
uses of `FixedVectorType` will likely be removed in favour of
`VectorType` once the vectorizer is generic enough to deal with both
fixed vector types and scalable vector types.
3. When dealing with building constants out of the value of VF, for
example when computing the vectorization `step`, or building vectors
of indices. These operations _make sense_ for scalable vectors too,
but changing the code in these places to be generic and make it work
for scalable vectors is to be submitted in a separate patch, as it is
a functional change.
4. When building the potential VFs in VPlan. Making the VPlan generic
enough to handle scalable vectorization factors is a functional change
that needs a separate patch. See for example `void
LoopVectorizationPlanner::buildVPlans(unsigned MinVF, unsigned
MaxVF)`.
5. The class `IntrinsicCostAttribute`: this class still uses `unsigned
VF`, as updating the field to use `ElementCount` would require changes
that could result in changing the behavior of the compiler. Will be done
in a separate patch.
6. When dealing with user input for forcing the vectorization
factor. In this case, adding support for scalable vectorization is a
functional change that might require changes at the command line.
Differential Revision: https://reviews.llvm.org/D85794
Summary:
With the number of projects growing, it is important to be able to
filter them in a more convenient way than by name. It is especially
important for benchmarks, when it is not viable to analyze big
projects 20 or 50 times in a row.
For this reason, this commit adds a notion of sizes and a
filtering interface that puts a limit on the maximum size of the projects
to analyze or benchmark.
The sizes assigned to the projects in this commit do not directly
correspond to the number of lines or files in the project. The key
factor that is important for the developers of the analyzer is the
time it takes to analyze the project. And for this very reason,
"size" basically helps to cluster projects based on their analysis
time.
Differential Revision: https://reviews.llvm.org/D83942