For --gc-sections, change SmallVector<InputSection *, 256> to SmallVector<InputSection *, 0>: the code bloat (1296 bytes) is not worth the negligible saving in reallocations.
For OutputSection::compressedData, N=1 is useless: a compressed .debug_* section is always larger than one byte.
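A minimal illustration of the tradeoff (a generic element type stands in for the real ones): the inline capacity N is baked into the object and into the instantiated grow/copy paths, so a large N costs code and stack size on every use, while N = 0 simply heap-allocates on first insertion.

  #include "llvm/ADT/SmallVector.h"

  // N inline slots live inside the object itself; N = 0 means the
  // first push_back() goes straight to the heap.
  llvm::SmallVector<void *, 0> Cheap;    // header-only object
  llvm::SmallVector<void *, 256> Bulky;  // header + 256 inline slots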
Information about pointer size/alignment/etc. is queried a lot, but
the binary-search-based implementation makes this fairly slow.
Add an explicit check for address space zero and skip the search
in that case -- we need to specially handle the zero address space
anyway, as it serves as the fallback for all address spaces that
were not explicitly defined.
I initially wanted to simply replace the binary search with a
linear search, which would handle both address space zero and the
general case efficiently, but I was not sure whether there are
any degenerate targets that use more than a handful of declared
address spaces (in-tree, even AMDGPU only declares six).
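A minimal sketch of the resulting lookup, with illustrative names rather than the actual LLVM code, assuming a spec table kept sorted by address space in which address space zero is always present:

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ADT/SmallVector.h"

  struct PointerSpec {
    unsigned AddrSpace;
    unsigned BitWidth;
    unsigned ABIAlign;
  };

  const PointerSpec &
  getPointerSpec(const llvm::SmallVectorImpl<PointerSpec> &Specs,
                 unsigned AS) {
    // Fast path: address space 0 always exists and sorts first.
    if (AS == 0)
      return Specs[0];
    auto I = llvm::lower_bound(
        Specs, AS,
        [](const PointerSpec &S, unsigned A) { return S.AddrSpace < A; });
    if (I != Specs.end() && I->AddrSpace == AS)
      return *I;
    return Specs[0]; // undeclared address spaces fall back to AS 0
  }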
Add an op that maps ops to their corresponding shape functions in the
library, along with a mechanism to associate shape functions with
functions.
The mapping from op to shape function is kept separate from the shape
functions themselves, since the op is associated with the shape
function and not vice versa; this way a common library of shape
functions can be reused in different contexts.
For now, use fully qualified names and require a name for shape fn lib
ops, with an explicit print/parse (modeled on the generated one and the
GPU module op's).
This commit reverts d9da4c3e73 and fixes the missing headers (not sure
how that was working locally).
Differential Revision: https://reviews.llvm.org/D91672
Interleave groups also depend on the values they store. Manage the
stored values as VPUser operands. This is currently an NFC, but is
required to allow VPlan transforms and to manage generated vector values
exclusively in VPTransformState.
As suggested in D92247 (and independent of whatever we decide to do there),
this code is confusing as-is. Hopefully, this is at least mildly better.
We might be able to do better still, but we have a function called
"removePredecessor" with this behavior:
"Note that this function does not actually remove the predecessor." (!)
x86-64 ILP32 mode (x32) uses 32-bit size_t, so share the code with ix86 to zero out padding bits, not with x86-64 LP64 mode.
Reviewed By: #libc, ldionne
Differential Revision: https://reviews.llvm.org/D91349
This cache went away in 73fdd99870.
This time, the cache is periodically validated against disk, so config
is still mostly "live".
The per-file cache reuses FileCache, but the tree-of-file-caches is
duplicated from ConfigProvider. .clangd, .clang-tidy, .clang-format, and
compile_commands.json all share this pattern; extracting it is left as
a TODO for now.
Differential Revision: https://reviews.llvm.org/D92133
For recursive phis, we skip the recursive operands and check that
the remaining operands are NoAlias with an unknown size. Currently,
this is limited to inbounds GEPs with positive offsets, to
guarantee that the recursion only ever increases the pointer.
Make this more general by only requiring that the underlying object
of the phi operand is the phi itself, i.e. it is based on itself in
some way. To compensate, we need to use a beforeOrAfterPointer()
location size, as we no longer have the guarantee that the pointer
is strictly increasing.
This allows us to handle some additional cases, like negative GEPs,
GEPs with dynamic offsets, or GEPs that aren't inbounds.
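A hedged sketch of the relaxed check (shapes assumed, not the literal patch):

  #include "llvm/Analysis/MemoryLocation.h"
  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/IR/Instructions.h"

  // A phi operand counts as recursive whenever its underlying object
  // is the phi itself; no inbounds/positive-offset requirement remains.
  static bool isRecursiveOperand(const llvm::PHINode *PN,
                                 const llvm::Value *Op) {
    return llvm::getUnderlyingObject(Op) == PN;
  }

  // Since the pointer may now move in either direction across
  // iterations, the remaining operands are checked against a location
  // size that extends both ways:
  //   llvm::LocationSize::beforeOrAfterPointer()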
Differential Revision: https://reviews.llvm.org/D91914
Add an op that maps ops to their corresponding shape functions in the
library, along with a mechanism to associate shape functions with
functions.
The mapping from op to shape function is kept separate from the shape
functions themselves, since the op is associated with the shape
function and not vice versa; this way a common library of shape
functions can be reused in different contexts.
For now, use fully qualified names and require a name for shape fn lib
ops, with an explicit print/parse (modeled on the generated one and the
GPU module op's).
Differential Revision: https://reviews.llvm.org/D91672
This retrieves CPU affinity via FreeBSD's cpuset(2) API, and makes LLVM
respect affinity settings configured by the user via the cpuset(1)
command.
In particular, this makes it possible to reduce the number of threads
used on machines with high core counts, which can interact badly with
parallelized build systems. This is particularly noticeable with lld,
which spawns lots of threads even for linking e.g. hello_world!
This fix is related to PR48193, but does not address the more fundamental
problem, which is that LLVM by default grabs as many CPUs and/or threads
as possible.
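A sketch of the mechanism (not the exact LLVM code): query the affinity mask of the calling process via cpuset(2) and count the usable CPUs.

  #include <sys/param.h>
  #include <sys/cpuset.h>

  static int affinityLimitedCPUCount() {
    cpuset_t Mask;
    CPU_ZERO(&Mask);
    // id -1 with CPU_WHICH_PID means "the calling process".
    if (cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                           sizeof(Mask), &Mask) == 0)
      return CPU_COUNT(&Mask);
    return 1; // conservative fallback if the query fails
  }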
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D92271
I took the "Permitted"/"Not Permitted" combo from the `Tag_ARM_ISA_use` case (GNU tools print "Yes").
Reviewed By: compnerd, MaskRay, simon_tatham
Differential Revision: https://reviews.llvm.org/D90305
This ensures that failures show up in regular builds, rather than only
when expensive checks are enabled.
Differential Revision: https://reviews.llvm.org/D91339
The build bots caught two additional pre-existing problems that were exposed by the test change in https://reviews.llvm.org/D91339 when expensive checks are enabled. https://reviews.llvm.org/D91924 fixes one of them; this fixes the other.
FixupSetCC will change code in the form of
%setcc = SETCCr ...
%ext1 = MOVZX32rr8 %setcc
to
%zero = MOV32r0
%setcc = SETCCr ...
%ext2 = INSERT_SUBREG %zero, %setcc, %subreg.sub_8bit
and replace uses of %ext1 with %ext2.
The register class for %ext2 did not take into account any constraints on %ext1 that may have been required by its uses. This change ensures that the original constraints are honoured, by reusing %ext1 and further constraining it as needed instead of creating a new %ext2 register. This requires a slight reorganisation to account for the fact that constraining can fail, in which case no changes should be made.
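A minimal sketch of the reuse-and-constrain approach (function shape and names assumed, not the exact patch):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"

  // Reuse the zero-extend's destination register instead of creating a
  // fresh one, tightening its register class in place. Returns false if
  // the constraints of Ext1's other uses cannot be met, in which case
  // the caller must make no changes.
  static bool reuseAndConstrain(llvm::MachineRegisterInfo &MRI,
                                llvm::MachineInstr &ZExt,
                                const llvm::TargetRegisterClass &GR32) {
    llvm::Register Ext1 = ZExt.getOperand(0).getReg();
    // constrainRegClass() intersects the existing class with GR32 and
    // returns nullptr if the result would be empty.
    if (!MRI.constrainRegClass(Ext1, &GR32))
      return false;
    // ...otherwise emit the MOV32r0 / INSERT_SUBREG sequence into Ext1.
    return true;
  }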
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D91933
The build bots caught two additional pre-existing problems that were exposed by the test change in https://reviews.llvm.org/D91339 when expensive checks are enabled. This fixes one of them.
X86 has CALL64r and CALL32r opcodes, where CALL64r takes a 64-bit register and CALL32r takes a 32-bit register; CALL64r can only be used in 64-bit mode and CALL32r only in 32-bit mode. LLVM assumed that, after picking the appropriate CALLr opcode, a pointer-sized register would be a valid operand. But in x32 mode, which is a 64-bit mode, pointers are 32 bits, so it is invalid to pass a pointer directly to CALL64r: it needs to be extended to 64 bits first.
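An illustrative sketch of the required extension (insertion context — MBB, InsertPt, DL, TII, MRI, Ptr32 — assumed; the X86::* names are the target-private enumerators):

  llvm::Register Ptr64 = MRI.createVirtualRegister(&X86::GR64RegClass);
  // Zero-extend the 32-bit pointer into a 64-bit register before
  // handing it to CALL64r; the high 32 bits are known to be zero.
  llvm::BuildMI(MBB, InsertPt, DL,
                TII.get(llvm::TargetOpcode::SUBREG_TO_REG), Ptr64)
      .addImm(0)               // high bits defined as zero
      .addReg(Ptr32)           // the original 32-bit pointer
      .addImm(X86::sub_32bit);
  llvm::BuildMI(MBB, InsertPt, DL, TII.get(X86::CALL64r)).addReg(Ptr64);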
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D91924
Makes it easier to diagnose remote index issues with the --debug-origins flag.
Reviewed By: sammccall
Differential Revision: https://reviews.llvm.org/D92202
The size requirement on V2 was present because it was not clear
whether an unknown size would allow an access before the start of
V2, which could then overlap. This has been clarified since D91649:
in this part of BasicAA, all accesses can occur only after the base
pointer, even if they have unknown size.
This makes the positive and negative offset cases symmetric.
Differential Revision: https://reviews.llvm.org/D91482
Not sure why bswap was treated specially. This also applies to bitreverse
or generic grevi; we can improve this in future patches.
For now I just wanted to get the consistency and the test coverage in
place, as I plan to make some other changes around bswap.
Reverting this commit due to address sanitizer errors.
> Extracting the similar regions is the first step in the IROutliner.
>
> Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
> sort them by how many instructions will be removed. Each
> IRSimilarityCandidate is used to define an OutlinableRegion. Each
> region is ordered by their occurrence in the Module and the regions that
> are not compatible with previously outlined regions are discarded.
>
> Each region is then extracted with the CodeExtractor into its own
> function.
>
> We test that regions are correctly extracted in:
> test/Transforms/IROutliner/extraction.ll
> test/Transforms/IROutliner/address-taken.ll
> test/Transforms/IROutliner/outlining-same-globals.ll
> test/Transforms/IROutliner/outlining-same-constants.ll
> test/Transforms/IROutliner/outlining-different-structure.ll
>
> Reviewers: paquette, jroelofs, yroux
>
> Differential Revision: https://reviews.llvm.org/D86975
This reverts commit bf899e8913.
Extracting the similar regions is the first step in the IROutliner.
Using the IRSimilarityIdentifier, we collect the SimilarityGroups and
sort them by how many instructions will be removed. Each
IRSimilarityCandidate is used to define an OutlinableRegion. Each
region is ordered by their occurrence in the Module and the regions that
are not compatible with previously outlined regions are discarded.
Each region is then extracted with the CodeExtractor into its own
function.
We test that regions are correctly extracted in:
test/Transforms/IROutliner/extraction.ll
test/Transforms/IROutliner/address-taken.ll
test/Transforms/IROutliner/outlining-same-globals.ll
test/Transforms/IROutliner/outlining-same-constants.ll
test/Transforms/IROutliner/outlining-different-structure.ll
Reviewers: paquette, jroelofs, yroux
Differential Revision: https://reviews.llvm.org/D86975
Optimize the emitSPAdjustment function to generate the smallest possible
sequence of instructions for adjusting SP.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D92174
Commit 5f12f4ff90 suppresses printing of inline namespace names in
diagnostics by default, which breaks the libc++ iterator test that
expects __1 in the namespace.
This patch fixes the test by also supporting output without __1 in the
namespace.
Differential Revision: https://reviews.llvm.org/D92142
I think people were sometimes parenthesizing `(foo::max)()` out of
misplaced concern that an unparenthesized `foo::max()` would trip up
Windows' `max(a,b)` macro. However, this is not the case: `max(a,b)`
should be tripped up only by an unparenthesized call to `foo::max(a,b)`,
and in fact we already do `_VSTD::max(a,b)` all over the place anyway
without any guards.
However, in order to do this without guards, we must also wrap the
header in _LIBCPP_PUSH_MACROS, which <span> previously was not.
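For reference, the guard pattern in question is libc++'s usual one:

  #include <__config>

  _LIBCPP_PUSH_MACROS     // push_macro for min/max
  #include <__undef_macros> // #undefs them for the header body

  // ...header contents, free to call _VSTD::max(a, b) and
  // numeric_limits<T>::max() without parenthesization...

  _LIBCPP_POP_MACROS      // restores any user-defined min/max macros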
Differential Revision: https://reviews.llvm.org/D92240
On platforms where the integrated assembler isn't invoked by default,
this test fails since the output is not what it expects. Adding this
option generates the expected output on those platforms as well.