Based on the most recent ld64 source dump from Xcode 12, this
undocumented ld64 flag only applies to i386; on all newer
architectures this behavior appears to be the default.
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D113070
In one of our links lld was reading 760k files, but the number of
unique files was only 1500. This takes that link from 30 seconds to 8.
This seems like a heavy hammer, especially since some things don't need
to be cached, like the filelist arguments and the passed static
archives (the latter is already cached as a one-off), but it seems ld64
does something similar here to short-circuit these duplicate reads:
82e429e186/src/ld/InputFiles.cpp (L644-L665)
Of the types of files being read for our iOS app, the biggest problem
was constantly re-reading small tbd files:
```
% wc -l /tmp/read.txt
761414 /tmp/read.txt
% cat /tmp/read.txt | sort -u | wc -l
1503
% cat /tmp/read.txt | grep "\.a$" | wc -l
43721
% cat /tmp/read.txt | grep "\.tbd$" | wc -l
717656
```
We could likely hoist this logic up to not cache at this level, but it
would be a more invasive change to make sure all callers that needed it
cached the results.
I could see this being an issue with OOMs, and I'm not a linker expert so
maybe there's another way we should solve this problem? Feedback welcome!
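For illustration, a minimal sketch of the memoization idea (hypothetical names, not lld's actual code): file buffers are cached by path so each of the ~1500 unique inputs is read once, at the cost of keeping every buffer resident for the whole link.
```cpp
#include "llvm/ADT/StringMap.h"
#include "llvm/Support/MemoryBuffer.h"

#include <memory>

// Hypothetical sketch: memoize file reads by path so each unique
// input is loaded once, no matter how many times it is named on the
// command line. Every buffer stays resident for the whole link.
static llvm::StringMap<std::unique_ptr<llvm::MemoryBuffer>> loadedFiles;

llvm::MemoryBuffer *readFileCached(llvm::StringRef path) {
  auto it = loadedFiles.find(path);
  if (it != loadedFiles.end())
    return it->second.get();
  auto bufferOrErr = llvm::MemoryBuffer::getFile(path);
  if (!bufferOrErr)
    return nullptr; // let the caller report the error
  return (loadedFiles[path] = std::move(*bufferOrErr)).get();
}
```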
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D113153
With ld64, architecture mismatches are just warnings by default, and
this flag can be passed to make them fail instead. This matches that
behavior.
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D113082
This allows external users of Comprehensive Bufferize to specify their own InitTensorOp elimination procedures.
Differential Revision: https://reviews.llvm.org/D112686
D101513 means that we no longer need to specify `-pie` in most of our
test RUN commands. Let's clean up the unused flags so as not to confuse
future test writers.
Reviewed By: #lld-macho, oontvoo, MaskRay
Differential Revision: https://reviews.llvm.org/D113114
Currently in libcxx and clang, all the coroutine components are defined
in the std::experimental namespace. The coroutine TS has since been
merged into C++20, so in working drafts such as N4892 the coroutine
components are defined in the std namespace instead of
std::experimental. Coroutine support in clang also seems relatively
stable, so it may be a suitable time to move the coroutine components
out of the experimental namespace.
This patch makes clang look up coroutine_traits in the std namespace
first. For compatibility, clang falls back to the std::experimental
namespace if it can't find a definition in std, so existing code won't
break after updating the compiler. If the compiler finds both
std::coroutine_traits and std::experimental::coroutine_traits, it
emits an error.
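As a rough illustration, a hypothetical coroutine type (not from the patch) that keeps compiling across the transition; clang consults std::coroutine_traits first and falls back to std::experimental::coroutine_traits:
```cpp
#include <coroutine>

// Hypothetical coroutine return type; clang consults
// std::coroutine_traits<Task> (falling back to
// std::experimental::coroutine_traits<Task>) to find this
// nested promise_type.
struct Task {
  struct promise_type {
    Task get_return_object() { return {}; }
    std::suspend_never initial_suspend() { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() {}
  };
};

Task example() { co_return; }
```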
Support for looking up std::experimental::coroutine_traits will be
removed in Clang 16.
Reviewed By: lxfind, Quuxplusone
Differential Revision: https://reviews.llvm.org/D108696
Currently, FPSCR is not modeled, so in some early passes (such as
early-cse) the intrinsics that read or set FPSCR may be incorrectly
simplified.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D112380
This expands the lookup table statically and avoids routing through methods that
contain asserts (like StringRef/std::string element accessors and drop_front)
such that performance is more predictable across compilation environments. This
was primarily driven by slow debug mode performance but has a large benefit in
release builds as well.
```
ssd_mobilenet_v2_face_float (42MB .mlir)
Debug/MSVC (old): 5.22s
Debug/MSVC (new): 0.16s
Release/MSVC (old): 0.81s
Release/MSVC (new): 0.02s
huggingface_minilm (536MB .mlir)
Debug/MSVC (old): 65.31s
Debug/MSVC (new): 2.03s
Release/MSVC (old): 9.93s
Release/MSVC (new): 0.27s
```
Now in debug builds the time is split evenly between lexString,
tryGetFromHex, and element attr hashing; the next step in making this
faster would be to combine the work (incremental hashing during
conversion, etc.), but this is at least in the right order of magnitude
and retains the original API surface.
I have not profiled a build with clang, but this is strictly less code
and simpler data structures, so I'd expect improvements there as well.
This also fixes a bug where 0xFF bytes in the input would read out of bounds.
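For illustration, a minimal sketch of the static-table approach with hypothetical names (not the actual MLIR code): decoding is plain array indexing with no asserting accessors, and indexing by unsigned char keeps 0xFF bytes in bounds.
```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch of a statically expanded hex lookup table.
// 0xFF marks bytes that are not hex digits.
constexpr std::array<uint8_t, 256> makeHexTable() {
  std::array<uint8_t, 256> table{};
  for (auto &entry : table)
    entry = 0xFF;
  for (int i = 0; i < 10; ++i)
    table['0' + i] = static_cast<uint8_t>(i);
  for (int i = 0; i < 6; ++i) {
    table['a' + i] = static_cast<uint8_t>(10 + i);
    table['A' + i] = static_cast<uint8_t>(10 + i);
  }
  return table;
}

inline constexpr auto kHexTable = makeHexTable();

// Indexing with an unsigned char means bytes like 0xFF stay in
// bounds instead of reading past the table.
inline uint8_t decodeHexDigit(unsigned char c) { return kHexTable[c]; }
```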
Reviewed By: dblaikie, stellaraccident
Differential Revision: https://reviews.llvm.org/D112105
There is no real source location for code inside the prologue, as it is
generated by the compiler, but source locations were being added to
prologue code as a side effect of https://reviews.llvm.org/D99269,
because buildSpillLoadStore() uses the source location of the real
instruction in the basic block, if there is one.
Fixes: SWDEV-307590
Reviewed By: scott.linder, sebastian-ne
Differential Revision: https://reviews.llvm.org/D113100
By default `llvm::seq` would happily iterate over enums, which may be unsafe if the enum values are not contiguous. This patch disables enum iteration with `llvm::seq` and `llvm::seq_inclusive` and adds two new functions: `enum_seq` and `enum_seq_inclusive`.
To make sure enum iteration is safe, we require users to declare their enum types as iterable by specializing `enum_iteration_traits<SomeEnum>`. Because it's not always possible to add these traits next to the enum definition (e.g., for enums defined in external libraries), we provide an escape hatch that allows iteration on a per-callsite basis by passing `force_iteration_on_noniterable_enum`.
The main benefit of this approach is that the global trait declarations can appear right next to enum definitions, making it easy to spot when enums are mislabeled (e.g., after introducing new enum values), whereas `force_iteration_on_noniterable_enum` should stand out and be easy to grep for.
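A minimal sketch of the new API as described above (`Channel` and `External` are hypothetical enums):
```cpp
#include "llvm/ADT/Sequence.h"

enum class Channel { R = 0, G = 1, B = 2 }; // contiguous values
enum class External { A = 0, B = 1 };       // e.g., from a library we can't edit

// Opt the enum into iteration, right next to its definition.
namespace llvm {
template <> struct enum_iteration_traits<Channel> {
  static constexpr bool is_iterable = true;
};
} // namespace llvm

void visit() {
  // Half-open range: visits R and G.
  for (Channel c : llvm::enum_seq(Channel::R, Channel::B))
    (void)c;
  // Inclusive range: visits R, G, and B.
  for (Channel c : llvm::enum_seq_inclusive(Channel::R, Channel::B))
    (void)c;
  // Per-callsite escape hatch for enums we cannot annotate.
  for (External e : llvm::enum_seq_inclusive(
           External::A, External::B,
           llvm::force_iteration_on_noniterable_enum))
    (void)e;
}
```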
This emerged from a discussion with gchatelet@ about reusing llvm's `Sequence.h` in lieu of https://github.com/GPUOpen-Drivers/llpc/blob/dev/lgc/interface/lgc/EnumIterator.h.
Reviewed By: dblaikie, gchatelet, aaron.ballman
Differential Revision: https://reviews.llvm.org/D107378
For each selector encountered in the source code, we need to load
selectors from the imported modules and check that we are calling a
selector with compatible types.
At the moment, for each module we are storing methods declared in the
headers belonging to this module and methods from the transitive closure
of imported modules. When a module is imported by a few other modules,
methods from the shared module are duplicated in each importer. As the
result, we can end up with lots of identical methods that we try to add
to the global method pool. Doing this duplicate work is useless and
relatively expensive.
Avoid processing duplicate methods by storing in each module only its
own methods and not storing methods from dependencies. Collect methods
from dependencies by walking the graph of module dependencies.
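A rough sketch of that approach with hypothetical types (not clang's actual code): each module stores only its own methods, and a depth-first walk over the import graph visits each shared module once.
```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical module record: its own methods plus direct imports.
struct Module {
  std::vector<std::string> OwnMethods;
  std::vector<Module *> Imports;
};

// Collect methods by walking the dependency graph, visiting each
// module once no matter how many importers share it.
void collectMethods(Module *M, std::unordered_set<Module *> &Visited,
                    std::vector<std::string> &Out) {
  if (!Visited.insert(M).second)
    return; // already collected via another import path
  for (Module *Dep : M->Imports)
    collectMethods(Dep, Visited, Out);
  Out.insert(Out.end(), M->OwnMethods.begin(), M->OwnMethods.end());
}
```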
The issue was discovered and reported by Richard Howell. He did the
hard work for this fix, investigating and providing a detailed
explanation of the performance problem.
Differential Revision: https://reviews.llvm.org/D110123
Rename str_conv_utils to str_to_integer to be more
in line with str_to_float.
Reviewed By: sivachandra, lntue
Differential Revision: https://reviews.llvm.org/D113061
Not sure these are correct. I think I missed a case when porting this from the original SCEV change to the IndVar changes. I may end up reapplying this later with a comment about why it is correct, but in case the current bad feeling turns out to be true, I'm removing it from the tree while investigating further.
- Use formatv to print the addresses.
- Add a check for 0x0, which is treated as an invalid address.
- Use an address that's less likely to be interpreted as a real
tagged pointer.
We already make sure to properly clear analyses for deleted functions.
This makes investigating some future potential compile time improvements easier.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D113032
Change RISCVSubtarget.hasVInstructionAnyF() to call hasVInstructionsF32
so that any changes to hasVInstructionsF32 are reflected.
The files were missed in D112496.
This is a re-commit of e2c7ee0743 which
was reverted in a2a58d91e8. This includes
a fix to consistently check for EFLAGS being live-out. See phabricator
review.
Original Summary:
This extends `optimizeCompareInstr` to re-use previous comparison
results if the previous comparison was with an immediate that was 1
bigger or smaller. Example:
```
CMP x, 13
...
CMP x, 12   ; can be removed if we change the SETg
SETg ...    ; x > 12 changed to SETge (x >= 13), removing the CMP
```
Motivation: this often happens because SelectionDAG canonicalization
tends to add/subtract 1 when optimizing for fallthrough blocks. For
example, for `x > C` the fallthrough optimization switches the
true/false blocks with `!(x > C)` --> `x <= C`, and canonicalization
turns this into `x < C + 1`.
Differential Revision: https://reviews.llvm.org/D110867
Detailed description: This change enables selecting the bit-field
extract patterns to s_bfe_u32 or v_bfe_u32 depending on the divergence
of the pattern root node.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D110950
This takes care of cleaning up the temp files on crashes. It doesn't
handle cleanup when the process is explicitly killed, though.
Differential Revision: https://reviews.llvm.org/D112710
This change looks for cases where we can prove that an exit test of a loop can be performed in a narrower bitwidth, and that by doing so we can replace a loop-varying extend with a loop-invariant truncate.
The motivation here is that doing this unblocks the trip count analysis for narrow IVs involved in extended compare exit tests. It also has the nice side effect of simply making the code faster, even if we gain no other benefit from the improved analysis ability.
I've noted a few places this could be extended, but I think this stands reasonably on its own as well.
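As a hypothetical source-level analogue of the pattern (not code from the patch):
```cpp
// Before: the exit test widens the narrow IV on every iteration, so
// the extend is loop-varying.
void before(long n, int *a) {
  for (int i = 0; static_cast<long>(i) < n; ++i) // extend each trip
    a[i] = 0;
}

// After: when the pass can prove n fits in the narrow type, the
// compare runs at the narrow width against a loop-invariant truncate.
void after(long n, int *a) {
  int bound = static_cast<int>(n); // hoisted, loop-invariant trunc
  for (int i = 0; i < bound; ++i)
    a[i] = 0;
}
```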
Differential Revision: https://reviews.llvm.org/D112262
I'm not sure what the history is here, but this test passes on macOS
today. It seems like we should unify these tests if they need to run
cross-platform.
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D113085
The main benefits of this change are faster access to operands
(no need to compute the offset, as they now sit right after the
operation) and simpler code (no need to manage a lot of the "is the
operand storage trailing" logic we had before). The major downside is
that operand-holding operations now grow in size by one word (no
matter how we do this change, there will need to be some additional
bookkeeping).
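A simplified sketch of the layout idea using llvm::TrailingObjects (hypothetical classes, not MLIR's actual implementation):
```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/TrailingObjects.h"

#include <new>

struct FakeOperand { void *value; };

// Operand storage is co-allocated immediately after the operation, so
// reaching the operands is plain pointer arithmetic with no offset
// computation, at the price of one extra word for the operand count.
class FakeOp final : private llvm::TrailingObjects<FakeOp, FakeOperand> {
  friend TrailingObjects;
  unsigned numOperands; // the extra word of bookkeeping mentioned above

  explicit FakeOp(unsigned n) : numOperands(n) {}

public:
  static FakeOp *create(unsigned n) {
    void *mem = ::operator new(totalSizeToAlloc<FakeOperand>(n));
    return new (mem) FakeOp(n);
  }
  // Operands sit directly after the FakeOp object in memory.
  llvm::MutableArrayRef<FakeOperand> getOperands() {
    return {getTrailingObjects<FakeOperand>(), numOperands};
  }
};
```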
Differential Revision: https://reviews.llvm.org/D111695
On our large iOS project this took a link from 1 minute 45 seconds to
45 seconds. For reference, ld64 does the same link in ~20 seconds.
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D113063
A quick grep for NDEBUG in MLIR revealed a use in DebugActions.h that breaks ABI. This patch changes the use of NDEBUG to LLVM_ENABLE_ABI_BREAKING_CHECKS, which has the advantage of being independent of whether clients build their own app in debug or release, as it depends purely on how MLIR itself was built.
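For illustration, a hypothetical struct (not the actual DebugActions.h code) showing why an NDEBUG-guarded member breaks ABI and how the new guard avoids it:
```cpp
#include "llvm/Config/abi-breaking.h" // defines LLVM_ENABLE_ABI_BREAKING_CHECKS

// Hazard: a member guarded by NDEBUG gives the type different sizes in
// debug and release builds, so a client built in the opposite mode
// from MLIR disagrees about the object layout.
struct Hazard {
#ifndef NDEBUG
  int debugOnlyState; // exists only in debug builds: ABI break
#endif
  int alwaysPresent;
};

// Guarding with LLVM_ENABLE_ABI_BREAKING_CHECKS instead fixes the
// layout at MLIR build time, independent of the client's build mode.
struct Fixed {
#if LLVM_ENABLE_ABI_BREAKING_CHECKS
  int checkedState;
#endif
  int alwaysPresent;
};
```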
Differential Revision: https://reviews.llvm.org/D113088
Read each pointer in the argv and envp arrays before dereferencing
it; this correctly marks an error when these pointers point into
memory that has been freed.
Differential Revision: https://reviews.llvm.org/D113046