Handling the new min/max intrinsics is the motivation, but it
turns out that a bunch of other intrinsics were missing this
bit of analysis too.
The FP min/max tests show that we are intersecting FMF,
so that part should be safe too.
As noted in https://llvm.org/PR46897 , there is a commutative
property specifier for intrinsics, but no corresponding function
attribute, and so apparently no uses of that bit. We may want to
remove that next.
Follow-up patches should wire Instruction::isCommutative() up
to this IntrinsicInst specialization. That requires updating
callers to be aware of the more general commutative property
(not just binops).
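As a rough illustration, here is a minimal sketch (a hypothetical helper, not actual LLVM source) of what a generalized caller would look like once that wiring exists:
```
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Hypothetical helper: callers that previously asked only binary operators
// whether their operands commute can ask any Instruction instead, which also
// covers commutative intrinsic calls (smin/smax/umin/umax, etc.) once the
// IntrinsicInst specialization is wired into Instruction::isCommutative().
static bool operandsMayBeSwapped(const Instruction &I) {
  // Old, binop-only form:
  //   if (auto *BO = dyn_cast<BinaryOperator>(&I))
  //     return BO->isCommutative();
  //   return false;
  return I.isCommutative();
}
```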
Differential Revision: https://reviews.llvm.org/D86798
The original take 1 was 6102310d81,
which taught InstSimplify to do this; that seemed better at the time,
since we got EarlyCSE support for free.
However, it was proven that we cannot do that there:
the simplified-to PHI would not be reachable from the original PHI,
and that is not something InstSimplify is allowed to do,
as noted in commit ed90f15efb,
which reverted it:
> It appears to cause compilation non-determinism and caused stage3 mismatches.
Then there was take 2, 3e69871ab5,
which was InstCombine-specific, but it again showed stage2-stage3 differences,
and was reverted in bdaa3f86a0.
This is quite alarming.
Here, let's try to change how we find an existing PHI candidate:
due to the worklist order and the way PHI nodes are inserted
(the node may or may not be inserted as the first one in its block), let's look at *all*
PHI nodes in the block.
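A minimal sketch of that candidate search (not the actual InstCombine code, just the idea described above):
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Scan every PHI node already present in the block and return one that is
// identical to PN, regardless of where in the block it was inserted.
static PHINode *findIdenticalPHI(PHINode &PN) {
  for (PHINode &Existing : PN.getParent()->phis()) {
    if (&Existing == &PN)
      continue;
    // For PHI nodes this compares the incoming values and incoming blocks.
    if (Existing.isIdenticalToWhenDefined(&PN))
      return &Existing;
  }
  return nullptr;
}
```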
Effects on vanilla llvm test-suite + RawSpeed:
```
| statistic name | baseline | proposed | Δ | % | \|%\| |
|----------------------------------------------------|-----------|-----------|-------:|---------:|---------:|
| asm-printer.EmittedInsts | 7942329 | 7942457 | 128 | 0.00% | 0.00% |
| assembler.ObjectBytes | 254295632 | 254312480 | 16848 | 0.01% | 0.01% |
| correlated-value-propagation.NumPhis | 18412 | 18347 | -65 | -0.35% | 0.35% |
| early-cse.NumCSE | 2183283 | 2183267 | -16 | 0.00% | 0.00% |
| early-cse.NumSimplify | 550105 | 541842 | -8263 | -1.50% | 1.50% |
| instcombine.NumAggregateReconstructionsSimplified | 73 | 4506 | 4433 | 6072.60% | 6072.60% |
| instcombine.NumCombined | 3640311 | 3644419 | 4108 | 0.11% | 0.11% |
| instcombine.NumDeadInst | 1778204 | 1783205 | 5001 | 0.28% | 0.28% |
| instcombine.NumPHICSEs | 0 | 22490 | 22490 | 0.00% | 0.00% |
| instcombine.NumWorklistIterations | 2023272 | 2024400 | 1128 | 0.06% | 0.06% |
| instcount.NumCallInst | 1758395 | 1758802 | 407 | 0.02% | 0.02% |
| instcount.NumInvokeInst | 59478 | 59502 | 24 | 0.04% | 0.04% |
| instcount.NumPHIInst | 330557 | 330545 | -12 | 0.00% | 0.00% |
| instcount.TotalBlocks | 1077138 | 1077220 | 82 | 0.01% | 0.01% |
| instcount.TotalFuncs | 101442 | 101441 | -1 | 0.00% | 0.00% |
| instcount.TotalInsts | 8831946 | 8832606 | 660 | 0.01% | 0.01% |
| simplifycfg.NumHoistCommonCode | 24186 | 24187 | 1 | 0.00% | 0.00% |
| simplifycfg.NumInvokes | 4300 | 4410 | 110 | 2.56% | 2.56% |
| simplifycfg.NumSimpl | 1019813 | 999767 | -20046 | -1.97% | 1.97% |
```
So it fires 22490 times, which is less than the ~24k that take 1 did,
but more than the 22228 times that take 2 did.
It allows foldAggregateConstructionIntoAggregateReuse() to actually work
after PHI-of-extractvalue folds did their thing. Previously SimplifyCFG
would have done this PHI CSE, of all places. Additionally, allows some
more `invoke`->`call` folds to happen (+110, +2.56%).
All in all, as expected, this catches fewer things overall,
but all the motivational cases are still caught, so all good.
While the original variant that did this in InstSimplify (rightfully)
caused questions and was ultimately identified as the culprit
of a stage2-stage3 mismatch, it was expected that the
InstCombine-based implementation would be fine.
But apparently it's not, as
http://lab.llvm.org:8011/builders/clang-with-thin-lto-ubuntu/builds/24095/steps/compare-compilers/logs/stdio
suggests, which implies that somewhere in InstCombine there is a loop
over a nondeterministically sorted container, which causes
different worklist orderings.
This reverts commit 3e69871ab5.
This ensures that you get the same output regardless if generating
code directly to an object file or if generating assembly and
assembling that.
Add implementations of the EmitARM64WinCFI*() methods in
AArch64TargetAsmStreamer, and fill in one blank in MCAsmStreamer.
Add corresponding directive handlers in AArch64AsmParser and
COFFAsmParser.
Some SEH directive names have been picked to match the prior art
for SEH assembly directives for x86_64, e.g. the spelling of
".seh_startepilogue" matching the preexisting ".seh_endprologue".
For the directives for saving registers, the exact spelling
from the arm64 documentation is picked, e.g. ".seh_save_reg" (to follow
that naming for all the other ones, e.g. ".seh_save_fregp_x"), while
the corresponding one for x86_64 is plain ".seh_savereg" without the
second underscore.
Directives in the epilogues have the same names as in prologues,
e.g. .seh_savereg, even though the registers are restored, not
saved, at that point.
Differential Revision: https://reviews.llvm.org/D86529
This can happen e.g. for code that declares .seh_proc/.seh_endproc
in assembly, or for code that uses .seh_handlerdata (which triggers
the unwind info to be emitted before the end of the function).
The TextSection field must be made non-const to be able to use it
with Streamer.SwitchSection().
Differential Revision: https://reviews.llvm.org/D86528
As we've established, if it takes more than two iterations
(one to perform folding and one to ensure that no folding opportunities
remain) per function, then there are worklist management issues.
So it may be interesting to keep track of the iteration count.
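A minimal sketch of such a counter using the usual LLVM STATISTIC machinery (the driver loop is only indicated in comments; this is not the exact in-tree code):
```
#include "llvm/ADT/Statistic.h"

#define DEBUG_TYPE "instcombine"

// Shows up as "instcombine.NumWorklistIterations" in -stats output.
STATISTIC(NumWorklistIterations,
          "Number of instruction combining iterations performed");

// Sketched use inside the pass driver:
//   do {
//     ++NumWorklistIterations;
//     MadeChanges = runOneCombineIteration(F);
//   } while (MadeChanges);
```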
The original take was 6102310d81,
which taught InstSimplify to do this; that seemed better at the time,
since we got EarlyCSE support for free.
However, it was proven that we cannot do that there:
the simplified-to PHI would not be reachable from the original PHI,
and that is not something InstSimplify is allowed to do,
as noted in commit ed90f15efb,
which reverted it:
> It appears to cause compilation non-determinism and caused stage3 mismatches.
However, InstCombine already does many different optimizations,
so it should be a safe place to do this.
Note that we still can't just compare the ranges of incoming values,
because there is no guarantee that the PHIs we'd simplify to
were already re-visited and sorted.
However, coming up with a test for this is problematic.
Effects on vanilla llvm test-suite + RawSpeed:
```
| statistic name                                     | baseline  | proposed  |      Δ |        % |   \|%\|  |
|----------------------------------------------------|-----------|-----------|-------:|---------:|---------:|
| instcombine.NumPHICSEs | 0 | 22228 | 22228 | 0.00% | 0.00% |
| asm-printer.EmittedInsts | 7942329 | 7942456 | 127 | 0.00% | 0.00% |
| assembler.ObjectBytes | 254295632 | 254313792 | 18160 | 0.01% | 0.01% |
| early-cse.NumCSE | 2183283 | 2183272 | -11 | 0.00% | 0.00% |
| early-cse.NumSimplify | 550105 | 541842 | -8263 | -1.50% | 1.50% |
| instcombine.NumAggregateReconstructionsSimplified | 73 | 4506 | 4433 | 6072.60% | 6072.60% |
| instcombine.NumCombined | 3640311 | 3666911 | 26600 | 0.73% | 0.73% |
| instcombine.NumDeadInst | 1778204 | 1783318 | 5114 | 0.29% | 0.29% |
| instcount.NumCallInst | 1758395 | 1758804 | 409 | 0.02% | 0.02% |
| instcount.NumInvokeInst | 59478 | 59502 | 24 | 0.04% | 0.04% |
| instcount.NumPHIInst | 330557 | 330549 | -8 | 0.00% | 0.00% |
| instcount.TotalBlocks | 1077138 | 1077221 | 83 | 0.01% | 0.01% |
| instcount.TotalFuncs | 101442 | 101441 | -1 | 0.00% | 0.00% |
| instcount.TotalInsts | 8831946 | 8832611 | 665 | 0.01% | 0.01% |
| simplifycfg.NumInvokes | 4300 | 4410 | 110 | 2.56% | 2.56% |
| simplifycfg.NumSimpl | 1019813 | 999740 | -20073 | -1.97% | 1.97% |
```
So it fires ~22k times, which is less than the ~24k that take 1 did.
It allows foldAggregateConstructionIntoAggregateReuse() to actually work
after PHI-of-extractvalue folds did their thing. Previously SimplifyCFG
would have done this PHI CSE, of all places. Additionally, allows some
more `invoke`->`call` folds to happen (+110, +2.56%).
All in all, as expected, this catches fewer things overall,
but all the motivational cases are still caught, so all good.
As discussed in https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20200824/824235.html
even though it seems worthwhile doing so in InstSimplify,
we really can't do that there, because the other PHI wouldn't be
def-reachable from the original PHI.
So, ignoring whether or not EarlyCSE should also learn to do this,
InstCombine is the place.
As a prerequisite to doing experimental builds of pieces of FreeBSD PowerPC64 as little-endian, allow actually targeting it.
This is needed so basic platform definitions are pulled in. Without it, the compiler will only run freestanding.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D73425
A couple of AArch64 tests were failing on Solaris, both sparc and x86:
LLVM :: MC/AArch64/SVE/add-diagnostics.s
LLVM :: MC/AArch64/SVE/cpy-diagnostics.s
LLVM :: MC/AArch64/SVE/cpy.s
LLVM :: MC/AArch64/SVE/dup-diagnostics.s
LLVM :: MC/AArch64/SVE/dup.s
LLVM :: MC/AArch64/SVE/mov-diagnostics.s
LLVM :: MC/AArch64/SVE/mov.s
LLVM :: MC/AArch64/SVE/sqadd-diagnostics.s
LLVM :: MC/AArch64/SVE/sqsub-diagnostics.s
LLVM :: MC/AArch64/SVE/sub-diagnostics.s
LLVM :: MC/AArch64/SVE/subr-diagnostics.s
LLVM :: MC/AArch64/SVE/uqadd-diagnostics.s
LLVM :: MC/AArch64/SVE/uqsub-diagnostics.s
For example, reduced from `MC/AArch64/SVE/add-diagnostics.s`:
add z0.b, z0.b, #0, lsl #8
missed the expected diagnostics:
$ ./bin/llvm-mc -triple=aarch64 -show-encoding -mattr=+sve add.s
add.s:1:21: error: immediate must be an integer in range [0, 255] with a shift amount of 0
add z0.b, z0.b, #0, lsl #8
^
The message is `Match_InvalidSVEAddSubImm8`, emitted in the generated
`lib/Target/AArch64/AArch64GenAsmMatcher.inc` for `MCK_SVEAddSubImm8`.
When comparing the call to `::AArch64Operand::isSVEAddSubImm<char>` on both
Linux/x86_64 and Solaris, I find
875 bool IsByte = std::is_same<int8_t, std::make_signed_t<T>>::value;
is `false` on Solaris, unlike Linux.
The problem boils down to the fact that `int8_t` is plain `char` on
Solaris: both the sparc and i386 psABIs have `char` as signed. However,
with
9887 DiagnosticPredicate DP(Operand.isSVEAddSubImm<int8_t>());
in `lib/Target/AArch64/AArch64GenAsmMatcher.inc`, `std::make_signed_t<int8_t>`
above yields `signed char`, so `std::is_same<int8_t, signed char>` is `false`.
This can easily be fixed by also allowing for `int8_t` here and in a few
similar places.
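A small standalone illustration of the mismatch and the fix (the template and function names here are hypothetical, not the generated matcher code):
```
#include <cstdint>
#include <type_traits>

// Original check: only accepts T whose signed counterpart is exactly int8_t.
template <typename T> constexpr bool isByteOriginal() {
  return std::is_same<int8_t, std::make_signed_t<T>>::value;
}

// Fixed check: also accept int8_t itself, so it works whether int8_t is
// 'signed char' (Linux/x86_64) or plain 'char' (Solaris sparc/i386).
template <typename T> constexpr bool isByteFixed() {
  return std::is_same<int8_t, std::make_signed_t<T>>::value ||
         std::is_same<int8_t, T>::value;
}

// On Solaris, int8_t is 'char', make_signed_t<char> is 'signed char', and
// 'char' and 'signed char' are distinct types, so the original check fails;
// the fixed variant holds everywhere.
static_assert(isByteFixed<int8_t>(), "int8_t must always be accepted");
```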
Tested on `amd64-pc-solaris2.11`, `sparcv9-sun-solaris2.11`, and
`x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D85225
Rather than calling hasFnAttribute and then calling getFnAttribute
if the attribute exists, it's better to just call getFnAttribute and
then check whether we got a valid attribute back.
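A minimal sketch of the pattern change (the attribute name and helper function are hypothetical):
```
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

using namespace llvm;

static unsigned getFrameSizeLimit(const Function &F, unsigned Default) {
  // Old pattern:
  //   if (F.hasFnAttribute("frame-size-limit"))
  //     ... use F.getFnAttribute("frame-size-limit") ...
  // New pattern: a single lookup, then check the returned Attribute.
  Attribute A = F.getFnAttribute("frame-size-limit");
  if (!A.isValid())
    return Default;
  unsigned Limit;
  if (A.getValueAsString().getAsInteger(/*Radix=*/10, Limit))
    return Default; // getAsInteger returns true on a malformed value.
  return Limit;
}
```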
This patch helps make the debug_abbrev_offset field optional, so that
we don't need to calculate the value of this field in the future.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D86614
Following 0becc27ebf, `ppc64-pcrel-long-branch-error.s` fails in some
environments with out-of-memory errors associated with buffering the
output in-memory. Since the alternative of writing to an allocated file
is also known to cause problems, we will disable the test
unconditionally (pending a mechanism to disable the test selectively).
Currently, `v:t = zext(setcc x,y,cc)` will be transformed to `select x, y, 1:t, 0:t, cc`. It misses some opportunities if x's type size is less than `t`'s size. This patch enhances the above transformation.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D86687
LIT uses a model where the test suite is configurable through a
lit.site.cfg file. Most of the time we use the lit.site.cfg with values
that match the current build configuration, generated by CMake.
Nothing prevents you from running the test suite with a different
configuration, either by overriding some of these values from the
command line, or by passing a different lit.site.cfg.
The latter is currently tedious. Many configuration values are optional
but they still need to be set because lit.cfg.py accesses them
directly. This patch changes the code to use getattr to return the
attribute if it exists. This makes it possible to specify a minimal
lit.site.cfg with only the mandatory/desired configuration values.
Differential revision: https://reviews.llvm.org/D86821
instruction can decrement the reference count, not whether it can alter it.
This prevents the state transition from S_Use to S_CanRelease when doing
a bottom-up traversal and the transition from S_Retain to S_CanRelease
when doing a top-down traversal when the visited instruction can
increment the ref count but cannot decrement it. This allows the ARC
optimizer to remove retain/release pairs which were previously not
removed.
rdar://problem/21793154
This CL modifies clang to enable using -fsanitize=thread on Fuchsia. The
change doesn't build the runtime for Fuchsia; it just enables the
instrumentation to be used.
pair-programmed-with: mdempsky@google.com
Change-Id: I816c4d240d1f15e9eae2803fb8ba3a7bf667ed51
Reviewed By: mcgrathr, phosek
Differential Revision: https://reviews.llvm.org/D86822
This patch removes the rather confusing LLDB_LIB_DIR and LLDB_IMPLIB_DIR
environment variables. They are confusing because LLDB_LIB_DIR would
point to the bin subdirectory in the build root while LLDB_IMPLIB_DIR
would point to the lib subdirectory. The reason for this was
LLDB.framework, which gets built under bin.
This patch replaces their uses with configuration.lldb_framework_path
and configuration.lldb_libs_dir respectively.
Differential revision: https://reviews.llvm.org/D86817
* This is just enough to create regions/blocks and iterate over them.
* Does not yet implement the preferred iteration strategy (python pseudo containers).
* Refinements need to come after doing basic mappings of operations and values so that the whole hierarchy can be used.
Differential Revision: https://reviews.llvm.org/D86683
The implicit def of the super register would appear to kill any live
uses of components before the spill, and would be deleted by
MachineCopyPropagation. We need to add implicit uses of the super
register, similarly to what copyPhysReg does. VGPR tuples appear to be
correctly handled already. I need to double check the SGPR->memory
path.
After D85099, if we have an attribute group in the function signature that hasn't
been seen before, and later a callsite with the same attribute group, FileCheck will evaluate
the first attribute group to, for example, '#0 {'. We now include { in the args_and_sig group to avoid this.
Differential Revision: https://reviews.llvm.org/D86769
For example:
#define FOO(x) (x)
FOO({});
... forms a statement-expression after macro expansion. This warning
applies to '({' and '})' delimiting statement-expressions, '[[' and ']]'
delimiting attributes, and '::*' introducing a pointer-to-member.
The warning for forming these compound tokens across macro expansions
(or across files!) is enabled by default; the warning for whitespace
within the tokens is not, but is included in -Wall.
Differential Revision: https://reviews.llvm.org/D86751