Certain ARM implementations treat the icache clear instruction as a
memory read, and the CPU segfaults when trying to clear the cache on
a !PROT_READ page. We work around this in Memory::protectMappedMemory
by adding PROT_READ to the affected pages, clearing the cache, and
then setting the desired protection.
This fixes "AllocationTests/MappedMemoryTest.***/3" unit-tests on
affected hardware.
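A minimal sketch of the sequence (standalone illustration with a
made-up helper name, not the actual Memory::protectMappedMemory
implementation):

    #include <cstddef>
    #include <sys/mman.h>

    // Make the page readable before flushing the icache, then apply
    // the protection the caller actually asked for.
    static int protectAndFlush(void *Addr, size_t Size, int Prot) {
      if (!(Prot & PROT_READ)) {
        if (mprotect(Addr, Size, Prot | PROT_READ) != 0)
          return -1;
        __builtin___clear_cache((char *)Addr, (char *)Addr + Size);
        return mprotect(Addr, Size, Prot);
      }
      if (mprotect(Addr, Size, Prot) != 0)
        return -1;
      __builtin___clear_cache((char *)Addr, (char *)Addr + Size);
      return 0;
    }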
Reviewers: psmith, zatrazz, kristof.beyls, lhames
Reviewed By: lhames
Subscribers: llvm-commits, krytarowski, peter.smith, jgreenhalgh, aemerson,
rengolin
Patch by maxim-kuvrykov!
Differential Revision: https://reviews.llvm.org/D40423
llvm-svn: 319166
This change is the first in a series of changes to get the XRay runtime
building on macOS. This first change allows us to build the minimal
parts of XRay needed to get started on macOS support. These include:
- CMake changes to allow targeting x86_64 initially.
- Allowing for building the initialisation routines without
  `.preinit_array` support (see the sketch after this list).
- Use __sanitizer::SleepForMillis() to work around the lack of
clock_nanosleep on macOS.
- Deprecate the xray_fdr_log_grace_period_us flag, and introduce
the xray_fdr_log_grace_period_ms flag instead, to use
milliseconds across platforms.
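As a rough illustration of the `.preinit_array` point (hypothetical
code, not the actual XRay bootstrap), a dynamic initializer can stand
in on platforms that lack `.preinit_array`:

    #include <cstdio>

    // Run early initialization from a constructor attribute instead
    // of a .preinit_array entry, which macOS does not support.
    __attribute__((constructor)) static void initializeXRay() {
      std::puts("XRay-style early initialization");
    }

    int main() { return 0; }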
Reviewers: kubamracek
Subscribers: llvm-commits, krytarowski, nglevin, mgorny
Differential Revision: https://reviews.llvm.org/D39114
llvm-svn: 319165
The core idea is to (re-)introduce some redundancies where their cost is
hidden by the cost of materializing immediates for constant operands of
PHI nodes. When the cost of the redundancies is covered by this,
avoiding materializing the immediate has numerous benefits:
1) Less register pressure
2) Potential for further folding / combining
3) Potential for more efficient instructions due to an immediate operand
As a motivating example, consider the remarkably different cost on x86
of a SHL instruction with an immediate operand versus a register
operand.
This pattern turns up surprisingly frequently, but is somewhat rarely
obvious as a significant performance problem.
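As a hedged illustration (hand-written, not taken from the pass's
tests), the shift amount below becomes a PHI of constants, so the
shift ends up with a register operand unless it is speculated into
the predecessors:

    // Amt becomes a PHI of the constants 3 and 5; speculating the
    // shift into each predecessor yields two cheap shl-by-immediate
    // instructions instead of materializing Amt in a register.
    unsigned shiftByPhiOfConstants(unsigned X, bool Flag) {
      unsigned Amt;
      if (Flag)
        Amt = 3;
      else
        Amt = 5;
      return X << Amt;
    }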
The pass is entirely target independent, but it does rely on the target
cost model in TTI to decide when to speculate things around the PHI
node. I've included x86-focused tests, but any target that sets up its
immediate cost model should benefit from this pass.
There is probably more that can be done in this space, but the pass
as-is is enough to get some important performance wins on our internal
benchmarks. It should be generally performance neutral elsewhere, but
help with more extensive benchmarking is always welcome.
One awkward part is that this pass has to be scheduled after
*everything* that can eliminate these kinds of redundancies. This
includes SimplifyCFG, GVN, etc. I'm open to suggestions about better
places to put this. We could in theory make it part of the codegen pass
pipeline, but there doesn't really seem to be a good reason for that --
it isn't "lowering" in any sense and only relies on pretty standard cost
model based TTI queries, so it seems to fit well with the "optimization"
pipeline model. Still, further thoughts on the pipeline position are
welcome.
I've also only implemented this in the new pass manager. If folks are
very interested, I can try to add it to the old PM as well, but I didn't
really see much point (my use case is already switched over to the new
PM).
I've tested this pretty heavily without issue. A wide range of
benchmarks internally show no change outside the noise, and I don't see
any significant changes in SPEC either. However, the size class
computation in tcmalloc is substantially improved by this, which turns
into a 2% to 4% win on the hottest path through tcmalloc for us, so
there are definitely important cases where this is going to make
a substantial difference.
Differential revision: https://reviews.llvm.org/D37467
llvm-svn: 319164
The proper index is 6, not 2.
Patch extracted from https://reviews.llvm.org/D40337
Reviewed and accepted by <dvyukov>.
Sponsored by <The NetBSD Foundation>
llvm-svn: 319163
In https://reviews.llvm.org/D39681, we started using a map instead of
passing a long list of register sets to the ppc64le register context.
However, existing register contexts were still using the old method.
This converts the remaining register contexts to use this approach.
While doing that, I've had to modify the approach a bit:
- the general purpose register set is still kept as a separate field,
because this one is always present, and its parsing is somewhat
different from that of the other register sets.
- since the same register sets have different IDs on different operating
systems, but we use the same register context class to represent
different register sets, I've needed to add a layer of indirection to
translate OS-specific constants (e.g. NETBSD::NT_AMD64_FPREGS) into more
generic terms (e.g. floating point register set).
While slightly more complicated, this setup allows for better separation
of concerns. The parsing code in ProcessElfCore can focus on parsing
OS-specific core file notes, and can completely ignore
architecture-specific register sets (by just storing any unrecognised
notes in a map). These notes will then be passed on to the
architecture-specific register context, which can just deal with
architecture specifics, because the OS-specific note types are hidden in
a register set description map.
This way, adding a register set that is already supported on other
OSes to a new OS should in most cases be as simple as adding a new
entry to the register set description map.
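A hypothetical sketch of such a description map (the note value and
type names are placeholders, not the real LLDB definitions):

    #include <map>

    // Placeholder value for the OS-specific core note constant.
    constexpr int NT_AMD64_FPREGS = 35;

    // Generic register set kinds the register context understands.
    enum class RegsetKind { GPR, FPR };

    // OS-specific note type -> generic register set.
    const std::map<int, RegsetKind> NetBSDRegsetMap = {
        {NT_AMD64_FPREGS, RegsetKind::FPR},
    };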
Differential Revision: https://reviews.llvm.org/D40133
llvm-svn: 319162
Summary:
New linux kernels (on systems that support the XSAVES instruction) will
not update the inferior registers unless the corresponding flag in the
XSAVE header is set. Normally this flag will be set in our image of the
XSAVE area (since we obtained it from the kernel), but if the inferior
has never used the corresponding register set, the respective flag can
be clear.
This fixes the issue by making sure we explicitly set the flags
corresponding to the registers we modify. I don't try to precisely match
the flags to set on each write, as the rules could get quite complicated
-- I use a simpler over-approximation instead.
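A rough sketch of that over-approximation (the helper name is made up;
the bit positions are the architectural XSTATE_BV component bits):

    #include <cstdint>

    // XSTATE_BV component bits for x87, SSE and AVX state.
    constexpr uint64_t XStateX87 = 1ull << 0;
    constexpr uint64_t XStateSSE = 1ull << 1;
    constexpr uint64_t XStateAVX = 1ull << 2;

    // Before writing FP registers back, flag every component we may
    // have touched so the kernel actually applies the new values.
    static void markFPComponentsInUse(uint64_t &XStateBV) {
      XStateBV |= XStateX87 | XStateSSE | XStateAVX;
    }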
This was already caught by test_fp_register_write, but that was only
because the code that ran before main() did not use some of the register
sets. Since nothing in this test relies on being stopped in main(), I
modify the test to stop at the entry point instead, so we can be sure
the inferior did not have a chance to access these registers.
Reviewers: clayborg, valentinagiusti
Subscribers: lldb-commits
Differential Revision: https://reviews.llvm.org/D40434
llvm-svn: 319161
Summary:
For historical and compatibility reasons, NetBSD uses the
__sigaction14 symbol name for the sigaction(2) function.
Rename the interceptors and users of sigaction to sigaction_symname
and reuse it in the code base.
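A minimal sketch of the renaming idea (simplified; the real
interceptor machinery differs):

    // Route all uses through one name, so the rest of the code base
    // does not care which symbol the platform actually exports.
    #if defined(__NetBSD__)
    #define sigaction_symname __sigaction14
    #else
    #define sigaction_symname sigaction
    #endif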
This change fixes 4 failing tests in TSan/NetBSD:
- ThreadSanitizer-x86_64 :: signal_errno.cc
- ThreadSanitizer-x86_64 :: signal_malloc.cc
- ThreadSanitizer-x86_64 :: signal_sync2.cc
- ThreadSanitizer-x86_64 :: signal_thread.cc
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, vitalybuka, eugenis, dvyukov, kcc
Reviewed By: dvyukov
Subscribers: kubamracek, llvm-commits, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D40341
llvm-svn: 319160
Summary:
- Converted Protocol.h parse() functions to take JSON::Expr.
These no longer detect and log unknown fields, as this is not that
useful and no longer free.
I haven't changed the error handling too much: fields that were
treated as optional before are still optional, even where that is
wrong. Exception: object properties with the wrong type are now
ignored.
- Made JSONRPCDispatcher parse using json::parse
- The bug where 'method' must come before 'params' in the stream is
fixed as a side-effect. (And the same bug in executeCommand).
- Some parser crashers fixed as a side effect.
e.g. https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3890
- The debug stream now pretty-prints the input messages with --pretty.
- Request params are attached to traces when tracing is enabled.
- Fixed some bugs in tests (errors tolerated by YAMLParser, and
off-by-ones in Content-Length that our null-termination was masking)
- Fixed a random double-escape bug in ClangdLSPServer (it was our last
use of YAMLParser!)
Reviewers: ilya-biryukov
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40406
llvm-svn: 319159
Summary:
I think we do not need to analyze debug intrinsics here, as they should
not impact codegen. This has two benefits: 1) slightly less work to do,
and 2) we avoid generating optimization remarks about converting calls
to debug intrinsics into tail calls, which are not really helpful to
users.
Based on work by Sander de Smalen.
Reviewers: davide, trentxintong, aprantl
Reviewed By: aprantl
Subscribers: llvm-commits, JDevlieghere
Tags: #debug-info
Differential Revision: https://reviews.llvm.org/D40440
llvm-svn: 319158
Summary:
Noticed this when I tried to port the Protocol.h parsers. Also added
tests for the inspect API, which caught a small bug.
Reviewers: ioeric
Subscribers: ilya-biryukov
Differential Revision: https://reviews.llvm.org/D40399
llvm-svn: 319157
Summary:
The entire algorithm operates per basic-block, so for cache locality
it should be better to re-optimize a basic-block immediately rather than
in a separate loop.
I don't have performance measurements.
Change-Id: I85106570bd623c4ff277faaa50ee43258e1ddcc5
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D40344
llvm-svn: 319156
Summary:
The PeepholeOptimizer pass calls this function solely based on checking
DefMI->isMoveImmediate(), which only checks the MoveImm bit of the
instruction description. So it's up to FoldImmediate itself to properly
check that DefMI *actually* moves from an immediate.
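A hedged sketch of the kind of guard this implies (hand-written
against the MachineInstr API; the helper is not the actual change):

    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    // isMoveImmediate() only reflects the MoveImm bit in the
    // instruction description, so confirm the source operand really
    // is an immediate before anything calls getImm() on it.
    static bool isRealImmediateMove(const MachineInstr &DefMI) {
      return DefMI.isMoveImmediate() && DefMI.getNumOperands() >= 2 &&
             DefMI.getOperand(1).isImm();
    }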
I don't have a separate test case for this, but the next patch introduces
a test case which happens to crash without this change.
This error is caught by the assertion in MachineOperand::getImm().
Change-Id: I88e7cdbcf54d75e1a296822e6fe5f9a5f095bbf8
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D40342
llvm-svn: 319155
Currently, we use a set of pairs to cache responses like
`CompareValueComplexity(X, Y) == 0`. If we had proved that
`CompareValueComplexity(S1, S2) == 0` and
`CompareValueComplexity(S2, S3) == 0`, this cache does not allow us to
conclude that `CompareValueComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses`, which merges
Values into equivalence sets so that any two values from the same set
are equal from the point of view of `CompareValueComplexity`. This, in
particular, allows us to prove the fact from the example above.
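A small sketch of why the equivalence classes are strictly stronger
than a set of pairs (toy integers standing in for the Values):

    #include "llvm/ADT/EquivalenceClasses.h"
    using namespace llvm;

    static bool demo() {
      EquivalenceClasses<int> EqCache;
      EqCache.unionSets(1, 2); // CompareValueComplexity(S1, S2) == 0
      EqCache.unionSets(2, 3); // CompareValueComplexity(S2, S3) == 0
      // A set of pairs would miss this; the union-find structure
      // answers it without re-running the comparison.
      return EqCache.isEquivalent(1, 3); // true
    }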
Differential Revision: https://reviews.llvm.org/D40429
llvm-svn: 319153
This allows grouping all sections like ".ctors.12345" into ".ctors".
For MinGW, the numerical values for such ctors are all zero-padded,
so a lexical sort is good enough.
Differential Revision: https://reviews.llvm.org/D40408
llvm-svn: 319151
The priorities in the section name suffixes are zero padded,
allowing the linker to just do a lexical sort.
Add zero padding for .ctors sections in ELF as well.
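A quick illustration of why zero padding makes a lexical sort
sufficient (plain C++ standing in for the linker's sort):

    #include <algorithm>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> Sections = {".ctors.00100",
                                           ".ctors.00002"};
      std::sort(Sections.begin(), Sections.end());
      // With fixed-width suffixes, lexical order equals numeric
      // order: ".ctors.00002" correctly precedes ".ctors.00100".
    }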
Differential Revision: https://reviews.llvm.org/D40407
llvm-svn: 319150
Currently, we use a set of pairs to cache responses like
`CompareSCEVComplexity(X, Y) == 0`. If we had proved that
`CompareSCEVComplexity(S1, S2) == 0` and
`CompareSCEVComplexity(S2, S3) == 0`, this cache does not allow us to
conclude that `CompareSCEVComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses` so that any two
values from the same set are equal from the point of view of
`CompareSCEVComplexity`. This, in particular, allows us to prove the
fact from the example above.
Differential Revision: https://reviews.llvm.org/D40428
llvm-svn: 319149
This is to address a problem similar to the ones addressed in D37460
for Scalar PRE. We should not PRE across an instruction that may not
pass execution to its successor unless it is safe to speculatively
execute it.
Differential Revision: https://reviews.llvm.org/D38619
llvm-svn: 319147
Unoptimized IR can have linear sequences of stores to an array, where the
initial GEP for the first store is formed from the pointer to the array, and the
GEP for each store after the first is formed from the previous GEP with some
offset in an inductive fashion.
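A hedged sketch of the kind of source that produces such chains at O0
(each post-increment forms the next GEP from the previous one):

    // At -O0, every *P++ store addresses memory through a GEP built
    // from the previous GEP, giving one long inductive chain.
    void initArray(int *P) {
      *P++ = 0;
      *P++ = 1;
      *P++ = 2;
      // ... repeated for thousands of elements in the problem case ...
    }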
The (large) resulting DAG, when analyzed by DAGCombine, undergoes an
excessive number of combines, as each store node is examined every time
its offset node is combined with any child of the offset. One of the
transformations is findBetterNeighborChains, which assists
MergeConsecutiveStores. The former relies on repeated chain walking to
do its work; however, MergeConsecutiveStores is disabled at O0, which
makes the transformation redundant.
Any optimization level other than O0 would invoke InstCombine, which
would resolve the chain of GEPs into a flat base + offset GEP for each
store, avoiding the repeated examination of each store to the array.
Disabling this optimization fixes an excessive compile time issue (~30
minutes for the test case provided) at O0.
Reviewers: niravd, craig.topper, t.p.northover
Differential Revision: https://reviews.llvm.org/D40193
llvm-svn: 319142
This fixes cases where we wouldn't perform various register operand
checks just because we didn't happen to have a definition in the
MCInstrDesc. This changes the code to only skip the tests that actually
depend on the MCInstrDesc definition.
This makes the machine verifier spot the problem from
https://llvm.org/PR33071 after the pass that actually caused it.
llvm-svn: 319141
Additional checks for phi operands:
- The first operand should be a virtual register def. It should not be
tied, implicit, internalread, earlyclobber or a read.
- The other operands should be register/MBB operands next to each other
- The register operands should not be implicit, internalread,
earlyclobber, debug or tied.
- We can perform most of the PHI checks even for unreachable blocks.
llvm-svn: 319140
If /debug was not specified, readSection will return a null
pointer for debug sections. If the debug section is associative with
another section, we need to make sure that the section returned from
readSection is not a null pointer before adding it as an associative
section.
Differential Revision: https://reviews.llvm.org/D40533
llvm-svn: 319133
Revert "[SROA] Propagate !range metadata when moving loads."
Revert "[Mem2Reg] Clang-format unformatted parts of this file. NFCI."
Davide says they broke a bot.
llvm-svn: 319131
This adds code to protect WebAssembly's `trunc_s` family of opcodes
from values outside their domain. Even though such conversions have
full undefined behavior in C/C++, LLVM IR's `fptosi` and `fptoui` do
not; they merely return undef.
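A rough scalar sketch of the guarding idea (hand-written C++, not the
actual lowering):

    #include <cmath>
    #include <cstdint>
    #include <limits>

    // Clamp NaN and out-of-range inputs so a trapping truncation
    // never sees a value outside its domain; any in-range result is
    // acceptable where LLVM IR would have produced undef.
    static int32_t safeTruncToI32(double X) {
      if (std::isnan(X))
        return 0;
      if (X < static_cast<double>(std::numeric_limits<int32_t>::min()))
        return std::numeric_limits<int32_t>::min();
      if (X > static_cast<double>(std::numeric_limits<int32_t>::max()))
        return std::numeric_limits<int32_t>::max();
      return static_cast<int32_t>(X);
    }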
This also implements the proposed non-trapping float-to-int conversion
feature and uses that instead when available.
llvm-svn: 319128
Currently we mark every shared symbol as STB_WEAK.
That is a hack that makes it easy to decide whether a .so is needed
because of a reference to a given symbol.
That hack leaks when we create copy relocations, as shown by the
update to relocation-copy-alias.s.
This patch stores the original binding when we first read a shared
symbol. We still have to update the binding to weak if we see a weak
undef, but I find the logic easier to read where it is now.
llvm-svn: 319127
These lines all exist identically either under SSE2, AVX2 or AVX512.
Given that VLX implies all of those, these aren't providing anything
new.
llvm-svn: 319124
Insert padding before the __cxa_exception header to ensure the thrown
object is sufficiently aligned.
r303175 annotated field unwindHeader of __cxa_exception with attribute
'aligned' to ensure the thrown object following the __cxa_exception
header was sufficiently aligned. This caused changes in the field
offsets of __cxa_exception relative to the start of the thrown object,
which was an ABI breaking change for some clients.
Instead of annotating field unwindHeader, this commit inserts extra
space before the header. This ensures the thrown object following the
header is sufficiently aligned without changing the field offsets, thus
avoiding any ABI breakages.
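A sketch of the layout arithmetic (stand-in struct, not the real
__cxa_exception definition; the alignment value is assumed):

    #include <cstddef>

    // Stand-in for __cxa_exception; the real header has many fields.
    struct Header { void *Type; void (*Dtor)(void *); };

    constexpr size_t ObjectAlign = 16; // assumed required alignment

    // Pad in front of the header so the thrown object, which follows
    // the header directly, lands on an ObjectAlign boundary while
    // every field offset inside the header stays unchanged.
    constexpr size_t PadBefore =
        (ObjectAlign - sizeof(Header) % ObjectAlign) % ObjectAlign;

    static_assert((PadBefore + sizeof(Header)) % ObjectAlign == 0,
                  "thrown object lands on an aligned boundary");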
rdar://problem/25364625
rdar://problem/35556163
llvm-svn: 319123
With AVX512, vXi1 types are legal, so we shouldn't be extending them.
This change is similar to existing code in the zext(setcc) combine.
llvm-svn: 319120