Summary:
This change adds support for the setjmp(3)/longjmp(3)
family of functions on NetBSD.
There are three variants on NetBSD:
- setjmp(3) / longjmp(3)
- sigsetjmp(3) / siglongjmp(3)
- _setjmp(3) / _longjmp(3)
For historical and compat reasons, the symbol
names are mangled:
- setjmp -> __setjmp14
- longjmp -> __longjmp14
- sigsetjmp -> __sigsetjmp14
- siglongjmp -> __siglongjmp14
- _setjmp -> _setjmp
- _longjmp -> _longjmp
This leads to symbol renaming in the existing codebase.
There is no such symbol as __sigsetjmp/__siglongjmp
on NetBSD.
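As a hedged illustration (ordinary application code, not part of this patch), a plain use of setjmp/longjmp on NetBSD already links against the renamed symbols, which is why the interceptors have to refer to the mangled names:

```c
#include <setjmp.h>
#include <stdio.h>

int main(void) {
  jmp_buf env;
  if (setjmp(env) == 0) {   /* the reference emitted is to __setjmp14 */
    longjmp(env, 1);        /* the reference emitted is to __longjmp14 */
  }
  printf("returned via longjmp\n");
  return 0;
}
```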
Add a comment that the GNU-style executable stack
note is not needed on NetBSD, where the stack is not
executable regardless of the note.
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, dvyukov, vitalybuka
Reviewed By: dvyukov
Subscribers: llvm-commits, kubamracek, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D40337
llvm-svn: 319189
As part of the unification of the debug format and the MIR format,
always print registers as lowercase.
* Only debug printing is affected. It now follows MIR.
Differential Revision: https://reviews.llvm.org/D40417
llvm-svn: 319187
Generalize FixFunctionBitcasts to handle varargs functions. In
particular, this fixes the case where clang bitcasts away varargs when
calling a K&R-style function.
This avoids interacting with tricky ABI details because it operates
at the LLVM IR level before varargs ABI details are exposed.
This fixes PR35385.
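A hedged sketch of the source pattern in question (function names are made up): the K&R-style declaration has no prototype, so clang declares the callee as varargs in IR and the call site bitcasts that varargs type away:

```c
void do_work();    /* K&R-style declaration with no prototype; defined elsewhere */

void caller(void) {
  /* clang emits roughly:
   *   call void bitcast (void (...)* @do_work to void (i32)*)(i32 42)
   * i.e. the varargs type of the declaration is bitcast away at the call. */
  do_work(42);
}
```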
llvm-svn: 319186
Looking through Agner, FTST is very similar to generic float compare behaviour, so I've added it to the existing IIC_FCOMI (WriteFAdd) tags.
llvm-svn: 319184
In more recent Linux kernels with 47-bit VMAs, the layout of virtual memory
for powerpc64 changed, causing the thread sanitizer to not work properly. This
patch adds support for 47-bit VMA kernels on powerpc64.
(second part)
Tested on several 4.x and 3.x kernel releases.
llvm-svn: 319180
These functions were defined as static members of TemplateSpecializationType.
Now they are moved to namespace level. Previously there were different
implementations for lists containing TemplateArgument and TemplateArgumentLoc,
now these implementations share the same code.
This change is a result of refactoring patch D40508. NFC.
llvm-svn: 319178
Summary:
Remove the redundant config-time call to cmake when
building host tools for cross compiles or optimized tablegen.
The config-time call to cmake is redundant because it will always get
called again when the CONFIGURE_LLVM_${target_name} target fires at
build time. This speeds up initial configuration, but has no effect
on build behavior.
Reviewers: beanz
Reviewed By: beanz
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D40229
llvm-svn: 319176
Atom's FABS/FCHS/FSQRT latencies taken from Agner.
Note: I just added FSIN and FCOS to the existing IIC_FSINCOS itinerary, which is actually a more costly instruction.
llvm-svn: 319175
Summary:
The readability-else-after-return check was not warning about
an else after a throw of an exception that had arguments that needed
to be cleaned up.
Reviewers: aaron.ballman, alexfh, djasper
Reviewed By: aaron.ballman
Subscribers: lebedev.ri, klimek, xazax.hun, cfe-commits
Differential Revision: https://reviews.llvm.org/D40505
llvm-svn: 319174
This is needed for cases when the memory access is not as big as the width of
the data type. For instance, storing i1 (1 bit) would be done in a byte (8
bits).
Using 'BitSize >> 3' (or '/ 8') would e.g. give the memory access of an i1 a
size of 0, which for instance makes alias analysis return NoAlias even when
it shouldn't.
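A minimal sketch of the arithmetic involved (not the actual patch): truncating division turns the 1-bit store into a 0-byte access, while rounding up gives the byte that is actually touched:

```c
unsigned bytesAccessedTruncating(unsigned bitSize) {
  return bitSize >> 3;        /* i1: 1 >> 3 == 0 bytes -- wrong */
}

unsigned bytesAccessedRoundedUp(unsigned bitSize) {
  return (bitSize + 7) / 8;   /* i1: (1 + 7) / 8 == 1 byte */
}
```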
There are no tests as this was done as a follow-up to the bugfix for the case
where this was discovered (r318824). This handles more similar cases.
Review: Björn Petterson
https://reviews.llvm.org/D40339
llvm-svn: 319173
lld assumes some ARM features that are not available in all Arm
processors. In particular:
- The blx instruction is assumed to be present for interworking.
- The movt/movw instructions are used in Thunks.
- The J1=1 J2=1 encoding of branch immediates, which improves the Thumb
wide branch range, is assumed to be present.
This patch reads the ARM Attributes section to check for the
architecture the object file was compiled with. If none of the objects
have an architecture that supports any of these features, a warning
will be given. This is most likely to affect armv6 as used in the first
Raspberry Pi.
Differential Revision: https://reviews.llvm.org/D36823
llvm-svn: 319169
LLVM Coding Standards:
Function names should be verb phrases (as they represent actions), and
command-like function should be imperative. The name should be camel
case, and start with a lower case letter (e.g. openFile() or isFoo()).
Differential Revision: https://reviews.llvm.org/D40416
llvm-svn: 319168
Certain ARM implementations treat the icache clear instruction as a memory read,
and the CPU segfaults when trying to clear the cache on a !PROT_READ page.
We work around this in Memory::protectMappedMemory by adding
PROT_READ to the affected pages, clearing the cache, and then setting
the desired protection.
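A minimal sketch of the workaround, assuming POSIX mprotect and the compiler's __builtin___clear_cache; this is not the actual Memory::protectMappedMemory code:

```c
#include <stddef.h>
#include <sys/mman.h>

static int protect_and_flush(void *addr, size_t len, int wanted_prot) {
  /* Keep the pages readable while clearing the cache, since some ARM
   * cores treat the cache-clear instruction as a read of the page. */
  if (mprotect(addr, len, wanted_prot | PROT_READ) != 0)
    return -1;
  __builtin___clear_cache((char *)addr, (char *)addr + len);
  /* Now apply the protection the caller actually asked for. */
  return mprotect(addr, len, wanted_prot);
}
```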
This fixes "AllocationTests/MappedMemoryTest.***/3" unit-tests on
affected hardware.
Reviewers: psmith, zatrazz, kristof.beyls, lhames
Reviewed By: lhames
Subscribers: llvm-commits, krytarowski, peter.smith, jgreenhalgh, aemerson,
rengolin
Patch by maxim-kuvrykov!
Differential Revision: https://reviews.llvm.org/D40423
llvm-svn: 319166
This change is the first in a series of changes to get the XRay runtime
building on macOS. This first allows us to build the minimal parts of
XRay to get us started on supporting macOS development. These include:
- CMake changes to allow targeting x86_64 initially.
- Allowing for building the initialisation routines without
`.preinit_array` support.
- Use __sanitizer::SleepForMillis() to work around the lack of
clock_nanosleep on macOS.
- Deprecate the xray_fdr_log_grace_period_us flag, and introduce
the xray_fdr_log_grace_period_ms flag instead, to use
milliseconds across platforms.
Reviewers: kubamracek
Subscribers: llvm-commits, krytarowski, nglevin, mgorny
Differential Revision: https://reviews.llvm.org/D39114
llvm-svn: 319165
The core idea is to (re-)introduce some redundancies where their cost is
hidden by the cost of materializing immediates for constant operands of
PHI nodes. When the cost of the redundancies is covered by this,
avoiding materializing the immediate has numerous benefits:
1) Less register pressure
2) Potential for further folding / combining
3) Potential for more efficient instructions due to immediate operand
As a motivating example, consider the remarkably different cost on x86
of a SHL instruction with an immediate operand versus a register
operand.
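A hedged sketch of the kind of pattern this targets (names and exact codegen are illustrative only):

```c
void do_other_work(void);   /* hypothetical helper; keeps the branches from folding away */

unsigned scale(unsigned x, int wide) {
  unsigned amount;
  if (wide) {
    do_other_work();
    amount = 7;
  } else {
    amount = 4;
  }
  /* 'amount' reaches the shift as a PHI of the constants 7 and 4.
   * Materializing that constant into a register makes the shift use a
   * register operand; speculating the shift into both predecessors lets
   * each one use an immediate shl instead. */
  return x << amount;
}
```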
This pattern turns up surprisingly frequently, but is somewhat rarely
obvious as a significant performance problem.
The pass is entirely target independent, but it does rely on the target
cost model in TTI to decide when to speculate things around the PHI
node. I've included x86-focused tests, but any target that sets up its
immediate cost model should benefit from this pass.
There is probably more that can be done in this space, but the pass
as-is is enough to get some important performance on our internal
benchmarks, and should be generally performance neutral, but help with
more extensive benchmarking is always welcome.
One awkward part is that this pass has to be scheduled after
*everything* that can eliminate these kinds of redundancies. This
includes SimplifyCFG, GVN, etc. I'm open to suggestions about better
places to put this. We could in theory make it part of the codegen pass
pipeline, but there doesn't really seem to be a good reason for that --
it isn't "lowering" in any sense and only relies on pretty standard cost
model based TTI queries, so it seems to fit well with the "optimization"
pipeline model. Still, further thoughts on the pipeline position are
welcome.
I've also only implemented this in the new pass manager. If folks are
very interested, I can try to add it to the old PM as well, but I didn't
really see much point (my use case is already switched over to the new
PM).
I've tested this pretty heavily without issue. A wide range of
benchmarks internally show no change outside the noise, and I don't see
any significant changes in SPEC either. However, the size class
computation in tcmalloc is substantially improved by this, which turns
into a 2% to 4% win on the hottest path through tcmalloc for us, so
there are definitely important cases where this is going to make
a substantial difference.
Differential revision: https://reviews.llvm.org/D37467
llvm-svn: 319164
The proper index is 6, not 2.
Patch extracted from https://reviews.llvm.org/D40337
Reviewed and accepted by <dvyukov>.
Sponsored by <The NetBSD Foundation>
llvm-svn: 319163
In https://reviews.llvm.org/D39681, we started using a map instead of
passing a long list of register sets to the ppc64le register context.
However, existing register contexts were still using the old method.
This converts the remaining register contexts to use this approach.
While doing that, I've had to modify the approach a bit:
- the general purpose register set is still kept as a separate field,
because this one is always present, and its parsing is somewhat
different from that of the other register sets.
- since the same register sets have different IDs on different operating
systems, but we use the same register context class to represent
different register sets, I've needed to add a layer of indirection to
translate os-specific constants (e.g. NETBSD::NT_AMD64_FPREGS) into more
generic terms (e.g. floating point register set).
While slightly more complicated, this setup allows for better separation
of concerns. The parsing code in ProcessElfCore can focus on parsing
OS-specific core file notes, and can completely ignore
architecture-specific register sets (by just storing any unrecognised
notes in a map). These notes will then be passed on to the
architecture-specific register context, which can just deal with
architecture specifics, because the OS-specific note types are hidden in
a register set description map.
This way, adding a register set that is already supported on other
OSes to a new OS should in most cases be as simple as adding a new
entry to the register set description map.
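A loose sketch of that indirection, written in C with made-up names (the actual LLDB code is C++): a per-OS table translates OS-specific note types into generic register-set kinds, and supporting a new OS mostly means adding a table entry:

```c
enum reg_set_kind { REG_SET_GPR, REG_SET_FPR, REG_SET_UNKNOWN };

struct reg_set_desc {
  unsigned note_type;       /* OS-specific constant, e.g. the value of NT_AMD64_FPREGS */
  enum reg_set_kind kind;   /* what that note means in generic terms */
};

static enum reg_set_kind classify_note(const struct reg_set_desc *table,
                                       unsigned count, unsigned note_type) {
  for (unsigned i = 0; i < count; ++i)
    if (table[i].note_type == note_type)
      return table[i].kind;
  return REG_SET_UNKNOWN;   /* unrecognised notes are simply carried along */
}
```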
Differential Revision: https://reviews.llvm.org/D40133
llvm-svn: 319162
Summary:
New linux kernels (on systems that support the XSAVES instruction) will
not update the inferior registers unless the corresponding flag in the
XSAVE header is set. Normally this flag will be set in our image of the
XSAVE area (since we obtained it from the kernel), but if the inferior
has never used the corresponding register set, the respective flag can
be clear.
This fixes the issue by making sure we explicitly set the flags
corresponding to the registers we modify. I don't try to precisely match
the flags to set on each write, as the rules could get quite complicated
-- I use a simpler over-approximation instead.
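A hedged sketch of the idea, based on the documented XSAVE layout (the XSAVE header starts at offset 512, and its first eight bytes are XSTATE_BV); the names and the component bit chosen are illustrative, not the actual LLDB code:

```c
#include <stdint.h>
#include <string.h>

#define XSAVE_HEADER_OFFSET 512u        /* XSAVE header follows the legacy region */
#define XSTATE_SSE          (1ULL << 1) /* example component bit (XMM state) */

/* Mark a component as in use so that a kernel using XSAVES does not
 * discard the register data we are about to write back. */
static void mark_component_in_use(uint8_t *xsave_area, uint64_t component_bit) {
  uint64_t xstate_bv;
  memcpy(&xstate_bv, xsave_area + XSAVE_HEADER_OFFSET, sizeof(xstate_bv));
  xstate_bv |= component_bit;
  memcpy(xsave_area + XSAVE_HEADER_OFFSET, &xstate_bv, sizeof(xstate_bv));
}
```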
This was already caught by test_fp_register_write, but that was only
because the code that ran before main() did not use some of the register
sets. Since nothing in this test relies on being stopped in main(), I
modify the test to stop at the entry point instead, so we can be sure
the inferior did not have a chance to access these registers.
Reviewers: clayborg, valentinagiusti
Subscribers: lldb-commits
Differential Revision: https://reviews.llvm.org/D40434
llvm-svn: 319161
Summary:
NetBSD uses the __sigaction14 symbol name for historical and compat
reasons for the sigaction(2) function name.
Rename the interceptors and users of sigaction to sigaction_symname
and reuse it in the code base.
This change fixes 4 failing tests in TSan/NetBSD:
- ThreadSanitizer-x86_64 :: signal_errno.cc
- ThreadSanitizer-x86_64 :: signal_malloc.cc
- ThreadSanitizer-x86_64 :: signal_sync2.cc
- ThreadSanitizer-x86_64 :: signal_thread.cc
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, vitalybuka, eugenis, dvyukov, kcc
Reviewed By: dvyukov
Subscribers: kubamracek, llvm-commits, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D40341
llvm-svn: 319160
Summary:
- Converted Protocol.h parse() functions to take JSON::Expr.
These no longer detect and log unknown fields, as this is not that
useful and no longer free.
I haven't changed the error handling too much: fields that were
treated as optional before are still optional, even when it's wrong.
Exception: object properties with the wrong type are now ignored.
- Made JSONRPCDispatcher parse using json::parse
- The bug where 'method' must come before 'params' in the stream is
fixed as a side-effect. (And the same bug in executeCommand).
- Some parser crashers fixed as a side effect.
e.g. https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3890
- The debug stream now prettyprints the input messages with --pretty.
- Request params are attached to traces when tracing is enabled.
- Fixed some bugs in tests (errors tolerated by YAMLParser, and
off-by-ones in Content-Length that our null-termination was masking)
- Fixed a random double-escape bug in ClangdLSPServer (it was our last
use of YAMLParser!)
Reviewers: ilya-biryukov
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40406
llvm-svn: 319159
Summary:
I think we do not need to analyze debug intrinsics here, as they should
not impact codegen. This has 2 benefits: 1) slightly less work to do and
2) avoiding generating optimization remarks for converting calls to
debug intrinsics to tail calls, which are not really helpful for users.
Based on work by Sander de Smalen.
Reviewers: davide, trentxintong, aprantl
Reviewed By: aprantl
Subscribers: llvm-commits, JDevlieghere
Tags: #debug-info
Differential Revision: https://reviews.llvm.org/D40440
llvm-svn: 319158
Summary:
Noticed this when I tried to port the Protocol.h parsers.
And tests for the inspect API, which caught a small bug.
Reviewers: ioeric
Subscribers: ilya-biryukov
Differential Revision: https://reviews.llvm.org/D40399
llvm-svn: 319157
Summary:
The entire algorithm operates per basic-block, so for cache locality
it should be better to re-optimize a basic-block immediately rather than
in a separate loop.
I don't have performance measurements.
Change-Id: I85106570bd623c4ff277faaa50ee43258e1ddcc5
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D40344
llvm-svn: 319156
Summary:
The PeepholeOptimizer pass calls this function solely based on checking
DefMI->isMoveImmediate(), which only checks the MoveImm bit of the
instruction description. So it's up to FoldImmediate itself to properly
check that DefMI *actually* moves from an immediate.
I don't have a separate test case for this, but the next patch introduces
a test case which happens to crash without this change.
This error is caught by the assertion in MachineOperand::getImm().
Change-Id: I88e7cdbcf54d75e1a296822e6fe5f9a5f095bbf8
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D40342
llvm-svn: 319155
Currently, we use a set of pairs to cache responses like `CompareValueComplexity(X, Y) == 0`. Even if we have
proved that `CompareValueComplexity(S1, S2) == 0` and `CompareValueComplexity(S2, S3) == 0`,
this cache does not allow us to conclude that `CompareValueComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses`, which merges Values into equivalence sets so that
any two values from the same set are equal from the point of view of `CompareValueComplexity`. This, in particular,
allows us to prove the fact from the example above.
Differential Revision: https://reviews.llvm.org/D40429
llvm-svn: 319153
This allows grouping all sections like ".ctors.12345" into ".ctors".
For MinGW, the numerical values for such ctors are all zero-padded,
so a lexical sort is good enough.
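A tiny illustration of why the padding matters (section names here are made up): plain lexicographic comparison only matches numeric priority order when the numbers are padded to the same width:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
  /* Unpadded: ".ctors.100" sorts before ".ctors.20", which is numerically wrong. */
  printf("%d\n", strcmp(".ctors.100", ".ctors.20") < 0);      /* prints 1 */
  /* Zero-padded: lexical order matches numeric order (20 before 100). */
  printf("%d\n", strcmp(".ctors.00020", ".ctors.00100") < 0); /* prints 1 */
  return 0;
}
```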
Differential Revision: https://reviews.llvm.org/D40408
llvm-svn: 319151
The priorities in the section name suffixes are zero padded,
allowing the linker to just do a lexical sort.
Add zero padding for .ctors sections in ELF as well.
Differential Revision: https://reviews.llvm.org/D40407
llvm-svn: 319150
Currently, we use a set of pairs to cache responses like `CompareSCEVComplexity(X, Y) == 0`. Even if we have
proved that `CompareSCEVComplexity(S1, S2) == 0` and `CompareSCEVComplexity(S2, S3) == 0`,
this cache does not allow us to conclude that `CompareSCEVComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses`, so that any two values from the same set are equal from
the point of view of `CompareSCEVComplexity`. This, in particular, allows us to prove the fact from the example above.
Differential Revision: https://reviews.llvm.org/D40428
llvm-svn: 319149
This is to address a problem similar to those in D37460 for Scalar PRE. We should not
PRE across an instruction that may not pass execution to its successor unless it is safe
to speculatively execute it.
Differential Revision: https://reviews.llvm.org/D38619
llvm-svn: 319147
Unoptimized IR can have linear sequences of stores to an array, where the
initial GEP for the first store is formed from the pointer to the array, and the
GEP for each store after the first is formed from the previous GEP with some
offset in an inductive fashion.
The (large) resulting DAG, when analyzed by DAGCombine, undergoes an excessive
number of combines, as each store node is examined every time its offset node
is combined with any child of the offset. One of the transformations is
findBetterNeighborChains, which assists MergeConsecutiveStores. The former
relies on repeated chain walking to do its work; however, MergeConsecutiveStores
is disabled at O0, which makes the transformation redundant.
Any optimization level other than O0 would invoke InstCombine which would
resolve the chain of GEPs into flat base + offset GEP for each store which
does not exhibit the repeated examination of each store to the array.
Disabling this optimization fixes an excessive compile time issue (~30 minutes
for the test case provided) at O0.
Reviewers: niravd, craig.topper, t.p.northover
Differential Revision: https://reviews.llvm.org/D40193
llvm-svn: 319142
This fixes cases where we wouldn't perform various register operand
checks just because we didn't happen to have a definition in the
MCInstrDesc. This changes the code to only skip the tests that actually
depend on the MCInstrDesc definition.
This makes the machine verifier spot the problem from
https://llvm.org/PR33071 after the pass that actually caused it.
llvm-svn: 319141
Additional checks for phi operands:
- The first operand should be a virtual register def. It should not be
tied, implicit, internalread, earlyclobber or a read.
- The other operands should be register/mbb operands next to each other
- The register operands should not be implicit, internalread,
earlyclobber, debug or tied.
- We can perform most of the PHI checks even for unreachable blocks.
llvm-svn: 319140