Summary:
Recommitting this with the correct sorting predicate. The Low field of Clusters is a ConstantInt and
cannot be compared directly, so we need to invoke slt (signed less-than) to compare the values correctly.
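As a rough illustration (using a hypothetical CaseCluster-like struct rather
than the exact SelectionDAG type), the corrected predicate compares the
underlying APInt values:

  // Sketch only: ConstantInt has no usable operator<, so the predicate must
  // go through APInt::slt instead of comparing the pointers.
  #include <algorithm>
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Constants.h"

  struct Cluster {
    const llvm::ConstantInt *Low; // lower bound of the case range
  };

  static void sortClusters(llvm::SmallVectorImpl<Cluster> &Clusters) {
    std::sort(Clusters.begin(), Clusters.end(),
              [](const Cluster &A, const Cluster &B) {
                return A.Low->getValue().slt(B.Low->getValue());
              });
  }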
This fixes failures in the following tests uncovered by D39245:
LLVM :: CodeGen/ARM/ifcvt3.ll
LLVM :: CodeGen/ARM/switch-minsize.ll
LLVM :: CodeGen/X86/switch.ll
LLVM :: CodeGen/X86/switch-bt.ll
LLVM :: CodeGen/X86/switch-density.ll
Reviewers: hans, fhahn
Reviewed By: hans
Subscribers: aemerson, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D40541
llvm-svn: 319210
Summary:
They're not always mutually exclusive; read-modify-write atomics are both
at the same time. One example of this is the SWP instructions on AArch64.
Another example is GlobalISel's G_ATOMICRMW_* generic instructions, which
will be added in a later patch.
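As a hedged illustration (assuming "they" refers to an instruction's load and
store properties; this is my reading, not a quote from the patch):

  // A read-modify-write atomic such as an atomic swap loads the old value
  // and stores the new one in a single instruction, so both queries are true.
  #include "llvm/CodeGen/MachineInstr.h"

  static bool looksLikeRMWAtomic(const llvm::MachineInstr &MI) {
    return MI.mayLoad() && MI.mayStore();
  }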
Reviewers: arphaman, aemerson
Reviewed By: aemerson
Subscribers: aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D40157
llvm-svn: 319202
This fixes erroneously reported CUDA compilation errors
in host-side code during device-side compilation.
I've also restricted OpenMP-specific checks to trigger only
if we're compiling with OpenMP enabled.
Differential Revision: https://reviews.llvm.org/D40275
llvm-svn: 319201
The motivation behind this patch is that future directions require us to
be able to compute the hash value of records independently of actually
using them for de-duplication.
The current structure, in which TypeSerializer / TypeTableBuilder is a
single entry point that takes an unserialized type record and then hashes
and de-duplicates it, is not flexible enough to allow this.
At the same time, the existing TypeSerializer is already extremely
complex for this very reason -- it tries to be too many things. In
addition to serializing, hashing, and de-duplicating, it also supports
splitting up field list records and adding continuations. All of this
functionality crammed into this one class makes it very complicated to
work with and hard to maintain.
To solve all of these problems, I've re-written everything from scratch
and split the functionality into separate pieces that can easily be
reused. The end result is that one class TypeSerializer is turned into 3
new classes SimpleTypeSerializer, ContinuationRecordBuilder, and
TypeTableBuilder, each of which in isolation is simple and
straightforward.
A quick summary of these new classes and their responsibilities (a brief usage sketch follows the list):
- SimpleTypeSerializer : Turns a non-FieldList leaf type into a series of
bytes. Does not do any hashing. Every time you call it, it will
re-serialize and return bytes again. The same instance can be re-used
over and over to avoid re-allocations, and in exchange for this
optimization the bytes returned by the serializer only live until the
caller attempts to serialize a new record.
- ContinuationRecordBuilder : Turns a FieldList-like record into a series
of fragments. Does not do any hashing. Like SimpleTypeSerializer,
returns references to privately owned bytes, so the storage is
invalidated as soon as the caller tries to re-use the instance. Works
equally well for LF_FIELDLIST as it does for LF_METHODLIST, solving a
long-standing theoretical limitation of the previous implementation.
- TypeTableBuilder : Accepts sequences of bytes that the user has already
serialized, and inserts them by de-duplicating with a hash table. For
the sake of convenience and efficiency, this class internally stores a
SimpleTypeSerializer so that it can accept unserialized records. The
same is not true of ContinuationRecordBuilder. The user is required to
create their own instance of ContinuationRecordBuilder.
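As a rough sketch of how the pieces fit together (the method names
serialize() and insertRecordBytes() are my assumptions about the new
interface, not verbatim API):

  // Hedged sketch only; method and header names are assumptions.
  #include "llvm/DebugInfo/CodeView/SimpleTypeSerializer.h"
  #include "llvm/DebugInfo/CodeView/TypeRecord.h"
  #include "llvm/DebugInfo/CodeView/TypeTableBuilder.h"

  using namespace llvm;
  using namespace llvm::codeview;

  TypeIndex addPointer(TypeTableBuilder &Table, PointerRecord &PR) {
    // Step 1: serialize without any hashing.  The returned bytes are only
    // valid until the next call to serialize().
    SimpleTypeSerializer Serializer;
    ArrayRef<uint8_t> Bytes = Serializer.serialize(PR);

    // Step 2: hand the raw bytes to the table, which hashes and
    // de-duplicates them.
    return Table.insertRecordBytes(Bytes);
  }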
Differential Revision: https://reviews.llvm.org/D40518
llvm-svn: 319198
Currently CodeGen is calling std::sort on the features vector in TargetOptions for every function, but I don't think CodeGen should be modifying TargetOptions.
Differential Revision: https://reviews.llvm.org/D40228
llvm-svn: 319195
Summary: This removes a small amount of duplicated code.
Reviewers: clayborg, zturner, davide
Subscribers: lldb-commits
Differential Revision: https://reviews.llvm.org/D40536
llvm-svn: 319191
This is based on discussion in https://reviews.llvm.org/D40376 .
The comments try to explain the reason for the current implementation
and note that it might change in the future, so clients should not
rely on this particular implementation.
Differential Revision: https://reviews.llvm.org/D40565
llvm-svn: 319190
Summary:
This change adds support for the setjmp(3)/longjmp(3)
family of functions on NetBSD.
There are three types of them on NetBSD:
- setjmp(3) / longjmp(3)
- sigsetjmp(3) / siglongjmp(3)
- _setjmp(3) / _longjmp(3)
Due to historical and compat reasons the symbol
names are mangled:
- setjmp -> __setjmp14
- longjmp -> __longjmp14
- sigsetjmp -> __sigsetjmp14
- siglongjmp -> __siglongjmp14
- _setjmp -> _setjmp
- _longjmp -> _longjmp
This leads to symbol renaming in the existing codebase.
There are no such symbols as __sigsetjmp/__longsetjmp
on NetBSD.
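For illustration only (ordinary user code, not the sanitizer interceptors):
a plain setjmp/longjmp pair on NetBSD is emitted against the mangled
__setjmp14/__longjmp14 symbols, which is why the interceptors must hook the
mangled names:

  #include <csetjmp>
  #include <cstdio>

  int main() {
    std::jmp_buf Env;
    if (setjmp(Env) == 0)   // resolves to __setjmp14 on NetBSD
      std::longjmp(Env, 1); // resolves to __longjmp14 on NetBSD
    std::puts("returned through longjmp");
    return 0;
  }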
Add a comment that GNU-style executable stack
note is not needed on NetBSD. The stack is not
executable without it.
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, dvyukov, vitalybuka
Reviewed By: dvyukov
Subscribers: llvm-commits, kubamracek, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D40337
llvm-svn: 319189
As part of the unification of the debug format and the MIR format,
always print registers as lowercase.
* Only debug printing is affected. It now follows MIR.
Differential Revision: https://reviews.llvm.org/D40417
llvm-svn: 319187
Generalize FixFunctionBitcasts to handle varargs functions. This in
particular fixes the case where clang bitcasts away the varargs signature
when calling a K&R-style function.
This avoids interacting with tricky ABI details because it operates
at the LLVM IR level before varargs ABI details are exposed.
This fixes PR35385.
llvm-svn: 319186
Looking through Agner, FTST is very similar to generic float compare behaviour, so I've added it to the existing IIC_FCOMI (WriteFAdd) tags.
llvm-svn: 319184
In more recent Linux kernels with 47-bit VMAs, the layout of virtual memory
for powerpc64 changed, causing the thread sanitizer to not work properly. This
patch adds support for 47-bit VMA kernels for powerpc64.
(second part)
Tested on several 4.x and 3.x kernel releases.
llvm-svn: 319180
These functions were defined as static members of TemplateSpecializationType.
Now they are moved to namespace level. Previously there were different
implementations for lists containing TemplateArgument and TemplateArgumentLoc;
now these implementations share the same code.
This change is a result of refactoring patch D40508. NFC.
llvm-svn: 319178
Summary:
Remove the redundant, config-time call to cmake when
building host tools for cross compiles or optimized tablegen.
The config-time call to cmake is redundant because it will always get
called again when the CONFIGURE_LLVM_${target_name} target fires at
build-time. This speeds up initial configuration, but has no effect
on build behavior.
Reviewers: beanz
Reviewed By: beanz
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D40229
llvm-svn: 319176
Atom's FABS/FCHS/FSQRT latencies taken from Agner.
Note: I just added FSIN and FCOS to the existing IIC_FSINCOS itinerary, which is actually a more costly instruction.
llvm-svn: 319175
Summary:
The readability-else-after-return check was not warning about
an else after a throw of an exception that had arguments that needed
to be cleaned up.
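A hypothetical reproducer (my own example, not taken from the patch): the
thrown exception's argument creates a temporary that needs cleanup, and the
redundant else is now still diagnosed:

  #include <stdexcept>
  #include <string>

  int parseDigitCount(const std::string &S) {
    if (S.empty())
      throw std::runtime_error("empty input: " + S); // temporary needs cleanup
    else                                             // now flagged by the check
      return static_cast<int>(S.size());
  }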
Reviewers: aaron.ballman, alexfh, djasper
Reviewed By: aaron.ballman
Subscribers: lebedev.ri, klimek, xazax.hun, cfe-commits
Differential Revision: https://reviews.llvm.org/D40505
llvm-svn: 319174
This is needed for cases when the memory access is not as big as the width of
the data type. For instance, storing i1 (1 bit) would be done in a byte (8
bits).
Using 'BitSize >> 3' (or '/ 8') would e.g. give the memory access of an i1 a
size of 0, which for instance makes alias analysis return NoAlias even when
it shouldn't.
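A minimal illustration of the arithmetic (the exact helper used in the patch
is not shown here):

  // Truncating division maps an i1 store to 0 bytes; rounding up gives the
  // 1 byte that is actually accessed.
  unsigned BitSize = 1;                   // e.g. storing an i1
  unsigned Truncated = BitSize >> 3;      // == 0, confuses alias analysis
  unsigned RoundedUp = (BitSize + 7) / 8; // == 1 byte, the real access size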
There are no tests as this was done as a follow-up to the bugfix for the case
where this was discovered (r318824). This handles more similar cases.
Review: Björn Petterson
https://reviews.llvm.org/D40339
llvm-svn: 319173
lld assumes some ARM features that are not available in all Arm
processors. In particular:
- The blx instruction is assumed to be present for interworking.
- The movt/movw instructions are used in Thunks.
- The J1=1 J2=1 encoding of branch immediates, which improves Thumb wide
branch range, is assumed to be present.
This patch reads the ARM Attributes section to check for the
architecture the object file was compiled with. If none of the objects
have an architecture that supports any of these features, a warning will be
given. This is most likely to affect armv6 as used in the first
Raspberry Pi.
Differential Revision: https://reviews.llvm.org/D36823
llvm-svn: 319169
LLVM Coding Standards:
Function names should be verb phrases (as they represent actions), and
command-like function should be imperative. The name should be camel
case, and start with a lower case letter (e.g. openFile() or isFoo()).
Differential Revision: https://reviews.llvm.org/D40416
llvm-svn: 319168
Certain ARM implementations treat the icache clear instruction as a memory read,
and the CPU segfaults when trying to clear the cache on a !PROT_READ page.
We work around this in Memory::protectMappedMemory by adding
PROT_READ to the affected pages, clearing the cache, and then setting the
desired protection.
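A hedged sketch of the idea (not the exact sys::Memory implementation):

  #include <cstddef>
  #include <sys/mman.h>

  static int protectAndFlush(void *Addr, size_t Len, int DesiredProt) {
    // Some ARM cores fault if the cache-clear touches a page without
    // PROT_READ, so make the range readable first.
    if (mprotect(Addr, Len, DesiredProt | PROT_READ) != 0)
      return -1;
    __builtin___clear_cache((char *)Addr, (char *)Addr + Len);
    // Now apply the protection the caller actually asked for.
    return mprotect(Addr, Len, DesiredProt);
  }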
This fixes "AllocationTests/MappedMemoryTest.***/3" unit-tests on
affected hardware.
Reviewers: psmith, zatrazz, kristof.beyls, lhames
Reviewed By: lhames
Subscribers: llvm-commits, krytarowski, peter.smith, jgreenhalgh, aemerson,
rengolin
Patch by maxim-kuvrykov!
Differential Revision: https://reviews.llvm.org/D40423
llvm-svn: 319166
This change is the first in a series of changes to get the XRay runtime
building on macOS. This first change allows us to build the minimal parts of
XRay to get us started on supporting macOS development. These include:
- CMake changes to allow targeting x86_64 initially.
- Allowing for building the initialisation routines without
`.preinit_array` support.
- Use __sanitizer::SleepForMillis() to work around the lack of
clock_nanosleep on macOS.
- Deprecate the xray_fdr_log_grace_period_us flag, and introduce
the xray_fdr_log_grace_period_ms flag instead, to use
milliseconds across platforms.
Reviewers: kubamracek
Subscribers: llvm-commits, krytarowski, nglevin, mgorny
Differential Revision: https://reviews.llvm.org/D39114
llvm-svn: 319165
The core idea is to (re-)introduce some redundancies where their cost is
hidden by the cost of materializing immediates for constant operands of
PHI nodes. When the cost of the redundancies is covered by this,
avoiding materializing the immediate has numerous benefits:
1) Less register pressure
2) Potential for further folding / combining
3) Potential for more efficient instructions due to an immediate operand
As a motivating example, consider the remarkably different cost on x86
of a SHL instruction with an immediate operand versus a register
operand.
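A hypothetical source-level illustration (my own example, not from the
patch):

  // The shift amount is a PHI of two constants, so without speculation it is
  // materialized into a register and x86 ends up with the register form of
  // SHL (shift by CL) instead of the cheaper immediate form.
  int shiftByPhi(int X, bool Cond) {
    int Amt = Cond ? 3 : 5; // becomes a PHI of the immediates 3 and 5
    return X << Amt;
  }
  // Speculating the shift into both predecessors keeps the immediate form:
  // X << 3 in one block, X << 5 in the other, and a PHI over the results.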
This pattern turns up surprisingly frequently, but is rarely obvious as
a significant performance problem.
The pass is entirely target independent, but it does rely on the target
cost model in TTI to decide when to speculate things around the PHI
node. I've included x86-focused tests, but any target that sets up its
immediate cost model should benefit from this pass.
There is probably more that can be done in this space, but the pass
as-is is enough to get some important performance wins on our internal
benchmarks and should be generally performance neutral; help with
more extensive benchmarking is always welcome.
One awkward part is that this pass has to be scheduled after
*everything* that can eliminate these kinds of redundancies. This
includes SimplifyCFG, GVN, etc. I'm open to suggestions about better
places to put this. We could in theory make it part of the codegen pass
pipeline, but there doesn't really seem to be a good reason for that --
it isn't "lowering" in any sense and only relies on pretty standard cost
model based TTI queries, so it seems to fit well with the "optimization"
pipeline model. Still, further thoughts on the pipeline position are
welcome.
I've also only implemented this in the new pass manager. If folks are
very interested, I can try to add it to the old PM as well, but I didn't
really see much point (my use case is already switched over to the new
PM).
I've tested this pretty heavily without issue. A wide range of
benchmarks internally show no change outside the noise, and I don't see
any significant changes in SPEC either. However, the size class
computation in tcmalloc is substantially improved by this, which turns
into a 2% to 4% win on the hottest path through tcmalloc for us, so
there are definitely important cases where this is going to make
a substantial difference.
Differential Revision: https://reviews.llvm.org/D37467
llvm-svn: 319164
The proper index is 6, not 2.
Patch extracted from https://reviews.llvm.org/D40337
Reviewed and accepted by <dvyukov>.
Sponsored by <The NetBSD Foundation>
llvm-svn: 319163
In https://reviews.llvm.org/D39681, we started using a map instead of
passing a long list of register sets to the ppc64le register context.
However, existing register contexts were still using the old method.
This converts the remaining register contexts to use this approach.
While doing that, I've had to modify the approach a bit:
- the general purpose register set is still kept as a separate field,
because this one is always present, and its parsing is somewhat
different from that of the other register sets.
- since the same register sets have different IDs on different operating
systems, but we use the same register context class to represent
different register sets, I've needed to add a layer of indirection to
translate os-specific constants (e.g. NETBSD::NT_AMD64_FPREGS) into more
generic terms (e.g. floating point register set).
While slightly more complicated, this setup allows for better separation
of concerns. The parsing code in ProcessElfCore can focus on parsing
OS-specific core file notes, and can completely ignore
architecture-specific register sets (by just storing any unrecognised
notes in a map). These notes will then be passed on to the
architecture-specific register context, which can just deal with
architecture specifics, because the OS-specific note types are hidden in
a register set description map.
This way, adding a register set that is already supported on other
OSes to a new OS should in most cases be as simple as adding a new
entry to the register set description map.
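As a hedged sketch of that indirection (the type names and note values below
are hypothetical, not the actual LLDB classes or constants):

  #include <cstdint>
  #include <map>

  enum class RegSetKind { FloatingPoint, Vector };

  // Maps an OS-specific core-file note type to the generic register set it
  // describes, e.g. NetBSD's NT_AMD64_FPREGS -> FloatingPoint.
  using RegisterSetDescriptionMap = std::map<uint32_t, RegSetKind>;

  RegisterSetDescriptionMap makeNetBSDAmd64Map() {
    return {
        {3 /* hypothetical NETBSD::NT_AMD64_FPREGS value */,
         RegSetKind::FloatingPoint},
    };
  }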
Differential Revision: https://reviews.llvm.org/D40133
llvm-svn: 319162