A search order is a list of VSOs to be searched linearly to find symbols. Each
VSO now has a search order that will be used when fixing up definitions in that
VSO. Each VSO's search order defaults to just that VSO itself.
This is a first step towards removing symbol resolvers from ORC altogether. In
practice symbol resolvers tended to be used to implement a search order anyway,
sometimes with additional programmatic generation of symbols. Now that VSOs
support programmatic generation of definitions via fallback generators, search
orders provide a cleaner way to achieve the desired effect (while removing a lot
of boilerplate).
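Roughly, the lookup rule a search order enables looks like this (an
illustrative sketch only, not the actual ORC API; findSymbol is a
hypothetical accessor):

  JITSymbol lookupInSearchOrder(ArrayRef<VSO *> SearchOrder, StringRef Name) {
    for (VSO *V : SearchOrder)
      if (auto Sym = V->findSymbol(Name))   // hypothetical accessor
        return Sym;
    return nullptr;   // no VSO in the order defines the symbol
  }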
llvm-svn: 337593
Ideally our ISD node types going into the isel table would have types consistent with their instruction domain. This would prevent us from having to duplicate patterns with different types for the same instruction.
Unfortunately, it seems our shuffle combining is currently relying on this a little to remove some bitcasts. This seems to enable some switching between shufps and shufd. Hopefully there's some way we can address this in the combining.
Differential Revision: https://reviews.llvm.org/D49280
llvm-svn: 337590
CombineTo is most useful when you need to replace multiple results, avoid the worklist management, or you need to do something else after the combine, etc. Otherwise you should be able to just return the new node and let DAGCombiner go through its usual worklist code.
All of the places changed in this patch look to be standard cases where we should be able to use the more standard behavior of just returning the new node.
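For illustration, a hedged sketch of the simpler pattern inside a
hypothetical combine (visitFOO and the ISD::ADD replacement are placeholders):

  SDValue DAGCombiner::visitFOO(SDNode *N) {   // visitFOO is a placeholder
    SDValue LHS = N->getOperand(0), RHS = N->getOperand(1);
    SDLoc DL(N);
    // Before: return CombineTo(N, DAG.getNode(ISD::ADD, DL, N->getValueType(0), LHS, RHS));
    // After: hand the new node back and let the usual worklist code replace N.
    return DAG.getNode(ISD::ADD, DL, N->getValueType(0), LHS, RHS);
  }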
Differential Revision: https://reviews.llvm.org/D49569
llvm-svn: 337589
CUDA-9.2 made all integer SIMD functions into compiler builtins,
so clang no longer has access to the implementation of these
functions in either the headers or libdevice, and has to provide
its own implementation.
This is mostly a 1:1 mapping to corresponding PTX instructions, with
the exception of vhadd2/vhadd4, which don't have an equivalent
instruction and had to be implemented with a bit hack.
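For reference, a hedged sketch of the flavor of bit hack involved (the
unsigned per-halfword case; not necessarily the exact code in the header):

  static inline unsigned vhadd2_u(unsigned a, unsigned b) {
    // (a & b) + ((a ^ b) >> 1) computes (a + b) >> 1 without overflowing;
    // the mask keeps the bit shifted out of the high halfword from leaking
    // into the low halfword's result.
    return (a & b) + (((a ^ b) >> 1) & 0x7FFF7FFFu);
  }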
Performance of this implementation will be suboptimal for SM_50
and newer GPUs, where PTXAS generates noticeably worse code for
the SIMD instructions than for the inline assembly generated by
nvcc (or that used to ship with the CUDA headers).
Differential Revision: https://reviews.llvm.org/D49274
llvm-svn: 337587
Discovered because of: https://bugs.llvm.org/show_bug.cgi?id=38235
It seems to me that a scoped enum should NOT be an integral constant expression
without a cast, so this seems like a sensible change.
Attributes that check for an integer parameter simply use this function to
ensure that they have an integer, so it was previously allowing a scoped enum.
Also added a test based on Richard's feedback to ensure that case labels still work.
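A hedged example, using the constructor-priority attribute as one attribute
that takes an integer argument (names here are illustrative):

  enum class Priority { Early = 200 };
  enum Legacy { LegacyEarly = 200 };

  __attribute__((constructor(LegacyEarly))) void a() {}   // unscoped enum: OK
  // __attribute__((constructor(Priority::Early))) void b() {}   // scoped enum: now rejected
  __attribute__((constructor(static_cast<int>(Priority::Early)))) void c() {}   // OK with a cast

  void f(Priority p) {
    switch (p) {
    case Priority::Early:   // case labels are unaffected
      break;
    }
  }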
Differential Revision: https://reviews.llvm.org/D49599
llvm-svn: 337585
This adds initial support for a demangling library (LLVMDemangle)
and tool (llvm-undname) for demangling Microsoft names. This
doesn't cover 100% of cases and there are some known limitations
which I intend to address in follow-up patches, at least until such
time as we have (near) 100% test coverage matching up with all
of the test cases in clang/test/CodeGenCXX/mangle-ms-*.
Differential Revision: https://reviews.llvm.org/D49552
llvm-svn: 337584
ItaniumDemangle had a small NFC refactor to make some of its
code reusable by the newly added Microsoft demangler. To keep
the libcxxabi demangler as close as possible to the master copy,
this refactor is being merged over.
Differential Revision: https://reviews.llvm.org/D49575
llvm-svn: 337582
Summary:
When splitting predecessors in BasicBlockUtils, we create a new block as an immediate predecessor of the original BB, then we connect a given set of predecessors to the new block.
The API in this patch will be used to update MemoryPhis for this CFG change.
If all predecessors are being moved, we move the MemoryPhi directly. Otherwise we create a new MemoryPhi in the NewBB and populate its incoming values, while deleting them from BB's Phi.
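A rough sketch of the two cases, assuming MemoryPhi's PHINode-like interface;
the helpers marked hypothetical below are not the patch's actual API:

  void updateForSplitPreds(MemorySSA &MSSA, BasicBlock *BB, BasicBlock *NewBB,
                           ArrayRef<BasicBlock *> Preds) {
    MemoryPhi *Phi = MSSA.getMemoryAccess(BB);
    if (!Phi)
      return;
    if (Preds.size() == Phi->getNumIncomingValues()) {
      moveMemoryPhiTo(Phi, NewBB);   // hypothetical: all preds moved, move Phi
      return;
    }
    MemoryPhi *NewPhi = createMemoryPhiIn(MSSA, NewBB);   // hypothetical
    for (BasicBlock *Pred : Preds) {
      int Idx = Phi->getBasicBlockIndex(Pred);
      NewPhi->addIncoming(Phi->getIncomingValue(Idx), Pred);
      deleteIncoming(Phi, Pred);   // hypothetical: drop Pred's entry from Phi
    }
    Phi->addIncoming(NewPhi, NewBB);   // BB now merges through NewBB
  }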
[Split from D45299 for easier review]
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D49156
llvm-svn: 337581
Make copying and disposing of a non-escaping block no-ops.
A non-escaping block on the stack will never be called after its
lifetime ends, so it doesn't have to be copied to the heap. To prevent
a non-escaping block from being copied to the heap, this patch sets
field 'isa' of the block object to NSConcreteGlobalBlock and sets the
BLOCK_IS_GLOBAL bit of field 'flags', which causes the runtime to treat
the block as if it were a global block (calling _Block_copy on the block
just returns the original block and calling _Block_release is a no-op).
Also, a new flag bit 'BLOCK_IS_NOESCAPE' is added, which allows the
runtime or tools to distinguish between true global blocks and
non-escaping blocks.
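A hedged illustration (clang -fblocks; the noescape parameter attribute marks
the callee as not letting the block escape):

  #include <stdio.h>

  void each(int n, __attribute__((noescape)) void (^body)(int));

  void caller(int total, int scale) {
    // Captures 'scale', so this is a stack block, but it never escapes
    // 'each'; with this change it is emitted with isa = NSConcreteGlobalBlock
    // and BLOCK_IS_GLOBAL | BLOCK_IS_NOESCAPE set, so _Block_copy just
    // returns it unchanged.
    each(total, ^(int i) { printf("%d\n", i * scale); });
  }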
rdar://problem/39352313
Differential Revision: https://reviews.llvm.org/D49303
llvm-svn: 337580
Use the tag `VM_MEMORY_SANITIZER`.
It turns out that `VM_MEMORY_ANALYSIS_TOOL` is already reserved for
use by other tools, so switch to a tag reserved for use by the Sanitizers.
rdar://problem/41969783
Differential Revision: https://reviews.llvm.org/D49603
llvm-svn: 337579
We can safely use getConstant here as we're still lowering, which allows constant folding to kick in and simplify the vector shift codegen.
Noticed while working on D49562.
llvm-svn: 337578
Enable the optimization of operations on DPR and SPR via a feature instead
of checking the target.
Differential revision: https://reviews.llvm.org/D49463
llvm-svn: 337575
This is a new modernized VS integration installer. It adds a
Visual Studio .sln file which, when built, outputs a VSIX that can
be used to install ourselves as a "real" Visual Studio Extension.
We can even upload this extension to the visual studio marketplace.
This fixes a longstanding problem where we didn't support installing
into VS 2017 and higher. In addition to supporting VS 2017, due
to the way this is written we no longer need to do anything special
to support future versions of VS. Everything should
"just work". This also fixes several bugs with our old integration,
such as MSBuild triggering full rebuilds when /Zi was used.
Finally, we add a new UI page called "LLVM" which becomes visible
when the LLVM toolchain is selected. For now this only contains
one option which is the path to clang-cl.exe, but in the future
we can add more things here.
Differential Revision: https://reviews.llvm.org/D42762
llvm-svn: 337572
When pointer checking is enabled, it's important that every pointer is
checked before its value is used.
For stores, MSan used to generate code that calculates shadow/origin
addresses from a pointer before checking it.
For userspace this isn't a problem, because the shadow calculation code
is quite simple and the compiler is able to move it after the check at -O2.
But for KMSAN getShadowOriginPtr() creates a runtime call, so we want the
check to be performed strictly before that call.
Swapping materializeChecks() and materializeStores() resolves the issue:
both functions insert code before the given IR location, so the new
insertion order guarantees that the code calculating the shadow address
sits between the address check and the memory access.
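Illustrative only (hypothetical callees, not MSan's actual code): with a
fixed insertion point, whatever is emitted first ends up first in program
order:

  IRBuilder<> B(TheStore);                      // insert point: just before the store
  B.CreateCall(CheckPtrFn, {Addr});             // emitted by materializeChecks()
  B.CreateCall(GetShadowOriginPtrFn, {Addr});   // emitted by materializeStores()
  // Resulting program order: check, shadow-address call, the store itself.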
llvm-svn: 337571
Summary: In Python 3, sys.stdout.write expects a string rather than bytes. In order to be able to write the bytes to stdout, we need to use the buffer directly instead. This change is borrowing the implementation for writing to stdout that cat.py uses. Note that we cannot use cat.py directly because the file we are trying to open is a gzip file.
Reviewers: asmith, bkramer, alexshap, jakehehrlich
Reviewed By: alexshap, jakehehrlich
Subscribers: jakehehrlich, llvm-commits
Differential Revision: https://reviews.llvm.org/D49515
llvm-svn: 337567
Summary:
Each of the four methods had a dozen lines and was doing almost exactly
the same thing: get the appropriate accelerator table kind and insert an
entry into it. I move this common logic to a helper function and make
these methods delegate to it.
This came up in the context of D49493, where I've needed to make adding
a string to a string pool slightly more complicated, and it seemed to
make sense to do it in one place instead of five.
To make this work I've needed to unify the interface of the AccelTable
data types, as some used to store DIE& and others DIE*. I chose to unify
to a reference as that's what the caller uses.
This technically isn't NFC, because it changes the StringPool used for
apple tables in the DWO case (now it uses the main file like DWARF v5
instead of the DWO file). However, that shouldn't matter, as DWO is not
a thing on apple targets (clang frontend simply ignores -gsplit-dwarf).
Reviewers: JDevlieghere, aprantl, probinson
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D49542
llvm-svn: 337562
When merging through a TokenFactor we need to check that the
load may be ordered such that no other aliasing memory operations may
happen. It is not sufficient to just check that the load is a member
of the chain token factor, as there may be an indirect chain. Require
that the load's chain have only one use.
This fixes PR37826.
Reviewers: spatel, davide, efriedma, craig.topper, RKSimon
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D49388
llvm-svn: 337560
Summary:
There is currently too much redundancy in the class/variable/* names in Scudo:
- we are in the namespace `__scudo`, so there is no point in having something
named `ScudoX` to end up with a final name of `__scudo::ScudoX`;
- there are a lot of types/* that have `Allocator` in the name; given that
Scudo is an allocator, I figure this doubles up as well.
So change a bunch of the Scudo names to make them shorter, less redundant, and
overall simpler. They should still be pretty self-explanatory (or at least they
look so to me).
The TSD part will be done in another CL (eg `__scudo::ScudoTSD`).
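e.g. (illustrative names only):

  namespace __scudo {
  // Before: __scudo::ScudoAllocator, __scudo::ScudoChunk, ...
  // After:  __scudo::Allocator,      __scudo::Chunk,      ...
  class Allocator { /* ... */ };
  }   // namespace __scudo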
Reviewers: alekseyshl, eugenis
Reviewed By: alekseyshl
Subscribers: delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D49505
llvm-svn: 337557
As documented here: https://software.intel.com/en-us/node/682969 and
https://software.intel.com/en-us/node/523346, cpu_dispatch is an ICC
feature that provides function multiversioning.
This feature is implemented with two attributes: First, cpu_specific,
which specifies the individual function versions. Second, cpu_dispatch,
which specifies the location of the resolver function and the list of
resolvable functions.
This is valuable since it provides a mechanism where the resolver's TU
can be specified in one location, and the individual implementations
each in their own translation units.
The goal of this patch is to be source-compatible with ICC, though this
implementation diverges from the ICC implementation in a few ways:
1- Linux x86/64 only: This implementation uses ifuncs in order to
properly dispatch functions. This is a valuable performance benefit
over the ICC implementation. A future patch will be provided to enable
this feature on Windows, but it will obviously fit ICC's
implementation more closely.
2- CPU Identification functions: ICC uses a set of custom functions to identify
the feature list of the host processor. This patch uses the cpu_supports
functionality in order to better align with 'target' multiversioning.
3- cpu_dispatch function def/decl: ICC's cpu_dispatch requires that the function
marked cpu_dispatch be an empty definition. This patch supports that as well;
however, declarations are also permitted, since the linker will solve the
issue of multiple emissions.
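A hedged sketch of the source syntax being implemented (CPU names are
examples; see the Intel docs linked above):

  __attribute__((cpu_specific(atom)))
  void work(void) { /* version compiled for atom */ }

  __attribute__((cpu_specific(ivybridge)))
  void work(void) { /* version compiled for ivybridge */ }

  // The dispatcher: ICC requires an empty definition; with this patch a
  // plain declaration is also accepted.
  __attribute__((cpu_dispatch(atom, ivybridge)))
  void work(void);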
Differential Revision: https://reviews.llvm.org/D47474
llvm-svn: 337552
r337531 changed the return type of MmapFixedNoReserve, but esan wasn't updated.
As a result, esan shadow setup always fails.
We probably need to make the MmapFixedNoAccess signature consistent
with MmapFixedNoReserve. But this is just to unbreak tests.
llvm-svn: 337550
This version contains a fix to add values for which the state in ParamState
changes to the worklist if the state in ValueState did not change. To avoid
adding the same value multiple times, mergeInValue returns true if it added
the value to the worklist. The value is added to the worklist depending on
its state in ValueState.
Original message:
For comparisons with parameters, we can use the ParamState lattice
elements which also provide constant range information. This improves
the code for PR33253 further and gets us closer to using
ValueLatticeElement for all values.
Also, as we are using the range information in the solver directly, we
do not need tryToReplaceWithConstantRange afterwards anymore.
Reviewers: dberlin, mssimpso, davide, efriedma
Reviewed By: mssimpso
Differential Revision: https://reviews.llvm.org/D43762
llvm-svn: 337548
This is an early step towards using SimplifyDemandedVectorElts for target shuffle combining - this merely moves the existing X86ISD::VBROADCAST simplification code to use the SimplifyDemandedVectorElts mechanism.
Adds X86TargetLowering::SimplifyDemandedVectorEltsForTargetNode to handle X86ISD::VBROADCAST - in time we can support all target shuffles (and other ops) here.
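The new hook, as declared on X86TargetLowering (an abridged sketch of the
signature as I understand the TargetLowering interface):

  bool SimplifyDemandedVectorEltsForTargetNode(SDValue Op,
                                               const APInt &DemandedElts,
                                               APInt &KnownUndef,
                                               APInt &KnownZero,
                                               TargetLoweringOpt &TLO,
                                               unsigned Depth) const override;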
llvm-svn: 337547
Summary:
This patch makes us generate the debug_names section in response to some
user-facing commands (previously it was only generated if explicitly
selected via the -accel-tables option).
My goal was to make this work for DWARF>=5 (as it's an official part of
that standard), and also, as an extension, for DWARF<5 if one is
explicitly tuning for lldb as a debugger (because it brings a large
performance improvement there).
This is slightly complicated by the fact that the debug_names tables are
incompatible with the DWARF v4 type units (they assume that the type
units are in the debug_info section), and unfortunately, right now we
generate DWARF v4-style type units even for -gdwarf-5. For this reason,
I disable all accelerator tables if the user requested type unit
generation. I do this even for apple tables, as they have the same
problem (in fact generating type units for apple targets makes us crash
even before we get around to emitting the accelerator tables).
Reviewers: JDevlieghere, aprantl, dblaikie, echristo, probinson
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D49420
llvm-svn: 337544
Currently clang does not allow an implicit cast of a pointer to a pointer
type in a different address space, but allows a C-style cast of a pointer
to a pointer type in a different address space. However, there is a bug in
Sema causing an incorrect CastExpr in the AST for the latter case, which in
turn results in invalid LLVM IR in codegen.
This is because Sema::IsQualificationConversion returns true for a cast of a
pointer to a pointer type in a different address space, which in turn allows
a standard conversion and results in a cast expression with a no-op cast kind
in the AST.
This patch fixes that by making Sema::IsQualificationConversion return false
for a cast of a pointer to a pointer type in a different address space, which
in turn disallows the standard conversion, implicit cast, and static cast.
This finally results in a reinterpret cast, and the correct conversion kind
is set.
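Illustrative of the fixed behavior, using the address_space extension:

  typedef __attribute__((address_space(1))) int as1_int;

  as1_int *g;
  void f(int *p) {
    // g = p;           // error: implicit conversion between address spaces
    g = (as1_int *)p;   // C-style cast: now a reinterpret cast
                        // (addrspacecast) instead of a no-op cast with bad IR
  }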
Differential Revision: https://reviews.llvm.org/D49294
llvm-svn: 337540
It looks like currently the UBSan blacklist is only applied when "Undefined"
is the only sanitizer selected. This patch updates the cmake file to apply it
whenever Undefined is included in the selection (e.g. 'Address;Undefined').
This allows us to use the workaround added in rL335525 when using
AddressSanitizer and UBSan together.
Reviewers: eugenis, vitalybuka
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D49558
llvm-svn: 337539