Commit Graph

294725 Commits

Author SHA1 Message Date
Lang Hames fd0c1e7169 [ORC] Replace SymbolResolvers in the new ORC layers with search orders on VSOs.
A search order is a list of VSOs to be searched linearly to find symbols. Each
VSO now has a search order that will be used when fixing up definitions in that
VSO. Each VSO's search order defaults to just that VSO itself.

This is a first step towards removing symbol resolvers from ORC altogether. In
practice symbol resolvers tended to be used to implement a search order anyway,
sometimes with additional programmatic generation of symbols. Now that VSOs
support programmatic generation of definitions via fallback generators, search
orders provide a cleaner way to achieve the desired effect (while removing a lot
of boilerplate).
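
For illustration, here is a rough sketch of the lookup behaviour described
above, written as plain C++ rather than the actual ORC interfaces (the
VSO/search-order types below are stand-ins, not the real API):

  #include <cstdint>
  #include <map>
  #include <string>
  #include <vector>

  using SymbolTable = std::map<std::string, std::uint64_t>; // symbol name -> address
  using SearchOrder = std::vector<const SymbolTable *>;     // ordered list of "VSOs" to search

  // Linear search: the first table in the order that defines the symbol wins.
  const std::uint64_t *lookup(const SearchOrder &Order, const std::string &Name) {
    for (const SymbolTable *Table : Order) {
      auto It = Table->find(Name);
      if (It != Table->end())
        return &It->second;
    }
    return nullptr; // not defined by any table in the search order
  }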

llvm-svn: 337593
2018-07-20 18:31:50 +00:00
Benjamin Kramer a2e18bba30 [Demangler] Add missing overrides
-Winconsistent-missing-override complains about this.

llvm-svn: 337592
2018-07-20 18:22:12 +00:00
Zachary Turner 91ecedd29e Fix a few warnings and style issues in MS demangler.
Also remove a broken test case.

llvm-svn: 337591
2018-07-20 18:07:33 +00:00
Craig Topper 28ac623f6f [X86] Remove isel patterns for MOVSS/MOVSD ISD opcodes with integer types.
Ideally our ISD node types going into the isel table would have types consistent with their instruction domain. This prevents us having to duplicate patterns with different types for the same instruction.

Unfortunately, it seems our shuffle combining is currently relying on this a little to remove some bitcasts. This seems to enable some switching between shufps and shufd. Hopefully there's some way we can address this in the combining.

Differential Revision: https://reviews.llvm.org/D49280

llvm-svn: 337590
2018-07-20 17:57:53 +00:00
Craig Topper 6194ccf8c7 [X86] Remove what appear to be unnecessary uses of DCI.CombineTo
CombineTo is most useful when you need to replace multiple results, avoid the worklist management, or you need to do something else after the combine, etc. Otherwise you should be able to just return the new node and let DAGCombiner go through its usual worklist code.

All of the places changed in this patch look to be standard cases where we should be able to use the more standard behavior of just returning the new node.

Differential Revision: https://reviews.llvm.org/D49569

llvm-svn: 337589
2018-07-20 17:57:42 +00:00
Zachary Turner a219bae1b7 Fix linker failure with Any.
This is due to a difference in MS ABI which is why I didn't see
it locally.  The included fix should work on all compilers.

llvm-svn: 337588
2018-07-20 17:50:53 +00:00
Artem Belevich fa07bb646c [CUDA] Provide integer SIMD functions for CUDA-9.2
CUDA-9.2 made all integer SIMD functions into compiler builtins,
so clang no longer has access to the implementation of these
functions in either headers or libdevice and has to provide
its own implementation.

This is mostly a 1:1 mapping to corresponding PTX instructions
with an exception of vhadd2/vhadd4 that don't have an equivalent
instruction and had to be implemented with a bit hack.

Performance of this implementation will be suboptimal for SM_50
and newer GPUs where PTXAS generates noticeably worse code for
the SIMD instructions compared to the code it generates
for the inline assembly generated by nvcc (or that used to come
with CUDA headers).

Differential Revision: https://reviews.llvm.org/D49274

llvm-svn: 337587
2018-07-20 17:44:34 +00:00
Simon Pilgrim 5e729dcc03 [llvm-mca][x86] Add movsx/movzx instructions to general x86_64 resource tests
llvm-svn: 337586
2018-07-20 17:43:42 +00:00
Erich Keane 1ddd4bf87f Prevent Scoped Enums from being Integral constant expressions:
Discovered because of: https://bugs.llvm.org/show_bug.cgi?id=38235

It seems to me that a scoped enum should NOT be an integral constant expression
without a cast, so this seems like a sensible change.

Attributes that check for an integer parameter simply use this function to
ensure that they have an integer, so it was previously allowing a scoped enum.

Also added a test based on Richard's feedback to ensure that case labels still work.

Differential Revision: https://reviews.llvm.org/D49599

llvm-svn: 337585
2018-07-20 17:42:09 +00:00
Zachary Turner f435a7eada Add a Microsoft Demangler.
This adds initial support for a demangling library (LLVMDemangle)
and tool (llvm-undname) for demangling Microsoft names.  This
doesn't cover 100% of cases and there are some known limitations
which I intend to address in followup patches, at least until such
time that we have (near) 100% test coverage matching up with all
of the test cases in clang/test/CodeGenCXX/mangle-ms-*.

Differential Revision: https://reviews.llvm.org/D49552

llvm-svn: 337584
2018-07-20 17:27:48 +00:00
Philip Pfaffe 2628620b62 [Any] Fix a typo: didn't use the correct argument
llvm-svn: 337583
2018-07-20 17:24:11 +00:00
Zachary Turner 0e3fbf6b8b Merge changes to ItaniumDemangle over to libcxxabi.
ItaniumDemangle had a small NFC refactor to make some of its
code reusable by the newly added Microsoft demangler.  To keep
the libcxxabi demangler as close as possible to the master copy,
this refactor is being merged over.

Differential Revision: https://reviews.llvm.org/D49575

llvm-svn: 337582
2018-07-20 17:16:49 +00:00
Alina Sbirlea 20c2962585 [MemorySSA] Add API to update MemoryPhis, following CFG changes.
Summary:
When splitting predecessors in BasicBlockUtils, we create a new block as an immediate predecessor of the original BB, then we connect a given set of predecessors to the new block.
The API in this patch will be used to update MemoryPhis for this CFG change.
If all predecessors are being moved, we move the MemoryPhi directly. Otherwise we create a new MemoryPhi in the NewBB and populate its incoming values, while deleting them from BB's Phi.
[Split from D45299 for easier review]

Reviewers: george.burgess.iv

Subscribers: sanjoy, jlebar, Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D49156

llvm-svn: 337581
2018-07-20 17:13:05 +00:00
Akira Hatanaka dbfa453e41 [CodeGen][ObjC] Make copying and disposing of a non-escaping block
no-ops.

A non-escaping block on the stack will never be called after its
lifetime ends, so it doesn't have to be copied to the heap. To prevent
a non-escaping block from being copied to the heap, this patch sets
field 'isa' of the block object to NSConcreteGlobalBlock and sets the
BLOCK_IS_GLOBAL bit of field 'flags', which causes the runtime to treat
the block as if it were a global block (calling _Block_copy on the block
just returns the original block and calling _Block_release is a no-op).

Also, a new flag bit 'BLOCK_IS_NOESCAPE' is added, which allows the
runtime or tools to distinguish between true global blocks and
non-escaping blocks.
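
For reference, a rough sketch of the flag bits involved, following the public
blocks runtime ABI (the values are assumptions based on the runtime's
Block_private.h layout, not taken from this patch):

  // Illustrative constants; check the blocks runtime headers for authoritative values.
  enum {
    BLOCK_IS_NOESCAPE = (1 << 23), // new bit: marks a non-escaping block
    BLOCK_IS_GLOBAL   = (1 << 28)  // runtime treats the block as global:
                                   // _Block_copy returns it unchanged,
                                   // _Block_release is a no-op
  };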

rdar://problem/39352313

Differential Revision: https://reviews.llvm.org/D49303

llvm-svn: 337580
2018-07-20 17:10:32 +00:00
Dan Liew c358e51e9b On Darwin switch from the `VM_MEMORY_ANALYSIS_TOOL` VM tag to
`VM_MEMORY_SANITIZER`.

It turns out that `VM_MEMORY_ANALYSIS_TOOL` is already reserved for
use by other tools so switch to a tag reserved for use by the Sanitizers.

rdar://problem/41969783

Differential Revision: https://reviews.llvm.org/D49603

llvm-svn: 337579
2018-07-20 17:07:35 +00:00
Simon Pilgrim 70fcd0f481 [X86][XOP] Fix SUB constant folding for VPSHA/VPSHL shift lowering
We can safely use getConstant here as we're still lowering, which allows constant folding to kick in and simplify the vector shift codegen.

Noticed while working on D49562.

llvm-svn: 337578
2018-07-20 16:55:18 +00:00
Alexander Potapenko 80c6f41581 [MSan] Hotfix compilation
Make sure NewSI is used in materializeStores()

llvm-svn: 337577
2018-07-20 16:52:12 +00:00
Zachary Turner 862439c7cb Change bool_constant to integral_constant.
bool_constant is C++17.

llvm-svn: 337576
2018-07-20 16:51:55 +00:00
Evandro Menezes fffa9b5897 [ARM] Add new feature to enable optimizing the VFP registers
Enable the optimization of operations on DPR and SPR via a feature instead
of checking the target.

Differential revision: https://reviews.llvm.org/D49463

llvm-svn: 337575
2018-07-20 16:49:28 +00:00
Zachary Turner bf443b9181 Add llvm::Any.
This is analogous to std::any, which is only available in C++17.
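
A minimal usage sketch, assuming an interface analogous to std::any (the
any_isa/any_cast spellings are this patch's LLVM variants; treat the snippet
as illustrative):

  #include "llvm/ADT/Any.h"
  #include <string>

  int example() {
    llvm::Any A = 42;                 // store an int
    int V = 0;
    if (llvm::any_isa<int>(A))        // query the contained type
      V = llvm::any_cast<int>(A);     // extract a copy of the value
    A = std::string("hello");         // re-assign with a different type
    return V;
  }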

Differential Revision: https://reviews.llvm.org/D48807

llvm-svn: 337573
2018-07-20 16:39:32 +00:00
Zachary Turner 83226b913d Rewrite the VS integration scripts.
This is a new modernized VS integration installer.  It adds a
Visual Studio .sln file which, when built, outputs a VSIX that can
be used to install ourselves as a "real" Visual Studio Extension.
We can even upload this extension to the visual studio marketplace.

This fixes a longstanding problem where we didn't support installing
into VS 2017 and higher.  In addition to supporting VS 2017, due
to the way this is written we no longer need to do anything special
to support future versions of VS as well.  Everything should
"just work".  This also fixes several bugs with our old integration,
such as MSBuild triggering full rebuilds when /Zi was used.

Finally, we add a new UI page called "LLVM" which becomes visible
when the LLVM toolchain is selected.  For now this only contains
one option which is the path to clang-cl.exe, but in the future
we can add more things here.

Differential Revision: https://reviews.llvm.org/D42762

llvm-svn: 337572
2018-07-20 16:30:02 +00:00
Alexander Potapenko 5ff3abbc31 [MSan] run materializeChecks() before materializeStores()
When pointer checking is enabled, it's important that every pointer is
checked before its value is used.
For stores MSan used to generate code that calculates shadow/origin
addresses from a pointer before checking it.
For userspace this isn't a problem, because the shadow calculation code
is quite simple and the compiler is able to move it after the check at -O2.
But for KMSAN getShadowOriginPtr() creates a runtime call, so we want the
check to be performed strictly before that call.

Swapping materializeChecks() and materializeStores() resolves the issue:
both functions insert code before the given IR location, so the new
insertion order guarantees that the code calculating shadow address is
between the address check and the memory access.

llvm-svn: 337571
2018-07-20 16:28:49 +00:00
Simon Pilgrim c7132031a2 [X86][SSE] Use SplitOpsAndApply to improve HADD/HSUB lowering
Improve AVX1 256-bit vector HADD/HSUB matching by using SplitOpsAndApply to split into 128-bit instructions.

llvm-svn: 337568
2018-07-20 16:20:45 +00:00
Stella Stamenova ca0547c83c [llvm-objcopy, tests] Fix several llvm-objcopy tests
Summary: In Python 3, sys.stdout.write expects a string rather than bytes. In order to be able to write the bytes to stdout, we need to use the buffer directly instead. This change is borrowing the implementation for writing to stdout that cat.py uses. Note that we cannot use cat.py directly because the file we are trying to open is a gzip file.

Reviewers: asmith, bkramer, alexshap, jakehehrlich

Reviewed By: alexshap, jakehehrlich

Subscribers: jakehehrlich, llvm-commits

Differential Revision: https://reviews.llvm.org/D49515

llvm-svn: 337567
2018-07-20 16:19:36 +00:00
Simon Pilgrim a85b86a982 [X86][AVX] Add support for i16 256-bit vector horizontal op redundant shuffle removal
llvm-svn: 337566
2018-07-20 15:51:01 +00:00
Simon Pilgrim a2bc2d488c [X86][AVX] Add v16i16 horizontal op redundant shuffle tests
llvm-svn: 337565
2018-07-20 15:41:15 +00:00
Pavel Labath 0120691f53 Fix build breakage from r337562
I changed a variable's type from pointer to reference, but forgot to
update the assert-only code.

llvm-svn: 337564
2018-07-20 15:40:24 +00:00
Nirav Dave 25802ac9fd [DAG] Avoid Node Update assertion due to AND simplification
Check for construction-time folding for incomplete AND nodes in
BackwardsPropagateMask.

Fixes PR38185.

Reviewers: RKSimon, samparker

Reviewed By: samparker

Subscribers: llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D49444

llvm-svn: 337563
2018-07-20 15:27:24 +00:00
Pavel Labath 7f7e60694e DwarfDebug: Reduce duplication in addAccel*** methods
Summary:
Each of the four methods had a dozen lines and was doing almost exactly
the same thing: get the appropriate accelerator table kind and insert an
entry into it. I move this common logic to a helper function and make
these methods delegate to it.
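
A rough sketch of the delegation pattern described (names and signatures are
illustrative, not the actual DwarfDebug interfaces):

  #include <string>

  struct DIEStub {}; // stand-in for the DIE being registered

  enum class AccelKind { Name, ObjC, Namespace, Type };

  class DwarfDebugSketch {
    // Shared helper: pick the table for Kind and insert the entry.
    void addAccelEntry(AccelKind Kind, const std::string &Name, const DIEStub &Die);

  public:
    void addAccelName(const std::string &N, const DIEStub &D)      { addAccelEntry(AccelKind::Name, N, D); }
    void addAccelObjC(const std::string &N, const DIEStub &D)      { addAccelEntry(AccelKind::ObjC, N, D); }
    void addAccelNamespace(const std::string &N, const DIEStub &D) { addAccelEntry(AccelKind::Namespace, N, D); }
    void addAccelType(const std::string &N, const DIEStub &D)      { addAccelEntry(AccelKind::Type, N, D); }
  };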

This came up in the context of D49493, where I've needed to make adding
a string to a string pool slightly more complicated, and it seemed to
make sense to do it in one place instead of five.

To make this work I've needed to unify the interface of the AccelTable
data types, as some used to store DIE& and others DIE*. I chose to unify
to a reference as that's what the caller uses.

This technically isn't NFC, because it changes the StringPool used for
apple tables in the DWO case (now it uses the main file like DWARF v5
instead of the DWO file). However, that shouldn't matter, as DWO is not
a thing on apple targets (clang frontend simply ignores -gsplit-dwarf).

Reviewers: JDevlieghere, aprantl, probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49542

llvm-svn: 337562
2018-07-20 15:24:13 +00:00
Simon Pilgrim 7c56bce996 [X86][AVX] Add support for 32/64 bits 256-bit vector horizontal op redundant shuffle removal
llvm-svn: 337561
2018-07-20 15:24:12 +00:00
Nirav Dave 5a4e11ad9c [DAG] Fix Memory ordering check in ReduceLoadOpStore.
When merging through a TokenFactor we need to check that the
load may be ordered such that no other aliasing memory operations may
happen. It is not sufficient to just check that the load is a member
of the chain token factor, as there may be an indirect chain. Require
that the load's chain has only one use.

This fixes PR37826.

Reviewers: spatel, davide, efriedma, craig.topper, RKSimon

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D49388

llvm-svn: 337560
2018-07-20 15:20:50 +00:00
Reka Kovacs 88ad704b5b [analyzer] Rename DanglingInternalBufferChecker to InnerPointerChecker.
Differential Revision: https://reviews.llvm.org/D49553

llvm-svn: 337559
2018-07-20 15:14:49 +00:00
Simon Pilgrim 8342126c4e [X86][AVX] Add 256-bit vector horizontal op redundant shuffle tests
llvm-svn: 337558
2018-07-20 15:07:53 +00:00
Kostya Kortchinsky cccd21d42c [scudo] Simplify internal names (NFC)
Summary:
There is currently too much redundancy in the class/variable/* names in Scudo:
- we are in the namespace `__scudo`, so there is no point in having something
  named `ScudoX` to end up with a final name of `__scudo::ScudoX`;
- there are a lot of types/* that have `Allocator` in the name, given that
  Scudo is an allocator, I figure this doubles up as well.

So change a bunch of the Scudo names to make them shorter, less redundant, and
overall simpler. They should still be pretty self-explanatory (or at least it
looks so to me).

The TSD part will be done in another CL (eg `__scudo::ScudoTSD`).

Reviewers: alekseyshl, eugenis

Reviewed By: alekseyshl

Subscribers: delcypher, #sanitizers, llvm-commits

Differential Revision: https://reviews.llvm.org/D49505

llvm-svn: 337557
2018-07-20 15:07:17 +00:00
Bruno Cardoso Lopes 534f4e6dd0 [www] Add CodeCompass and CodeChecker to Clang Related Projects page
llvm-svn: 337555
2018-07-20 14:46:10 +00:00
Florian Hahn ec3ca89a17 [IPSCCP] Fix for bot failure caused by r337548
llvm-svn: 337554
2018-07-20 14:37:10 +00:00
Erich Keane 3efe00206f Implement cpu_dispatch/cpu_specific Multiversioning
As documented here: https://software.intel.com/en-us/node/682969 and
https://software.intel.com/en-us/node/523346. cpu_dispatch multiversioning
is an ICC feature that provides for function multiversioning.

This feature is implemented with two attributes: First, cpu_specific,
which specifies the individual function versions. Second, cpu_dispatch,
which specifies the location of the resolver function and the list of
resolvable functions.
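
A minimal usage sketch of the two attributes, assuming the ICC-style spelling
described here (the CPU names are illustrative):

  // cpu_specific: one definition per CPU variant.
  __attribute__((cpu_specific(ivybridge))) void foo(void) { /* ivybridge-tuned body */ }
  __attribute__((cpu_specific(generic)))   void foo(void) { /* fallback body */ }

  // cpu_dispatch: an empty definition (or a declaration) listing the resolvable
  // CPUs; the compiler emits the resolver for foo here.
  __attribute__((cpu_dispatch(ivybridge, generic))) void foo(void) {}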

This is valuable since it provides a mechanism where the resolver's TU
can be specified in one location, and the individual implementations
each in their own translation units.

The goal of this patch is to be source-compatible with ICC, so this
implementation diverges from the ICC implementation in a few ways:
1- Linux x86/64 only: This implementation uses ifuncs in order to
properly dispatch functions. This is a valuable performance benefit
over the ICC implementation. A future patch will be provided to enable
this feature on Windows, but it will obviously fit ICC's
implementation more closely.
2- CPU Identification functions: ICC uses a set of custom functions to identify
the feature list of the host processor. This patch uses the cpu_supports
functionality in order to better align with 'target' multiversioning.
3- cpu_dispatch function def/decl: ICC's cpu_dispatch requires that the function
marked cpu_dispatch be an empty definition. This patch supports that as well;
however, declarations are also permitted, since the linker will solve the
issue of multiple emissions.

Differential Revision: https://reviews.llvm.org/D47474

llvm-svn: 337552
2018-07-20 14:13:28 +00:00
Simon Pilgrim f907e19b5e Regenerate partial vector fold test. NFCI.
llvm-svn: 337551
2018-07-20 13:58:57 +00:00
Dmitry Vyukov 97cf5f7f40 esan: fix shadow setup
r337531 changed the return type of MmapFixedNoReserve, but esan wasn't updated.
As a result, esan shadow setup always fails.
We probably need to make the MmapFixedNoAccess signature consistent
with MmapFixedNoReserve. But this is just to unbreak tests.

llvm-svn: 337550
2018-07-20 13:40:08 +00:00
Chen Zheng 0f609db61a [NFC][testcases] fold sdiv if two operands are negated and non-overflow
llvm-svn: 337549
2018-07-20 13:38:59 +00:00
Florian Hahn 0a560d5d9c Recommit r328307: [IPSCCP] Use constant range information for comparisons of parameters.
This version contains a fix to add values for which the state in ParamState changes
to the worklist, even if the state in ValueState did not change. To avoid adding the
same value multiple times, mergeInValue returns true if it added the value to
the worklist. The value is added to the worklist depending on its state in
ValueState.

Original message:
For comparisons with parameters, we can use the ParamState lattice
elements which also provide constant range information. This improves
the code for PR33253 further and gets us closer to use
ValueLatticeElement for all values.

Also, as we are using the range information in the solver directly, we
do not need tryToReplaceWithConstantRange afterwards anymore.

Reviewers: dberlin, mssimpso, davide, efriedma

Reviewed By: mssimpso

Differential Revision: https://reviews.llvm.org/D43762

llvm-svn: 337548
2018-07-20 13:29:12 +00:00
Simon Pilgrim 6fb8b68b2d [X86][AVX] Convert X86ISD::VBROADCAST demanded elts combine to use SimplifyDemandedVectorElts
This is an early step towards using SimplifyDemandedVectorElts for target shuffle combining - this merely moves the existing X86ISD::VBROADCAST simplification code to use the SimplifyDemandedVectorElts mechanism.

Adds X86TargetLowering::SimplifyDemandedVectorEltsForTargetNode to handle X86ISD::VBROADCAST - in time we can support all target shuffles (and other ops) here.

llvm-svn: 337547
2018-07-20 13:26:51 +00:00
Simon Pilgrim cbf5af12b0 Regenerate remainder test.
llvm-svn: 337546
2018-07-20 13:14:29 +00:00
Chen Zheng f801d0fea9 [InstSimplify] fold srem instruction if its two operands are negated.
Differential Revision: https://reviews.llvm.org/D49423

llvm-svn: 337545
2018-07-20 13:00:47 +00:00
Pavel Labath f9adc20aef [DebugInfo] Generate .debug_names section when it makes sense
Summary:
This patch makes us generate the debug_names section in response to some
user-facing commands (previously it was only generated if explicitly
selected via the -accel-tables option).

My goal was to make this work for DWARF>=5 (as it's an official part of
that standard), and also, as an extension, for DWARF<5 if one is
explicitly tuning for lldb as a debugger (because it brings a large
performance improvement there).

This is slightly complicated by the fact that the debug_names tables are
incompatible with the DWARF v4 type units (they assume that the type
units are in the debug_info section), and unfortunately, right now we
generate DWARF v4-style type units even for -gdwarf-5. For this reason,
I disable all accelerator tables if the user requested type unit
generation. I do this even for apple tables, as they have the same
problem (in fact generating type units for apple targets makes us crash
even before we get around to emitting the accelerator tables).

Reviewers: JDevlieghere, aprantl, dblaikie, echristo, probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49420

llvm-svn: 337544
2018-07-20 12:59:05 +00:00
Chen Zheng 35395a6773 [NFC][testcases] more testcases for folding srem if its two operands are negated.
llvm-svn: 337543
2018-07-20 12:53:45 +00:00
Ulrich Weigand 9dd23b8433 [SystemZ] Test case formatting fixes
Fix systematically wrong whitespace from a prior automated change.

NFC.

llvm-svn: 337542
2018-07-20 12:12:10 +00:00
Sam McCall 57743883f1 Revert "[LSV] Refactoring + supporting bitcasts to a type of different size"
This reverts commit r337489.
It causes asserts to fire in some TensorFlow tests, e.g.
tensorflow/compiler/tests/gather_test.py on GPU.

Example stack trace:
Start test case: GatherTest.testHigherRank
assertion failed at third_party/llvm/llvm/lib/Support/APInt.cpp:819 in llvm::APInt llvm::APInt::trunc(unsigned int) const: width && "Can't truncate to 0 bits"
    @     0x5559446ebe10  __assert_fail
    @     0x55593ef32f5e  llvm::APInt::trunc()
    @     0x55593d78f86e  (anonymous namespace)::Vectorizer::lookThroughComplexAddresses()
    @     0x55593d78f2bc  (anonymous namespace)::Vectorizer::areConsecutivePointers()
    @     0x55593d78d128  (anonymous namespace)::Vectorizer::isConsecutiveAccess()
    @     0x55593d78c926  (anonymous namespace)::Vectorizer::vectorizeInstructions()
    @     0x55593d78c221  (anonymous namespace)::Vectorizer::vectorizeChains()
    @     0x55593d78b948  (anonymous namespace)::Vectorizer::run()
    @     0x55593d78b725  (anonymous namespace)::LoadStoreVectorizer::runOnFunction()
    @     0x55593edf4b17  llvm::FPPassManager::runOnFunction()
    @     0x55593edf4e55  llvm::FPPassManager::runOnModule()
    @     0x55593edf563c  (anonymous namespace)::MPPassManager::runOnModule()
    @     0x55593edf5137  llvm::legacy::PassManagerImpl::run()
    @     0x55593edf5b71  llvm::legacy::PassManager::run()
    @     0x55593ced250d  xla::gpu::IrDumpingPassManager::run()
    @     0x55593ced5033  xla::gpu::(anonymous namespace)::EmitModuleToPTX()
    @     0x55593ced40ba  xla::gpu::(anonymous namespace)::CompileModuleToPtx()
    @     0x55593ced33d0  xla::gpu::CompileToPtx()
    @     0x55593b26b2a2  xla::gpu::NVPTXCompiler::RunBackend()
    @     0x55593b21f973  xla::Service::BuildExecutable()
    @     0x555938f44e64  xla::LocalService::CompileExecutable()
    @     0x555938f30a85  xla::LocalClient::Compile()
    @     0x555938de3c29  tensorflow::XlaCompilationCache::BuildExecutable()
    @     0x555938de4e9e  tensorflow::XlaCompilationCache::CompileImpl()
    @     0x555938de3da5  tensorflow::XlaCompilationCache::Compile()
    @     0x555938c5d962  tensorflow::XlaLocalLaunchBase::Compute()
    @     0x555938c68151  tensorflow::XlaDevice::Compute()
    @     0x55593f389e1f  tensorflow::(anonymous namespace)::ExecutorState::Process()
    @     0x55593f38a625  tensorflow::(anonymous namespace)::ExecutorState::ScheduleReady()::$_1::operator()()
*** SIGABRT received by PID 7798 (TID 7837) from PID 7798; ***

llvm-svn: 337541
2018-07-20 12:03:00 +00:00
Yaxun Liu 99a9f75948 Sema: Fix explicit address space cast in C++
Currently clang does not allow an implicit cast of a pointer to a pointer type
in a different address space but allows a C-style cast of a pointer to a pointer
type in a different address space. However, there is a bug in Sema causing an
incorrect CastExpr in the AST for the latter case, which in turn results in
invalid LLVM IR in codegen.

This is because Sema::IsQualificationConversion returns true for a cast of a
pointer to a pointer type in a different address space, which in turn allows
a standard conversion and results in a cast expression with no op in the AST.

This patch fixes that by letting Sema::IsQualificationConversion return false
for a cast of a pointer to a pointer type in a different address space, which
in turn disallows standard conversion, implicit cast, and static cast.
Finally it results in a reinterpret cast and the correct conversion kind is set.
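
A rough sketch of the kind of explicit cast affected, using Clang's generic
address_space attribute (the typedef and function names are illustrative):

  // With the fix, the C-style cast below is modeled as a reinterpret cast
  // instead of slipping through as a qualification conversion with no op.
  typedef __attribute__((address_space(1))) int AS1Int;

  AS1Int *toAS1(int *P) {
    return (AS1Int *)P; // explicit cast across address spaces
  }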

Differential Revision: https://reviews.llvm.org/D49294

llvm-svn: 337540
2018-07-20 11:32:51 +00:00
Florian Hahn 5c608154a7 [UBSan] Also use blacklist for 'Address; Undefined' setting
It looks like currently the UBSan blacklist is only applied when "Undefined" is selected.
This patch updates the cmake file to apply it whenever Undefined is selected
(e.g. 'Address; Undefined'). This allows us to use the workaround added in
rL335525 when using ASan and UBSan together.

Reviewers: eugenis, vitalybuka

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D49558

llvm-svn: 337539
2018-07-20 10:12:31 +00:00