Commit Graph

1746 Commits

Hiroshi Inoue ae17900997 [NFC] fix trivial typos in document and comments
"not not" -> "not" etc

llvm-svn: 330083
2018-04-14 08:59:00 +00:00
Mandeep Singh Grang 636d94db3b [Transforms] Change std::sort to llvm::sort in response to r327219
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help uncover non-determinism caused by the undefined sorting
order of objects having the same key.

To make use of that infrastructure we need to invoke llvm::sort instead of std::sort.

Note: This patch is one of a series of patches to replace *all* uses of std::sort with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.

Reviewers: kcc, pcc, danielcdh, jmolloy, sanjoy, dberlin, ruiu

Reviewed By: ruiu

Subscribers: ruiu, llvm-commits

Differential Revision: https://reviews.llvm.org/D45142

llvm-svn: 330059
2018-04-13 19:47:57 +00:00
Andrey Konovalov 1ba9d9c6ca hwasan: add -fsanitize=kernel-hwaddress flag
This patch adds the -fsanitize=kernel-hwaddress flag, which essentially enables
-hwasan-kernel=1 -hwasan-recover=1 -hwasan-match-all-tag=0xff.

Differential Revision: https://reviews.llvm.org/D45046

llvm-svn: 330044
2018-04-13 18:05:21 +00:00
Evgeniy Stepanov 1f1a7a719d hwasan: add -hwasan-match-all-tag flag
Sometimes instead of storing addresses as is, the kernel stores the address of
a page and an offset within that page, and then computes the actual address
when it needs to make an access. Because of this the pointer tag gets lost
(gets set to 0xff). The solution is to ignore all accesses tagged with 0xff.

This patch adds a -hwasan-match-all-tag flag to hwasan, which makes the
instrumentation skip the validity check for accesses through pointers with a particular tag value.
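For illustration, a minimal C++ sketch of the check this enables (names and constants here are illustrative, not the actual hwasan runtime):

  #include <cstdint>

  // With a match-all tag of 0xff, a mismatch between the pointer tag and the
  // memory tag is only reported when the pointer tag is not the match-all value.
  bool shouldReportAccess(uint8_t PtrTag, uint8_t MemTag) {
    const uint8_t kMatchAllTag = 0xff;
    return PtrTag != MemTag && PtrTag != kMatchAllTag;
  }

  int main() { return shouldReportAccess(0xff, 0x12) ? 1 : 0; }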

Patch by Andrey Konovalov.

Differential Revision: https://reviews.llvm.org/D44827

llvm-svn: 329228
2018-04-04 20:44:59 +00:00
Alexander Potapenko ac70668cff MSan: introduce the conservative assembly handling mode.
The default assembly handling mode may introduce false positives in
cases where MSan doesn't understand that the assembly call initializes
the memory pointed to by one of its arguments.

We introduce the conservative mode, which initializes the first
|sizeof(type)| bytes for every |type*| pointer passed into the
assembly statement.
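As a rough illustration (a hedged sketch using x86 GNU inline asm, not taken from the patch), consider an asm statement that writes through a pointer argument; in the conservative mode MSan would treat the first sizeof(*p) bytes behind that pointer as initialized after the call:

  #include <cstdint>

  int main() {
    uint32_t Value;                    // deliberately uninitialized
    asm volatile("movl $42, (%0)"      // the asm writes through the pointer
                 :
                 : "r"(&Value)
                 : "memory");
    // Conservative handling marks sizeof(uint32_t) bytes at &Value as
    // initialized, so this read is not flagged as a use of uninitialized memory.
    return Value == 42 ? 0 : 1;
  }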

llvm-svn: 329054
2018-04-03 09:50:06 +00:00
Peter Collingbourne d03bf12c1b DataFlowSanitizer: wrappers of functions with local linkage should have the same linkage as the function being wrapped
This patch resolves link errors when the address of a static function is taken, and that function is uninstrumented by DFSan.

This change resolves bug 36314.

Patch by Sam Kerner!

Differential Revision: https://reviews.llvm.org/D44784

llvm-svn: 328890
2018-03-30 18:37:55 +00:00
Evgeniy Stepanov 50635dab26 Add msan custom mapping options.
Similarly to https://reviews.llvm.org/D18865, this adds options to provide a custom mapping for msan.
As discussed in http://lists.llvm.org/pipermail/llvm-dev/2018-February/121339.html

Patch by vit9696(at)avp.su.

Differential Revision: https://reviews.llvm.org/D44926

llvm-svn: 328830
2018-03-29 21:18:17 +00:00
Alexander Potapenko 4e7ad0805e [MSan] Introduce ActualFnStart. NFC
This is a step towards the upcoming KMSAN implementation patch.
KMSAN is going to prepend a special basic block containing
tool-specific calls to each function. Because we still want to
instrument the original entry block, we'll need to store it in
ActualFnStart.

For MSan this will still be F.getEntryBlock(), whereas for KMSAN
it'll contain the second BB.

llvm-svn: 328697
2018-03-28 11:35:09 +00:00
Alexander Potapenko e1d5877847 [MSan] Add an isStore argument to getShadowOriginPtr(). NFC
This is a step towards the upcoming KMSAN implementation patch.
The isStore argument is to be used by getShadowOriginPtrKernel(),
it is ignored by getShadowOriginPtrUserspace().

Depending on whether a memory access is a load or a store, KMSAN
instruments it with different functions, __msan_metadata_ptr_for_load_X()
and __msan_metadata_ptr_for_store_X().

Those functions may return different values for a single address,
which is necessary in the case the runtime library decides to ignore
particular accesses.

llvm-svn: 328692
2018-03-28 10:17:17 +00:00
Rong Xu 662f38b16f [PGO] Fix branch probability remarks assert
Fixed counter/weight overflow that leads to an assertion. Also fixed the help
string for pgo-emit-branch-prob option.

Differential Revision: https://reviews.llvm.org/D44809

llvm-svn: 328653
2018-03-27 18:55:56 +00:00
David Blaikie 4fe1fe1418 Fix Layering, move instrumentation transform headers into Instrumentation subdirectory
llvm-svn: 328379
2018-03-23 22:11:06 +00:00
Alex Shlyapnikov 83e7841419 [HWASan] Port HWASan to Linux x86-64 (LLVM)
Summary:
Porting HWASan to Linux x86-64, first of the three patches, LLVM part.

The approach is similar to ARM case, trap signal is used to communicate
memory tag check failure. int3 instruction is used to generate a signal,
access parameters are stored in nop [eax + offset] instruction immediately
following the int3 one.

One notable difference is that x86-64 has to untag the pointer before use
due to the lack of feature comparable to ARM's TBI (Top Byte Ignore).

Reviewers: eugenis

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D44699

llvm-svn: 328342
2018-03-23 17:57:54 +00:00
David Blaikie 2be3922807 Fix a couple of layering violations in Transforms
Remove #include of Transforms/Scalar.h from Transform/Utils to fix layering.

Transforms depends on Transforms/Utils, not the other way around. So
remove the header and the "createStripGCRelocatesPass" function
declaration (& definition) that is unused and motivated this dependency.

Move Transforms/Utils/Local.h into Analysis because it's used by
Analysis/MemoryBuiltins.cpp.

llvm-svn: 328165
2018-03-21 22:34:23 +00:00
Alexander Potapenko fa0217276a [MSan] fix the types of RegSaveAreaPtrPtr and OverflowArgAreaPtrPtr
Despite their names, RegSaveAreaPtrPtr and OverflowArgAreaPtrPtr
used to be i8* instead of i8**.

This is important, because these pointers are dereferenced twice
(first in CreateLoad(), then in getShadowOriginPtr()), but for some
reason MSan allowed this - most certainly because it was possible
to optimize getShadowOriginPtr() away at compile time.

Differential revision: https://reviews.llvm.org/D44520

llvm-svn: 327830
2018-03-19 10:08:04 +00:00
Alexander Potapenko 014ff63f24 [MSan] Don't create zero offsets in getShadowPtrForArgument(). NFC
For MSan instrumentation with MS.ParamTLS and MS.ParamOriginTLS being
TLS variables, the CreateAdd() with ArgOffset==0 is a no-op, because
the compiler is able to fold the addition of 0.

But for KMSAN, which receives ParamTLS and ParamOriginTLS from a call
to the runtime library, this introduces a stray instruction which
complicates reading/testing the IR.

Differential revision: https://reviews.llvm.org/D44514

llvm-svn: 327829
2018-03-19 10:03:47 +00:00
Alexander Potapenko e0bafb4359 [MSan] Introduce insertWarningFn(). NFC
This is a step towards the upcoming KMSAN implementation patch.
KMSAN is going to use a different warning function,
__msan_warning_32(uptr origin), so we'd better create the warning
calls in one place.

Differential Revision: https://reviews.llvm.org/D44513

llvm-svn: 327828
2018-03-19 09:59:44 +00:00
Kuba Mracek 8842da8e07 [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part, take 3]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327061
2018-03-08 21:02:18 +00:00
Kuba Mracek f0bcbfef5c Revert r327053.
llvm-svn: 327055
2018-03-08 20:13:39 +00:00
Kuba Mracek 584bd10803 [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part, take 2]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327053
2018-03-08 20:05:45 +00:00
Kuba Mracek e834b22874 Revert r327029
llvm-svn: 327033
2018-03-08 17:32:00 +00:00
Kuba Mracek 0e06d37dba [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327029
2018-03-08 17:24:06 +00:00
Vedant Kumar 9a041a7522 [InstrProfiling] Emit the runtime hook when no counters are lowered
The API verification tool tapi has difficulty processing frameworks
which enable code coverage, but which have no code. The profile lowering
pass does not emit the runtime hook in this case because no counters are
lowered.

While the hook is not needed for program correctness (the profile
runtime doesn't have to be linked in), it's needed to allow tapi to
validate the exported symbol set of instrumented binaries.

It was not possible to add a workaround in tapi for empty binaries due
to an architectural issue: tapi generates its expected symbol set before
it inspects a binary. Changing that model has a higher cost than simply
forcing llvm to always emit the runtime hook.

rdar://36076904

Differential Revision: https://reviews.llvm.org/D43794

llvm-svn: 326350
2018-02-28 19:00:08 +00:00
Peter Collingbourne 32f5405bff Fix DataFlowSanitizer instrumentation pass to take parameter position changes into account for custom functions.
When DataFlowSanitizer transforms a call to a custom function, the
new call has extra parameters. The attributes on parameters must be
updated to take the new position of each parameter into account.

Patch by Sam Kerner!

Differential Revision: https://reviews.llvm.org/D43132

llvm-svn: 325820
2018-02-22 19:09:07 +00:00
Evgeniy Stepanov 43271b1803 [hwasan] Fix inline instrumentation.
This patch changes hwasan inline instrumentation:

- Fix address untagging for shadow address calculation (use 0xFF instead of 0x00 for the top byte; see the sketch after this list).
- Emit a brk instruction instead of hlt for both the kernel and user space.
- Use 0x900 instead of 0x100 for the brk immediate (0x100 - 0x800 are unavailable in the kernel).
- Fix and add appropriate tests.
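A hedged sketch of the shadow-address computation with the corrected untagging (the shadow scale and layout here are assumptions for illustration; the pass emits this logic as IR):

  #include <cstdint>

  // Clear the tag from the top byte before computing the shadow address.
  // Kernel pointers keep 0xFF in the top byte; user pointers get 0x00.
  uint64_t shadowAddress(uint64_t TaggedAddr, uint64_t ShadowBase, bool Kernel) {
    const unsigned kShadowScale = 4;               // 16-byte granules (assumed)
    uint64_t TopByte = Kernel ? 0xFFULL : 0x00ULL;
    uint64_t Untagged = (TaggedAddr & 0x00FFFFFFFFFFFFFFULL) | (TopByte << 56);
    return ShadowBase + (Untagged >> kShadowScale);
  }

  int main() { return shadowAddress(0x1200000012345670ULL, 0, false) != 0 ? 0 : 1; }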

Patch by Andrey Konovalov.

Differential Revision: https://reviews.llvm.org/D43135

llvm-svn: 325711
2018-02-21 19:52:23 +00:00
Evgeniy Stepanov 80ccda2d4b [hwasan] Fix kernel instrumentation of stack.
Summary:
Kernel addresses have 0xFF in the most significant byte.
A tag cannot be pushed there with OR (tag << 56);
use AND ((tag << 56) | 0x00FF..FF) instead.
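A small C++ sketch of that arithmetic (illustrative only, not the pass code):

  #include <cstdint>
  #include <cassert>

  // For a kernel pointer whose top byte is already 0xFF, OR-ing (tag << 56)
  // would leave the byte at 0xFF; AND-ing with (tag << 56) | 0x00FF..FF
  // replaces the top byte with the tag while preserving the low 56 bits.
  uint64_t setTagKernel(uint64_t Addr, uint8_t Tag) {
    return Addr & ((uint64_t(Tag) << 56) | 0x00FFFFFFFFFFFFFFULL);
  }

  int main() {
    uint64_t KernelAddr = 0xFF00000012345678ULL;
    assert(setTagKernel(KernelAddr, 0xAB) == 0xAB00000012345678ULL);
    return 0;
  }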

Reviewers: kcc, andreyknvl

Subscribers: srhines, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D42941

llvm-svn: 324691
2018-02-09 00:59:10 +00:00
Daniel Neilson 606cf6f64f [DSan] Update uses of memory intrinsic get/setAlignment to new API (NFC)
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
DataFlowSanitizer pass to cease using the old get/setAlignment() API of MemoryIntrinsic
in favour of getting source & dest specific alignments through the new API.

Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. ( rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626, rL324642, rL324653 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.

Reference
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

llvm-svn: 324654
2018-02-08 21:28:26 +00:00
Daniel Neilson a98d9d92da [ASan] Update uses of IRBuilder::CreateMemCpy to new API (NFC)
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
AddressSanitizer pass to cease using the old IRBuilder CreateMemCpy single-alignment API
in favour of the new API that allows setting source and destination alignments independently.

Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. ( rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626, rL324642 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.

Reference
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

llvm-svn: 324653
2018-02-08 21:26:12 +00:00
Daniel Neilson 57b34ce574 [MSan] Update uses of IRBuilder::CreateMemCpy to new API (NFC)
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
MemorySanitizer pass to cease using the old IRBuilder CreateMemCpy single-alignment APIs
in favour of the new API that allows setting source and destination alignments independently.

Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. ( rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.

Reference
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

llvm-svn: 324642
2018-02-08 19:46:12 +00:00
Vedant Kumar cff94627cf [InstrProfiling] Don't exit early when an unused intrinsic is found
This fixes a think-o in r323574.

llvm-svn: 323576
2018-01-27 00:01:04 +00:00
Vedant Kumar 1ee511c19c [InstrProfiling] Improve compile time when there is no work
When there are no uses of profiling intrinsics in a module, and there's
no coverage data to lower, InstrProfiling has no work to do.

llvm-svn: 323574
2018-01-26 23:54:24 +00:00
Evgeniy Stepanov 31475a039a [asan] Fix kernel callback naming in instrumentation module.
Right now clang uses the "_n" suffix for some user space callbacks and "N" for the matching kernel ones. There's no need for this, and it actually breaks the kernel build with inline instrumentation. Use the same callback names for user space and the kernel (and also make them consistent with the names GCC uses).

Patch by Andrey Konovalov.

Differential Revision: https://reviews.llvm.org/D42423

llvm-svn: 323470
2018-01-25 21:28:51 +00:00
Dmitry Vyukov 68aab34f2d asan: allow inline instrumentation for the kernel
Currently the ASan instrumentation pass forces callback
instrumentation when applied to the kernel.
This patch changes the current behavior to allow
using inline instrumentation in this case.

Authored by andreyknvl. Reviewed in:
https://reviews.llvm.org/D42384

llvm-svn: 323140
2018-01-22 19:07:11 +00:00
Daniel Neilson 1e68724d24 Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1)
Summary:
 This is a resurrection of work first proposed and discussed in Aug 2015:
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
and initially landed (but then backed out) in Nov 2015:
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

 The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
which is required to be a constant integer. It represents the alignment of the
dest (and source), and so must be the minimum of the actual alignment of the
two.

 This change is the first in a series that allows source and dest to each
have their own alignments by using the alignment attribute on their arguments.

 In this change we:
1) Remove the alignment argument.
2) Add alignment attributes to the source & dest arguments. We, temporarily,
   require that the alignments for source & dest be equal.

 For example, code which used to read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
will now read
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

 Downstream users may have to update their lit tests that check for
@llvm.memcpy/memmove/memset call/declaration patterns. The following extended sed script
may help with updating the majority of your tests, but it does not catch all possible
patterns so some manual checking and updating will be required.

s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g

 The remaining changes in the series will:
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
   source and dest alignments.
Step 3) Update Clang to use the new IRBuilder API.
Step 4) Update Polly to use the new IRBuilder API.
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
        and those that use MemIntrinsicInst::[get|set]Alignment() to use
        getDestAlignment() and getSourceAlignment() instead.
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
        MemIntrinsicInst::[get|set]Alignment() methods.

Reviewers: pete, hfinkel, lhames, reames, bollu

Reviewed By: reames

Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits

Differential Revision: https://reviews.llvm.org/D41675

llvm-svn: 322965
2018-01-19 17:13:12 +00:00
Benjamin Kramer bfc1d976ca [HWAsan] Fix uninitialized variable.
Found by msan.

llvm-svn: 322847
2018-01-18 14:19:04 +00:00
Evgeniy Stepanov 5bd669dc8f [hwasan] LLVM-level flags for linux kernel-compatible hwasan instrumentation.
Summary:
-hwasan-mapping-offset defines the non-zero shadow base address.
-hwasan-kernel disables calls to __hwasan_init in module constructors.
Unlike ASan, -hwasan-kernel does not force callback instrumentation.
This is controlled separately with -hwasan-instrument-with-calls.

Reviewers: kcc

Subscribers: srhines, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D42141

llvm-svn: 322785
2018-01-17 23:24:38 +00:00
Easwaran Raman e5b8de2f1f Add a ProfileCount class to represent entry counts.
Summary:
The class wraps a uint64_t and an enum to represent the type of profile
count (real and synthetic) with some helper methods.

Reviewers: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D41883

llvm-svn: 322771
2018-01-17 22:24:23 +00:00
Evgeniy Stepanov c07e0bd533 [hwasan] Rename sized load/store callbacks to be consistent with ASan.
Summary: __hwasan_load is now __hwasan_loadN.

Reviewers: kcc

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D42138

llvm-svn: 322601
2018-01-16 23:15:08 +00:00
Evgeniy Stepanov 080e0d40b9 [hwasan] An LLVM flag to disable stack tag randomization.
Summary: Necessary to achieve consistent test results.

Reviewers: kcc, alekseyshl

Subscribers: kubamracek, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D42023

llvm-svn: 322429
2018-01-13 01:32:15 +00:00
Daniel Neilson 2409d24201 [NFC] Change MemIntrinsicInst::setAlignment() to take an unsigned instead of a Constant
Summary:
 In preparation for https://reviews.llvm.org/D41675 this NFC changes this
prototype of MemIntrinsicInst::setAlignment() to accept an unsigned instead
of a Constant.

llvm-svn: 322403
2018-01-12 21:33:37 +00:00
Evgeniy Stepanov 99fa3e774d [hwasan] Stack instrumentation.
Summary:
Very basic stack instrumentation using tagged pointers.
The tag for the N'th alloca in a function is built as the XOR of:
 * the base tag for the function, which is just some bits of SP (a poor
   man's source of randomness)
 * a small constant which is a function of N.
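A rough C++ sketch of this tag computation (which SP bits are used is an assumption here; the real pass works on IR, not C++):

  #include <cstdint>

  // Base tag for the function: a few bits of SP as cheap per-call randomness.
  uint8_t baseTag(uint64_t SP) { return uint8_t(SP >> 20); }  // assumed shift

  // Tag for the N'th alloca: base tag XOR a small constant derived from N.
  uint8_t allocaTag(uint64_t SP, unsigned N) {
    return uint8_t(baseTag(SP) ^ uint8_t(N));
  }

  int main() { return allocaTag(0x7ffdeadbeef0ULL, 3) == uint8_t(baseTag(0x7ffdeadbeef0ULL) ^ 3) ? 0 : 1; }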

Allocas are aligned to 16 bytes. On every ReturnInst allocas are
re-tagged to catch use-after-return.

This implementation has a bunch of issues that will be taken care of
later:
1. lifetime intrinsics referring to tagged pointers are not
   recognized in SDAG. This effectively disables stack coloring.
2. Generated code is quite inefficient. There is one extra
   instruction at each memory access that adds the base tag to the
   untagged alloca address. It would be better to keep tagged SP in a
   callee-saved register and address allocas as an offset of that XOR
   retag, but that needs better coordination between hwasan
   instrumentation pass and prologue/epilogue insertion.
3. Lifetime intrinsics are ignored and use-after-scope is not
   implemented. This would be harder to do than in ASan, because we
   need to use a differently tagged pointer depending on which
   lifetime.start / lifetime.end the current instruction is dominated
   / post-dominated by.

Reviewers: kcc, alekseyshl

Subscribers: srhines, kubamracek, javed.absar, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D41602

llvm-svn: 322324
2018-01-11 22:53:30 +00:00
Benjamin Kramer 3a13ed60ba Avoid int to string conversion in Twine or raw_ostream contexts.
Some output changes from uppercase hex to lowercase hex; no other functionality change is intended.

llvm-svn: 321526
2017-12-28 16:58:54 +00:00
Evgeniy Stepanov 3fd1b1a764 [hwasan] Implement -fsanitize-recover=hwaddress.
Summary: Very similar to AddressSanitizer, with the exception of the error type encoding.

Reviewers: kcc, alekseyshl

Subscribers: cfe-commits, kubamracek, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D41417

llvm-svn: 321203
2017-12-20 19:05:44 +00:00
Xinliang David Li 19fb5b467b [PGO] add MST min edge selection heuristic to ensure non-zero entry count
Differential Revision: http://reviews.llvm.org/D41059

llvm-svn: 320998
2017-12-18 17:56:19 +00:00
Michael Zolotukhin 6af4f232b5 Remove redundant includes from lib/Transforms.
llvm-svn: 320628
2017-12-13 21:31:01 +00:00
Evgeniy Stepanov ecb48e523e [hwasan] Inline instrumentation & fixed shadow.
Summary: This brings CPU overhead on bzip2 down from 5.5x to 2x.

Reviewers: kcc, alekseyshl

Subscribers: kubamracek, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D41137

llvm-svn: 320538
2017-12-13 01:16:34 +00:00
Hiroshi Yamauchi f3bda1daa2 Split IndirectBr critical edges before PGO gen/use passes.
Summary:
The PGO gen/use passes currently fail with an assert failure if there's a
critical edge whose source is an IndirectBr instruction and that edge
needs to be instrumented.

To avoid this in certain cases, split IndirectBr critical edges in the PGO
gen/use passes. This works for blocks with single indirectbr predecessors,
but not for those with multiple indirectbr predecessors (splitting an
IndirectBr critical edge isn't always possible.)

Reviewers: davidxl, xur

Reviewed By: davidxl

Subscribers: efriedma, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D40699

llvm-svn: 320511
2017-12-12 19:07:43 +00:00
Adrian Prantl 3c6c14d14b ASAN: Provide reliable debug info for local variables at -O0.
The function stack poisoner conditionally stores local variables
either in an alloca or in malloc'ated memory, which has the
unfortunate side effect that the actual address of the variable is
only materialized when the variable is accessed, which means that
those variables are mostly invisible to the debugger even when
compiling without optimizations.

This patch stores the address of the local stack base into an alloca,
which can be referred to by the debug info and is available throughout
the function. This adds one extra pointer-sized alloca to each stack
frame (but mem2reg can optimize it away again when optimizations are
enabled, yielding roughly the same debug info quality as before in
optimized code).

rdar://problem/30433661

Differential Revision: https://reviews.llvm.org/D41034

llvm-svn: 320415
2017-12-11 20:43:21 +00:00
Alexander Potapenko 3c934e4864 [MSan] Hotfix compilation
For some reason the override directives got removed in r320373.
I suspect this to be an unwanted effect of clang-format.

llvm-svn: 320381
2017-12-11 15:48:56 +00:00
Alexander Potapenko c07e6a0eff [MSan] introduce getShadowOriginPtr(). NFC.
This patch introduces getShadowOriginPtr(), a method that obtains both the shadow and origin pointers for an address as a Value pair.
The existing callers of getShadowPtr() and getOriginPtr() are updated to use getShadowOriginPtr().

The rationale for this change is to simplify KMSAN instrumentation implementation.
In KMSAN, origin tracking is always enabled, and there's no direct mapping between the app memory and the shadow/origin pages.
Both the shadow and the origin pointer for a given address are obtained by calling a single runtime hook from the instrumentation,
therefore it's easier to work with those pointers together.
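A minimal sketch of the API shape, with a stand-in type instead of llvm::Value:

  #include <utility>

  struct Value;  // stand-in for llvm::Value

  // Return the shadow and origin pointers for Addr as one pair instead of
  // computing them through two separate helpers.
  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr) {
    Value *ShadowPtr = nullptr;  // computed from Addr (or via a runtime call in KMSAN)
    Value *OriginPtr = nullptr;  // likewise
    return {ShadowPtr, OriginPtr};
  }

  int main() { return getShadowOriginPtr(nullptr).first == nullptr ? 0 : 1; }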

Reviewed at https://reviews.llvm.org/D40835.

llvm-svn: 320373
2017-12-11 15:05:22 +00:00
Xinliang David Li fa3f1a15b2 [PGO] change arg type to uint64_t to match member field type
llvm-svn: 320285
2017-12-10 07:39:53 +00:00
Kamil Rytarowski 3d3f91e832 Register NetBSD/x86_64 in MemorySanitizer.cpp
Summary:
Reuse the Linux new mapping as it is.

Sponsored by <The NetBSD Foundation>

Reviewers: joerg, eugenis, vitalybuka

Reviewed By: vitalybuka

Subscribers: llvm-commits, #sanitizers

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D41022

llvm-svn: 320219
2017-12-09 00:32:09 +00:00
Evgeniy Stepanov c667c1f47a Hardware-assisted AddressSanitizer (llvm part).
Summary:
This is LLVM instrumentation for the new HWASan tool. It is basically
a stripped down copy of ASan at this point, w/o stack or global
support. Instrumentation adds a global constructor + runtime callbacks
for every load and store.

HWASan comes with its own IR attribute.

A brief design document can be found in
clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).

Reviewers: kcc, pcc, alekseyshl

Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D40932

llvm-svn: 320217
2017-12-09 00:21:41 +00:00
Adrian Prantl d13170174c Generalize llvm::replaceDbgDeclare and actually support the use-case that
is mentioned in the documentation (inserting a deref before the plus_uconst).

llvm-svn: 320203
2017-12-08 21:58:18 +00:00
Xinliang David Li d91057bf52 Revert r320104: infinite loop profiling bug fix
Causes an unexpected memory issue with the new PM this time.
The new PM invalidates BPI but not BFI, leaving the
reference to BPI from BFI invalid.

Abandon this patch. There is a more general solution
which also handles runtime infinite loops (but not statically).

llvm-svn: 320180
2017-12-08 19:38:07 +00:00
Bill Seurer 957a076cce [PowerPC][asan] Update asan to handle changed memory layouts in newer kernels
In more recent Linux kernels with 47 bit VMAs the layout of virtual memory
for powerpc64 changed, causing the address sanitizer to not work properly. This
patch adds support for 47 bit VMA kernels for powerpc64 and fixes up test
cases.

https://reviews.llvm.org/D40907

There is an associated patch for compiler-rt.

Tested on several 4.x and 3.x kernel releases.

llvm-svn: 320109
2017-12-07 22:53:33 +00:00
Xinliang David Li 4b0027f671 [PGO] detect infinite loop and form MST properly
Differential Revision: http://reviews.llvm.org/D40873

llvm-svn: 320104
2017-12-07 22:23:28 +00:00
Matthew Simpson e363d2cebb [PGO] Make indirect call promotion a utility
This patch factors out the main code transformation utilities in the pgo-driven
indirect call promotion pass and places them in Transforms/Utils. The change is
intended to be a non-functional change, letting non-pgo-driven passes share a
common implementation with the existing pgo-driven pass.

The common utilities are used to conditionally promote indirect call sites to
direct call sites. They perform the underlying transformation, and do not
consider profile information. The pgo-specific details (e.g., the computation
of branch weight metadata) have been left in the indirect call promotion pass.
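As a source-level illustration of what the promotion utility does at the IR level (a sketch, not the utility's own code):

  typedef int (*Fn)(int);

  int hotTarget(int X) { return X + 1; }

  int callSite(Fn FP, int X) {
    // Before promotion the call is simply:  return FP(X);
    // After conditional promotion:
    if (FP == &hotTarget)
      return hotTarget(X);  // direct call; can now be inlined
    return FP(X);           // fallback indirect call
  }

  int main() { return callSite(&hotTarget, 1) == 2 ? 0 : 1; }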

Differential Revision: https://reviews.llvm.org/D40658

llvm-svn: 319963
2017-12-06 21:22:54 +00:00
Xinliang David Li 45c819063a Revert r319794: [PGO] detect infinite loop and form MST properly: memory leak problem
llvm-svn: 319841
2017-12-05 21:54:01 +00:00
Xinliang David Li cc35bc9efc [PGO] detect infinite loop and form MST properly
Differential Revision: http://reviews.llvm.org/D40702

llvm-svn: 319794
2017-12-05 17:19:41 +00:00
Evgeniy Stepanov 4a8d151986 [msan] Add a fixme note for a minor deficiency.
llvm-svn: 319708
2017-12-04 22:50:39 +00:00
Xinliang David Li c23d2c6883 [PGO] Skip counter promotion for infinite loops
Differential Revision: http://reviews.llvm.org/D40662

llvm-svn: 319462
2017-11-30 19:16:25 +00:00
Alexander Potapenko 9e5477f473 MSan: remove an unnecessary cast. NFC for userspace instrumentation.
llvm-svn: 318923
2017-11-23 15:06:51 +00:00
Alexander Potapenko 391804f54b [MSan] Move the access address check before the shadow access for that address
MSan used to insert the shadow check of the store pointer operand
_after_ the shadow of the value operand had been written.
This happens to work in userspace, as the whole shadow range is
always mapped. However, in the kernel the shadow page may not exist, so
the bug may cause a crash.

This patch moves the address check in front of the shadow access.

llvm-svn: 318901
2017-11-23 08:34:32 +00:00
Vitaly Buka 8000f228b3 [msan] Don't sanitize "nosanitize" instructions
Reviewers: eugenis

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D40205

llvm-svn: 318708
2017-11-20 23:37:56 +00:00
Hiroshi Yamauchi c94d4d70d8 Add heuristics for irreducible loop metadata under PGO
Summary:
Add the following heuristics for irreducible loop metadata:

- When an irreducible loop header is missing the loop header weight metadata,
  give it the minimum weight seen among other headers.
- Annotate indirectbr targets with the loop header weight metadata (as they are
  likely to become irreducible loop headers after indirectbr tail duplication.)

These greatly improve the accuracy of the block frequency info of the Python
interpreter loop (eg. from ~3-16x off down to ~40-55% off) and the Python
performance (eg. unpack_sequence from ~50% slower to ~8% faster than GCC) due to
better register allocation under PGO.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39980

llvm-svn: 318693
2017-11-20 21:03:38 +00:00
Evgeniy Stepanov 8e7018d92f [asan] Use dynamic shadow on 32-bit Android, try 2.
Summary:
This change reverts r318575 and changes FindDynamicShadowStart() to
keep the memory range it found mapped PROT_NONE to make sure it is
not reused. We also skip MemoryRangeIsAvailable() check, because it
is (a) unnecessary, and (b) would fail anyway.

Reviewers: pcc, vitalybuka, kcc

Subscribers: srhines, kubamracek, mgorny, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D40203

llvm-svn: 318666
2017-11-20 17:41:57 +00:00
Evgeniy Stepanov 9d564cdcb0 Revert "[asan] Use dynamic shadow on 32-bit Android" and 3 more.
Revert the following commits:
  r318369 [asan] Fallback to non-ifunc dynamic shadow on android<22.
  r318235 [asan] Prevent rematerialization of &__asan_shadow.
  r317948 [sanitizer] Remove unnecessary attribute hidden.
  r317943 [asan] Use dynamic shadow on 32-bit Android.

MemoryRangeIsAvailable() reads /proc/$PID/maps into an mmap-ed buffer
that may overlap with the address range that we plan to use for the
dynamic shadow mapping. This is causing random startup crashes.

llvm-svn: 318575
2017-11-18 00:22:34 +00:00
Walter Lee 8f1545c629 [asan] Fix small X86_64 ShadowOffset for non-default shadow scale
The requirement is that shadow memory must be aligned to page
boundaries (4k in this case).  Use a closed form equation that always
satisfies this requirement.

Differential Revision: https://reviews.llvm.org/D39471

llvm-svn: 318421
2017-11-16 17:03:00 +00:00
Walter Lee 2a2b69e9c7 [asan] Fix size/alignment issues with non-default shadow scale
Fix a couple of places where the minimum alignment/size should be a
function of the shadow granularity:
- alignment of AllGlobals
- the minimum left redzone size on the stack

Added a test to verify that the metadata_array is properly aligned
for shadow scale of 5, to be enabled when we add build support
for testing shadow scale of 5.

Differential Revision: https://reviews.llvm.org/D39470

llvm-svn: 318395
2017-11-16 12:57:19 +00:00
Evgeniy Stepanov 396ed67950 [asan] Fallback to non-ifunc dynamic shadow on android<22.
Summary: Android < 22 does not support ifunc.

Reviewers: pcc

Subscribers: srhines, kubamracek, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D40116

llvm-svn: 318369
2017-11-16 02:52:19 +00:00
Evgeniy Stepanov cff19ee233 [asan] Prevent rematerialization of &__asan_shadow.
Summary:
In the mode when ASan shadow base is computed as the address of an
external global (__asan_shadow, currently on android/arm32 only),
regalloc prefers to rematerialize this value to save register spills.
Even in -Os. On arm32 it is rather expensive (2 loads + 1 constant
pool entry).

This change adds an inline asm in the function prologue to suppress
this behavior. It reduces AsanTest binary size by 7%.
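A rough C-level analogue of the trick (illustrative only; the pass emits the inline asm at the IR level, and a local global here stands in for the real __asan_shadow):

  static char ShadowStandIn[1];  // stands in for the external __asan_shadow

  unsigned long pinnedShadowBase() {
    unsigned long Base = (unsigned long)&ShadowStandIn;
    asm volatile("" : "+r"(Base));  // opaque to the optimizer, so the value is
                                    // materialized once rather than rematerialized
    return Base;
  }

  int main() { return pinnedShadowBase() != 0 ? 0 : 1; }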

Reviewers: pcc, vitalybuka

Subscribers: aemerson, kristof.beyls, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D40048

llvm-svn: 318235
2017-11-15 00:11:51 +00:00
Chandler Carruth 00a301d568 [PM] Port BoundsChecking to the new PM.
Registers it and everything, updates all the references, etc.

Next patch will add support to Clang's `-fexperimental-new-pass-manager`
path to actually enable BoundsChecking correctly.

Differential Revision: https://reviews.llvm.org/D39084

llvm-svn: 318128
2017-11-14 01:30:04 +00:00
Chandler Carruth 1594feea94 [PM] Refactor BoundsChecking further to prepare it to be exposed both as
a legacy and new PM pass.

This essentially moves the class state to parameters and re-shuffles the
code to make that reasonable. It also does some minor cleanups along the
way and leaves some comments.

Differential Revision: https://reviews.llvm.org/D39081

llvm-svn: 318124
2017-11-14 01:13:59 +00:00
Hans Wennborg 08b34a017a Update some code.google.com links
llvm-svn: 318115
2017-11-13 23:47:58 +00:00
Bill Seurer 44156a0efb [PowerPC][msan] Update msan to handle changed memory layouts in newer kernels
In more recent Linux kernels (including those with 47 bit VMAs) the layout of
virtual memory for powerpc64 changed, causing the memory sanitizer to not
work properly. This patch adjusts a bit mask in the memory sanitizer to work
on the newer kernels while continuing to work on the older ones as well.

This is the non-runtime part of the patch and finishes it. ref: r317802

Tested on several 4.x and 3.x kernel releases.

llvm-svn: 318045
2017-11-13 15:43:19 +00:00
Evgeniy Stepanov 989299c42b [asan] Use dynamic shadow on 32-bit Android.
Summary:
The following kernel change has moved ET_DYN base to 0x4000000 on arm32:
https://marc.info/?l=linux-kernel&m=149825162606848&w=2

Switch to dynamic shadow base to avoid such conflicts in the future.

Reserve shadow memory in an ifunc resolver, but don't use it in the instrumentation
until PR35221 is fixed. This will eventually let us save one load per function.

Reviewers: kcc

Subscribers: aemerson, srhines, kubamracek, kristof.beyls, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D39393

llvm-svn: 317943
2017-11-10 22:27:48 +00:00
Hiroshi Yamauchi dce9def3dd Irreducible loop metadata for more accurate block frequency under PGO.
Summary:
Currently the block frequency analysis is an approximation for irreducible
loops.

The new irreducible loop metadata is used to annotate the irreducible loop
headers with their header weights based on the PGO profile (currently this is
approximated to be evenly weighted) and to help improve the accuracy of the
block frequency analysis for irreducible loops.

This patch is a basic support for this.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: mehdi_amini, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D39028

llvm-svn: 317278
2017-11-02 22:26:51 +00:00
Reid Kleckner c212cc88e2 [asan] Upgrade private linkage globals to internal linkage on COFF
COFF comdats require symbol table entries, which means the comdat leader
cannot have private linkage.

llvm-svn: 317009
2017-10-31 16:16:08 +00:00
Eugene Zelenko fce435764e [Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 316253
2017-10-21 00:57:46 +00:00
Eugene Zelenko bff0ef0324 [Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 316190
2017-10-19 22:07:16 +00:00
Chandler Carruth 3f0e056df4 [PM] Refactor the bounds checking pass to remove a method only called in
one place.

llvm-svn: 316135
2017-10-18 22:42:36 +00:00
Marco Castelluccio 0dcf64ad20 Disable gcov instrumentation of functions using funclet-based exception handling
Summary: This patch fixes the crash from https://bugs.llvm.org/show_bug.cgi?id=34659 and https://bugs.llvm.org/show_bug.cgi?id=34833.

Reviewers: rnk, majnemer

Reviewed By: rnk, majnemer

Subscribers: majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D38223

llvm-svn: 315677
2017-10-13 13:49:15 +00:00
Vivek Pandya 9590658fb8 [NFC] Convert OptimizationRemarkEmitter old emit() calls to new closure
parameterized emit() calls

Summary: This is not functional change to adopt new emit() API added in r313691.

Reviewed By: anemet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38285

llvm-svn: 315476
2017-10-11 17:12:59 +00:00
Adam Nemet 0965da2055 Rename OptimizationDiagnosticInfo.* to OptimizationRemarkEmitter.*
Sync it up with the name of the class actually defined here.  This has been
bothering me for a while...

llvm-svn: 315249
2017-10-09 23:19:02 +00:00
Dehao Chen 9bd60429e2 Directly return promoted direct call instead of rely on stripPointerCast.
Summary: stripPointerCasts does not reliably return the value that is being cast. Instead it may look further at function attributes to propagate the value. Instead of relying on stripPointerCasts, the more reliable solution is to directly use the pointer to the promoted direct call.

Reviewers: tejohnson, davidxl

Reviewed By: tejohnson

Subscribers: llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D38603

llvm-svn: 315077
2017-10-06 17:04:55 +00:00
Sylvestre Ledru e7d4cd639b Don't move llvm.localescape outside the entry block in the GCOV profiling pass
Summary:
This fixes https://bugs.llvm.org/show_bug.cgi?id=34714.

Patch by Marco Castelluccio

Reviewers: rnk

Reviewed By: rnk

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38224

llvm-svn: 314201
2017-09-26 11:56:43 +00:00
Vlad Tsyrklevich 998b220e97 Add section headers to SpecialCaseLists
Summary:
Sanitizer blacklist entries currently apply to all sanitizers--there
is no way to specify that an entry should only apply to a specific
sanitizer. This is important for Control Flow Integrity since there are
several different CFI modes that can be enabled at once. For maximum
security, CFI blacklist entries should be scoped to only the specific
CFI mode(s) that entry applies to.

Adding section headers to SpecialCaseLists allows users to specify more
information about list entries, like sanitizer names or other metadata,
like so:

  [section1]
  fun:*fun1*
  [section2|section3]
  fun:*fun23*

The section headers are regular expressions. For backwards compatibility,
blacklist entries entered before a section header are put into the '[*]'
section so that blacklists without sections retain the same behavior.

SpecialCaseList has been modified to also accept a section name when
matching against the blacklist. It has also been modified so the
follow-up change to clang can define a derived class that allows
matching sections by SectionMask instead of by string.

Reviewers: pcc, kcc, eugenis, vsk

Reviewed By: eugenis, vsk

Subscribers: vitalybuka, llvm-commits

Differential Revision: https://reviews.llvm.org/D37924

llvm-svn: 314170
2017-09-25 22:11:11 +00:00
Matt Morehouse 4881a23ca8 [MSan] Disable sanitization for __sanitizer_dtor_callback.
Summary:
Eliminate unnecessary instrumentation at __sanitizer_dtor_callback
call sites.  Fixes https://github.com/google/sanitizers/issues/861.

Reviewers: eugenis, kcc

Reviewed By: eugenis

Subscribers: vitalybuka, llvm-commits, cfe-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D38063

llvm-svn: 313831
2017-09-20 22:53:08 +00:00
Reid Kleckner 1aa4ea8104 [gcov] Emit errors when opening the notes file fails
No time to write a test case, on to the next bug. =P

Discovered while investigating PR34659

llvm-svn: 313571
2017-09-18 21:31:48 +00:00
Hiroshi Yamauchi a43913cfaf Add options to dump PGO counts in text.
Summary:
Added text options to -pgo-view-counts and -pgo-view-raw-counts that dump block frequency and branch probability info in text.

This is useful when the graph is very large and complex (the dot command crashes, lines/edges too close to tell apart, hard to navigate without textual search) or simply when text is preferred.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37776

llvm-svn: 313159
2017-09-13 17:20:38 +00:00
Kostya Serebryany 4192b96313 [sanitizer-coverage] call appendToUsed once per module, not once per function (which is too slow)
llvm-svn: 312855
2017-09-09 05:30:13 +00:00
Matt Morehouse 034126e507 [SanitizeCoverage] Enable stack-depth coverage for -fsanitize=fuzzer
Summary:
- Don't sanitize __sancov_lowest_stack.
- Don't instrument leaf functions.
- Add CoverageStackDepth to Fuzzer and FuzzerNoLink.
- Only enable on Linux.

Reviewers: vitalybuka, kcc, george.karpenkov

Reviewed By: kcc

Subscribers: kubamracek, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D37156

llvm-svn: 312185
2017-08-30 22:49:31 +00:00
Matt Morehouse ba2e61b357 Revert "[SanitizeCoverage] Enable stack-depth coverage for -fsanitize=fuzzer"
This reverts r312026 due to bot breakage.

llvm-svn: 312047
2017-08-29 21:56:56 +00:00
Matt Morehouse 2ad8d948b2 [SanitizeCoverage] Enable stack-depth coverage for -fsanitize=fuzzer
Summary:
- Don't sanitize __sancov_lowest_stack.
- Don't instrument leaf functions.
- Add CoverageStackDepth to Fuzzer and FuzzerNoLink.
- Disable stack depth tracking on Mac.

Reviewers: vitalybuka, kcc, george.karpenkov

Reviewed By: kcc

Subscribers: kubamracek, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D37156

llvm-svn: 312026
2017-08-29 19:48:12 +00:00
Dehao Chen efd007f6f4 Add null check for promoted direct call
Summary: We originally assumed that in pgo-icp the promoted direct call would never be null after stripping pointer casts. However, stripPointerCasts is so smart that it could possibly return the value of the function call if it knows that the return value is always an argument. In this case, the returned value cannot be cast to an Instruction. In this patch, a null check is added to ensure a null pointer will not be accessed.

Reviewers: tejohnson, xur, davidxl, djasper

Reviewed By: tejohnson

Subscribers: llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D37252

llvm-svn: 312005
2017-08-29 15:28:12 +00:00
Justin Bogner f1a54a47b0 [sanitizer-coverage] Mark the guard and 8-bit counter arrays as used
In r311742 we marked the PCs array as used so it wouldn't be dead
stripped, but left the guard and 8-bit counters arrays alone since
these are referenced by the coverage instrumentation. This doesn't
quite work if we want the indices of the PCs array to match the other
arrays though, since elements can still end up being dead and
disappear.

Instead, we mark all three of these arrays as used so that they'll be
consistent with one another.

llvm-svn: 311959
2017-08-29 00:11:05 +00:00
Justin Bogner 873a0746f1 [sanitizer-coverage] Return the array from CreatePCArray. NFC
Be more consistent with CreateFunctionLocalArrayInSection in the API
of CreatePCArray, and assign the member variable in the caller like we
do for the guard and 8-bit counter arrays.

This also tweaks the order of method declarations to match the order
of definitions in the file.

llvm-svn: 311955
2017-08-28 23:46:11 +00:00
Justin Bogner be757de2b6 [sanitizer-coverage] Clean up trailing whitespace. NFC
llvm-svn: 311954
2017-08-28 23:38:12 +00:00
Kamil Rytarowski a9f404f813 Define NetBSD/amd64 ASAN Shadow Offset
Summary:
Catch up after compiler-rt changes and define kNetBSD_ShadowOffset64
as (1ULL << 46).
 
Sponsored by <The NetBSD Foundation>

Reviewers: kcc, joerg, filcab, vitalybuka, eugenis

Reviewed By: eugenis

Subscribers: llvm-commits, #sanitizers

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D37234

llvm-svn: 311941
2017-08-28 22:13:52 +00:00
Taewook Oh 572f45a3c8 Create PHI node for the return value only when the return value has uses.
Summary:
Currently, a phi node is created in the normal destination to unify the return values from promoted calls and the original indirect call. This patch creates this phi node only when the return value has uses.

This patch is necessary to generate valid code, as the compiler crashes with the attached test case without it. Without this patch, an illegal phi node that has no incoming value from `entry`/`catch` is created in the `cleanup` block.

I think existing implementation is good as far as there is at least one use of the original indirect call. `insertCallRetPHI` creates a new phi node in the normal destination block only when the original indirect call dominates its use and the normal destination block. Otherwise, `fixupPHINodeForNormalDest` will handle the unification of return values naturally without creating a new phi node. However, if there's no use, `insertCallRetPHI` still creates a new phi node even when the original indirect call does not dominate the normal destination block, because `getCallRetPHINode` returns false.

Reviewers: xur, davidxl, danielcdh

Reviewed By: xur

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37176

llvm-svn: 311906
2017-08-28 18:57:00 +00:00