Commit Graph

145 Commits

Author SHA1 Message Date
Evgeniy Stepanov 28f330fd6f [msan] Don't check divisor shadow in fdiv.
Summary:
Floating point division by zero or even undef does not have undefined
behavior and may occur due to optimizations.

Fixes https://bugs.llvm.org/show_bug.cgi?id=37523.
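
For illustration, a minimal sketch of the instrumented IR (shadow names
and the exact propagation shape are assumptions, not taken from the patch):

  %d = fdiv float %x, %y
  ; shadow: %sd = or i32 %sx, %sy    ; the divisor shadow only propagates
  ; no icmp/br to __msan_warning is emitted for the shadow of %y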

Reviewers: kcc

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D47085

llvm-svn: 332761
2018-05-18 20:19:53 +00:00
Evgeniy Stepanov 091fed94ae [msan] Instrument masked.store, masked.load intrinsics.
Summary: Instrument masked store/load intrinsics.
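
For reference, a sketch of the intrinsics now instrumented (vector width
and element type illustrative):

  declare void @llvm.masked.store.v4i32.p0v4i32(<4 x i32>, <4 x i32>*, i32, <4 x i1>)
  declare <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>*, i32, <4 x i1>, <4 x i32>)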

Reviewers: kcc

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D46785

llvm-svn: 332402
2018-05-15 21:28:25 +00:00
Craig Topper 38ad7ddabc [X86] Remove and autoupgrade cvtsi2ss/cvtsi2sd intrinsics to match what clang has used for a very long time.
llvm-svn: 332186
2018-05-12 23:14:39 +00:00
Shiva Chen 2c864551df [DebugInfo] Add DILabel metadata and intrinsic llvm.dbg.label.
In order to set breakpoints on labels and list source code around
labels, we need to collect debug information for labels, i.e., the label
name, the function the label belongs to, the line number in the file, and
the address at which the label is located. To keep this information in
LLVM IR and to allow the backend to generate debug information correctly,
we create a new kind of metadata for labels, DILabel. The format of
DILabel is

!DILabel(scope: !1, name: "foo", file: !2, line: 3)

We hope to keep as much debug information as possible even when the
code is optimized. So, we create a new kind of intrinsic for label
metadata, to avoid the metadata being eliminated along with its basic
block. The intrinsic survives as long as it is not optimized out. The
format of the intrinsic is

llvm.dbg.label(metadata !1)

It has only one argument, the DILabel metadata. The intrinsic
immediately follows the label, so the backend can get the label
metadata through the intrinsic's parameter.
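
Putting the two together, a minimal sketch (metadata numbering
illustrative):

  declare void @llvm.dbg.label(metadata)

  done:                                     ; the label in the source
    call void @llvm.dbg.label(metadata !1)

  !1 = !DILabel(scope: !2, name: "done", file: !3, line: 7)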

We also create DIBuilder APIs for labels, to be used by frontends. A
frontend can use createLabel() to allocate DILabel objects, and use
insertLabel() to insert the llvm.dbg.label intrinsic into LLVM IR.

Differential Revision: https://reviews.llvm.org/D45024

Patch by Hsiangkai Wang.

llvm-svn: 331841
2018-05-09 02:40:45 +00:00
Chandler Carruth 16429acacb [x86] Revert r330322 (& r330323): Lowering x86 adds/addus/subs/subus intrinsics
The LLVM commit introduces a crash in LLVM's instruction selection.

I filed http://llvm.org/PR37260 with the test case.

llvm-svn: 330997
2018-04-26 21:46:01 +00:00
Alexander Ivchenko e8fed1546e Lowering x86 adds/addus/subs/subus intrinsics (llvm part)
This is the patch that lowers the x86 adds/addus/subs/subus intrinsics
to native IR in order to enable optimizations. The patch also includes
folding of previously missing saturation patterns, so that the resulting
IR compiles to the same machine instructions as the intrinsics did.

Patch by tkrupa

Differential Revision: https://reviews.llvm.org/D44785

llvm-svn: 330322
2018-04-19 12:13:30 +00:00
Alexander Potapenko ac70668cff MSan: introduce the conservative assembly handling mode.
The default assembly handling mode may introduce false positives in
cases where MSan doesn't understand that the assembly call initializes
the memory pointed to by one of its arguments.

We introduce the conservative mode, which initializes the first
|sizeof(type)| bytes for every |type*| pointer passed into the
assembly statement.
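
A minimal sketch of what the conservative mode guarantees (asm body and
types illustrative):

  %p = alloca i32
  call void asm sideeffect "movl $$0, ($0)", "r"(i32* %p)
  ; conservative mode treats the first sizeof(i32) = 4 bytes at %p as
  ; initialized, since the asm may have stored to them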

llvm-svn: 329054
2018-04-03 09:50:06 +00:00
Evgeniy Stepanov 50635dab26 Add msan custom mapping options.
Similarly to https://reviews.llvm.org/D18865, this adds options to provide a custom mapping for MSan,
as discussed in http://lists.llvm.org/pipermail/llvm-dev/2018-February/121339.html.
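
Schematically, the address transform these options parameterize looks
like this (a sketch of MSan's mapping, not code from the patch):

  ; offset = (addr & ~AndMask) ^ XorMask
  ; shadow address = offset + ShadowBase
  ; origin address = offset + OriginBase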

Patch by vit9696(at)avp.su.

Differential Revision: https://reviews.llvm.org/D44926

llvm-svn: 328830
2018-03-29 21:18:17 +00:00
Daniel Neilson 1e68724d24 Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1)
Summary:
 This is a resurrection of work first proposed and discussed in Aug 2015:
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
and initially landed (but then backed out) in Nov 2015:
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

 The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
which is required to be a constant integer. It represents the alignment of the
dest (and source), and so must be the minimum of the actual alignments of
the two.

 This change is the first in a series that allows source and dest to each
have their own alignments by using the alignment attribute on their arguments.

 In this change we:
1) Remove the alignment argument.
2) Add alignment attributes to the source & dest arguments. We, temporarily,
   require that the alignments for source & dest be equal.

 For example, code which used to read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
will now read
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

 Downstream users may have to update their lit tests that check for
@llvm.memcpy/memmove/memset call/declaration patterns. The following extended sed script
may help with updating the majority of your tests, but it does not catch all possible
patterns so some manual checking and updating will be required.

s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g

 The remaining changes in the series will:
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
   source and dest alignments.
Step 3) Update Clang to use the new IRBuilder API.
Step 4) Update Polly to use the new IRBuilder API.
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
        and those that use MemIntrinsicInst::[get|set]Alignment() to use
        getDestAlignment() and getSourceAlignment() instead.
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
        MemIntrinsicInst::[get|set]Alignment() methods.

Reviewers: pete, hfinkel, lhames, reames, bollu

Reviewed By: reames

Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits

Differential Revision: https://reviews.llvm.org/D41675

llvm-svn: 322965
2018-01-19 17:13:12 +00:00
Alexander Potapenko 391804f54b [MSan] Move the access address check before the shadow access for that address
MSan used to insert the shadow check of the store pointer operand
_after_ the shadow of the value operand has been written.
This happens to work in userspace, as the whole shadow range is
always mapped. However in the kernel the shadow page may not exist, so
the bug may cause a crash.

This patch moves the address check in front of the shadow access.
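
Schematically, for "store i32 %v, i32* %p" the instrumented sequence is now:

  ; 1) check shadow(%p), report if the address is uninitialized  ; moved first
  ; 2) store shadow(%v) into the shadow of %p
  ; 3) store i32 %v, i32* %p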

llvm-svn: 318901
2017-11-23 08:34:32 +00:00
Vitaly Buka 8000f228b3 [msan] Don't sanitize "nosanitize" instructions
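
For reference, a sketch of an instruction that is now skipped (metadata
id illustrative):

  store i32 %v, i32* %p, !nosanitize !0

  !0 = !{}
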
Reviewers: eugenis

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D40205

llvm-svn: 318708
2017-11-20 23:37:56 +00:00
Hans Wennborg 08b34a017a Update some code.google.com links
llvm-svn: 318115
2017-11-13 23:47:58 +00:00
Matt Morehouse 4881a23ca8 [MSan] Disable sanitization for __sanitizer_dtor_callback.
Summary:
Eliminate unnecessary instrumentation at __sanitizer_dtor_callback
call sites.  Fixes https://github.com/google/sanitizers/issues/861.

Reviewers: eugenis, kcc

Reviewed By: eugenis

Subscribers: vitalybuka, llvm-commits, cfe-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D38063

llvm-svn: 313831
2017-09-20 22:53:08 +00:00
Daniel Neilson 3c7f89f382 Add element-atomic mem intrinsic canary tests for Memory Sanitizer.
Summary:
Add canary tests to verify that MSAN currently does nothing with the element atomic memory intrinsics for memcpy, memmove, and memset.

Placeholder tests that will fail once element atomic @llvm.mem[cpy|move|set] intrinsics have been added to the MemIntrinsic class hierarchy. These will act as a reminder to verify that MSAN handles these intrinsics properly once they have been added to that class hierarchy.

Reviewers: reames

Reviewed By: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D35510

llvm-svn: 308251
2017-07-18 01:06:54 +00:00
Evgeniy Stepanov 3d5ea713f7 [msan] Only check shadow memory for operands that are sized.
Fixes PR33347: https://bugs.llvm.org/show_bug.cgi?id=33347.

Differential Revision: https://reviews.llvm.org/D35160

Patch by Matt Morehouse.

llvm-svn: 307684
2017-07-11 18:13:52 +00:00
Hiroshi Inoue ef1c2ba22a fix trivial typos, NFC
llvm-svn: 306952
2017-07-01 07:12:15 +00:00
Craig Topper 2b54baeb96 [X86] Replace 'REQUIRES: x86' in tests with 'REQUIRES: x86-registered-target' which seems to be the correct way to make them run on an x86 build.
llvm-svn: 304682
2017-06-04 08:21:58 +00:00
Justin Bogner be42c4aec8 Update three tests I missed in r302979 and r302990
llvm-svn: 303319
2017-05-18 00:58:06 +00:00
Justin Bogner 3a3e115e81 MSan: Mark MemorySanitizer tests that use x86 intrinsics as REQUIRES: x86
Tests that use target intrinsics are inherently target specific. Mark
them as such.

llvm-svn: 302990
2017-05-13 16:24:38 +00:00
Alexander Potapenko a658ae8fe2 [msan] Fix PR32842
It turned out that MSan was incorrectly calculating the shadow for integer comparisons: it was done by truncating the result of (Shadow1 OR Shadow2) to i1, effectively rendering all bits except the LSB useless.
This approach doesn't work, e.g., in the case where the values being compared are even (i.e. have the LSB of the shadow equal to zero).
Instead, if CreateShadowCast() has to cast a bigger integer to i1, we now replace the truncation with an ICMP against 0.
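
A sketch of the change for a 64-bit shadow:

  ; before: %s = trunc i64 %shadow to i1    ; keeps only the LSB
  ; after:  %s = icmp ne i64 %shadow, 0     ; any poisoned bit is noticed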

This patch doesn't affect the code generated for SPEC 2006 binaries, i.e. there's no performance impact.

For the test case reported in PR32842, MSan with the patch generates slightly more efficient code:

  orq     %rcx, %rax
  jne     .LBB0_6
, instead of:

  orl     %ecx, %eax
  testb   $1, %al
  jne     .LBB0_6

llvm-svn: 302787
2017-05-11 11:07:48 +00:00
Matt Arsenault f10061ec70 Add address space mangling to lifetime intrinsics
In preparation for allowing allocas to have non-0 addrspace.

llvm-svn: 299876
2017-04-10 20:18:21 +00:00
Evgeniy Stepanov d0285f21d0 [msan] Handle x86_sse_stmxcsr and x86_sse_ldmxcsr.
llvm-svn: 296848
2017-03-03 01:12:43 +00:00
Evgeniy Stepanov d1daf631f4 [msan] Fix instrumentation of array allocas.
Before this, MSan poisoned exactly one element of any array alloca,
even if the number of elements was zero.

llvm-svn: 296050
2017-02-24 00:13:17 +00:00
Craig Topper c7486af9c9 [AVX-512] Add AVX-512 vector shift intrinsics to memory sanitizer.
Just needed to add the intrinsics to the existing switch. The code is generic enough to support the wider vectors with no changes.

llvm-svn: 286980
2016-11-15 16:27:33 +00:00
Evgeniy Stepanov b736335dc3 [msan] Fix __msan_maybe_ for non-standard type sizes.
Fix incorrect calculation of the type size for __msan_maybe_warning_N
call that resulted in an invalid (narrowing) zext instruction and
"Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed."

Only happens in very large functions (with more than 3500 MSan
checks) operating on integer types whose size is not a power of two.
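
A sketch of the intended widening, assuming the
__msan_maybe_warning_N(shadow, origin) signature:

  %s = zext i24 %shadow to i32              ; widen, never narrow
  call void @__msan_maybe_warning_4(i32 %s, i32 %origin)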

llvm-svn: 274395
2016-07-01 22:49:59 +00:00
Marcin Koscielnicki 3feda222c6 [sanitizers] Disable target-specific lowering of string functions.
CodeGen has hooks that allow targets to emit specialized code instead
of calls to memcmp, memchr, strcpy, stpcpy, strcmp, strlen, strnlen.
When ASan/MSan/TSan/ESan is in use, this sidesteps their interceptors,
resulting in uninstrumented memory accesses.  To avoid that, make these
sanitizers mark the calls as nobuiltin.
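
For example (a sketch):

  %r = call i32 @memcmp(i8* %a, i8* %b, i64 %n) #1

  attributes #1 = { nobuiltin }   ; CodeGen may no longer lower this specially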

Differential Revision: http://reviews.llvm.org/D19781

llvm-svn: 273083
2016-06-18 10:10:37 +00:00
Craig Topper 8287fd8abd [X86] Remove SSE/AVX unaligned store intrinsics as clang no longer uses them. Auto upgrade to native unaligned store instructions.
llvm-svn: 271236
2016-05-30 23:15:56 +00:00
Craig Topper f565d37607 [X86] Remove some unnecessary declarations for old intrinsics from a test.
llvm-svn: 271175
2016-05-29 06:37:39 +00:00
Evgeniy Stepanov b62303081b [msan] Add a test for vector compare x86 intrinsics.
This was actually meant to go in with r267966, but I forgot to
git add the file. Better late than never.

llvm-svn: 270515
2016-05-24 00:04:23 +00:00
Marcin Koscielnicki a4fcd3681f [MSan] [PowerPC] Implement PowerPC64 vararg helper.
Differential Revision: http://reviews.llvm.org/D20000

llvm-svn: 269518
2016-05-13 23:55:33 +00:00
Marcin Koscielnicki 60b3cbe095 [MSan] [AArch64] Fix vararg helper for >1 or non-int fixed arguments.
This fixes http://llvm.org/PR27646 on AArch64.

There are three issues here:

- The GR save area is 7 words in size, instead of 8.  This is not enough
  if none of the fixed arguments is passed in GRs (they're all floats or
  aggregates).
- The first argument is ignored (which counteracts the above if it's passed
  in a GR).
- Like x86_64, fixed arguments landing in the overflow area are wrongly
  counted towards the overflow offset.

Differential Revision: http://reviews.llvm.org/D20023

llvm-svn: 268967
2016-05-09 20:57:36 +00:00
Marcin Koscielnicki b088ad1e09 [MSan] [X86] Fix vararg helper for fixed arguments in overflow area.
This fixes http://llvm.org/PR27646 on x86_64.

Differential Revision: http://reviews.llvm.org/D19997

llvm-svn: 268783
2016-05-06 19:36:56 +00:00
Marcin Koscielnicki 60061c21cb [MSan] [MIPS64] Fix vararg helper for >1 fixed argument.
This fixes http://llvm.org/PR27646 on Mips64.

Differential Revision: http://reviews.llvm.org/D19989

llvm-svn: 268673
2016-05-05 20:13:17 +00:00
Marcin Koscielnicki cc9676a821 [MSan] [Mips64] Add tests for vararg handling.
Differential Revision: http://reviews.llvm.org/D19919

llvm-svn: 268531
2016-05-04 18:39:14 +00:00
Adrian Prantl 75819aedf6 [PR27284] Reverse the ownership between DICompileUnit and DISubprogram.
Currently each Function points to a DISubprogram and DISubprogram has a
scope field. For member functions the scope is a DICompositeType. DIScopes
point to the DICompileUnit to facilitate type uniquing.

Distinct DISubprograms (with isDefinition: true) are not part of the type
hierarchy and cannot be uniqued. This change removes the subprograms
list from DICompileUnit and instead adds a pointer to the owning compile
unit to distinct DISubprograms. This would make it easy for ThinLTO to
strip unneeded DISubprograms and their transitively referenced debug info.
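
A sketch of the resulting IR (field lists abbreviated; unit: is the new
backward pointer):

  !0 = distinct !DICompileUnit(...)
  !1 = distinct !DISubprogram(name: "f", unit: !0, isDefinition: true, ...)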

Motivation
----------

Materializing DISubprograms is currently the most expensive operation when
doing a ThinLTO build of clang.

We want the DISubprogram to be stored in a separate Bitcode block (or the
same block as the function body) so we can avoid having to expensively
deserialize all DISubprograms together with the global metadata. If a
function has been inlined into another subprogram we need to store a
reference to the block containing the inlined subprogram.

Attached to https://llvm.org/bugs/show_bug.cgi?id=27284 is a python script
that updates LLVM IR testcases to the new format.

http://reviews.llvm.org/D19034
<rdar://problem/25256815>

llvm-svn: 266446
2016-04-15 15:57:41 +00:00
Adrian Prantl b8089516a5 testcase gardening: update the emissionKind enum to the new syntax. (NFC)
llvm-svn: 265081
2016-04-01 00:16:49 +00:00
Evgeniy Stepanov d6e91369d8 [msan] Don't put module constructors in comdats.
There is something strange going on with debug info (.eh_frame_hdr)
disappearing when msan.module_ctor is placed in a comdat section.

Moving this functionality under a flag, disabled by default.

llvm-svn: 263579
2016-03-15 20:25:47 +00:00
Evgeniy Stepanov ba6ca87ffb [msan] Put msan constructor in a comdat.
MSan adds a constructor to each translation unit that calls
__msan_init, and does nothing else. The idea is to run __msan_init
before any instrumented code. This results in multiple constructors
and multiple .init_array entries in the final binary, one per
translation unit. This is absolutely unnecessary; one would be
enough.

This change moves the constructors to a comdat group in order to drop
the extra ones.
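
A sketch of the resulting constructor (comdat name illustrative):

  $msan.module_ctor = comdat any

  declare void @__msan_init()

  define internal void @msan.module_ctor() comdat {
    call void @__msan_init()
    ret void
  }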

llvm-svn: 260632
2016-02-12 00:37:52 +00:00
Adhemerval Zanella e600c99a4e [sanitizer] [msan] Fix origin store of array types
This patch fixes the memory sanitizer origin store instrumentation for
array types.  This can be triggered by cases where the frontend lowers a
function return to an array type instead of an aggregate.

For instance, the C code:

--
struct mypair {
 int64_t x;
 int y;
};

mypair my_make_pair(int64_t x, int y)  {
 mypair p;
 p.x = x;
 p.y = y;
 return p;
}

int foo (int p)
{
  mypair z = my_make_pair(p, 0);
  return z.y + z.x;
}
--

It will be lowered with target set to aarch64-linux and -O0 to:

--
[...]
define i32 @_Z3fooi(i32 %p) #0 {
[...]
%call = call [2 x i64] @_Z12my_make_pairxi(i64 %conv, i32 0)
%1 = bitcast %struct.mypair* %z to [2 x i64]*
store [2 x i64] %call, [2 x i64]* %1, align 8
[...]
--

The origin store will emit an 'icmp' to test each stored value against the
TLS origin array.  However, since 'icmp' does not support ArrayType, the
memory instrumentation phase will bail out with an error.

This patch changes it to use the same strategy for arrays as is already
used for struct types.

It fixes the 'test/msan/insertvalue_origin.cc' for aarch64 (the -O0 case).

llvm-svn: 257375
2016-01-11 19:55:27 +00:00
Adhemerval Zanella d2b10c5e9a [sanitizer] [msan] VarArgHelper for AArch64
This patch adds support for variadic arguments for AArch64.  All the MSan
unit tests are now passing, as well as the signal_stress_test (currently
set as XFAIL for aarch64).

llvm-svn: 255495
2015-12-14 14:14:15 +00:00
Evgeniy Stepanov b05d380451 [msan] Relax origin-alignment test.
Change origin-alignment test to test only the alignment of the origin
store, and not the exact instruction sequence used to compute the
address. This makes the test less fragile and, in particular, lets it
pass both with the old and new MSan ABIs.

llvm-svn: 254027
2015-11-24 21:44:16 +00:00
Pete Cooper 67cf9a723b Revert "Change memcpy/memset/memmove to have dest and source alignments."
This reverts commit r253511.

This likely broke the bots in
http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202
http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787

llvm-svn: 253543
2015-11-19 05:56:52 +00:00
Pete Cooper 72bc23ef02 Change memcpy/memset/memmove to have dest and source alignments.
Note, this was reviewed (and more details are in) http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

These intrinsics currently have an explicit alignment argument which is
required to be a constant integer.  It represents the alignment of the
source and dest, and so must be the minimum of those.

This change allows source and dest to each have their own alignments
by using the alignment attribute on their arguments.  The alignment
argument itself is removed.

There are a few places in the code that need to be checked by an expert
as to whether using only src/dest alignment is safe.  Those places
currently take the minimum of the src/dest alignments, which matches the
current behaviour.

For example, code which used to read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
will now read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

For out of tree owners, I was able to strip alignment from calls using sed by replacing:
  (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
with:
  $1i1 false)

and similarly for memmove and memcpy.

I then added back in alignment to test cases which needed it.

A similar commit will be made to clang, which actually has many alignment
differences, now that IRBuilder can generate different source/dest
alignments on calls.

In IRBuilder itself, a new argument was added.  Instead of calling:
  CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
you now call
  CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

There is a temporary class (IntegerAlignment) which takes the source
alignment and rejects implicit conversion from bool.  This prevents an
isVolatile bool from being silently accepted as the source alignment via
the default parameter.

Note, changes can now be made to codegen in the future.  I didn't change
anything here, but this change should enable better memcpy code sequences.

Reviewed by Hal Finkel.

llvm-svn: 253511
2015-11-18 22:17:24 +00:00
Peter Collingbourne d4bff30370 DI: Reverse direction of subprogram -> function edge.
Previously, subprograms contained a metadata reference to the function they
described. Because most clients need to get or set a subprogram for a given
function rather than the other way around, this created unneeded inefficiency.

For example, many passes needed to call the function llvm::makeSubprogramMap()
to build a mapping from functions to subprograms, and the IR linker needed to
fix up function references in a way that caused quadratic complexity in the IR
linking phase of LTO.

This change reverses the direction of the edge by storing the subprogram as
function-level metadata and removing DISubprogram's function field.
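
A sketch of the new direction of the edge (fields abbreviated):

  define void @foo() !dbg !4 {    ; subprogram now attached to the function
    ret void
  }

  !4 = distinct !DISubprogram(name: "foo", ...)   ; "function:" field removed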

Since this is an IR change, a bitcode upgrade has been provided.

Fixes PR23367. An upgrade script for textual IR for out-of-tree clients is
attached to the PR.

Differential Revision: http://reviews.llvm.org/D14265

llvm-svn: 252219
2015-11-05 22:03:56 +00:00
Evgeniy Stepanov ebd3f44f93 [msan] Fix crash on multiplication by a non-integer constant.
Fixes PR25160.

llvm-svn: 250260
2015-10-14 00:21:13 +00:00
Evgeniy Stepanov d12212bc8c New MSan mapping layout (llvm part).
This is an implementation of
https://github.com/google/sanitizers/issues/579

It has a number of advantages over the current mapping:
* Works for non-PIE executables.
* Does not require ASLR; as a consequence, debugging MSan programs in
  gdb no longer requires "set disable-randomization off".
* Supports linux kernels >=4.1.2.
* The code is marginally faster and smaller.

This is an ABI break. We never really promised ABI stability, but
this patch includes a courtesy escape hatch: a compile-time macro
that reverts back to the old mapping layout.

llvm-svn: 249753
2015-10-08 21:35:26 +00:00
Duncan P. N. Exon Smith 814b8e91c7 DI: Require subprogram definitions to be distinct
As a follow-up to r246098, require `DISubprogram` definitions
(`isDefinition: true`) to be 'distinct'.  Specifically, add an assembler
check, a verifier check, and bitcode upgrading logic to combat testcase
bitrot after the `DIBuilder` change.

While working on the testcases, I realized that
test/Linker/subprogram-linkonce-weak-odr.ll isn't relevant anymore.  Its
purpose was to check for a corner case in PR22792 where two subprogram
definitions match exactly and share the same metadata node.  The new
verifier check, requiring that subprogram definitions are 'distinct',
precludes that possibility.

I updated almost all the IR with the following script:

    git grep -l -E -e '= !DISubprogram\(.* isDefinition: true' |
    grep -v test/Bitcode |
    xargs sed -i '' -e 's/= \(!DISubprogram(.*, isDefinition: true\)/= distinct \1/'

Likely some variant of this would work for out-of-tree testcases.

llvm-svn: 246327
2015-08-28 20:26:49 +00:00
Evgeniy Stepanov d04d07e65e [msan] Precise instrumentation for icmp sgt %x, -1.
Extend signed relational comparison instrumentation with a special
case for comparisons with -1. This fixes an MSan false positive when
such a comparison is used as a sign-bit test.

https://llvm.org/bugs/show_bug.cgi?id=24561
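
The special case, as a sketch:

  ; the result of "icmp sgt i32 %x, -1" depends only on the sign bit of %x,
  ; so only the shadow of that bit needs to be checked:
  %c = icmp sgt i32 %x, -1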

llvm-svn: 245980
2015-08-25 22:19:11 +00:00
Evgeniy Stepanov b0b2d4d2c9 Use CHECK-LABEL in MSan IR tests.
This actually found one case where a test was matching instructions
from the output of a different test.

llvm-svn: 245974
2015-08-25 20:59:26 +00:00
Evgeniy Stepanov 24ac55d884 [msan] Fix handling of musttail calls.
MSan instrumentation for return values of musttail calls is not
allowed by the IR constraints, and is not needed anyway.
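
A sketch of why no instrumentation can be inserted (this is an IR rule,
not code from the patch):

  %r = musttail call i32 @callee(i32 %x)
  ret i32 %r    ; must immediately follow the musttail call, leaving no
                ; room for shadow-propagation instructions in between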

llvm-svn: 245106
2015-08-14 22:03:50 +00:00