Commit Graph

30 Commits

Author SHA1 Message Date
Adrian Prantl 3c6c14d14b ASAN: Provide reliable debug info for local variables at -O0.
The function stack poisoner conditionally stores local variables
either in an alloca or in malloc'ed memory, which has the
unfortunate side effect that the actual address of the variable is
only materialized when the variable is accessed. This means that
those variables are mostly invisible to the debugger even when
compiling without optimizations.

This patch stores the address of the local stack base into an alloca,
which can be referred to by the debug info and is available throughout
the function. This adds one extra pointer-sized alloca to each stack
frame (but mem2reg can optimize it away again when optimizations are
enabled, yielding roughly the same debug info quality as before in
optimized code).

rdar://problem/30433661

Differential Revision: https://reviews.llvm.org/D41034

llvm-svn: 320415
2017-12-11 20:43:21 +00:00
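
For context, a minimal sketch of the resulting IR (not the exact output of the pass; the variable names, offset, and metadata node numbers are illustrative):

  ; Keep the local stack base in an alloca so debug info can refer to it
  ; throughout the function.
  %asan_local_stack_base = alloca i64
  store i64 %stack_base, i64* %asan_local_stack_base
  ; The variable's location is then "load the base, add the variable's
  ; offset within the (possibly heap-allocated) frame".
  call void @llvm.dbg.declare(metadata i64* %asan_local_stack_base,
      metadata !10,
      metadata !DIExpression(DW_OP_deref, DW_OP_plus_uconst, 32)), !dbg !14
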
Walter Lee 5a440d323a [asan] Test ASan instrumentation for shadow scale value of 5
Add additional RUN clauses to test for -asan-mapping-scale=5 in
selective tests, with special CHECK statements where needed.

Differential Revision: https://reviews.llvm.org/D39775

llvm-svn: 318493
2017-11-17 01:15:31 +00:00
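
As a reminder of what the scale controls, here is a rough sketch of the shadow-address computation ASan emits for a checked access (the shadow offset constant is illustrative, not the real per-platform value):

  %a  = ptrtoint i32* %p to i64
  %s  = lshr i64 %a, 5                ; shift by -asan-mapping-scale (default 3)
  %sa = add i64 %s, 2147450880        ; plus the shadow offset
  %sp = inttoptr i64 %sa to i8*
  %sv = load i8, i8* %sp              ; shadow byte checked against 0
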
Daniel Neilson 5fc2ee7fab Add element-atomic mem intrinsic canary tests for Address Sanitizer.
Summary:
Add canary tests to verify that ASAN currently does nothing with the element atomic memory intrinsics for memcpy, memmove, and memset.

Placeholder tests that will fail once element atomic @llvm.mem[cpy|move|set] intrinsics have been added to the MemIntrinsic class hierarchy. These will act as a reminder to verify that ASAN handles these intrinsics properly once they have been added to that class hierarchy.

Reviewers: reames

Reviewed By: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D35505

llvm-svn: 308248
2017-07-18 01:06:52 +00:00
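
For reference, the element-atomic variants these canary tests cover look roughly like this (length, element size, and alignments are illustrative):

  call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i64(
      i8* align 4 %dst, i8* align 4 %src, i64 16, i32 4)
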
Arnold Schwaighofer 8d61e0030a AddressSanitizer: don't track swifterror memory addresses
They are register-promoted by ISel, so it makes no sense to treat them as
memory.

Inserting calls to the thread sanitizer would also generate invalid IR.

You would hit:

"swifterror value can only be loaded and stored from, or as a swifterror
argument!"

llvm-svn: 295230
2017-02-15 20:43:43 +00:00
Pete Cooper 67cf9a723b Revert "Change memcpy/memset/memmove to have dest and source alignments."
This reverts commit r253511.

This likely broke the bots in
http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202
http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787

llvm-svn: 253543
2015-11-19 05:56:52 +00:00
Pete Cooper 72bc23ef02 Change memcpy/memset/memmove to have dest and source alignments.
Note: this was reviewed, and more details are available, at http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

These intrinsics currently have an explicit alignment argument which is
required to be a constant integer.  It represents the alignment of the
source and dest, and so must be the minimum of those.

This change allows source and dest to each have their own alignments
by using the alignment attribute on their arguments.  The alignment
argument itself is removed.

There are a few places where the code needs to be checked by an expert
to determine whether using only src/dest alignment is safe.  Those places
currently take the minimum of the src/dest alignments, which matches the
current behaviour.

For example, code which used to read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
will now read:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

For out of tree owners, I was able to strip alignment from calls using sed by replacing:
  (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
with:
  $1i1 false)

and similarly for memmove and memcpy.

I then added back in alignment to test cases which needed it.

A similar commit will be made to clang, which actually has many differences in
alignment now that IRBuilder can generate different source/dest alignments on calls.

In IRBuilder itself, a new argument was added.  Instead of calling:
  CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
you now call
  CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

There is a temporary class (IntegerAlignment) which takes the source alignment and rejects
implicit conversion from bool.  This is to prevent isVolatile here from passing its default
parameter to the source alignment.

Note: future changes can now be made to codegen.  I didn't change anything here, but this
change should enable better memcpy code sequences.

Reviewed by Hal Finkel.

llvm-svn: 253511
2015-11-18 22:17:24 +00:00
Kuba Brecka 45dbffdc3d [asan] Rename the ABI versioning symbol to '__asan_version_mismatch_check' instead of abusing '__asan_init'
We currently version `__asan_init` and when the ABI version doesn't match, the linker gives an `undefined reference to '__asan_init_v5'` message. From this, it might not be obvious that it's actually a version mismatch error. This patch makes the error message much clearer by changing the name of the undefined symbol to be `__asan_version_mismatch_check_xxx` (followed by the version string). We obviously don't want the initializer to be named like that, so it's a separate symbol that is used only for the purpose of version checking.

Reviewed at http://reviews.llvm.org/D11004

llvm-svn: 243003
2015-07-23 10:54:06 +00:00
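
Roughly, the generated module constructor now references both symbols; the version suffix below is illustrative:

  declare void @__asan_init()
  declare void @__asan_version_mismatch_check_v8()

  define internal void @asan.module_ctor() {
    call void @__asan_init()
    call void @__asan_version_mismatch_check_v8()
    ret void
  }
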
Ismail Pazarbasi 09c3709e75 ASan: Use `createSanitizerCtor` to create ctor, and call `__asan_init`
Reviewers: kcc, samsonov

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8778

llvm-svn: 236777
2015-05-07 21:40:46 +00:00
David Blaikie a79ac14fa6 [opaque pointer type] Add textual IR support for explicit type parameter to load instruction
Essentially the same as the GEP change in r230786.

A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around: (r229269-r229278)

import fileinput
import sys
import re

pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")

for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

Reviewers: rafael, dexonsmith, grosser

Differential Revision: http://reviews.llvm.org/D7649

llvm-svn: 230794
2015-02-27 21:17:42 +00:00
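
The change itself is easiest to see on a single load: the pointee type becomes an explicit first operand.

  %old = load i32* %ptr        ; before this change
  %new = load i32, i32* %ptr   ; after this change
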
Duncan P. N. Exon Smith be7ea19b58 IR: Make metadata typeless in assembly
Now that `Metadata` is typeless, reflect that in the assembly.  These
are the matching assembly changes for the metadata/value split in
r223802.

  - Only use the `metadata` type when referencing metadata from a call
    intrinsic -- i.e., only when it's used as a `Value`.

  - Stop pretending that `ValueAsMetadata` is wrapped in an `MDNode`
    when referencing it from call intrinsics.

So, assembly like this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata !{i32 %v}, metadata !0)
      call void @llvm.foo(metadata !{i32 7}, metadata !0)
      call void @llvm.foo(metadata !1, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{metadata !3}, metadata !0)
      ret void, !bar !2
    }
    !0 = metadata !{metadata !2}
    !1 = metadata !{i32* @global}
    !2 = metadata !{metadata !3}
    !3 = metadata !{}

turns into this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata i32 %v, metadata !0)
      call void @llvm.foo(metadata i32 7, metadata !0)
      call void @llvm.foo(metadata i32* @global, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{!3}, metadata !0)
      ret void, !bar !2
    }
    !0 = !{!2}
    !1 = !{i32* @global}
    !2 = !{!3}
    !3 = !{}

I wrote an upgrade script that handled almost all of the tests in llvm
and many of the tests in cfe (even handling many `CHECK` lines).  I've
attached it (or will attach it in a moment if you're speedy) to PR21532
to help everyone update their out-of-tree testcases.

This is part of PR21532.

llvm-svn: 224257
2014-12-15 19:07:53 +00:00
Kostya Serebryany ad23852ac3 [asan] Assign a low branch weight to ASan's slow path, patch by Jonas Wagner. This speeds up asan (at least on SPEC) by 1%-5% or more. Also fix lint in dfsan.
llvm-svn: 216972
2014-09-02 21:46:51 +00:00
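
The idea, sketched in current assembly syntax (the weights are illustrative):

  %bad = icmp ne i8 %shadow, 0
  br i1 %bad, label %slow_path, label %cont, !prof !0

  !0 = !{!"branch_weights", i32 1, i32 100000}   ; slow path is unlikely
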
Alexey Samsonov 62a8e0639b CHECK-LABEL-ize one test
llvm-svn: 213177
2014-07-16 18:11:31 +00:00
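
CHECK-LABEL anchors FileCheck matches at function boundaries, so a match intended for one function cannot be satisfied by output from another. A typical pattern (the names below are illustrative, not from the test that was changed):

  ; CHECK-LABEL: define void @test_load
  ; CHECK: call void @__asan_report_load4
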
Kostya Serebryany c7895a83d2 [asan] properly instrument memory accesses that have small alignment (smaller than min(8,size)) by making two checks instead of one. This may slow down some cases, e.g. long long on 32-bit or wide loads produced after loop unrolling. The benefit is higher sensitivity.
llvm-svn: 209508
2014-05-23 11:52:07 +00:00
Kostya Serebryany 49b88f54da [asan] add llvm-ish test for memset/etc instrumentation
llvm-svn: 206747
2014-04-21 11:57:43 +00:00
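
ASan handles these intrinsics by replacing them with calls into its runtime; a sketch, using the intrinsic signature of that era (with the explicit alignment argument) and the __asan_memset(ptr, value, size) wrapper:

  ; before instrumentation
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 64, i32 1, i1 false)
  ; after instrumentation (sketch)
  call i8* @__asan_memset(i8* %p, i32 0, i64 64)
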
Alexander Potapenko 7aafd31dad [ASan] Add -asan-module to the ASan .ll tests.
After the -asan pass was split into -asan (function-level) and -asan-module (module-level), some of the
tests silently stopped working because they no longer instrumented globals.
We've decided to have every test use both passes, irrespective of whether it contains globals.

llvm-svn: 204335
2014-03-20 11:16:34 +00:00
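
A typical RUN line after this change, with both passes named on the legacy pass manager command line:

  ; RUN: opt < %s -asan -asan-module -S | FileCheck %s
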
Kostya Serebryany 4fb7801b3f [asan] rewrite asan's stack frame layout
Summary:
Rewrite asan's stack frame layout.
First, most of the stack layout logic is moved into a separate file
to make it more testable and (potentially) useful for other projects.
Second, make the frames more compact by using adaptive redzones
(smaller for small objects, larger for large objects).
Third, try to minimize gaps due to large alignments (this is hypothetical since
today we don't see many stack vars aligned by more than 32).

The frames indeed become more compact, but I still need to run more benchmarks
before committing; I am asking for review now to get early feedback.

This change will be accompanied by a trivial change in compiler-rt tests
to match the new frame sizes.

Reviewers: samsonov, dvyukov

Reviewed By: samsonov

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2324

llvm-svn: 196568
2013-12-06 09:00:17 +00:00
Kostya Serebryany 5e276f9dbc [asan] workaround for PR16277: don't instrument AllocaInst with alignment greater than the redzone size
llvm-svn: 184928
2013-06-26 09:49:52 +00:00
Kostya Serebryany 6b5b58deeb [asan] don't instrument functions with available_externally linkage. This saves a bit of compile time and reduces the number of redundant global strings generated by asan (https://code.google.com/p/address-sanitizer/issues/detail?id=167)
llvm-svn: 177250
2013-03-18 07:33:49 +00:00
Kostya Serebryany cf880b9443 Unify clang/llvm attributes for asan/tsan/msan (LLVM part)
These are two related changes (one in llvm, one in clang).
LLVM: 
- rename address_safety => sanitize_address (the enum value is the same, so we preserve binary compatibility with old bitcode)
- rename thread_safety => sanitize_thread
- rename no_uninitialized_checks -> sanitize_memory

CLANG: 
- add __attribute__((no_sanitize_address)) as a synonym for __attribute__((no_address_safety_analysis))
- add __attribute__((no_sanitize_thread))
- add __attribute__((no_sanitize_memory))

For S in {address, thread, memory}: if -fsanitize=S is present and
__attribute__((no_sanitize_S)) is not, set the LLVM attribute sanitize_S.

llvm-svn: 176075
2013-02-26 06:58:09 +00:00
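
On the LLVM side, the new spelling appears directly as a function attribute, e.g.:

  ; previously: define void @f() address_safety { ... }
  define void @f() sanitize_address {
    ret void
  }
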
Kostya Serebryany 3ece9beaf1 [asan] instrument memory accesses with unusual sizes
This patch makes asan instrument memory accesses with unusual sizes (e.g. 5 or 10 bytes),
such as long double or packed structures.
Instrumentation is done with two 1-byte checks
(first and last bytes); if an error is found,
__asan_report_load_n(addr, real_size) or
__asan_report_store_n(addr, real_size)
is called.

Also, call these two new functions in memset/memcpy
instrumentation.

asan-rt part will follow.

llvm-svn: 175507
2013-02-19 11:29:21 +00:00
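
Sketched as pseudo-IR for a 10-byte access (the helper placement and names are illustrative):

  %first = ptrtoint x86_fp80* %p to i64
  ; ... 1-byte shadow check of %first ...
  %last = add i64 %first, 9
  ; ... 1-byte shadow check of %last ...
  ; on failure, report with the real access size:
  call void @__asan_report_load_n(i64 %first, i64 10)
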
Kostya Serebryany 7ca384bc1a [asan] revert r175266 as it breaks code with packed structures. supporting long double will require a more general solution
llvm-svn: 175442
2013-02-18 13:47:02 +00:00
Kostya Serebryany a968568165 [asan] support long double on 64-bit. See https://code.google.com/p/address-sanitizer/issues/detail?id=151
llvm-svn: 175266
2013-02-15 12:46:06 +00:00
Kostya Serebryany e2e32b32e8 [asan] fix tests for the new ABI
llvm-svn: 174959
2013-02-12 11:14:24 +00:00
Kostya Serebryany 0995994989 [asan] make sure asan erases old unused allocas after it creates a new one. This became important after the recent move from ModulePass to FunctionPass, because no cleanup happens after the asan pass any more.
llvm-svn: 166267
2012-10-19 06:20:53 +00:00
Kostya Serebryany bf479714f9 [asan] insert crash basic blocks inline as opposed to inserting them at the end of the function. This doesn't seem to fix or break anything, but is considered to be more friendly to downstream passes (test change)
llvm-svn: 161871
2012-08-14 14:05:50 +00:00
Kostya Serebryany f02c6069ac [asan] make sure that the crash callbacks do not get merged (Chandler's idea: insert an empty InlineAsm). Change the order in which the new BBs are inserted: the slow path BB is inserted between old BBs, the crash BB is inserted at the end. Don't create an empty BB (introduced by recent commits). Update the test. The experimental code that does manual crash callback merge will most likely be deleted later.
llvm-svn: 160544
2012-07-20 09:54:50 +00:00
Kostya Serebryany 874dae6119 [asan] refactor instrumentation to allow merging the crash callbacks (not fully implemented yet, no functionality change except the BB order)
llvm-svn: 160284
2012-07-16 16:15:40 +00:00
Chandler Carruth 8b540ab337 Revert r160254 temporarily.
It turns out that ASan relied on the at-the-end block insertion order to
(purely by happenstance) disable some LLVM optimizations, which start
firing when the ordering is made more "normal". These optimizations then
merge many of the instrumentation reporting calls, which breaks the
return-address-based error reporting in ASan.

We're looking at several different options for fixing this.

llvm-svn: 160256
2012-07-16 10:01:02 +00:00
Chandler Carruth 3dd6c81492 Teach AddressSanitizer to create basic blocks in a more natural order.
This is particularly useful to the backend code generators which try to
process things in the incoming function order.

Also, cleanup some uses of IRBuilder to be a bit simpler and more clear.

llvm-svn: 160254
2012-07-16 08:58:53 +00:00
Chandler Carruth 663943e23e Add a basic test for AddressSanitizer. This is just a bare-bones
functionality test.

In general, unless the functionality is substantially separated, we
should lump more basic testing into this file. The test running
infrastructure likes having a few test files with more comprehensive
testing within them.

llvm-svn: 160253
2012-07-16 08:56:46 +00:00