// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
// RUN: %clang_cc1 -triple x86_64-unknown-unknown -emit-llvm -o - %s | FileCheck %s
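
// Both `assume_aligned(32)` and `alloc_align(2)` apply to the return value of
// this allocator: the former guarantees at least 32-byte alignment, the latter
// says the actual alignment is given by the second argument. When the
// resulting alignment is a known constant, clang materializes it as an `align`
// attribute on the call; if an alignment attribute is already present, the two
// are unioned, so the alignment only ever rises. A non-constant alignment (see
// t3_variable) is instead communicated through an `llvm.assume`-based sequence.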
void *my_aligned_alloc(int size, int alignment) __attribute__((assume_aligned(32), alloc_align(2)));
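
// Constant alignment argument of 16: the union with `assume_aligned(32)` is
// 32, so the call carries `align 32`.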
// CHECK-LABEL: @t0_immediate0(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[CALL:%.*]] = call align 32 i8* @my_aligned_alloc(i32 320, i32 16)
// CHECK-NEXT: ret i8* [[CALL]]
//
void *t0_immediate0() {
  return my_aligned_alloc(320, 16);
};
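
// Constant alignment argument of 32 matches `assume_aligned(32)`: the call
// carries `align 32`.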
// CHECK-LABEL: @t1_immediate1(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[CALL:%.*]] = call align 32 i8* @my_aligned_alloc(i32 320, i32 32)
// CHECK-NEXT: ret i8* [[CALL]]
//
void *t1_immediate1() {
  return my_aligned_alloc(320, 32);
};
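
// Constant alignment argument of 64 exceeds `assume_aligned(32)`: the union is
// 64, so the call carries `align 64`.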
// CHECK-LABEL: @t2_immediate2(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[CALL:%.*]] = call align 64 i8* @my_aligned_alloc(i32 320, i32 64)
// CHECK-NEXT: ret i8* [[CALL]]
//
void *t2_immediate2() {
  return my_aligned_alloc(320, 64);
};
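
// The alignment argument is not a constant, so only `assume_aligned(32)` is
// materialized as an `align` attribute on the call; the `alloc_align`
// guarantee is conveyed via the `ptrtoint`/mask/`llvm.assume` sequence below.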
// CHECK-LABEL: @t3_variable(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[ALIGNMENT_ADDR:%.*]] = alloca i32, align 4
// CHECK-NEXT: store i32 [[ALIGNMENT:%.*]], i32* [[ALIGNMENT_ADDR]], align 4
// CHECK-NEXT: [[TMP0:%.*]] = load i32, i32* [[ALIGNMENT_ADDR]], align 4
// CHECK-NEXT: [[CALL:%.*]] = call align 32 i8* @my_aligned_alloc(i32 320, i32 [[TMP0]])
// CHECK-NEXT: [[ALIGNMENTCAST:%.*]] = zext i32 [[TMP0]] to i64
// CHECK-NEXT: [[MASK:%.*]] = sub i64 [[ALIGNMENTCAST]], 1
// CHECK-NEXT: [[PTRINT:%.*]] = ptrtoint i8* [[CALL]] to i64
// CHECK-NEXT: [[MASKEDPTR:%.*]] = and i64 [[PTRINT]], [[MASK]]
// CHECK-NEXT: [[MASKCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// CHECK-NEXT: call void @llvm.assume(i1 [[MASKCOND]])
// CHECK-NEXT: ret i8* [[CALL]]
//
void *t3_variable(int alignment) {
  return my_aligned_alloc(320, alignment);
};