llvm-project/clang/test/CodeGen/builtin-align.c



Add builtins for aligning and checking alignment of pointers and integers

This change introduces three new builtins (which work on both pointers and integers) that can be used instead of common bitwise arithmetic: __builtin_align_up(x, alignment), __builtin_align_down(x, alignment), and __builtin_is_aligned(x, alignment).

I originally added these builtins to the CHERI fork of LLVM a few years ago to handle the slightly different C semantics that we use for CHERI [1]. Until recently these builtins (or sequences of other builtins) were required to generate correct code. I have since changed the default C semantics so that they are no longer strictly necessary (although using them does generate slightly more efficient code). However, based on our experience using them in various projects over the past few years, I believe that adding these builtins to Clang would be useful.

These builtins have the following benefits over bit manipulation and casts via uintptr_t:

- The named builtins clearly convey the semantics of the operation. While checking alignment using __builtin_is_aligned(x, 16) versus ((x & 15) == 0) is probably not a huge win in readability, I personally find __builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including qualifiers such as const). When casting via uintptr_t, it is easy to cast back to the wrong type or to strip qualifiers such as const.
- If the alignment argument is a constant value, Clang can check that it is a power of two and within the range of the type. Since the semantics of these builtins are well defined compared to arbitrary bit manipulation, it is possible to add a UBSan check that the run-time value is a valid power of two. I intend to add this as a follow-up to this change.
- The builtins avoid int-to-pointer casts both in C and in LLVM IR. In the future (i.e. once most optimizations handle it), we could use the new llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally be generated.
- They can be used to round up/down to the next aligned value for both integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g. roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation with a builtin call we get improved diagnostics for many call sites while only having to change a few lines.
- Finally, the builtins also emit assume_aligned metadata when used on pointers. This can improve code generation compared to the uintptr_t casts.

[1] In our CHERI compiler we have a compilation mode in which all pointers are implemented as capabilities (essentially unforgeable 128-bit fat pointers). In our original model, casting from uintptr_t (which is a 128-bit capability) to an integer value returned the "offset" of the capability (i.e. the difference between the virtual address and the base of the allocation). This causes problems for cases such as checking alignment: for example, the expression `if (((uintptr_t)ptr & 63) == 0)` is commonly used to check whether a pointer is aligned to a multiple of 64 bytes. The problem with offsets is that any pointer to the beginning of an allocation has an offset of zero, so this check always succeeds in that case (even if the address is not correctly aligned). The same issue also exists when aligning up or down. Using the alignment builtins ensures that the address is used instead of the offset. While I have since changed the default C semantics to return the address instead of the offset when casting, the offset compilation mode can still be used by passing a command-line flag.

Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune

Reviewed By: aaron.ballman, lebedev.ri

Differential Revision: https://reviews.llvm.org/D71499

2020-01-10 04:48:06 +08:00
/// Check the code generation for the alignment builtins
/// To make the test case easier to read, run SROA after generating IR to remove the alloca instructions.
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_VOID_PTR \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,POINTER,ALIGNMENT_EXT \
// RUN: -enable-var-scope '-D$PTRTYPE=i8'
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_FLOAT_PTR \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,POINTER,NON_I8_POINTER,ALIGNMENT_EXT \
// RUN: -enable-var-scope '-D$PTRTYPE=f32'
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_LONG \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,INTEGER,ALIGNMENT_EXT -enable-var-scope
/// Check that we can handle the case where the alignment parameter is wider
/// than the source type (we generate a trunc of the alignment instead of a zext).
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_USHORT \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,INTEGER,ALIGNMENT_TRUNC -enable-var-scope
#ifdef TEST_VOID_PTR
#define TYPE void *
#elif defined(TEST_FLOAT_PTR)
#define TYPE float *
#elif defined(TEST_LONG)
#define TYPE long
#elif defined(TEST_CAP)
#define TYPE void *__capability
#elif defined(TEST_USHORT)
#define TYPE unsigned short
#else
#error MISSING TYPE
#endif
/// Check that constant initializers work and are correct
_Bool aligned_true = __builtin_is_aligned(1024, 512);
// CHECK: @aligned_true = global i8 1, align 1
_Bool aligned_false = __builtin_is_aligned(123, 512);
// CHECK: @aligned_false = global i8 0, align 1
int down_1 = __builtin_align_down(1023, 32);
// CHECK: @down_1 = global i32 992, align 4
int down_2 = __builtin_align_down(256, 32);
// CHECK: @down_2 = global i32 256, align 4
int up_1 = __builtin_align_up(1023, 32);
// CHECK: @up_1 = global i32 1024, align 4
int up_2 = __builtin_align_up(256, 32);
// CHECK: @up_2 = global i32 256, align 4
/// Capture the IR type here to use in the remaining FileCheck captures:
// CHECK: define {{[^@]+}}@get_type() #0
// CHECK-NEXT: entry:
// POINTER-NEXT: ret [[$TYPE:.+]] null
// INTEGER-NEXT: ret [[$TYPE:.+]] 0
//
TYPE get_type(void) {
return (TYPE)0;
}
// CHECK-LABEL: define {{[^@]+}}@is_aligned
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// POINTER-NEXT: [[PTR:%.*]] = ptrtoint [[$TYPE]] %ptr to i64
// CHECK-NEXT: [[SET_BITS:%.*]] = and [[ALIGN_TYPE]] [[PTR]], [[MASK]]
// CHECK-NEXT: [[IS_ALIGNED:%.*]] = icmp eq [[ALIGN_TYPE]] [[SET_BITS]], 0
// CHECK-NEXT: ret i1 [[IS_ALIGNED]]
//
_Bool is_aligned(TYPE ptr, unsigned align) {
return __builtin_is_aligned(ptr, align);
}
// CHECK-LABEL: define {{[^@]+}}@align_up
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// INTEGER-NEXT: [[OVER_BOUNDARY:%.*]] = add [[$TYPE]] [[PTR]], [[MASK]]
// NOTYET-POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = call [[$TYPE]] @llvm.ptrmask.p0[[$PTRTYPE]].p0i8.i64(i8* [[OVER_BOUNDARY]], [[ALIGN_TYPE]] [[INVERTED_MASK]])
// POINTER-NEXT: [[INTPTR:%.*]] = ptrtoint [[$TYPE]] [[PTR]] to [[ALIGN_TYPE]]
// POINTER-NEXT: [[OVER_BOUNDARY:%.*]] = add [[ALIGN_TYPE]] [[INTPTR]], [[MASK]]
// CHECK-NEXT: [[INVERTED_MASK:%.*]] = xor [[ALIGN_TYPE]] [[MASK]], -1
// CHECK-NEXT: [[ALIGNED_RESULT:%.*]] = and [[ALIGN_TYPE]] [[OVER_BOUNDARY]], [[INVERTED_MASK]]
// POINTER-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_RESULT]], [[INTPTR]]
// NON_I8_POINTER-NEXT: [[PTR:%.*]] = bitcast [[$TYPE]] {{%.*}} to i8*
// POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[PTR]], i64 [[DIFF]]
// NON_I8_POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = bitcast i8* {{%.*}} to [[$TYPE]]
// POINTER-NEXT: [[ASSUME_MASK:%.*]] = sub i64 %alignment, 1
// POINTER-NEXT: [[ASSUME_INTPTR:%.*]] = ptrtoint [[$TYPE]] [[ALIGNED_RESULT]] to i64
// POINTER-NEXT: [[MASKEDPTR:%.*]] = and i64 %ptrint, [[ASSUME_MASK]]
// POINTER-NEXT: [[MASKEDCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// POINTER-NEXT: call void @llvm.assume(i1 [[MASKEDCOND]])
// CHECK-NEXT: ret [[$TYPE]] [[ALIGNED_RESULT]]
//
TYPE align_up(TYPE ptr, unsigned align) {
return __builtin_align_up(ptr, align);
}
// CHECK-LABEL: define {{[^@]+}}@align_down
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// NOTYET-POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = call [[$TYPE]] @llvm.ptrmask.p0[[$PTRTYPE]].p0[[$PTRTYPE]].i64([[$TYPE]] [[PTR]], [[ALIGN_TYPE]] [[INVERTED_MASK]])
// POINTER-NEXT: [[INTPTR:%.*]] = ptrtoint [[$TYPE]] [[PTR]] to [[ALIGN_TYPE]]
// CHECK-NEXT: [[INVERTED_MASK:%.*]] = xor [[ALIGN_TYPE]] [[MASK]], -1
// POINTER-NEXT: [[ALIGNED_INTPTR:%.*]] = and [[ALIGN_TYPE]] [[INTPTR]], [[INVERTED_MASK]]
// POINTER-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_INTPTR]], [[INTPTR]]
// NON_I8_POINTER-NEXT: [[PTR:%.*]] = bitcast [[$TYPE]] {{%.*}} to i8*
// POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[PTR]], i64 [[DIFF]]
// NON_I8_POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = bitcast i8* {{%.*}} to [[$TYPE]]
// INTEGER-NEXT: [[ALIGNED_RESULT:%.*]] = and [[ALIGN_TYPE]] [[PTR]], [[INVERTED_MASK]]
// POINTER-NEXT: [[ASSUME_MASK:%.*]] = sub i64 %alignment, 1
// POINTER-NEXT: [[ASSUME_INTPTR:%.*]] = ptrtoint [[$TYPE]] [[ALIGNED_RESULT]] to i64
// POINTER-NEXT: [[MASKEDPTR:%.*]] = and i64 %ptrint, [[ASSUME_MASK]]
// POINTER-NEXT: [[MASKEDCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// POINTER-NEXT: call void @llvm.assume(i1 [[MASKEDCOND]])
// CHECK-NEXT: ret [[$TYPE]] [[ALIGNED_RESULT]]
//
TYPE align_down(TYPE ptr, unsigned align) {
return __builtin_align_down(ptr, align);
}