Add builtins for aligning and checking alignment of pointers and integers

This change introduces three new builtins (which work on both pointers
and integers) that can be used instead of common bitwise arithmetic:
__builtin_align_up(x, alignment), __builtin_align_down(x, alignment) and
__builtin_is_aligned(x, alignment).
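
For orientation, here is a minimal sketch of the bitwise forms that these
builtins replace (the helper names are invented for this illustration; the
builtins additionally accept pointer arguments and preserve their type):

  #include <stdbool.h>
  #include <stdint.h>

  /* Classical bit-manipulation equivalents, valid only when align is a
     power of two (which the builtins require): */
  static uintptr_t align_down_bits(uintptr_t x, uintptr_t align) {
    return x & ~(align - 1);
  }
  static uintptr_t align_up_bits(uintptr_t x, uintptr_t align) {
    return (x + align - 1) & ~(align - 1);
  }
  static bool is_aligned_bits(uintptr_t x, uintptr_t align) {
    return (x & (align - 1)) == 0;
  }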

I originally added these builtins to the CHERI fork of LLVM a few years ago
to handle the slightly different C semantics that we use for CHERI [1].
Until recently these builtins (or sequences of other builtins) were
required to generate correct code. I have since made changes to the default
C semantics so that they are no longer strictly necessary (but using them
does generate slightly more efficient code). However, based on our experience
using them in various projects over the past few years, I believe that adding
these builtins to clang would be useful.

These builtins have the following benefits over bit-manipulation and casts
via uintptr_t:

- The named builtins clearly convey the semantics of the operation. While
  checking alignment using __builtin_is_aligned(x, 16) versus
  ((x & 15) == 0) is probably not a huge win in readability, I personally find
  __builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including const qualifiers). When
  using casts via uintptr_t, it is easy to cast to the wrong type or strip
  qualifiers such as const.
- If the alignment argument is a constant value, clang can check that it is
  a power of two and within the range of the type. Since the semantics of
  these builtins are well defined compared to arbitrary bit manipulation,
  it is possible to add a UBSAN check that verifies the run-time value is a
  valid power of two. I intend to add this as a follow-up to this change.
- The builtins avoid int-to-pointer casts both in C and LLVM IR.
  In the future (i.e. once most optimizations handle it), we could use the new
  llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally
  be generated.
- They can be used to round up/down to the next aligned value for both
  integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g.
  roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation
  with a builtin call, we get improved diagnostics for many call sites while
  only having to change a few lines (see the sketch after this list).
- Finally, the builtins also emit assume_aligned metadata when used on pointers.
  This can improve code generation compared to the uintptr_t casts.
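
As a sketch of that adoption path (the macro bodies below follow the common
FreeBSD-style rounding pattern but are written here purely for illustration,
not copied from any particular tree):

  /* Before: classic power-of-two rounding macros based on bit manipulation. */
  /* #define roundup2(x, y)   (((x) + ((y) - 1)) & (~((y) - 1))) */
  /* #define rounddown2(x, y) ((x) & (~((y) - 1)))               */

  /* After: the call sites stay the same, but the compiler can now diagnose
     non-power-of-two alignments and the result keeps the (possibly
     const-qualified) pointer type of x. */
  #define roundup2(x, y)   __builtin_align_up((x), (y))
  #define rounddown2(x, y) __builtin_align_down((x), (y))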

[1] In our CHERI compiler we have a compilation mode where all pointers are
implemented as capabilities (essentially unforgeable 128-bit fat pointers).
In our original model, casts from uintptr_t (which is a 128-bit capability)
to an integer value returned the "offset" of the capability (i.e. the
difference between the virtual address and the base of the allocation).
This causes problems for cases such as checking the alignment: for example, the
expression `((uintptr_t)ptr & 63) == 0` is generally used to check if the
pointer is aligned to a multiple of 64 bytes. The problem with offsets is that
any pointer to the beginning of an allocation will have an offset of zero, so
this check always succeeds in that case (even if the address is not correctly
aligned). The same issues also exist when aligning up or down. Using the
alignment builtins ensure that the address is used instead of the offset. While
I have since changed the default C semantics to return the address instead of
the offset when casting, this offset compilation mode can still be used by
passing a command-line flag.

Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
Reviewed By: aaron.ballman, lebedev.ri
Differential Revision: https://reviews.llvm.org/D71499
Alex Richardson 2020-01-09 20:48:06 +00:00
parent 0f5f28d000
commit 8c387cbea7
13 changed files with 1025 additions and 11 deletions


@ -2509,6 +2509,79 @@ the invocation point is the same as the location of the builtin.
When the invocation point of ``__builtin_FUNCTION`` is not a function scope the
empty string is returned.
Alignment builtins
------------------

Clang provides builtins to support checking and adjusting alignment of
pointers and integers.
These builtins can be used to avoid relying on implementation-defined behavior
of arithmetic on integers derived from pointers.
Additionally, these builtins retain type information and, unlike bitwise
arithmetic, they can perform semantic checking on the alignment value.

**Syntax**:

.. code-block:: c

  Type __builtin_align_up(Type value, size_t alignment);
  Type __builtin_align_down(Type value, size_t alignment);
  bool __builtin_is_aligned(Type value, size_t alignment);

**Example of use**:

.. code-block:: c++

  char* global_alloc_buffer;
  void* my_aligned_allocator(size_t alloc_size, size_t alignment) {
    char* result = __builtin_align_up(global_alloc_buffer, alignment);
    // result now contains the value of global_alloc_buffer rounded up to the
    // next multiple of alignment.
    global_alloc_buffer = result + alloc_size;
    return result;
  }

  void* get_start_of_page(void* ptr) {
    return __builtin_align_down(ptr, PAGE_SIZE);
  }

  void example(char* buffer) {
    if (__builtin_is_aligned(buffer, 64)) {
      do_fast_aligned_copy(buffer);
    } else {
      do_unaligned_copy(buffer);
    }
  }

  // In addition to pointers, the builtins can also be used on integer types
  // and are evaluatable inside constant expressions.
  static_assert(__builtin_align_up(123, 64) == 128, "");
  static_assert(__builtin_align_down(123u, 64) == 64u, "");
  static_assert(!__builtin_is_aligned(123, 64), "");

**Description**:

The builtins ``__builtin_align_up`` and ``__builtin_align_down`` return their
first argument aligned up or down to the next multiple of the second argument.
If the value is already sufficiently aligned, it is returned unchanged.
The builtin ``__builtin_is_aligned`` returns whether the first argument is
aligned to a multiple of the second argument.
All of these builtins expect the alignment to be expressed as a number of bytes.

These builtins can be used for all integer types as well as (non-function)
pointer types. For pointer types, these builtins operate in terms of the integer
address of the pointer and return a new pointer of the same type (including
qualifiers such as ``const``) with an adjusted address.
When aligning pointers up or down, the resulting value must be within the same
underlying allocation or one past the end (see C17 6.5.6p8, C++ [expr.add]).
This means that arbitrary integer values stored in pointer-type variables must
not be passed to these builtins. For those use cases, the builtins can still be
used, but the operation must be performed on the pointer cast to ``uintptr_t``.

If Clang can determine that the alignment is not a power of two at compile time,
it will result in a compilation failure. If the alignment argument is not a
power of two at run time, the behavior of these builtins is undefined.
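
As an illustrative sketch of the ``uintptr_t`` pattern mentioned above (the
function below is an invented example and not part of this change):

.. code-block:: c

  #include <stddef.h>
  #include <stdint.h>

  // The argument holds an arbitrary integer that merely uses a pointer type
  // for storage, so it may not point into any allocation. Perform the
  // rounding on an integer and convert back afterwards.
  void *round_up_stored_integer(void *stored, size_t alignment) {
    uintptr_t value = (uintptr_t)stored;
    return (void *)__builtin_align_up(value, alignment);
  }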
Non-standard C++11 Attributes
=============================


@ -1476,6 +1476,11 @@ BUILTIN(__builtin_char_memchr, "c*cC*iz", "n")
BUILTIN(__builtin_dump_struct, "ivC*v*", "tn")
BUILTIN(__builtin_preserve_access_index, "v.", "t")
// Alignment builtins (uses custom parsing to support pointers and integers)
BUILTIN(__builtin_is_aligned, "bvC*z", "nct")
BUILTIN(__builtin_align_up, "v*vC*z", "nct")
BUILTIN(__builtin_align_down, "v*vC*z", "nct")
// Safestack builtins
BUILTIN(__builtin___get_unsafe_stack_start, "v*", "Fn")
BUILTIN(__builtin___get_unsafe_stack_bottom, "v*", "Fn")


@ -218,6 +218,14 @@ def note_constexpr_baa_insufficient_alignment : Note<
def note_constexpr_baa_value_insufficient_alignment : Note<
"value of the aligned pointer (%0) is not a multiple of the asserted %1 "
"%plural{1:byte|:bytes}1">;
def note_constexpr_invalid_alignment : Note<
"requested alignment %0 is not a positive power of two">;
def note_constexpr_alignment_too_big : Note<
"requested alignment must be %0 or less for type %1; %2 is invalid">;
def note_constexpr_alignment_compute : Note<
"cannot constant evaluate whether run-time alignment is at least %0">;
def note_constexpr_alignment_adjust : Note<
"cannot constant evaluate the result of adjusting alignment to %0">;
def note_constexpr_destroy_out_of_lifetime : Note<
"destroying object '%0' whose lifetime has already ended">;
def note_constexpr_unsupported_destruction : Note<


@ -2922,6 +2922,9 @@ def err_alignment_not_power_of_two : Error<
def err_alignment_dependent_typedef_name : Error<
"requested alignment is dependent but declaration is not dependent">;
def warn_alignment_builtin_useless : Warning<
"%select{aligning a value|the result of checking whether a value is aligned}0"
" to 1 byte is %select{a no-op|always true}0">, InGroup<TautologicalCompare>;
def err_attribute_aligned_too_great : Error<
"requested alignment must be %0 bytes or smaller">;
def warn_assume_aligned_too_great


@ -8175,6 +8175,42 @@ static CharUnits GetAlignOfExpr(EvalInfo &Info, const Expr *E,
return GetAlignOfType(Info, E->getType(), ExprKind);
}
static CharUnits getBaseAlignment(EvalInfo &Info, const LValue &Value) {
if (const auto *VD = Value.Base.dyn_cast<const ValueDecl *>())
return Info.Ctx.getDeclAlign(VD);
if (const auto *E = Value.Base.dyn_cast<const Expr *>())
return GetAlignOfExpr(Info, E, UETT_AlignOf);
return GetAlignOfType(Info, Value.Base.getTypeInfoType(), UETT_AlignOf);
}
/// Evaluate the value of the alignment argument to __builtin_align_{up,down},
/// __builtin_is_aligned and __builtin_assume_aligned.
static bool getAlignmentArgument(const Expr *E, QualType ForType,
EvalInfo &Info, APSInt &Alignment) {
if (!EvaluateInteger(E, Alignment, Info))
return false;
if (Alignment < 0 || !Alignment.isPowerOf2()) {
Info.FFDiag(E, diag::note_constexpr_invalid_alignment) << Alignment;
return false;
}
unsigned SrcWidth = Info.Ctx.getIntWidth(ForType);
APSInt MaxValue(APInt::getOneBitSet(SrcWidth, SrcWidth - 1));
if (APSInt::compareValues(Alignment, MaxValue) > 0) {
Info.FFDiag(E, diag::note_constexpr_alignment_too_big)
<< MaxValue << ForType << Alignment;
return false;
}
// Ensure both alignment and source value have the same bit width so that we
// don't assert when computing the resulting value.
APSInt ExtAlignment =
APSInt(Alignment.zextOrTrunc(SrcWidth), /*isUnsigned=*/true);
assert(APSInt::compareValues(Alignment, ExtAlignment) == 0 &&
"Alignment should not be changed by ext/trunc");
Alignment = ExtAlignment;
assert(Alignment.getBitWidth() == SrcWidth);
return true;
}
// To be clear: this happily visits unsupported builtins. Better name welcomed.
bool PointerExprEvaluator::visitNonBuiltinCallExpr(const CallExpr *E) {
if (ExprEvaluatorBaseTy::VisitCallExpr(E))
@ -8213,7 +8249,8 @@ bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
LValue OffsetResult(Result);
APSInt Alignment;
if (!getAlignmentArgument(E->getArg(1), E->getArg(0)->getType(), Info,
Alignment))
return false;
CharUnits Align = CharUnits::fromQuantity(Alignment.getZExtValue());
@ -8228,16 +8265,7 @@ bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
// If there is a base object, then it must have the correct alignment.
if (OffsetResult.Base) {
CharUnits BaseAlignment = getBaseAlignment(Info, OffsetResult);
if (BaseAlignment < Align) {
Result.Designator.setInvalid();
@ -8266,6 +8294,43 @@ bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
return true;
}
case Builtin::BI__builtin_align_up:
case Builtin::BI__builtin_align_down: {
if (!evaluatePointer(E->getArg(0), Result))
return false;
APSInt Alignment;
if (!getAlignmentArgument(E->getArg(1), E->getArg(0)->getType(), Info,
Alignment))
return false;
CharUnits BaseAlignment = getBaseAlignment(Info, Result);
CharUnits PtrAlign = BaseAlignment.alignmentAtOffset(Result.Offset);
// For align_up/align_down, we can return the same value if the alignment
// is known to be greater or equal to the requested value.
if (PtrAlign.getQuantity() >= Alignment)
return true;
// The alignment could be greater than the minimum at run-time, so we cannot
// infer much about the resulting pointer value. One case is possible:
// For `_Alignas(32) char buf[N]; __builtin_align_down(&buf[idx], 32)` we
// can infer the correct index if the requested alignment is smaller than
// the base alignment so we can perform the computation on the offset.
if (BaseAlignment.getQuantity() >= Alignment) {
assert(Alignment.getBitWidth() <= 64 &&
"Cannot handle > 64-bit address-space");
uint64_t Alignment64 = Alignment.getZExtValue();
CharUnits NewOffset = CharUnits::fromQuantity(
BuiltinOp == Builtin::BI__builtin_align_down
? llvm::alignDown(Result.Offset.getQuantity(), Alignment64)
: llvm::alignTo(Result.Offset.getQuantity(), Alignment64));
Result.adjustOffset(NewOffset - Result.Offset);
// TODO: diagnose out-of-bounds values/only allow for arrays?
return true;
}
// Otherwise, we cannot constant-evaluate the result.
Info.FFDiag(E->getArg(0), diag::note_constexpr_alignment_adjust)
<< Alignment;
return false;
}
case Builtin::BI__builtin_operator_new:
return HandleOperatorNewCall(Info, E, Result);
case Builtin::BI__builtin_launder:
@ -10564,6 +10629,33 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
return ExprEvaluatorBaseTy::VisitCallExpr(E);
}
static bool getBuiltinAlignArguments(const CallExpr *E, EvalInfo &Info,
APValue &Val, APSInt &Alignment) {
QualType SrcTy = E->getArg(0)->getType();
if (!getAlignmentArgument(E->getArg(1), SrcTy, Info, Alignment))
return false;
// Even though we are evaluating integer expressions we could get a pointer
// argument for the __builtin_is_aligned() case.
if (SrcTy->isPointerType()) {
LValue Ptr;
if (!EvaluatePointer(E->getArg(0), Ptr, Info))
return false;
Ptr.moveInto(Val);
} else if (!SrcTy->isIntegralOrEnumerationType()) {
Info.FFDiag(E->getArg(0));
return false;
} else {
APSInt SrcInt;
if (!EvaluateInteger(E->getArg(0), SrcInt, Info))
return false;
assert(SrcInt.getBitWidth() >= Alignment.getBitWidth() &&
"Bit widths must be the same");
Val = APValue(SrcInt);
}
assert(Val.hasValue());
return true;
}
bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
unsigned BuiltinOp) {
switch (unsigned BuiltinOp = E->getBuiltinCallee()) {
@ -10606,6 +10698,66 @@ bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
return Success(Layout.size().getQuantity(), E);
}
case Builtin::BI__builtin_is_aligned: {
APValue Src;
APSInt Alignment;
if (!getBuiltinAlignArguments(E, Info, Src, Alignment))
return false;
if (Src.isLValue()) {
// If we evaluated a pointer, check the minimum known alignment.
LValue Ptr;
Ptr.setFrom(Info.Ctx, Src);
CharUnits BaseAlignment = getBaseAlignment(Info, Ptr);
CharUnits PtrAlign = BaseAlignment.alignmentAtOffset(Ptr.Offset);
// We can return true if the known alignment at the computed offset is
// greater than the requested alignment.
assert(PtrAlign.isPowerOfTwo());
assert(Alignment.isPowerOf2());
if (PtrAlign.getQuantity() >= Alignment)
return Success(1, E);
// If the alignment is not known to be sufficient, some cases could still
// be aligned at run time. However, if the requested alignment is less or
// equal to the base alignment and the offset is not aligned, we know that
// the run-time value can never be aligned.
if (BaseAlignment.getQuantity() >= Alignment &&
PtrAlign.getQuantity() < Alignment)
return Success(0, E);
// Otherwise we can't infer whether the value is sufficiently aligned.
// TODO: __builtin_is_aligned(__builtin_align_{down,up{(expr, N), N)
// in cases where we can't fully evaluate the pointer.
Info.FFDiag(E->getArg(0), diag::note_constexpr_alignment_compute)
<< Alignment;
return false;
}
assert(Src.isInt());
return Success((Src.getInt() & (Alignment - 1)) == 0 ? 1 : 0, E);
}
case Builtin::BI__builtin_align_up: {
APValue Src;
APSInt Alignment;
if (!getBuiltinAlignArguments(E, Info, Src, Alignment))
return false;
if (!Src.isInt())
return Error(E);
APSInt AlignedVal =
APSInt((Src.getInt() + (Alignment - 1)) & ~(Alignment - 1),
Src.getInt().isUnsigned());
assert(AlignedVal.getBitWidth() == Src.getInt().getBitWidth());
return Success(AlignedVal, E);
}
case Builtin::BI__builtin_align_down: {
APValue Src;
APSInt Alignment;
if (!getBuiltinAlignArguments(E, Info, Src, Alignment))
return false;
if (!Src.isInt())
return Error(E);
APSInt AlignedVal =
APSInt(Src.getInt() & ~(Alignment - 1), Src.getInt().isUnsigned());
assert(AlignedVal.getBitWidth() == Src.getInt().getBitWidth());
return Success(AlignedVal, E);
}
case Builtin::BI__builtin_bswap16:
case Builtin::BI__builtin_bswap32:
case Builtin::BI__builtin_bswap64: {


@ -3490,6 +3490,13 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl GD, unsigned BuiltinID,
return EmitBuiltinNewDeleteCall(
E->getCallee()->getType()->castAs<FunctionProtoType>(), E, true);
case Builtin::BI__builtin_is_aligned:
return EmitBuiltinIsAligned(E);
case Builtin::BI__builtin_align_up:
return EmitBuiltinAlignTo(E, true);
case Builtin::BI__builtin_align_down:
return EmitBuiltinAlignTo(E, false);
case Builtin::BI__noop:
// __noop always evaluates to an integer literal zero.
return RValue::get(ConstantInt::get(IntTy, 0));
@ -14253,6 +14260,94 @@ CodeGenFunction::EmitNVPTXBuiltinExpr(unsigned BuiltinID, const CallExpr *E) {
}
}
struct BuiltinAlignArgs {
llvm::Value *Src = nullptr;
llvm::Type *SrcType = nullptr;
llvm::Value *Alignment = nullptr;
llvm::Value *Mask = nullptr;
llvm::IntegerType *IntType = nullptr;
BuiltinAlignArgs(const CallExpr *E, CodeGenFunction &CGF) {
QualType AstType = E->getArg(0)->getType();
if (AstType->isArrayType())
Src = CGF.EmitArrayToPointerDecay(E->getArg(0)).getPointer();
else
Src = CGF.EmitScalarExpr(E->getArg(0));
SrcType = Src->getType();
if (SrcType->isPointerTy()) {
IntType = IntegerType::get(
CGF.getLLVMContext(),
CGF.CGM.getDataLayout().getIndexTypeSizeInBits(SrcType));
} else {
assert(SrcType->isIntegerTy());
IntType = cast<llvm::IntegerType>(SrcType);
}
Alignment = CGF.EmitScalarExpr(E->getArg(1));
Alignment = CGF.Builder.CreateZExtOrTrunc(Alignment, IntType, "alignment");
auto *One = llvm::ConstantInt::get(IntType, 1);
Mask = CGF.Builder.CreateSub(Alignment, One, "mask");
}
};
/// Generate (x & (y-1)) == 0.
RValue CodeGenFunction::EmitBuiltinIsAligned(const CallExpr *E) {
BuiltinAlignArgs Args(E, *this);
llvm::Value *SrcAddress = Args.Src;
if (Args.SrcType->isPointerTy())
SrcAddress =
Builder.CreateBitOrPointerCast(Args.Src, Args.IntType, "src_addr");
return RValue::get(Builder.CreateICmpEQ(
Builder.CreateAnd(SrcAddress, Args.Mask, "set_bits"),
llvm::Constant::getNullValue(Args.IntType), "is_aligned"));
}
/// Generate (x & ~(y-1)) to align down or ((x+(y-1)) & ~(y-1)) to align up.
/// Note: For pointer types we can avoid ptrtoint/inttoptr pairs by using the
/// llvm.ptrmask intrinsic (with a GEP before in the align_up case).
/// TODO: actually use ptrmask once most optimization passes know about it.
RValue CodeGenFunction::EmitBuiltinAlignTo(const CallExpr *E, bool AlignUp) {
BuiltinAlignArgs Args(E, *this);
llvm::Value *SrcAddr = Args.Src;
if (Args.Src->getType()->isPointerTy())
SrcAddr = Builder.CreatePtrToInt(Args.Src, Args.IntType, "intptr");
llvm::Value *SrcForMask = SrcAddr;
if (AlignUp) {
// When aligning up we have to first add the mask to ensure we go over the
// next alignment value and then align down to the next valid multiple.
// By adding the mask, we ensure that align_up on an already aligned
// value will not change the value.
SrcForMask = Builder.CreateAdd(SrcForMask, Args.Mask, "over_boundary");
}
// Invert the mask to only clear the lower bits.
llvm::Value *InvertedMask = Builder.CreateNot(Args.Mask, "inverted_mask");
llvm::Value *Result =
Builder.CreateAnd(SrcForMask, InvertedMask, "aligned_result");
if (Args.Src->getType()->isPointerTy()) {
/// TODO: Use ptrmask instead of ptrtoint+gep once it is optimized well.
// Result = Builder.CreateIntrinsic(
// Intrinsic::ptrmask, {Args.SrcType, SrcForMask->getType(), Args.IntType},
// {SrcForMask, NegatedMask}, nullptr, "aligned_result");
Result->setName("aligned_intptr");
llvm::Value *Difference = Builder.CreateSub(Result, SrcAddr, "diff");
// The result must point to the same underlying allocation. This means we
// can use an inbounds GEP to enable better optimization.
Value *Base = EmitCastToVoidPtr(Args.Src);
if (getLangOpts().isSignedOverflowDefined())
Result = Builder.CreateGEP(Base, Difference, "aligned_result");
else
Result = EmitCheckedInBoundsGEP(Base, Difference,
/*SignedIndices=*/true,
/*isSubtraction=*/!AlignUp,
E->getExprLoc(), "aligned_result");
Result = Builder.CreatePointerCast(Result, Args.SrcType);
// Emit an alignment assumption to ensure that the new alignment is
// propagated to loads/stores, etc.
EmitAlignmentAssumption(Result, E, E->getExprLoc(), Args.Alignment);
}
assert(Result->getType() == Args.SrcType);
return RValue::get(Result);
}
Value *CodeGenFunction::EmitWebAssemblyBuiltinExpr(unsigned BuiltinID,
const CallExpr *E) {
switch (BuiltinID) {


@ -3731,6 +3731,11 @@ public:
/// Emit IR for __builtin_os_log_format.
RValue emitBuiltinOSLogFormat(const CallExpr &E);
/// Emit IR for __builtin_is_aligned.
RValue EmitBuiltinIsAligned(const CallExpr *E);
/// Emit IR for __builtin_align_up/__builtin_align_down.
RValue EmitBuiltinAlignTo(const CallExpr *E, bool AlignUp);
llvm::Function *generateBuiltinOSLogHelperFunction(
const analyze_os_log::OSLogBufferLayout &Layout,
CharUnits BufferAlignment);


@ -201,6 +201,87 @@ static bool SemaBuiltinPreserveAI(Sema &S, CallExpr *TheCall) {
return false;
}
/// Check that the value argument for __builtin_is_aligned(value, alignment) and
/// __builtin_align_{up,down}(value, alignment) is an integer or a pointer
/// type (but not a function pointer) and that the alignment is a power-of-two.
static bool SemaBuiltinAlignment(Sema &S, CallExpr *TheCall, unsigned ID) {
if (checkArgCount(S, TheCall, 2))
return true;
clang::Expr *Source = TheCall->getArg(0);
bool IsBooleanAlignBuiltin = ID == Builtin::BI__builtin_is_aligned;
auto IsValidIntegerType = [](QualType Ty) {
return Ty->isIntegerType() && !Ty->isEnumeralType() && !Ty->isBooleanType();
};
QualType SrcTy = Source->getType();
// We should also be able to use it with arrays (but not functions!).
if (SrcTy->canDecayToPointerType() && SrcTy->isArrayType()) {
SrcTy = S.Context.getDecayedType(SrcTy);
}
if ((!SrcTy->isPointerType() && !IsValidIntegerType(SrcTy)) ||
SrcTy->isFunctionPointerType()) {
// FIXME: this is not quite the right error message since we don't allow
// floating point types, or member pointers.
S.Diag(Source->getExprLoc(), diag::err_typecheck_expect_scalar_operand)
<< SrcTy;
return true;
}
clang::Expr *AlignOp = TheCall->getArg(1);
if (!IsValidIntegerType(AlignOp->getType())) {
S.Diag(AlignOp->getExprLoc(), diag::err_typecheck_expect_int)
<< AlignOp->getType();
return true;
}
Expr::EvalResult AlignResult;
unsigned MaxAlignmentBits = S.Context.getIntWidth(SrcTy) - 1;
// We can't check validity of alignment if it is type dependent.
if (!AlignOp->isInstantiationDependent() &&
AlignOp->EvaluateAsInt(AlignResult, S.Context,
Expr::SE_AllowSideEffects)) {
llvm::APSInt AlignValue = AlignResult.Val.getInt();
llvm::APSInt MaxValue(
llvm::APInt::getOneBitSet(MaxAlignmentBits + 1, MaxAlignmentBits));
if (AlignValue < 1) {
S.Diag(AlignOp->getExprLoc(), diag::err_alignment_too_small) << 1;
return true;
}
if (llvm::APSInt::compareValues(AlignValue, MaxValue) > 0) {
S.Diag(AlignOp->getExprLoc(), diag::err_alignment_too_big)
<< MaxValue.toString(10);
return true;
}
if (!AlignValue.isPowerOf2()) {
S.Diag(AlignOp->getExprLoc(), diag::err_alignment_not_power_of_two);
return true;
}
if (AlignValue == 1) {
S.Diag(AlignOp->getExprLoc(), diag::warn_alignment_builtin_useless)
<< IsBooleanAlignBuiltin;
}
}
ExprResult SrcArg = S.PerformCopyInitialization(
InitializedEntity::InitializeParameter(S.Context, SrcTy, false),
SourceLocation(), Source);
if (SrcArg.isInvalid())
return true;
TheCall->setArg(0, SrcArg.get());
ExprResult AlignArg =
S.PerformCopyInitialization(InitializedEntity::InitializeParameter(
S.Context, AlignOp->getType(), false),
SourceLocation(), AlignOp);
if (AlignArg.isInvalid())
return true;
TheCall->setArg(1, AlignArg.get());
// For align_up/align_down, the return type is the same as the (potentially
// decayed) argument type including qualifiers. For is_aligned(), the result
// is always bool.
TheCall->setType(IsBooleanAlignBuiltin ? S.Context.BoolTy : SrcTy);
return false;
}
static bool SemaBuiltinOverflow(Sema &S, CallExpr *TheCall) {
if (checkArgCount(S, TheCall, 3))
return true;
@ -1357,6 +1438,12 @@ Sema::CheckBuiltinFunctionCall(FunctionDecl *FDecl, unsigned BuiltinID,
if (SemaBuiltinAddressof(*this, TheCall))
return ExprError();
break;
case Builtin::BI__builtin_is_aligned:
case Builtin::BI__builtin_align_up:
case Builtin::BI__builtin_align_down:
if (SemaBuiltinAlignment(*this, TheCall, BuiltinID))
return ExprError();
break;
case Builtin::BI__builtin_add_overflow:
case Builtin::BI__builtin_sub_overflow:
case Builtin::BI__builtin_mul_overflow:


@ -0,0 +1,78 @@
// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
/// Check that the alignment builtins handle array-to-pointer decay
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -o - -emit-llvm %s | FileCheck %s
extern int func(char *c);
// CHECK-LABEL: define {{[^@]+}}@test_array() #0
// CHECK-NEXT: entry:
// CHECK-NEXT: [[BUF:%.*]] = alloca [1024 x i8], align 16
// CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [1024 x i8], [1024 x i8]* [[BUF]], i64 0, i64 44
// CHECK-NEXT: [[INTPTR:%.*]] = ptrtoint i8* [[ARRAYIDX]] to i64
// CHECK-NEXT: [[ALIGNED_INTPTR:%.*]] = and i64 [[INTPTR]], -16
// CHECK-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_INTPTR]], [[INTPTR]]
// CHECK-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[ARRAYIDX]], i64 [[DIFF]]
// CHECK-NEXT: [[PTRINT:%.*]] = ptrtoint i8* [[ALIGNED_RESULT]] to i64
// CHECK-NEXT: [[MASKEDPTR:%.*]] = and i64 [[PTRINT]], 15
// CHECK-NEXT: [[MASKCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// CHECK-NEXT: call void @llvm.assume(i1 [[MASKCOND]])
// CHECK-NEXT: [[CALL:%.*]] = call i32 @func(i8* [[ALIGNED_RESULT]])
// CHECK-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds [1024 x i8], [1024 x i8]* [[BUF]], i64 0, i64 22
// CHECK-NEXT: [[INTPTR2:%.*]] = ptrtoint i8* [[ARRAYIDX1]] to i64
// CHECK-NEXT: [[OVER_BOUNDARY:%.*]] = add i64 [[INTPTR2]], 31
// CHECK-NEXT: [[ALIGNED_INTPTR4:%.*]] = and i64 [[OVER_BOUNDARY]], -32
// CHECK-NEXT: [[DIFF5:%.*]] = sub i64 [[ALIGNED_INTPTR4]], [[INTPTR2]]
// CHECK-NEXT: [[ALIGNED_RESULT6:%.*]] = getelementptr inbounds i8, i8* [[ARRAYIDX1]], i64 [[DIFF5]]
// CHECK-NEXT: [[PTRINT7:%.*]] = ptrtoint i8* [[ALIGNED_RESULT6]] to i64
// CHECK-NEXT: [[MASKEDPTR8:%.*]] = and i64 [[PTRINT7]], 31
// CHECK-NEXT: [[MASKCOND9:%.*]] = icmp eq i64 [[MASKEDPTR8]], 0
// CHECK-NEXT: call void @llvm.assume(i1 [[MASKCOND9]])
// CHECK-NEXT: [[CALL10:%.*]] = call i32 @func(i8* [[ALIGNED_RESULT6]])
// CHECK-NEXT: [[ARRAYIDX11:%.*]] = getelementptr inbounds [1024 x i8], [1024 x i8]* [[BUF]], i64 0, i64 16
// CHECK-NEXT: [[SRC_ADDR:%.*]] = ptrtoint i8* [[ARRAYIDX11]] to i64
// CHECK-NEXT: [[SET_BITS:%.*]] = and i64 [[SRC_ADDR]], 63
// CHECK-NEXT: [[IS_ALIGNED:%.*]] = icmp eq i64 [[SET_BITS]], 0
// CHECK-NEXT: [[CONV:%.*]] = zext i1 [[IS_ALIGNED]] to i32
// CHECK-NEXT: ret i32 [[CONV]]
//
int test_array(void) {
char buf[1024];
func(__builtin_align_down(&buf[44], 16));
func(__builtin_align_up(&buf[22], 32));
return __builtin_is_aligned(&buf[16], 64);
}
// CHECK-LABEL: define {{[^@]+}}@test_array_should_not_mask() #0
// CHECK-NEXT: entry:
// CHECK-NEXT: [[BUF:%.*]] = alloca [1024 x i8], align 32
// CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [1024 x i8], [1024 x i8]* [[BUF]], i64 0, i64 64
// CHECK-NEXT: [[INTPTR:%.*]] = ptrtoint i8* [[ARRAYIDX]] to i64
// CHECK-NEXT: [[ALIGNED_INTPTR:%.*]] = and i64 [[INTPTR]], -16
// CHECK-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_INTPTR]], [[INTPTR]]
// CHECK-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[ARRAYIDX]], i64 [[DIFF]]
// CHECK-NEXT: [[PTRINT:%.*]] = ptrtoint i8* [[ALIGNED_RESULT]] to i64
// CHECK-NEXT: [[MASKEDPTR:%.*]] = and i64 [[PTRINT]], 15
// CHECK-NEXT: [[MASKCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// CHECK-NEXT: call void @llvm.assume(i1 [[MASKCOND]])
// CHECK-NEXT: [[CALL:%.*]] = call i32 @func(i8* [[ALIGNED_RESULT]])
// CHECK-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds [1024 x i8], [1024 x i8]* [[BUF]], i64 0, i64 32
// CHECK-NEXT: [[INTPTR2:%.*]] = ptrtoint i8* [[ARRAYIDX1]] to i64
// CHECK-NEXT: [[OVER_BOUNDARY:%.*]] = add i64 [[INTPTR2]], 31
// CHECK-NEXT: [[ALIGNED_INTPTR4:%.*]] = and i64 [[OVER_BOUNDARY]], -32
// CHECK-NEXT: [[DIFF5:%.*]] = sub i64 [[ALIGNED_INTPTR4]], [[INTPTR2]]
// CHECK-NEXT: [[ALIGNED_RESULT6:%.*]] = getelementptr inbounds i8, i8* [[ARRAYIDX1]], i64 [[DIFF5]]
// CHECK-NEXT: [[PTRINT7:%.*]] = ptrtoint i8* [[ALIGNED_RESULT6]] to i64
// CHECK-NEXT: [[MASKEDPTR8:%.*]] = and i64 [[PTRINT7]], 31
// CHECK-NEXT: [[MASKCOND9:%.*]] = icmp eq i64 [[MASKEDPTR8]], 0
// CHECK-NEXT: call void @llvm.assume(i1 [[MASKCOND9]])
// CHECK-NEXT: [[CALL10:%.*]] = call i32 @func(i8* [[ALIGNED_RESULT6]])
// CHECK-NEXT: ret i32 1
//
int test_array_should_not_mask(void) {
_Alignas(32) char buf[1024];
// TODO: The align_up and align_down calls should be folded to no-ops
func(__builtin_align_down(&buf[64], 16));
func(__builtin_align_up(&buf[32], 32));
// This expression can be constant-evaluated:
return __builtin_is_aligned(&buf[64], 32);
}


@ -0,0 +1,12 @@
/// Check that the new alignment set by the alignment builtins is propagated
/// to e.g. llvm.memcpy calls.
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown %s -emit-llvm -O1 -o - | FileCheck %s
// CHECK-LABEL: define {{[^@]+}}@align_up
// CHECK: call void @llvm.memcpy.p0i8.p0i8.i64(i8* nonnull align 64 dereferenceable(16) {{%.+}}, i8* nonnull align 1 dereferenceable(16) {{%.+}}, i64 16, i1 false)
// CHECK-NEXT: ret void
//
void align_up(void* data, int* ptr) {
// The call to llvm.memcpy should have an "align 64" on the first argument
__builtin_memcpy(__builtin_align_up(ptr, 64), data, 16);
}


@ -0,0 +1,127 @@
/// Check the code generation for the alignment builtins
/// To make the test case easier to read, run SROA after generating IR to remove the alloca instructions.
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_VOID_PTR \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,POINTER,ALIGNMENT_EXT \
// RUN: -enable-var-scope '-D$PTRTYPE=i8'
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_FLOAT_PTR \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,POINTER,NON_I8_POINTER,ALIGNMENT_EXT \
// RUN: -enable-var-scope '-D$PTRTYPE=f32'
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_LONG \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,INTEGER,ALIGNMENT_EXT -enable-var-scope
/// Check that we can handle the case where the alignment parameter is wider
/// than the source type (generate a trunc on alignment instead of zext)
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -DTEST_USHORT \
// RUN: -o - -emit-llvm %s -disable-O0-optnone | opt -S -sroa | \
// RUN: FileCheck %s -check-prefixes CHECK,INTEGER,ALIGNMENT_TRUNC -enable-var-scope
#ifdef TEST_VOID_PTR
#define TYPE void *
#elif defined(TEST_FLOAT_PTR)
#define TYPE float *
#elif defined(TEST_LONG)
#define TYPE long
#elif defined(TEST_CAP)
#define TYPE void *__capability
#elif defined(TEST_USHORT)
#define TYPE unsigned short
#else
#error MISSING TYPE
#endif
/// Check that constant initializers work and are correct
_Bool aligned_true = __builtin_is_aligned(1024, 512);
// CHECK: @aligned_true = global i8 1, align 1
_Bool aligned_false = __builtin_is_aligned(123, 512);
// CHECK: @aligned_false = global i8 0, align 1
int down_1 = __builtin_align_down(1023, 32);
// CHECK: @down_1 = global i32 992, align 4
int down_2 = __builtin_align_down(256, 32);
// CHECK: @down_2 = global i32 256, align 4
int up_1 = __builtin_align_up(1023, 32);
// CHECK: @up_1 = global i32 1024, align 4
int up_2 = __builtin_align_up(256, 32);
// CHECK: @up_2 = global i32 256, align 4
/// Capture the IR type here to use in the remaining FileCheck captures:
// CHECK: define {{[^@]+}}@get_type() #0
// CHECK-NEXT: entry:
// POINTER-NEXT: ret [[$TYPE:.+]] null
// INTEGER-NEXT: ret [[$TYPE:.+]] 0
//
TYPE get_type(void) {
return (TYPE)0;
}
// CHECK-LABEL: define {{[^@]+}}@is_aligned
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// POINTER-NEXT: [[PTR:%.*]] = ptrtoint [[$TYPE]] %ptr to i64
// CHECK-NEXT: [[SET_BITS:%.*]] = and [[ALIGN_TYPE]] [[PTR]], [[MASK]]
// CHECK-NEXT: [[IS_ALIGNED:%.*]] = icmp eq [[ALIGN_TYPE]] [[SET_BITS]], 0
// CHECK-NEXT: ret i1 [[IS_ALIGNED]]
//
_Bool is_aligned(TYPE ptr, unsigned align) {
return __builtin_is_aligned(ptr, align);
}
// CHECK-LABEL: define {{[^@]+}}@align_up
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// INTEGER-NEXT: [[OVER_BOUNDARY:%.*]] = add [[$TYPE]] [[PTR]], [[MASK]]
// NOTYET-POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = call [[$TYPE]] @llvm.ptrmask.p0[[$PTRTYPE]].p0i8.i64(i8* [[OVER_BOUNDARY]], [[ALIGN_TYPE]] [[INVERTED_MASK]])
// POINTER-NEXT: [[INTPTR:%.*]] = ptrtoint [[$TYPE]] [[PTR]] to [[ALIGN_TYPE]]
// POINTER-NEXT: [[OVER_BOUNDARY:%.*]] = add [[ALIGN_TYPE]] [[INTPTR]], [[MASK]]
// CHECK-NEXT: [[INVERTED_MASK:%.*]] = xor [[ALIGN_TYPE]] [[MASK]], -1
// CHECK-NEXT: [[ALIGNED_RESULT:%.*]] = and [[ALIGN_TYPE]] [[OVER_BOUNDARY]], [[INVERTED_MASK]]
// POINTER-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_RESULT]], [[INTPTR]]
// NON_I8_POINTER-NEXT: [[PTR:%.*]] = bitcast [[$TYPE]] {{%.*}} to i8*
// POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[PTR]], i64 [[DIFF]]
// NON_I8_POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = bitcast i8* {{%.*}} to [[$TYPE]]
// POINTER-NEXT: [[ASSUME_MASK:%.*]] = sub i64 %alignment, 1
// POINTER-NEXT: [[ASSUME_INTPTR:%.*]]= ptrtoint [[$TYPE]] [[ALIGNED_RESULT]] to i64
// POINTER-NEXT: [[MASKEDPTR:%.*]] = and i64 %ptrint, [[ASSUME_MASK]]
// POINTER-NEXT: [[MASKEDCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// POINTER-NEXT: call void @llvm.assume(i1 [[MASKEDCOND]])
// CHECK-NEXT: ret [[$TYPE]] [[ALIGNED_RESULT]]
//
TYPE align_up(TYPE ptr, unsigned align) {
return __builtin_align_up(ptr, align);
}
// CHECK-LABEL: define {{[^@]+}}@align_down
// CHECK-SAME: ([[$TYPE]] {{[^%]*}}[[PTR:%.*]], i32 [[ALIGN:%.*]]) #0
// CHECK-NEXT: entry:
// ALIGNMENT_EXT-NEXT: [[ALIGNMENT:%.*]] = zext i32 [[ALIGN]] to [[ALIGN_TYPE:i64]]
// ALIGNMENT_TRUNC-NEXT: [[ALIGNMENT:%.*]] = trunc i32 [[ALIGN]] to [[ALIGN_TYPE:i16]]
// CHECK-NEXT: [[MASK:%.*]] = sub [[ALIGN_TYPE]] [[ALIGNMENT]], 1
// NOTYET-POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = call [[$TYPE]] @llvm.ptrmask.p0[[$PTRTYPE]].p0[[$PTRTYPE]].i64([[$TYPE]] [[PTR]], [[ALIGN_TYPE]] [[INVERTED_MASK]])
// POINTER-NEXT: [[INTPTR:%.*]] = ptrtoint [[$TYPE]] [[PTR]] to [[ALIGN_TYPE]]
// CHECK-NEXT: [[INVERTED_MASK:%.*]] = xor [[ALIGN_TYPE]] [[MASK]], -1
// POINTER-NEXT: [[ALIGNED_INTPTR:%.*]] = and [[ALIGN_TYPE]] [[INTPTR]], [[INVERTED_MASK]]
// POINTER-NEXT: [[DIFF:%.*]] = sub i64 [[ALIGNED_INTPTR]], [[INTPTR]]
// NON_I8_POINTER-NEXT: [[PTR:%.*]] = bitcast [[$TYPE]] {{%.*}} to i8*
// POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = getelementptr inbounds i8, i8* [[PTR]], i64 [[DIFF]]
// NON_I8_POINTER-NEXT: [[ALIGNED_RESULT:%.*]] = bitcast i8* {{%.*}} to [[$TYPE]]
// INTEGER-NEXT: [[ALIGNED_RESULT:%.*]] = and [[ALIGN_TYPE]] [[PTR]], [[INVERTED_MASK]]
// POINTER-NEXT: [[ASSUME_MASK:%.*]] = sub i64 %alignment, 1
// POINTER-NEXT: [[ASSUME_INTPTR:%.*]]= ptrtoint [[$TYPE]] [[ALIGNED_RESULT]] to i64
// POINTER-NEXT: [[MASKEDPTR:%.*]] = and i64 %ptrint, [[ASSUME_MASK]]
// POINTER-NEXT: [[MASKEDCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
// POINTER-NEXT: call void @llvm.assume(i1 [[MASKEDCOND]])
// CHECK-NEXT: ret [[$TYPE]] [[ALIGNED_RESULT]]
//
TYPE align_down(TYPE ptr, unsigned align) {
return __builtin_align_down(ptr, align);
}


@ -0,0 +1,133 @@
// RUN: %clang_cc1 -triple x86_64-linux-gnu -DALIGN_BUILTIN=__builtin_align_down -DRETURNS_BOOL=0 %s -fsyntax-only -verify -Wpedantic
// RUN: %clang_cc1 -triple x86_64-linux-gnu -DALIGN_BUILTIN=__builtin_align_up -DRETURNS_BOOL=0 %s -fsyntax-only -verify -Wpedantic
// RUN: %clang_cc1 -triple x86_64-linux-gnu -DALIGN_BUILTIN=__builtin_is_aligned -DRETURNS_BOOL=1 %s -fsyntax-only -verify -Wpedantic
struct Aggregate {
int i;
int j;
};
enum Enum { EnumValue1,
EnumValue2 };
typedef __SIZE_TYPE__ size_t;
void test_parameter_types(char *ptr, size_t size) {
struct Aggregate agg;
enum Enum e = EnumValue2;
_Bool b = 0;
// The first parameter can be any pointer or integer type:
(void)ALIGN_BUILTIN(ptr, 4);
(void)ALIGN_BUILTIN(size, 2);
(void)ALIGN_BUILTIN(12345, 2);
(void)ALIGN_BUILTIN(agg, 2); // expected-error {{operand of type 'struct Aggregate' where arithmetic or pointer type is required}}
(void)ALIGN_BUILTIN(e, 2); // expected-error {{operand of type 'enum Enum' where arithmetic or pointer type is required}}
(void)ALIGN_BUILTIN(b, 2); // expected-error {{operand of type '_Bool' where arithmetic or pointer type is required}}
(void)ALIGN_BUILTIN((int)e, 2); // but with a cast it is fine
(void)ALIGN_BUILTIN((int)b, 2); // but with a cast it is fine
// The second parameter must be an integer type (but not enum or _Bool):
(void)ALIGN_BUILTIN(ptr, size);
(void)ALIGN_BUILTIN(ptr, ptr); // expected-error {{used type 'char *' where integer is required}}
(void)ALIGN_BUILTIN(ptr, agg); // expected-error {{used type 'struct Aggregate' where integer is required}}
(void)ALIGN_BUILTIN(ptr, b); // expected-error {{used type '_Bool' where integer is required}}
(void)ALIGN_BUILTIN(ptr, e); // expected-error {{used type 'enum Enum' where integer is required}}
(void)ALIGN_BUILTIN(ptr, (int)e); // but with a cast enums are fine
(void)ALIGN_BUILTIN(ptr, (int)b); // but with a cast booleans are fine
(void)ALIGN_BUILTIN(ptr, size);
(void)ALIGN_BUILTIN(size, size);
}
void test_result_unused(int i, int align) {
// -Wunused-result does not trigger for macros so we can't use ALIGN_BUILTIN()
// but need to explicitly call each function.
__builtin_align_up(i, align); // expected-warning{{ignoring return value of function declared with const attribute}}
__builtin_align_down(i, align); // expected-warning{{ignoring return value of function declared with const attribute}}
__builtin_is_aligned(i, align); // expected-warning{{ignoring return value of function declared with const attribute}}
ALIGN_BUILTIN(i, align); // no warning here
}
#define check_same_type(type1, type2) __builtin_types_compatible_p(type1, type2) && __builtin_types_compatible_p(type1 *, type2 *)
void test_return_type(void *ptr, int i, long l) {
char array[32];
__extension__ typedef typeof(ALIGN_BUILTIN(ptr, 4)) result_type_ptr;
__extension__ typedef typeof(ALIGN_BUILTIN(i, 4)) result_type_int;
__extension__ typedef typeof(ALIGN_BUILTIN(l, 4)) result_type_long;
__extension__ typedef typeof(ALIGN_BUILTIN(array, 4)) result_type_char_array;
#if RETURNS_BOOL
_Static_assert(check_same_type(_Bool, result_type_ptr), "Should return bool");
_Static_assert(check_same_type(_Bool, result_type_int), "Should return bool");
_Static_assert(check_same_type(_Bool, result_type_long), "Should return bool");
_Static_assert(check_same_type(_Bool, result_type_char_array), "Should return bool");
#else
_Static_assert(check_same_type(void *, result_type_ptr), "Should return void*");
_Static_assert(check_same_type(int, result_type_int), "Should return int");
_Static_assert(check_same_type(long, result_type_long), "Should return long");
// Check that we can use the alignment builtins on array types (result should decay)
_Static_assert(check_same_type(char *, result_type_char_array),
"Using the builtins on an array should yield the decayed type");
#endif
}
void test_invalid_alignment_values(char *ptr, long *longptr, size_t align) {
int x = 1;
(void)ALIGN_BUILTIN(ptr, 2);
(void)ALIGN_BUILTIN(longptr, 1024);
(void)ALIGN_BUILTIN(x, 32);
(void)ALIGN_BUILTIN(ptr, 0); // expected-error {{requested alignment must be 1 or greater}}
(void)ALIGN_BUILTIN(ptr, 1);
#if RETURNS_BOOL
// expected-warning@-2 {{checking whether a value is aligned to 1 byte is always true}}
#else
// expected-warning@-4 {{aligning a value to 1 byte is a no-op}}
#endif
(void)ALIGN_BUILTIN(ptr, 3); // expected-error {{requested alignment is not a power of 2}}
(void)ALIGN_BUILTIN(x, 7); // expected-error {{requested alignment is not a power of 2}}
// check the maximum range for smaller types:
__UINT8_TYPE__ c = ' ';
(void)ALIGN_BUILTIN(c, 128); // this is fine
(void)ALIGN_BUILTIN(c, 256); // expected-error {{requested alignment must be 128 or smaller}}
(void)ALIGN_BUILTIN(x, 1ULL << 31); // this is also fine
(void)ALIGN_BUILTIN(x, 1LL << 31); // this is also fine
__INT32_TYPE__ i32 = 3;
__UINT32_TYPE__ u32 = 3;
// Maximum is the same for int32 and uint32
(void)ALIGN_BUILTIN(i32, 1ULL << 32); // expected-error {{requested alignment must be 2147483648 or smaller}}
(void)ALIGN_BUILTIN(u32, 1ULL << 32); // expected-error {{requested alignment must be 2147483648 or smaller}}
(void)ALIGN_BUILTIN(ptr, ((__int128)1) << 65); // expected-error {{requested alignment must be 9223372036854775808 or smaller}}
(void)ALIGN_BUILTIN(longptr, ((__int128)1) << 65); // expected-error {{requested alignment must be 9223372036854775808 or smaller}}
const int bad_align = 8 + 1;
(void)ALIGN_BUILTIN(ptr, bad_align); // expected-error {{requested alignment is not a power of 2}}
}
// Check that it can be used in constant expressions:
void constant_expression(int x) {
_Static_assert(__builtin_is_aligned(1024, 512), "");
_Static_assert(!__builtin_is_aligned(256, 512ULL), "");
_Static_assert(__builtin_align_up(33, 32) == 64, "");
_Static_assert(__builtin_align_down(33, 32) == 32, "");
// But not if one of the arguments isn't constant:
_Static_assert(ALIGN_BUILTIN(33, x) != 100, ""); // expected-error {{static_assert expression is not an integral constant expression}}
_Static_assert(ALIGN_BUILTIN(x, 4) != 100, ""); // expected-error {{static_assert expression is not an integral constant expression}}
}
// Check that it is a constant expression that can be assigned to globals:
int global1 = __builtin_align_down(33, 8);
int global2 = __builtin_align_up(33, 8);
_Bool global3 = __builtin_is_aligned(33, 8);
extern void test_ptr(char *c);
char *test_array_and_fnptr(void) {
char buf[1024];
// The builtins should also work on arrays (decaying the return type)
(void)(ALIGN_BUILTIN(buf, 16));
// But not on functions and function pointers:
(void)(ALIGN_BUILTIN(test_array_and_fnptr, 16)); // expected-error{{operand of type 'char *(void)' where arithmetic or pointer type is required}}
(void)(ALIGN_BUILTIN(&test_array_and_fnptr, 16)); // expected-error{{operand of type 'char *(*)(void)' where arithmetic or pointer type is required}}
}


@ -0,0 +1,236 @@
// C++-specific checks for the alignment builtins
// RUN: %clang_cc1 -triple=x86_64-unknown-unknown -std=c++11 -o - %s -fsyntax-only -verify
// Check that we don't crash when using dependent types in __builtin_align:
template <typename a, a b>
void *c(void *d) { // expected-note{{candidate template ignored}}
return __builtin_align_down(d, b);
}
struct x {};
x foo;
void test(void *value) {
c<int, 16>(value);
c<struct x, foo>(value); // expected-error{{no matching function for call to 'c'}}
}
template <typename T, long Alignment, long ArraySize = 16>
void test_templated_arguments() {
T array[ArraySize]; // expected-error{{variable has incomplete type 'fwddecl'}}
static_assert(__is_same(decltype(__builtin_align_up(array, Alignment)), T *), // expected-error{{requested alignment is not a power of 2}}
"return type should be the decayed array type");
static_assert(__is_same(decltype(__builtin_align_down(array, Alignment)), T *),
"return type should be the decayed array type");
static_assert(__is_same(decltype(__builtin_is_aligned(array, Alignment)), bool),
"return type should be bool");
T *x1 = __builtin_align_up(array, Alignment);
T *x2 = __builtin_align_down(array, Alignment);
bool x3 = __builtin_align_up(array, Alignment);
}
void test() {
test_templated_arguments<int, 32>(); // fine
test_templated_arguments<struct fwddecl, 16>();
// expected-note@-1{{in instantiation of function template specialization 'test_templated_arguments<fwddecl, 16, 16>'}}
// expected-note@-2{{forward declaration of 'fwddecl'}}
test_templated_arguments<int, 7>(); // invalid alignment value
// expected-note@-1{{in instantiation of function template specialization 'test_templated_arguments<int, 7, 16>'}}
}
template <typename T>
void test_incorrect_alignment_without_instatiation(T value) {
int array[32];
static_assert(__is_same(decltype(__builtin_align_up(array, 31)), int *), // expected-error{{requested alignment is not a power of 2}}
"return type should be the decayed array type");
static_assert(__is_same(decltype(__builtin_align_down(array, 7)), int *), // expected-error{{requested alignment is not a power of 2}}
"return type should be the decayed array type");
static_assert(__is_same(decltype(__builtin_is_aligned(array, -1)), bool), // expected-error{{requested alignment must be 1 or greater}}
"return type should be bool");
__builtin_align_up(array); // expected-error{{too few arguments to function call, expected 2, have 1}}
__builtin_align_up(array, 31); // expected-error{{requested alignment is not a power of 2}}
__builtin_align_down(array, 31); // expected-error{{requested alignment is not a power of 2}}
__builtin_align_up(array, 31); // expected-error{{requested alignment is not a power of 2}}
__builtin_align_up(value, 31); // This shouldn't warn since the type is dependent
__builtin_align_up(value); // Same here
}
// The original fix for the issue above broke some legitimate code.
// Here is a regression test:
typedef __SIZE_TYPE__ size_t;
void *allocate_impl(size_t size);
template <typename T>
T *allocate() {
constexpr size_t allocation_size =
__builtin_align_up(sizeof(T), sizeof(void *));
return static_cast<T *>(
__builtin_assume_aligned(allocate_impl(allocation_size), sizeof(void *)));
}
struct Foo {
int value;
};
void *test2() {
return allocate<struct Foo>();
}
// Check that pointers-to-members cannot be used:
class MemPtr {
public:
int data;
void func();
virtual void vfunc();
};
void test_member_ptr() {
__builtin_align_up(&MemPtr::data, 64); // expected-error{{operand of type 'int MemPtr::*' where arithmetic or pointer type is required}}
__builtin_align_down(&MemPtr::func, 64); // expected-error{{operand of type 'void (MemPtr::*)()' where arithmetic or pointer type is required}}
__builtin_is_aligned(&MemPtr::vfunc, 64); // expected-error{{operand of type 'void (MemPtr::*)()' where arithmetic or pointer type is required}}
}
void test_references(Foo &i) {
// Check that the builtins look at the referenced type rather than the reference itself.
(void)__builtin_align_up(i, 64); // expected-error{{operand of type 'Foo' where arithmetic or pointer type is required}}
(void)__builtin_align_up(static_cast<Foo &>(i), 64); // expected-error{{operand of type 'Foo' where arithmetic or pointer type is required}}
(void)__builtin_align_up(static_cast<const Foo &>(i), 64); // expected-error{{operand of type 'const Foo' where arithmetic or pointer type is required}}
(void)__builtin_align_up(static_cast<Foo &&>(i), 64); // expected-error{{operand of type 'Foo' where arithmetic or pointer type is required}}
(void)__builtin_align_up(static_cast<const Foo &&>(i), 64); // expected-error{{operand of type 'const Foo' where arithmetic or pointer type is required}}
(void)__builtin_align_up(&i, 64);
}
// Check that constexpr wrapper functions can be constant-evaluated.
template <typename T>
constexpr bool wrap_is_aligned(T ptr, long align) {
return __builtin_is_aligned(ptr, align);
// expected-note@-1{{requested alignment -3 is not a positive power of two}}
// expected-note@-2{{requested alignment 19 is not a positive power of two}}
// expected-note@-3{{requested alignment must be 128 or less for type 'char'; 4194304 is invalid}}
}
template <typename T>
constexpr T wrap_align_up(T ptr, long align) {
return __builtin_align_up(ptr, align);
// expected-note@-1{{requested alignment -2 is not a positive power of two}}
// expected-note@-2{{requested alignment 18 is not a positive power of two}}
// expected-note@-3{{requested alignment must be 2147483648 or less for type 'int'; 8589934592 is invalid}}
// expected-error@-4{{operand of type 'bool' where arithmetic or pointer type is required}}
}
template <typename T>
constexpr T wrap_align_down(T ptr, long align) {
return __builtin_align_down(ptr, align);
// expected-note@-1{{requested alignment -1 is not a positive power of two}}
// expected-note@-2{{requested alignment 17 is not a positive power of two}}
// expected-note@-3{{requested alignment must be 32768 or less for type 'short'; 1048576 is invalid}}
}
constexpr int a1 = wrap_align_up(22, 32);
static_assert(a1 == 32, "");
constexpr int a2 = wrap_align_down(22, 16);
static_assert(a2 == 16, "");
constexpr bool a3 = wrap_is_aligned(22, 32);
static_assert(!a3, "");
static_assert(wrap_align_down(wrap_align_up(22, 16), 32) == 32, "");
static_assert(wrap_is_aligned(wrap_align_down(wrap_align_up(22, 16), 32), 32), "");
static_assert(!wrap_is_aligned(wrap_align_down(wrap_align_up(22, 16), 32), 64), "");
constexpr long const_value(long l) { return l; }
// Check some invalid values during constant-evaluation
static_assert(wrap_align_down(1, const_value(-1)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_down(1, -1)'}}
static_assert(wrap_align_up(1, const_value(-2)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_up(1, -2)'}}
static_assert(wrap_is_aligned(1, const_value(-3)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_is_aligned(1, -3)'}}
static_assert(wrap_align_down(1, const_value(17)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_down(1, 17)'}}
static_assert(wrap_align_up(1, const_value(18)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_up(1, 18)'}}
static_assert(wrap_is_aligned(1, const_value(19)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_is_aligned(1, 19)'}}
// Check invalid values for smaller types:
static_assert(wrap_align_down(static_cast<short>(1), const_value(1 << 20)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_down(1, 1048576)'}}
// Check a too-large alignment value for a 32-bit source type
static_assert(wrap_align_up(static_cast<int>(1), const_value(1ull << 33)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_align_up(1, 8589934592)'}}
static_assert(wrap_is_aligned(static_cast<char>(1), const_value(1 << 22)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in call to 'wrap_is_aligned(1, 4194304)'}}
// Check invalid boolean type
static_assert(wrap_align_up(static_cast<bool>(1), const_value(1 << 21)), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{in instantiation of function template specialization 'wrap_align_up<bool>' requested here}}
// Check constant evaluation for pointers:
_Alignas(32) char align32array[128];
static_assert(&align32array[0] == &align32array[0], "");
// __builtin_align_up/down can be constant evaluated as a no-op for values
// that are known to have greater alignment:
static_assert(__builtin_align_up(&align32array[0], 32) == &align32array[0], "");
static_assert(__builtin_align_up(&align32array[0], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[0], 4) == __builtin_align_up(&align32array[0], 8), "");
// But it can not be evaluated if the alignment is greater than the minimum
// known alignment, since in that case the value might be the same if it happens
// to actually be aligned to 64 bytes at run time.
static_assert(&align32array[0] == __builtin_align_up(&align32array[0], 64), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{cannot constant evaluate the result of adjusting alignment to 64}}
static_assert(__builtin_align_up(&align32array[0], 64) == __builtin_align_up(&align32array[0], 64), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{cannot constant evaluate the result of adjusting alignment to 64}}
// However, we can compute in case the requested alignment is less than the
// base alignment:
static_assert(__builtin_align_up(&align32array[0], 4) == &align32array[0], "");
static_assert(__builtin_align_up(&align32array[1], 4) == &align32array[4], "");
static_assert(__builtin_align_up(&align32array[2], 4) == &align32array[4], "");
static_assert(__builtin_align_up(&align32array[3], 4) == &align32array[4], "");
static_assert(__builtin_align_up(&align32array[4], 4) == &align32array[4], "");
static_assert(__builtin_align_up(&align32array[5], 4) == &align32array[8], "");
static_assert(__builtin_align_up(&align32array[6], 4) == &align32array[8], "");
static_assert(__builtin_align_up(&align32array[7], 4) == &align32array[8], "");
static_assert(__builtin_align_up(&align32array[8], 4) == &align32array[8], "");
static_assert(__builtin_align_down(&align32array[0], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[1], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[2], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[3], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[4], 4) == &align32array[4], "");
static_assert(__builtin_align_down(&align32array[5], 4) == &align32array[4], "");
static_assert(__builtin_align_down(&align32array[6], 4) == &align32array[4], "");
static_assert(__builtin_align_down(&align32array[7], 4) == &align32array[4], "");
static_assert(__builtin_align_down(&align32array[8], 4) == &align32array[8], "");
// Achieving the same thing using casts to uintptr_t is not allowed:
static_assert((char *)((__UINTPTR_TYPE__)&align32array[7] & ~3) == &align32array[4], ""); // expected-error{{not an integral constant expression}}
static_assert(__builtin_align_down(&align32array[1], 4) == &align32array[0], "");
static_assert(__builtin_align_down(&align32array[1], 64) == &align32array[0], ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{cannot constant evaluate the result of adjusting alignment to 64}}
// Add some checks for __builtin_is_aligned:
static_assert(__builtin_is_aligned(&align32array[0], 32), "");
static_assert(__builtin_is_aligned(&align32array[4], 4), "");
// We cannot constant evaluate whether the array is aligned to > 32 since this
// may well be true at run time.
static_assert(!__builtin_is_aligned(&align32array[0], 64), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{cannot constant evaluate whether run-time alignment is at least 64}}
// However, if the alignment being checked is less than the minimum alignment of
// the base object we can check the low bits of the alignment:
static_assert(__builtin_is_aligned(&align32array[0], 4), "");
static_assert(!__builtin_is_aligned(&align32array[1], 4), "");
static_assert(!__builtin_is_aligned(&align32array[2], 4), "");
static_assert(!__builtin_is_aligned(&align32array[3], 4), "");
static_assert(__builtin_is_aligned(&align32array[4], 4), "");
// TODO: this should evaluate to true even though we can't evaluate the result
// of __builtin_align_up() to a concrete value
static_assert(__builtin_is_aligned(__builtin_align_up(&align32array[0], 64), 64), ""); // expected-error{{not an integral constant expression}}
// expected-note@-1{{cannot constant evaluate the result of adjusting alignment to 64}}
// Check different source and alignment type widths are handled correctly.
static_assert(!__builtin_is_aligned(static_cast<signed long>(7), static_cast<signed short>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<signed short>(7), static_cast<signed long>(4)), "");
// Also check signed -- unsigned mismatch.
static_assert(!__builtin_is_aligned(static_cast<signed long>(7), static_cast<signed long>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<unsigned long>(7), static_cast<unsigned long>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<signed long>(7), static_cast<unsigned long>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<unsigned long>(7), static_cast<signed long>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<signed long>(7), static_cast<unsigned short>(4)), "");
static_assert(!__builtin_is_aligned(static_cast<unsigned short>(7), static_cast<signed long>(4)), "");