Adding an unsigned offset to a base pointer has undefined behavior if
the result of the expression would precede the base. An example from
@regehr:
int foo(char *p, unsigned offset) {
  return p + offset >= p; // This may be optimized to '1'.
}
foo(p, -1); // UB.
This patch extends the pointer overflow check in ubsan to detect invalid
unsigned pointer index expressions. It changes the instrumentation to
only permit non-negative offsets in pointer index expressions when all
of the GEP indices are unsigned.
Testing: check-llvm, check-clang run on a stage2, ubsan-instrumented
build.
Differential Revision: https://reviews.llvm.org/D33910
llvm-svn: 305216
Check pointer arithmetic for overflow.
For some more background on this check, see:
https://wdtz.org/catching-pointer-overflow-bugs.html
https://reviews.llvm.org/D20322
Patch by Will Dietz and John Regehr!
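A minimal sketch of the class of bug this check catches (identifiers hypothetical):
  #include <stddef.h>
  int wraps(char *p, size_t n) {
    // With -fsanitize=pointer-overflow, the wrap in 'p + n' is reported
    // at runtime instead of being silently folded away.
    return p + n < p;
  }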
This version of the patch is different from the original in a few ways:
- It introduces the EmitCheckedInBoundsGEP utility which inserts
checks when the pointer overflow check is enabled.
- It does some constant-folding to reduce instrumentation overhead.
- It does not check some GEPs in CGExprCXX. I'm not sure that
inserting checks here, or in CGClass, would catch many bugs.
Possible future directions for this check:
- Introduce CGF.EmitCheckedStructGEP, to detect overflows when
accessing structures.
Testing: Apart from the added lit test, I ran check-llvm and check-clang
with a stage2, ubsan-instrumented clang. Will and John have also done
extensive testing on numerous open source projects.
Differential Revision: https://reviews.llvm.org/D33305
llvm-svn: 304459
Alloca always returns a pointer in alloca address space, which may
be different from the type defined by the language. For example,
in C++ the auto variables are in the default address space. Therefore
cast alloca to the expected address space when necessary.
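A rough sketch of the effect, assuming an amdgcn-style target where allocas live in address space 5 while the language expects the default address space:
  void f() {
    int x = 0; // roughly: %x = alloca i32, align 4, addrspace(5)
               //          %x.ascast = addrspacecast i32 addrspace(5)* %x to i32*
    (void)x;   // all language-level uses of 'x' go through the cast
  }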
Differential Revision: https://reviews.llvm.org/D32248
llvm-svn: 303370
Sanitizer instrumentation generally needs to be marked with !nosanitize,
but we're not doing this properly for ubsan's overflow checks.
r213291 has more information about why this is needed.
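A sketch of properly-marked instrumentation (IR abbreviated, metadata id arbitrary):
  int add(int a, int b) {
    return a + b; // with -fsanitize=signed-integer-overflow, roughly:
                  //   %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b), !nosanitize !0
                  //   %ok = extractvalue { i32, i1 } %s, 1, !nosanitize !0
  }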
llvm-svn: 302598
Currently, ubsan emits overflow checks for arithmetic that is known to
be safe at compile-time, e.g.:
  1 + 1 => CheckedAdd(1, 1)
This leads to breakage when using the __builtin_prefetch intrinsic. LLVM
expects the arguments to @llvm.prefetch to be constant integers, and
when ubsan inserts unnecessary checks on the operands to the intrinsic,
this contract is broken, leading to verifier failures (see PR32874).
Instead of special-casing __builtin_prefetch for ubsan, this patch fixes
the underlying problem, i.e that clang currently emits unnecessary
overflow checks.
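A sketch of the kind of call that used to break (the exact PR32874 reproducer may differ):
  void touch(void *p) {
    // Both integer arguments must survive to @llvm.prefetch as constants;
    // wrapping the '1 + 1' in a checked add violated that contract.
    __builtin_prefetch(p, 0, 1 + 1);
  }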
Testing: I ran the check-clang and check-ubsan targets with a stage2,
ubsan-enabled build of clang. I added a regression test for PR32874, and
some extra checking to make sure we don't regress runtime checking for
unsafe arithmetic. The existing ubsan-promoted-arithmetic.cpp test also
provides coverage for this change.
llvm-svn: 301988
With this, FMF(contract) becomes an alternative way to express the request to
contract.
These are currently only propagated for FMul, FAdd and FSub. The rest will be
added as more FMFs are hooked up for this.
This is toward fixing PR25721.
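A sketch of the resulting IR (flag placement per the description above):
  float mad(float a, float b, float c) {
    return a * b + c; // with -ffp-contract=fast, roughly:
                      //   %mul = fmul contract float %a, %b
                      //   %add = fadd contract float %mul, %c
  }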
Differential Revision: https://reviews.llvm.org/D31168
llvm-svn: 299469
FPContractModeKind is the codegen option flag which is already ternary (off,
on, fast). This makes it universally the type for the contractable info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change is that before, fp-contractable on/off were
the only states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
---
This is a recommit of r299027 with an adjustment to the test
CodeGenCUDA/fp-contract.cu. The test assumed that even
though -ffp-contract=on is passed FE-based folding of FMA won't happen.
This is obviously wrong since the user is asking for this explicitly with the
option. CUDA is different in that -ffp-contract=fast is on by default.
The test used to "work" because contract=fast and contract=on were maintained
separately and we didn't fold in the FE because contract=fast was on due to
the target-default. This patch consolidates the contract=on/fast/off state
into a ternary state hence the change in behavior.
---
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299033
FPContractModeKind is the codegen option flag which is already ternary (off,
on, fast). This makes it universally the type for the contractable info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change is that before, fp-contractable on/off were
the only states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299027
Sema holds the current FPOptions which is adjusted by 'pragma STDC
FP_CONTRACT'. This then gets propagated into expression nodes as they are
built.
This encapsulates FPOptions so that this propagation happens opaquely rather
than directly with the fp_contractable on/off bit. This allows controlled
transitioning of fp_contractable to a ternary value (off, on, fast). It will
also allow adding more fast-math flags later.
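A sketch of the propagation this encapsulates:
  float f(float a, float b, float c) {
  #pragma STDC FP_CONTRACT ON
    // Sema's current FPOptions (contraction enabled here) is recorded on
    // the expression node as it is built.
    return a * b + c;
  }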
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31166
llvm-svn: 298877
Details:
Emit suspend expression which roughly looks like:
  auto && x = CommonExpr();
  if (!x.await_ready()) {
    llvm_coro_save();
    x.await_suspend(...);  (*)
    llvm_coro_suspend();   (**)
  }
  x.await_resume();
where the result of the entire expression is the result of x.await_resume()
(*) If x.await_suspend's return type is bool, it can veto the suspend:
  if (x.await_suspend(...))
    llvm_coro_suspend();
(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:
  %where-to = call i8 @llvm.coro.suspend(...)
  switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
    i8 0, label %yield.ready   ; go here when resumed
    i8 1, label %yield.cleanup ; go here when destroyed
  ]
llvm-svn: 298784
Teach UBSan to detect when a value with the _Nonnull type annotation
assumes a null value. Call expressions, initializers, assignments, and
return statements are all checked.
Because _Nonnull does not affect IRGen, the new checks are disabled by
default. The new driver flags are:
-fsanitize=nullability-arg (_Nonnull violation in call)
-fsanitize=nullability-assign (_Nonnull violation in assignment)
-fsanitize=nullability-return (_Nonnull violation in return stmt)
-fsanitize=nullability (all of the above)
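A sketch of what each new check diagnoses (identifiers hypothetical):
  void sink(int *_Nonnull q);
  int *_Nonnull forward(int *p) {
    return p;                     // nullability-return: fires if p is null
  }
  void demo(int *maybe_null) {
    int *_Nonnull r = maybe_null; // nullability-assign: fires if null
    sink(maybe_null);             // nullability-arg: fires if null
  }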
This patch builds on top of UBSan's existing support for detecting
violations of the nonnull attributes ('nonnull' and 'returns_nonnull'),
and relies on the compiler-rt support for those checks. Eventually we
will need to update the diagnostic messages in compiler-rt (there are
FIXME's for this, which will be addressed in a follow-up).
One point of note is that the nullability-return check is only allowed
to kick in if all arguments to the function satisfy their nullability
preconditions. This makes it necessary to emit some null checks in the
function body itself.
Testing: check-clang and check-ubsan. I also built some Apple ObjC
frameworks with an asserts-enabled compiler, and verified that we get
valid reports.
Differential Revision: https://reviews.llvm.org/D30762
llvm-svn: 297700
Summary:
Because of the existence of branches out of GNU statement expressions, it
is possible that emitting cleanups for a full expression may cause the
new insertion point to not be dominated by the result of the inner
expression. Consider this example:
  struct Foo { Foo(); ~Foo(); int x; };
  int g(Foo, int);
  int f(bool cond) {
    int n = g(Foo(), ({ if (cond) return 0; 42; }));
    return n;
  }
Before this change, result of the call to 'g' did not dominate its use
in the store to 'n'. The early return exit from the statement expression
branches to a shared cleanup block, which ends in a switch between the
fallthrough destination (the assignment to 'n') or the function exit
block.
This change solves the problem by spilling and reloading expression
evaluation results when any of the active cleanups have branches.
I audited the other call sites of enterFullExpression, and they don't
appear to keep any Values live across the site of the cleanup, except in
ARC code. I wasn't able to create a test case for ARC that exhibits this
problem, though.
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
2nd attempt: the first was in r296231, but it had a use-after-lifetime
bug.
Clang has logic to lower certain conditional expressions directly into llvm
select instructions. However, it does not emit the correct profile counter
increment as it does this: it emits an unconditional increment of the counter
for the 'then branch', even if the value selected is from the 'else branch'
(this is PR32019).
That means, given the following snippet, we would report that "0" is selected
twice, and that "1" is never selected:
  int f1(int x) {
    return x ? 0 : 1;
               ^2  ^0
  }
  f1(0);
  f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do the
proper increment.
llvm-svn: 296245
Clang has logic to lower certain conditional expressions directly into
llvm select instructions. However, it does not emit the correct profile
counter increment as it does this: it emits an unconditional increment
of the counter for the 'then branch', even if the value selected is from
the 'else branch' (this is PR32019).
That means, given the following snippet, we would report that "0" is
selected twice, and that "1" is never selected:
  int f1(int x) {
    return x ? 0 : 1;
               ^2  ^0
  }
  f1(0);
  f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do
the proper increment.
llvm-svn: 296231
Teach ubsan to diagnose remainder operations which have undefined
behavior due to signed overflow (e.g. INT_MIN % -1).
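A minimal reproducer sketch:
  #include <limits.h>
  int rem(int a, int b) { return a % b; }
  // rem(INT_MIN, -1) is now reported: the implied division overflows 'int'.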
Differential Revision: https://reviews.llvm.org/D29437
llvm-svn: 296214
C requires the operands of arithmetic expressions to be promoted if
their types are smaller than an int. Ubsan emits overflow checks when
this sort of type promotion occurs, even if there is no way to actually
get an overflow with the promoted type.
This patch teaches clang how to omit the superfluous overflow checks
(addressing PR20193).
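A sketch of a check that can now be omitted:
  int add_shorts(short a, short b) {
    // Both operands are promoted to 'int', and even SHRT_MAX + SHRT_MAX
    // fits comfortably in 'int', so no overflow check is needed.
    return a + b;
  }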
Testing: check-clang and check-ubsan.
Differential Revision: https://reviews.llvm.org/D29369
llvm-svn: 296213
This re-applies r293343 (reverts commit r293475) with a fix for an
assertion failure caused by a missing integer cast. I tested this patch
by using the built compiler to compile X86FastISel.cpp.o with ubsan.
Original commit message:
Ubsan does not report UB shifts in some cases where the shift exponent
needs to be truncated to match the type of the shift base. We perform a
range check on the truncated shift amount, leading to false negatives.
Fix the issue (PR27271) by performing the range check on the original
shift amount.
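A sketch of the false negative (the exact PR27271 reproducer may differ):
  int shl(int base, unsigned long long exp) {
    // 'exp' is truncated to 32 bits to match the shift; shl(1, 0x100000001ULL)
    // truncates to 1, so the old check passed even though the real exponent
    // is far out of range. The check now runs on the original value.
    return base << exp;
  }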
Differential Revision: https://reviews.llvm.org/D29234
llvm-svn: 293572
Ubsan does not report UB shifts in some cases where the shift exponent
needs to be truncated to match the type of the shift base. We perform a
range check on the truncated shift amount, leading to false negatives.
Fix the issue (PR27271) by performing the range check on the original
shift amount.
Differential Revision: https://reviews.llvm.org/D29234
llvm-svn: 293343
This reverts commit r290171. It triggers a bunch of warnings, because
the new enumerator isn't handled in all switches. We want a warning-free
build.
Replied on the commit with more details.
llvm-svn: 290173
Summary: Enabling the conversion of CLK_NULL_QUEUE to a variable of type queue_t.
Reviewers: Anastasia
Subscribers: cfe-commits, yaxunl, bader
Differential Revision: https://reviews.llvm.org/D27569
llvm-svn: 290171
This adds a way for us to version any UBSan handler by itself.
The patch overrides D21289 for a better implementation (we're able to
rev up a single handler).
After this, we can land a slight modification of D19667+D19668.
We probably don't want to keep all the versions in compiler-rt (maybe we
want to deprecate on one release and remove the old handler on the next
one?), but with this patch we will loudly fail to compile when mixing
incompatible handler calls, instead of silently compiling and then
providing bad error messages.
Reviewers: kcc, samsonov, rsmith, vsk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D21695
llvm-svn: 289444
Add two new AST nodes to represent initialization of an array in terms of the
initialization of each array element:
* ArrayInitLoopExpr is a prvalue of array type with two subexpressions:
a common expression (an OpaqueValueExpr) that represents the up-front
computation of the source of the initialization, and a subexpression
representing a per-element initializer
* ArrayInitIndexExpr is a prvalue of type size_t representing the current
position in the loop
This will be used to replace the creation of explicit index variables in lambda
capture of arrays and copy/move construction of classes with array elements,
and also C++17 structured bindings of arrays by value (which inexplicably allow
copying an array by value, unlike all of C++'s other array declarations).
No uses of these nodes are introduced by this change, however.
llvm-svn: 289413
In the amdgcn target, null pointers in the global, constant, and generic address spaces have the value 0, but null pointers in the private and local address spaces have the value -1. Currently LLVM assumes all null pointers have the value 0, which results in incorrectly translated IR. To work around this issue, instead of emitting null pointers in the local and private address spaces, a null pointer in the generic address space is emitted and cast to the local or private address space.
Tentative definitions of global variables with a non-zero initializer get weak linkage instead of common linkage, since common linkage requires a zero initializer and has no explicit section to hold the non-zero value.
Virtual member functions getNullPointer and performAddrSpaceCast are added to TargetCodeGenInfo; by default they return ConstantPointerNull and emit an addrspacecast instruction. A virtual member function getNullPointerValue is added to TargetInfo, which by default returns 0. Each target can override these virtual functions to provide the target-specific null pointer and null pointer value for a specific address space, and to perform target-specific translations for addrspacecast.
Wrapper functions getNullPointer (on CodeGenModule) and getTargetNullPointerValue (on ASTContext) are added to facilitate getting the target-specific null pointers and their values.
This change has no effect on other targets except amdgcn target. Other targets can provide support of non-zero null pointer in a similar way.
This change only provides support for non-zero null pointer for C and OpenCL. Supporting for other languages will be added later incrementally.
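A sketch for OpenCL on amdgcn (IR approximate):
  kernel void k(void) {
    // A null pointer in the local address space is not all-zero bits, so
    // roughly: addrspacecast (i8* null to i8 addrspace(3)*) is emitted
    // instead of a plain zero.
    local int *p = 0;
    (void)p;
  }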
Differential Revision: https://reviews.llvm.org/D26196
llvm-svn: 289252
__builtin_astype is used to cast OpenCL opaque types to other types; as such, it needs to be able to handle casting from and to pointer types correctly.
Currently it cannot handle 1) casting between pointers of different address spaces and 2) casting between pointer types and non-pointer types.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D25123
llvm-svn: 283114
On certain GPUs in the AMDGCN target, pointers in the private address space are 32-bit, but pointers in the other address spaces are 64-bit. The size_t type should be defined as 64-bit for these GPUs so that it can hold pointers from all address spaces. Also fixed issues in pointer arithmetic codegen by using a pointer-specific intptr type.
Differential Revision: https://reviews.llvm.org/D23361
llvm-svn: 279121
Use BB.getNextNode(), which returns nullptr on end(), instead of
&*BB.getIterator(), which is UB on end().
CodeGenFunction::createBasicBlock expects nullptr in this case already.
llvm-svn: 278898
Let the driver pass the option to the frontend. Do not set precision metadata for division instructions when this option is set. Set the function attribute "correctly-rounded-divide-sqrt-fp-math" based on this option.
Differential Revision: https://reviews.llvm.org/D22940
llvm-svn: 278155
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. Those backends have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to the target-specific sampler type.
This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a call to __translate_sampler_initializer. For each initialization of a function-scope sampler variable, it also generates a call to __translate_sampler_initializer.
Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and easily optimized to finally become some literal values directly put into image read instructions.
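A sketch of the lowering (IR shape and literal value approximate):
  constant sampler_t s = CLK_ADDRESS_CLAMP | CLK_NORMALIZED_COORDS_FALSE |
                         CLK_FILTER_NEAREST;
  // each use of 's' becomes roughly:
  //   call %opencl.sampler_t addrspace(2)*
  //     @__translate_sampler_initializer(i32 <constant literal>)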
This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).
Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
__builtin_astype does not generate correct LLVM IR for vec3 types. This patch inserts bitcasts to/from vec4 when necessary in addition to generating vector shuffle. Sema and codegen tests are added.
Differential Revision: http://reviews.llvm.org/D20133
llvm-svn: 272153
I couldn't find any documentation that this form existed either. Nor is there documentation for one of the remaining two forms, but there is a testcase that uses it.
llvm-svn: 269879
This patch corresponds to reviews:
http://reviews.llvm.org/D15120
http://reviews.llvm.org/D19125
It adds support for the __float128 keyword, literals and target feature to
enable it. Based on the latter of the two aforementioned reviews, this feature
is enabled on Linux on i386/X86 as well as SystemZ.
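A minimal usage sketch (on a target where the feature is enabled):
  __float128 twice(__float128 x) { return x + x; }
  __float128 q = 1.0Q; // 'Q'/'q' literal suffix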
This is also the second attempt at committing this feature. The first attempt
did not enable it on the required platforms, which caused failures when
compiling type_traits with -std=gnu++11.
If you see failures with compiling this header on your platform after this
commit, it is likely that your platform needs to have this feature enabled.
llvm-svn: 268898
Since this patch provided support for the __float128 type but disabled it
on all platforms by default, some platforms can't compile type_traits with
-std=gnu++11 since there is a specialization with __float128.
This reverts the patch until D19125 is approved (i.e. we know which platforms
need this support enabled).
llvm-svn: 266460
This patch corresponds to review:
http://reviews.llvm.org/D15120
It adds support for the __float128 keyword, literals and a target feature to
enable it. This support is disabled by default on all targets and any target
that has support for this type is free to add it.
Based on feedback that I've received from target maintainers, this appears to
be the right thing for most targets. I have not heard from the maintainers of
X86 which I believe supports this type. I will subsequently investigate the
impact of enabling this on X86.
llvm-svn: 266186
In codegen different address spaces may be mapped to the same address
space for a target, e.g. in x86/x86-64 all address spaces are mapped
to 0. Therefore AddressSpaceConversion should be translated by
CreatePointerBitCastOrAddrSpaceCast instead of CreateAddrSpaceCast.
Differential Revision: http://reviews.llvm.org/D18713
llvm-svn: 266107
Revert the two changes to thread CodeGenOptions into the TargetInfo allocation
and to fix the layering violation by moving CodeGenOptions into Basic.
Code Generation is arguably not particularly "basic". This addresses Richard's
post-commit review comments. This change purely does the mechanical revert and
will be followed up with an alternate approach to thread the desired information
into TargetInfo.
llvm-svn: 265806
This is a mechanical move of CodeGenOptions from libFrontend to libBasic. This
fixes the layering violation introduced earlier by threading CodeGenOptions into
TargetInfo. It should also fix the modules based self-hosting builds. NFC.
llvm-svn: 265702
Summary: See LLVM change D18775 for details, this change depends on it.
Reviewers: jyknight, reames
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18776
llvm-svn: 265569
Fixes PR11517 for SPARC.
On most targets, clang lowers va_arg itself, eschewing the use of the
llvm vaarg instruction. This is necessary (at least for now) as the type
argument to the vaarg instruction cannot represent all the ABI
information that is needed to support complex calling conventions.
However, on targets with simpler varargs ABIs, the LLVM instruction
can work just fine, and clang can simply lower to it. Unfortunately,
even on such targets, vaarg with a struct argument would fail, because
the default lowering to vaarg was naive: it didn't take into account the
ABI attribute computed by classifyArgumentType. In particular, for the
DefaultABIInfo, structs are supposed to be passed indirectly and so
llvm's vaarg instruction should be emitted with a pointer argument.
Now, vaarg instruction emission is able to use computed ABIArgInfo for
the provided argument type, which allows the default ABI support to work
for structs too.
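A sketch of a case this fixes (identifiers hypothetical):
  #include <stdarg.h>
  struct S { int a, b, c; };
  int first_field(int n, ...) {
    va_list ap;
    va_start(ap, n);
    // Under DefaultABIInfo the struct is passed indirectly, so the vaarg
    // instruction is now emitted with a pointer type and the result loaded.
    struct S s = va_arg(ap, struct S);
    va_end(ap);
    return s.a;
  }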
I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore,
although I believe both are now redundant, and could be switched over to
use the default implementation as well.
Differential Revision: http://reviews.llvm.org/D16154
llvm-svn: 261717
Emit calls to objc_unsafeClaimAutoreleasedReturnValue when
reclaiming a call result in order to ignore it or assign it
to an __unsafe_unretained variable. This avoids adding
an unwanted retain/release pair when the return value is
not actually returned autoreleased (e.g. when it is returned
from a nonatomic getter or a typical collection accessor).
This runtime function is only available on the latest Apple
OS releases; the backwards-compatibility story is that you
don't get the optimization unless your deployment target is
recent enough. Sorry.
rdar://20530049
llvm-svn: 258962
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly.
This patch makes us treat them correctly.
Additionally, this patch adds a new cast kind which allows a bool->int
cast to result in -1 or 0, instead of 1 or 0 (for true and false,
respectively), so we can sanely model OpenCL bool->int casts in the AST.
Differential Revision: http://reviews.llvm.org/D14877
llvm-svn: 257559
Currently, debug info for types used only in explicit casts is not emitted. This regressed after a patch for better alignment handling. This patch fixes the bug.
Differential Revision: http://reviews.llvm.org/D13582
llvm-svn: 250795
Currently codegen crashes trying to emit a cast to bool &. This happens because the bool type is converted to i1, and the lvalue for the reference is then converted to i1*. When codegen tries to load this lvalue, it crashes trying to load a value from the i1*.
Differential Revision: http://reviews.llvm.org/D13325
llvm-svn: 249534
OpenCL v1.1 s6.2.2: for the boolean value true, every bit in the result vector should be set.
This change treats the i1 value as signed for the purposes of performing the cast to integer,
and therefore sign extends into the result.
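A sketch of the fixed behavior (OpenCL):
  kernel void k(global int4 *out) {
    *out = (int4)true; // every bit of each element is set: -1, not 1
  }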
Patch by Neil Hickey!
http://reviews.llvm.org/D13349
llvm-svn: 249301
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
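A usage sketch (argument extraction via the ordinary __builtin_va_arg is assumed):
  int __attribute__((ms_abi)) sum(int n, ...) {
    __builtin_ms_va_list ap;
    __builtin_ms_va_start(ap, n);
    int total = 0;
    for (int i = 0; i < n; ++i)
      total += __builtin_va_arg(ap, int);
    __builtin_ms_va_end(ap);
    return total;
  }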
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Summary:
float_cast_overflow is the only UBSan check without a source location attached.
This patch propagates SourceLocations where necessary to get them to the
EmitCheck() call.
Reviewers: rsmith, ABataev, rjmccall
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11757
llvm-svn: 244568
This patch fixes bug 23800 ( https://llvm.org/bugs/show_bug.cgi?id=23800#c2 ). There existed a case where the index operand from extractelement was directly used to create a shufflevector mask. Since the index can be of any integral type but the mask must only contain 32-bit integers, a 64-bit index operand led to an assertion error later on.
Committed on behalf of mpflanzer (Moritz Pflanzer)
Differential Revision: http://reviews.llvm.org/D10838
llvm-svn: 243851
This causes programs compiled with this flag to print a diagnostic when
a control flow integrity check fails instead of aborting. Diagnostics are
printed using UBSan's runtime library.
The main motivation of this feature over -fsanitize=vptr is fidelity with
the -fsanitize=cfi implementation: the diagnostics are printed under exactly
the same conditions as those which would cause -fsanitize=cfi to abort the
program. This means that the same restrictions apply regarding compiling
all translation units with -fsanitize=cfi, cross-DSO virtual calls are
forbidden, etc.
Differential Revision: http://reviews.llvm.org/D10268
llvm-svn: 240109
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
Summary:
Make sure signed overflow in "x--" is checked with the
llvm.ssub.with.overflow intrinsic and is reported as:
  "-2147483648 - 1 cannot be represented in type 'int'"
instead of:
  "-2147483648 + -1 cannot be represented in type 'int'"
like we do for unsigned overflow.
Test Plan: clang + compiler-rt regression test suite
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D8236
llvm-svn: 235568
Adds atomic update codegen for the following forms of expressions:
  x binop= expr;
  x++;
  ++x;
  x--;
  --x;
  x = x binop expr;
  x = expr binop x;
If x and expr are of integer type, and binop is associative or x is the LHS in the RHS of the assignment expression, and atomics are allowed for the type of x on the target platform, an atomicrmw instruction is emitted.
Otherwise, a compare-and-swap sequence is emitted:
  bb:
    ...
    atomic load <x>
  cont:
    <expected> = phi [ <x>, label %bb ], [ <new_failed>, %cont ]
    <desired> = <expected> binop <expr>
    <res> = cmpxchg atomic &<x>, desired, expected
    <new_failed> = <res>.field1;
    br <res>.field2, label %exit, label %cont
  exit:
    ...
Differential Revision: http://reviews.llvm.org/D8536
llvm-svn: 233513
We previously defaulted to long double, but it's also possible to have
a half inc/dec amount, when LangOpts NativeHalfType is set.
Currently, that's only true for OpenCL.
llvm-svn: 233135
On AArch64, the -fallow-half-args-and-returns option is the default.
With it, the half type is considered legal (rather than the i16 used
normally for __fp16), but no operation is, except conversions and
load/stores and such.
The previous behavior was tantamount to saying LangOpts.NativeHalfType
was implied by LangOpts.HalfArgsAndReturns, which isn't true.
Instead, teach the various parts of CodeGen that already know about
half (using the intrinsics or not) about this weird in-between case,
where the "half" type is legal, but operations on it aren't.
This is a smaller intermediate step to the end-goal of removing the
intrinsic, always using "half", and letting the backend legalize.
Builds on r232968.
rdar://20045970, rdar://17468714
Differential Revision: http://reviews.llvm.org/D8367
llvm-svn: 232971
Fix the CodeGen so that for types bigger than float, instead of
converting to fp16 via the sequence "InTy -> float -> fp16", we
perform conversions in just one step. This avoids the double
rounding which potentially changes results from a natural
IEEE-754 operation.
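A sketch of the difference for a double operand:
  void narrow(double d, __fp16 *out) {
    // before: fptrunc double -> float, then float -> half (double rounding)
    // after:  a single fptrunc double -> half
    *out = (__fp16)d;
  }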
rdar://17594379, rdar://17468714
Differential Revision: http://reviews.llvm.org/D4602
Part of: http://reviews.llvm.org/D8367
llvm-svn: 232968
This scheme checks that pointer and lvalue casts are made to an object of
the correct dynamic type; that is, the dynamic type of the object must be
a derived class of the pointee type of the cast. The checks are currently
only introduced where the class being cast to is a polymorphic class.
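A sketch of a checked cast (identifiers hypothetical):
  struct Base { virtual ~Base() {} };
  struct Derived : Base { int x = 0; };
  int read(Base *b) {
    // The check verifies that *b's dynamic type really is (derived from)
    // Derived before the downcast result is used.
    return static_cast<Derived *>(b)->x;
  }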
Differential Revision: http://reviews.llvm.org/D8312
llvm-svn: 232241
This is a recommit of r231150, reverted in r231409. Turns out
that the -fsanitize=shift-base check implementation only works if the
shift exponent is valid, otherwise it contains undefined behavior
itself.
Make sure we check that exponent is valid before we proceed to
check the base. Make sure that we actually report invalid values
of base or exponent if -fsanitize=shift-base or
-fsanitize=shift-exponent is specified, respectively.
llvm-svn: 231711
It's not that easy. If we're only checking -fsanitize=shift-base we
still need to verify that the exponent has a sane value; otherwise
UBSan-inserted checks for base will contain undefined behavior
themselves.
llvm-svn: 231409
-fsanitize=shift is now a group that includes both these checks, so
existing users should not be affected.
This change introduces two new UBSan kinds that sanitize only left-hand
side and right-hand side of shift operation. In practice, invalid
exponent value (negative or too large) tends to cause more portability
problems, including inconsistencies between different compilers, crashes
and inadequate results on non-x86 architectures, etc. That is,
-fsanitize=shift-exponent failures should generally be addressed first.
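A sketch of the split (C semantics assumed for the base check):
  int f(int x, int k) {
    // -fsanitize=shift-exponent: fires when k < 0 or k >= bit-width of int
    // -fsanitize=shift-base:     fires when x < 0 or the result overflows
    return x << k;
  }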
As a bonus, this change simplifies CodeGen implementation for emitting left
shift (separate checks for base and exponent are now merged by the
existing generic logic in EmitCheck()), and LLVM IR for these checks
(the number of basic blocks is reduced).
llvm-svn: 231150
To handle default arguments in C++ in the debug info, we disable code
updating the debug location during the emission of default arguments.
This code was buggy in the case of default arguments which, themselves,
have default arguments - the inner default argument would re-enable
debug info when it was finished, but before the outer default argument
was finished.
This was already a bug, but it got worse (now a crasher instead of just
a quality bug) with the recent improvements to debug info line quality
because... The ApplyDebugLocation scoped device would find the debug
info disabled and not save any debug location. But then in
~ApplyDebugLocation it would find the debug info had been enabled and
would then apply the no-location. Then the outer function call would be
emitted without any location. That's bad.
Arguably we could /also/ fix the ApplyDebugLocation to assert on this
situation (where debug info was disabled in the ctor and enabled in the
dtor, or the other way around) but this is at least the necessary fix
regardless.
(also, I imagine this disabling behavior might need to be in-place for
CGExprComplex and CGExprAgg too, maybe... ?)
And I seem to recall seeing some weird default arg stepping behavior
recently which might be related to this too... I'll have to look into
it.
llvm-svn: 228053
distinction between the different use-cases. With the previous default
behavior we would occasionally emit empty debug locations in situations
where they actually were strictly required (= on invoke insns).
We now have a choice between defaulting to an empty location or an
artificial location.
Specifically, this fixes a bug caused by a missing debug location when
emitting C++ EH cleanup blocks from within an artificial function, such as
an ObjC destroy helper function.
rdar://problem/19670595
llvm-svn: 228003
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.
There were essentially 3 options for this:
* The beginning of an expression (this was the behavior prior to this
commit). This meant that stepping through subexpressions would bounce
around from subexpressions back to the start of the outer expression,
etc. (eg: x + y + z would go x, y, x, z, x (the repeated 'x's would be
where the actual addition occurred)).
* The end of an expression. This seems to be what GCC does /mostly/, and
certainly does for function calls. This has the advantage that
progress is always 'forwards' (never jumping backwards - except for
independent subexpressions if they're evaluated in interesting orders,
etc). "x + y + z" would go "x y z" with the additions occurring at y
and z after the respective loads.
The problem with this is that the user would still have to think
fairly hard about precedence to realize which subexpression is being
evaluated or which operator overload is being called in, say, an asan
backtrace.
* The preferred location or 'exprloc'. In this case you get sort of what
you'd expect, though it's a bit confusing in its own way due to going
'backwards'. In this case the locations would be: "x y + z +" in
lovely postfix arithmetic order. But this does mean that if the op+
were an operator overload, say, and in a backtrace, the backtrace will
point to the exact '+' that's being called, not to the end of one of
its operands.
(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)
llvm-svn: 227027
Several pieces of code were relying on implicit debug location setting
which usually lead to incorrect line information anyway. So I've fixed
those (in r225955 and r225845) separately which should pave the way for
this commit to be cleanly reapplied.
The reason these implicit dependencies resulted in crashes with this
patch is that the debug location would no longer implicitly leak from
one place to another, but be set back to invalid. Once a call with
no/invalid location was emitted, if that call was ever inlined it could
produce invalid debugloc chains and assert during LLVM's codegen.
There may be further cases of such bugs in this patch - they're hard to
flush out with regression testing, so I'll keep an eye out for reports
and investigate/fix them ASAP if they come up.
Original commit message:
Reapply "DebugInfo: Generalize debug info location handling"
Originally committed in r224385 and reverted in r224441 due to concerns
this change might've introduced a crash. Turns out this change fixes the
crash introduced by one of my earlier more specific location handling
changes (those specific fixes are reverted by this patch, in favor of
the more general solution).
Recommitted in r224941 and reverted in r224970 after it caused a crash
when building compiler-rt. Looks to be due to this change zeroing out
the debug location when emitting default arguments (which were meant to
inherit their outer expression's location) thus creating call
instructions without locations - these create problems for inlining and
must not be created. That is fixed and tested in this version of the
change.
Original commit message:
This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.
This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does... - I don't think I'm
going to tackle removing that just now.
I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.
Also these sort of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements, we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements' and
two statements on one line (without column info) are treated as one
statement.
I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.
llvm-svn: 225956
Summary:
The Mips ABIs treat pointers in the same way as integers. They are
sign-extended to 32-bit for O32, and 64-bit for N32/N64. This doesn't matter
for O32 and N64 where pointers are already the correct width but it does matter
for big-endian N32, where pointers are 32-bit and need promoting.
The caller side is already passing pointers correctly. This patch corrects the
callee.
Reviewers: vmedic, atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6812
llvm-svn: 225782
This reverts commit r225000, r225021, r225083, r225086, r225090.
The root change (r225000) still has several issues where it's caused
calls to be emitted without debug locations. This causes assertion
failures if/when those calls are inlined.
I'll work up some test cases and fixes before recommitting this.
llvm-svn: 225555
Originally committed in r224385 and reverted in r224441 due to concerns
this change might've introduced a crash. Turns out this change fixes the
crash introduced by one of my earlier more specific location handling
changes (those specific fixes are reverted by this patch, in favor of
the more general solution).
Recommitted in r224941 and reverted in r224970 after it caused a crash
when building compiler-rt. Looks to be due to this change zeroing out
the debug location when emitting default arguments (which were meant to
inherit their outer expression's location) thus creating call
instructions without locations - these create problems for inlining and
must not be created. That is fixed and tested in this version of the
change.
Original commit message:
This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.
This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does... - I don't think I'm
going to tackle removing that just now.
I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.
Also these sort of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements, we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements' and
two statements on one line (without column info) are treated as one
statement.
I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.
llvm-svn: 225000
Originally committed in r224385 and reverted in r224441 due to concerns
this change might've introduced a crash. Turns out this change fixes the
crash introduced by one of my earlier more specific location handling
changes (those specific fixes are reverted by this patch, in favor of
the more general solution).
Original commit message:
This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.
This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does... - I don't think I'm
going to tackle removing that just now.
I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.
Also these sort of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements, we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements' and
two statements on one line (without column info) are treated as one
statement.
I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.
llvm-svn: 224941
This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.
This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does... - I don't think I'm
going to tackle removing that just now.
I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.
Also these sort of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements, we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements' and
two statements on one line (without column info) are treated as one
statement.
I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.
llvm-svn: 224385