Several places were still treating the Attribute object as representing
multiple attributes. Those places now use the AttributeSet to represent
multiple attributes.
llvm-svn: 174004
implementation; this is much more in line with the original implementation
(i.e., pre-ubsan) and does not require run-time library support.
The trapping implementation can be invoked using either '-fcatch-undefined-behavior'
or '-fsanitize=undefined-trap -fsanitize-undefined-trap-on-error', with the latter
being preferred. Eventually, the '-fcatch-undefined-behavior' flag will be removed.
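As a rough illustration (hypothetical source file, not part of the patch), the trapping mode can be exercised like this:

  /* clang -fsanitize=undefined-trap -fsanitize-undefined-trap-on-error shift.c */
  int shift(int x, int amount) {
    /* An out-of-range shift amount now reaches a trap instruction instead of a runtime handler. */
    return x << amount;
  }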
llvm-svn: 173848
When we are visiting the extern declaration of 'i' in
static int i = 99;
int foo() {
  extern int i;
  return i;
}
We should not try to handle it as if it were a function static. That is, we
must consider the written storage class.
Fixing this then exposes that the assert in EmitGlobalVarDeclLValue and the
if leading to its call are not completely accurate. They were passing before
because the second decl was marked as having external storage. I changed them
to check the linkage, which I find easier to understand.
Last but not least, there is something strange going on with CUDA and OpenCL.
My guess is that the linkage computation for these languages needs to be
audited, but I didn't want to change that in this patch so I just updated
the storage classes to keep the current behavior.
Thanks to Reed Kotler for reporting this.
llvm-svn: 170827
We were emitting calls to blocks as if all arguments were
required --- i.e. with signature (A,B,C,D,...) rather than
(A,B,...). This patch fixes that and accounts for the
implicit block-context argument as a required argument.
In addition, this patch changes the function type under which
we call unprototyped functions on platforms like x86-64 that
guarantee compatibility of variadic functions with unprototyped
function types; previously we would always call such functions
under the LLVM type T (...)*, but now we will call them under
the type T (A,B,C,D,...)*. This last change should have no
material effect except for making the type conventions more
explicit; it was a side-effect of the most convenient implementation.
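A sketch of the second change (hypothetical example; the IR types in the comment are illustrative):

  void takes_anything();      /* unprototyped (K&R-style) declaration */
  void caller(void) {
    /* Previously called through "void (...)*"; now called through "void (i32, double)*". */
    takes_anything(1, 2.0);
  }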
llvm-svn: 169588
generally support the C++11 memory model requirements for bitfield
accesses by relying more heavily on LLVM's memory model.
The primary change this introduces is to move from a manually aligned
and strided access pattern across the bits of the bitfield to a much
simpler lump access of all bits in the bitfield followed by math to
extract the bits relevant for the particular field.
This simplifies the code significantly, but relies on LLVM to
intelligently lower these integers.
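As an illustration (hypothetical example), a load of one field now reads the whole storage unit and extracts the relevant bits with shifts and masks:

  struct S {
    unsigned a : 3;
    unsigned b : 11;
    unsigned c : 18;
  };
  unsigned get_b(struct S *s) {
    /* One 32-bit load of the shared storage unit, then shift/mask, instead of narrower strided accesses. */
    return s->b;
  }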
I have tested LLVM's lowering both synthetically and in benchmarks. The
lowering appears to be functional, and there are no really significant
performance regressions. Different code patterns accessing bitfields
will vary in how this impacts them. The only real regressions I'm seeing
are a few patterns where the LLVM code generation for loads that feed
directly into a mask operation doesn't take advantage of the x86 ability
to do a smaller load and a cheap zero-extension. This doesn't regress
any benchmark in the nightly test suite on my box past the noise
threshold, but my box is quite noisy. I'll be watching the LNT numbers,
and will look into further improvements to the LLVM lowering as needed.
llvm-svn: 169489
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
objc_loadWeak. This retains and autoreleases the weakly-referenced
object. This hidden autorelease sometimes keeps a __weak variable alive even
after the weak reference is erased, because the object is still referenced
by an autorelease pool. This patch overcomes this behavior by loading a
weak object via a call to objc_loadWeakRetained(), following it with objc_release
at the appropriate place, thereby removing the hidden autorelease. // rdar://10849570
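A rough C-level sketch of the call sequence now emitted (the declarations below are illustrative, and use() is a hypothetical helper):

  typedef struct objc_object *id;
  extern id objc_loadWeakRetained(id *location);   /* runtime call used by the new lowering */
  extern void objc_release(id value);
  extern void use(id value);                       /* hypothetical consumer of the loaded object */
  void read_weak(id *w) {
    id tmp = objc_loadWeakRetained(w);   /* previously objc_loadWeak(w), which also autoreleased */
    use(tmp);
    objc_release(tmp);                   /* explicit release replaces the hidden autorelease */
  }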
llvm-svn: 168740
checks to enable. Remove frontend support for -fcatch-undefined-behavior,
-faddress-sanitizer and -fthread-sanitizer now that they don't do anything.
llvm-svn: 167413
We want the diagnostic, and if the load is optimized away, we still want to
trap it. Stop checking non-default address spaces; that doesn't work in
general.
llvm-svn: 167219
initialized by a reference constant expression.
Our odr-use modeling still needs work here: we don't yet implement the 'set of
potential results of an expression' DR.
llvm-svn: 166361
Convert the uses of the Attributes class over to the new format. The
Attributes::get method call now takes an LLVM context so that the attributes
object can be uniquified and stored.
llvm-svn: 165918
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.
llvm-svn: 163451
(__builtin_* etc.) so that it isn't possible to take their address.
Specifically, introduce a new type to represent a reference to a builtin
function, and a new cast kind to convert it to a function pointer in the
operand of a call. Fixes PR13195.
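For example (hypothetical snippet), direct calls keep working while taking the address is now diagnosed:

  void f(void) {
    __builtin_trap();                   /* direct calls are still fine */
    void (*p)(void) = __builtin_trap;   /* now rejected: a builtin's address cannot be taken */
  }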
llvm-svn: 162962
* when checking that a pointer or reference refers to appropriate storage for a type, also check the alignment and perform a null check
* check that references are bound to appropriate storage
* check that 'this' has appropriate storage in member accesses and member function calls
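A hedged sketch of the kind of code the first of these checks instruments (hypothetical example):

  int read_int(const char *buf) {
    const int *p = (const int *)(buf + 1);   /* may be null or misaligned for 'int' */
    return *p;                               /* the pointer is checked for null and alignment before the load */
  }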
llvm-svn: 162523
in the ABI arrangement, and leave a hook behind so that we can easily
tweak CCs on platforms that use different CCs by default for C++
instance methods.
llvm-svn: 159894
if we want to ignore a result, the Dest will be null. Otherwise,
we must copy into it. This means we need to ensure a slot when
loading from a volatile l-value.
With all that in place, fix a bug with chained assignments into
__block variables of aggregate type where we were losing insight into
the actual source of the value during the second assignment.
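A rough sketch of the pattern that was affected (hypothetical example; requires -fblocks):

  typedef struct { int v[4]; } Agg;
  static void run(void (^blk)(void)) { blk(); }
  void test(void) {
    __block Agg a, b;
    Agg c = { { 1, 2, 3, 4 } };
    a = b = c;                               /* the second assignment previously lost track of the value's real source */
    run(^{ (void)a.v[0]; (void)b.v[0]; });   /* captured so the variables genuinely need __block */
  }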
llvm-svn: 159630
Heavily based on a patch from
Aaron Wishnick <aaron.s.wishnick@gmail.com>.
I'll clean up the duplicated function in CodeGen as
a follow-up, later today or tomorrow.
llvm-svn: 159060
When enabled, clang generates bounds checks for array and pointer dereferences. Work to follow in LLVM's backend.
OK'ed by Chad; thanks for the review.
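A hedged sketch of the kind of dereference that gets instrumented (hypothetical example; the option's spelling is not shown in this message):

  static int table[4];
  int get(unsigned i) {
    return table[i];   /* with the option enabled, a bounds check guards this load */
  }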
llvm-svn: 156431
remove the comparison of objectsize with -1. Since it's an unsigned comparison, it will always succeed if objectsize returns -1, which is enough to have the check removed.
llvm-svn: 156311
and only consider using __cxa_atexit in the Itanium logic. The
default logic is to use atexit().
Emit "guarded" initializers in Microsoft mode unconditionally.
This is definitely not correct, but it's closer to correct than
just not emitting the initializer.
Based on a patch by Timur Iskhodzhanov!
llvm-svn: 155894
thinking of generalizing it to be able to specify other freedoms beyond accuracy
(such as that NaNs don't have to be respected). I'd like the 3.1 release (the
first one with this metadata) to have the more generic name already rather than
having to auto-upgrade it in 3.2.
llvm-svn: 154745
__atomic_test_and_set, __atomic_clear, plus a pile of undocumented __GCC_*
predefined macros.
Implement library fallback for __atomic_is_lock_free and
__c11_atomic_is_lock_free, and implement __atomic_always_lock_free.
Contrary to their documentation, GCC's __atomic_fetch_add family doesn't
multiply the operand by sizeof(T) when operating on a pointer type.
libstdc++ relies on this quirk. Remove this handling for all but the
__c11_atomic_fetch_add and __c11_atomic_fetch_sub builtins.
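A small sketch of the difference described above (hypothetical example):

  void advance(int **pp) {
    /* GCC-compatible builtin: for pointer operands the addend counts bytes, not elements. */
    __atomic_fetch_add(pp, sizeof(int), __ATOMIC_SEQ_CST);   /* *pp advances by one int */
    /* __c11_atomic_fetch_add, by contrast, still scales the operand by sizeof(T) for an _Atomic pointer. */
  }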
Contrary to their documentation, __atomic_test_and_set and __atomic_clear
take a first argument of type 'volatile void *', not 'void *' or 'bool *',
and __atomic_is_lock_free and __atomic_always_lock_free have an argument
of type 'const volatile void *', not 'void *'.
With this change, libstdc++4.7's <atomic> passes libc++'s atomic test suite,
except for a couple of libstdc++ bugs and some cases where libc++'s test
suite tests for properties which implementations have latitude to vary.
llvm-svn: 154640
in general (such an atomic has boolean representation) and
specifically for IR generation of __c11_atomic_init. The latter also
means actually using initialization semantics for this initialization,
rather than just creating a store.
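A small sketch of that initialization path (hypothetical example):

  void init(void) {
    _Atomic(_Bool) flag;
    __c11_atomic_init(&flag, 1);   /* emitted with initialization semantics, not as a plain atomic store */
  }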
On a related note, make sure we actually put in non-atomic-to-atomic
conversions when performing an implicit conversion sequence. IR
generation is far too kind here, but we still want the ASTs to make
sense.
llvm-svn: 154612
This is not quite sufficient for libstdc++'s <atomic>: we still need
__atomic_test_and_set and __atomic_clear, and may need a more complete
__atomic_is_lock_free implementation.
We are also missing an implementation of __atomic_always_lock_free,
__atomic_nand_fetch, and __atomic_fetch_nand, but those aren't needed
for libstdc++.
llvm-svn: 154579
LLVM intrinsics for.
I have an implementation of these functions, which wants to go in a libgcc_s
equivalent in compiler-rt. It's currently here:
http://people.freebsd.org/~theraven/atomic.c
It will be committed to compiler-rt as soon as I work out a sensible place
to put it...
llvm-svn: 153666
flag as GCC uses: -fstrict-enums). There is a *lot* of code making
unwarranted assumptions about the underlying type of enums, and it
doesn't seem entirely reasonable to eagerly break all of it.
Much more importantly, the current state of affairs is *very* good at
optimizing based upon this information, which causes failures that are
very distant from the actual enum. Before we push for enabling this by
default, I think we need to implement -fcatch-undefined-behavior support
for instrumenting and trapping whenever we store or load a value outside
of the range. That way we can track down the misbehaving code very
quickly.
I discussed this with Rafael, and currently the only important cases he
is aware of are the bool range-based optimizations which are staying
hard enabled. We've not seen any issue with those either, and they are
much more important for performance.
llvm-svn: 153550
For i686 targets (e.g. Cygwin), I saw "Range must not be empty!" from the verifier.
It produces (i32)[0x80000000:0x80000000) from (uint64_t)[0xFFFFFFFF80000000ULL:0x0000000080000000ULL), for signed i32 on MDNode::Range.
llvm-svn: 153382
track whether the referenced declaration comes from an enclosing
local context. I'm amenable to suggestions about the exact meaning
of this bit.
llvm-svn: 152491
we correctly emit loads of BlockDeclRefExprs even when they
don't qualify as ODR-uses. I think I'm adequately convinced
that BlockDeclRefExpr can die.
llvm-svn: 152479
analysis to make the AST representation testable. They are represented by a
new UserDefinedLiteral AST node, which is a sugared CallExpr. All semantic
properties, including full CodeGen support, are achieved for free by this
representation.
UserDefinedLiterals can never be dependent, so no custom instantiation
behavior is required. They are mangled as if they were direct calls to the
underlying literal operator. This matches g++'s apparent behavior (but not its
actual mangling, which is broken for literal-operator-ids).
User-defined *string* literals are now fully operational, but the semantic
analysis is quite hacky and needs more work. No other forms of user-defined
literal are created yet, but the AST support for them is present.
This patch was committed after midnight because we had already hit the quota for
new kinds of literal yesterday.
llvm-svn: 152211
block pointer that returns a block literal which captures (by copy)
the lambda closure itself. Some aspects of the block literal are left
unspecified, namely the capture variable (which doesn't actually
exist) and the body (which will be filled in by IRgen because it can't
be written as an AST).
Because we're switching to this model, this patch also eliminates
tracking the copy-initialization expression for the block capture of
the conversion function, since that information is now embedded in the
synthesized block literal. -1 side tables FTW.
llvm-svn: 151131
optional argument passed through the variadic ellipsis)
potentially affects how we need to lower it. Propagate
this information down to the various getFunctionInfo(...)
overloads on CodeGenTypes. Furthermore, rename those
overloads to clarify their distinct purposes, and make
sure we're calling the right one in the right place.
This has a nice side-effect of making it easier to construct
a function type, since the 'variadic' bit is no longer
separable.
This shouldn't really change anything for our existing
platforms, with one minor exception --- we should now call
variadic ObjC methods with the ... in the "right place"
(see the test case), which I guess matters for anyone
running GNUStep on MIPS. Mostly it's just a substantial
clean-up.
llvm-svn: 150788