Summary:
When -fsanitize-address-field-padding=1 is present,
don't emit memcpy for the copy constructor.
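To illustrate (a minimal sketch; the class and padding placement are hypothetical):
struct S {
  char c;
  // <- poisoned padding inserted here under -fsanitize-address-field-padding=1
  int i;
  S(const S &other); // must now copy field-by-field rather than memcpy the object
};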
Thanks Nico for the extra test case.
Test Plan: regression tests
Reviewers: thakis, rsmith
Reviewed By: rsmith
Subscribers: rsmith, cfe-commits
Differential Revision: http://reviews.llvm.org/D6515
llvm-svn: 223563
is for each machine. Fix up darwin tests that were testing for
aapcs on armv7-ios when the actual ABI is apcs.
Should be no user visible change without -cc1.
llvm-svn: 223429
The ARM ABI specifies that all libcalls use the soft FP ABI
(even in hard FP binaries). These days clang emits _mulsc3 / _muldc3
calls with the default (C) calling convention, which would be translated
into the AAPCS_VFP LLVM calling convention, and thus the result of complex
multiplication would be bogus.
Introduce a way for a target to explicitly specify the calling
convention for libcalls. Right now this is a temporary correctness
fix. Ultimately, we'll end up with an intrinsic for complex
multiplication, and all calling convention decisions for libcalls
will be moved into the backend.
llvm-svn: 223123
Now that LLVM can count the registers needed to implement AAPCS rules, we don't
need to duplicate that logic here. This means we can drop the explicit padding
and also use more natural types in many cases (e.g. "struct { float arr[3]; }"
used to end up as "[2 x double]" to avoid holes on the stack).
The one wrinkle is that AAPCS va_arg was also using the register counting
machinery. But the local replacement isn't too bad.
llvm-svn: 222904
Cygwin and MinGW fail to conform to the underlying system's structure passing
ABI. Make the check more precise to ensure that we correctly generate code for
the Itanium environment.
llvm-svn: 222626
"global-init", "global-init-src" and "global-init-type" were originally
used to blacklist entities in ASan init-order checker. However, they
were never documented, and later were replaced by "=init" category.
Old blacklist entries should be converted as follows:
* global-init:foo -> global:foo=init
* global-init-src:bar -> src:bar=init
* global-init-type:baz -> type:baz=init
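For example, a converted blacklist file might contain entries like these (names are hypothetical):
global:bad_global=init
src:third_party/legacy_lib.cc=init
type:LegacySingleton=init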
llvm-svn: 222401
This reverts commit r222144. Commit r222142 is being reverted due to
a spec2006/gcc execution-time regression.
Update mips-varargs test as well.
llvm-svn: 222397
Summary:
With this patch, passing a va_list to another function and reading 10 ints from
it works correctly on a big-endian target.
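For instance, a minimal sketch of the pattern this fixes:
#include <stdarg.h>
int sum10(va_list ap) {
  int s = 0;
  for (int i = 0; i < 10; ++i)
    s += va_arg(ap, int);
  return s;
}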
Based on a pair of patches by David Chisnall, one of which I've reworked
for the current trunk.
Reviewers: theraven, atanasyan
Reviewed By: theraven, atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6248
llvm-svn: 222339
Summary:
This distinguishes between -fpic and -fPIC now, with the additions in LLVM for
PIC level support.
Test Plan: No regressions
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: rnk, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D5400
llvm-svn: 222227
used inside blocks. It fixes a crash in the naming code
for __func__ etc. when used in a block declared globally.
It also brings back the old naming convention for
predefined expressions, which was broken. rdar://18961148
llvm-svn: 222065
Summary:
Ok, here is a small addition to D6217 aiming to preserve the old darwin behavior w.r.t. typedef'ed types. The actual change to SemaChecking turned out to be pretty gross, in particular:
1. We need to extract the typedef'ed type for proper diagnostics
2. We need to walk over paren expressions as well
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6256
llvm-svn: 222044
VSX makes the "vector long long" and "vector double" types available.
This patch enables the vec_perm interface for these types. The same
builtin is generated regardless of the specified type, so no
additional work or testing is needed in the back end. Tests are added
to ensure this builtin is generated by the front end.
llvm-svn: 221988
This patch adds builtin support for xvdivdp and xvdivsp, along with a
new test case. The builtins are accessed using vec_div in altivec.h.
Builtins are listed (mostly) alphabetically there, so inserting these
changed the line numbers for deprecation warnings tested in
test/Headers/altivec-intrin.c.
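A minimal usage sketch (assuming a VSX-enabled target):
#include <altivec.h>
vector double div_d(vector double a, vector double b) {
  return vec_div(a, b); // xvdivdp
}
vector float div_s(vector float a, vector float b) {
  return vec_div(a, b); // xvdivsp
}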
There is a companion patch for LLVM.
llvm-svn: 221984
Summary:
Consider the following nifty one-liner: (0 ? csqrtl(2.0f) : sqrtl(2.0f)). One can easily obtain such code from e.g. tgmath. Right now it produces an assertion because we fail to do the promotion real => _Complex real.
The case was properly handled previously (by the old handleOtherComplexFloatConversion routine), but was forgotten in the current version. This seems to be fallout from r219557.
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6217
llvm-svn: 221821
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.
New code in altivec.h defines these in terms of new builtins, which
are themselves defined in BuiltinsPPC.def. The builtins are converted
to LLVM intrinsics in CGBuiltin.cpp. Additional code is added to
builtins-ppc-vsx.c to verify the correct generation of the intrinsics.
Note that I moved the other VSX builtins so all VSX builtins will be
alphabetical in their own section in BuiltinsPPC.def.
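A minimal usage sketch (assuming a VSX-enabled target):
#include <altivec.h>
void roundtrip(double *p) {
  vector double v = vec_vsx_ld(0, p); // lxvd2x
  vec_vsx_st(v, 0, p);                // stxvd2x
}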
There is a companion patch for LLVM.
llvm-svn: 221768
Summary: If we've added poisoned paddings to a type do not emit memcpy for operator=.
Test Plan: regression tests.
Reviewers: majnemer, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6160
llvm-svn: 221739
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This allows
splitting one call into the UBSan runtime into several calls in case
different sanitizer kinds have different recoverability
settings.
Tests should be fixed accordingly, I'm working on it.
Test Plan: regression test suite.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6219
llvm-svn: 221716
Homogeneous aggregates on AAPCS_VFP ARM need to be passed *without* being
flattened (e.g. [2 x float] rather than "float, float") for various weird ABI
reasons. However, this isn't the case for anything else; further, we know at
the ABIArgInfo::getDirect callsites whether this flattening is allowed.
So, we can get more unified ARM code, with a simpler Clang, by just using that
knowledge directly.
llvm-svn: 221559
mingw64's headers implement fabs by calling __builtin_fabs, so using the
library call results in an infinite loop. If the backend legalizes
@llvm.fabs as a call to fabs later, things should work out, as the crt
provides a definition.
llvm-svn: 221206
It turns out that MinGW never dllimports or exports inline functions.
This means that code compiled with Clang would fail to link with
MinGW-compiled libraries since we might try to import functions that
are not imported.
To fix this, make Clang never dllimport inline functions when targeting
MinGW.
llvm-svn: 221154
The most complex aspect of the convention is the handling of homogeneous
vector and floating point aggregates. Reuse the homogeneous aggregate
classification code that we use on PPC64 and ARM for this.
This convention also has a C mangling, and we apparently implement that
in both Clang and LLVM.
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D6063
llvm-svn: 221006
Now that we have initial support for VSX, we can begin adding
intrinsics for programmer access to VSX instructions. This patch
performs the necessary enablement in the front end, and tests it by
implementing intrinsics for minimum and maximum using the vector
double data type.
The main change in the front end is to no longer disallow "vector" and
"double" in the same declaration (lib/Sema/DeclSpec.cpp), but "vector"
and "long double" must still be disallowed. The new intrinsics are
accessed via vec_max and vec_min with changes in
lib/Headers/altivec.h. Note that for v4f32, we already access
corresponding VMX builtins, but with VSX enabled we should use the
forms that allow all 64 vector registers.
The new built-ins are defined in include/clang/Basic/BuiltinsPPC.def.
I've added a new test in test/CodeGen/builtins-ppc-vsx.c that is
similar to, but much smaller than, builtins-ppc-altivec.c. This
allows us to test VSX IR generation without duplicating CHECK lines
for the existing bazillion Altivec tests.
Since vector double is now legal when VSX is available, I've modified
the error message, and changed where we test for it and for vector
long double, since the target machine isn't visible in the old place.
This serendipitously removed a not-pertinent warning about 'long'
being deprecated when used with 'vector', when "vector long double" is
encountered and we just want to issue an error. The existing tests
test/Parser/altivec.c and test/Parser/cxx-altivec.cpp have been
updated accordingly, and I've added test/Parser/vsx.c to verify that
"vector double" is now legitimate with VSX enabled.
There is a companion patch for LLVM.
llvm-svn: 220989
Summary:
When we are adding field paddings for asan, even an empty dtor has to remain in the code,
so we ignore -mconstructor-aliases if the paddings are going to be added.
Test Plan: added a test
Reviewers: rsmith, rnk, rafael
Reviewed By: rafael
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6038
llvm-svn: 220986
Reuse the PPC64 HVA detection algorithm for ARM and AArch64. This is a
nice code deduplication, since they are roughly identical. A few virtual
method extension points are needed to understand how big an HVA can be
and what element types it can have for a given architecture.
Also make the record expansion code work in the presence of non-virtual
bases.
Reviewed By: uweigand, asl
Differential Revision: http://reviews.llvm.org/D6045
llvm-svn: 220972
The Windows NT SDK uses __readfsdword and declares it as a compiler provided
builtin (#pragma intrinsic(__readfsdword)). Because intrin.h is not referenced
by winnt.h, it is not possible to provide an out-of-line definition for the
intrinsic. Provide a proper compiler builtin definition.
llvm-svn: 220859
Following the NVVM IR specifications, arguments of aggregate type should be
passed on the stack without splitting (byval).
http://reviews.llvm.org/D6020
Patch by Jacques Pienaar.
llvm-svn: 220854
As discussed in bug 21398, PowerPC ABI code needs to consider C++ base
classes when classifying a class as homogeneous aggregate (or not) for
ABI purposes.
llvm-svn: 220852
An updated implementation of VLA type capturing based on the previously committed solution for lambdas.
This version captures the whole VLA type instead of the particular variables that are part of the VLA size expression, and makes it possible to use the previously calculated size of a VLA type in captured regions. Required for OpenMP.
Differential Revision: http://reviews.llvm.org/D5099
llvm-svn: 220850
Summary:
We should avoid a tail padding not only if the last field
has zero size but also if the last field is a struct with a flexible array.
If/when http://reviews.llvm.org/D5478 is committed,
this will also handle the case of structs with zero-sized arrays.
Reviewers: majnemer, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5924
llvm-svn: 220708
Wire it through everywhere we have support for fastcall, essentially.
This allows us to parse the MSVC "14" CTP headers, but we will
miscompile them because LLVM doesn't support __vectorcall yet.
Reviewed By: Aaron Ballman
Differential Revision: http://reviews.llvm.org/D5808
llvm-svn: 220573
Summary:
This allows us to easily identify them in the backend which in turn allows us
to handle them correctly for big-endian targets (where they must be shifted
into the upper bits of the register).
Depends on D5961
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits, theraven
Differential Revision: http://reviews.llvm.org/D5962
llvm-svn: 220566
Summary:
Ensure all integral/enumeration types are appropriately annotated with
signext/zeroext. In particular, i32 now has these attributes when using the
N32/N64 ABI. This paves the way for accurately representing the way the
N32/N64 ABIs promote integer arguments to i64.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits, theraven
Differential Revision: http://reviews.llvm.org/D5961
llvm-svn: 220563
When SanitizerBlacklist decides if the SourceLocation is blacklisted,
we need to first turn it into a SpellingLoc before fetching the filename
and scanning "src:" entries. Otherwise we will fail to fecth the
correct filename for function definitions coming from macro expansion.
llvm-svn: 220403
This reverts commit r220169 which reverted r220153. However, it also
contains additional changes:
- We may need to add padding *after* we've packed the struct. This
occurs when the aligned next field offset is greater than the new
field's offset. When this occurs, we make the struct packed.
*However*, once packed the next field offset might be less than the
new field's offset. It is in this case that we might further pad the
struct.
- We would pad structs which were perfectly sized! This behavior is
immensely old; it came from blindly subtracting
NextFieldOffsetInChars from RecordSize. This doesn't take into
account the fact that the struct might have a greater overall
alignment than the last field.
llvm-svn: 220175
This commit caused two tests in LNT to regress. I'm able to reproduce on
any platform and will send reproduction steps to the original commit
log. This should restore the LNT bots that have been failing.
llvm-svn: 220169
a NaN-test prior to the call to the library function.
This should automatically make fastmath (including just non-NaNs) able to avoid
the expensive libcalls and also open the door to more advanced folding in LLVM
based on the rules for complex math.
Two important notes to remember: first is that this isn't yet a proper
limited range mode, it's still just improving the unlimited range mode.
Also, it isn't really perfect w.r.t. what an unlimited range mode
should be doing because it isn't quite handling the flags produced by
all the operations in the way desirable for that mode, but then neither
is compiler-rt's libcall. When the compiler-rt libcall is improved to
carefully manage flags, the code emitted here should be improved
correspondingly. And it is still a long-term desirable thing to add
a limited range mode to Clang that would be able to use direct math
without library calls here.
Special thanks to Steve Canon for the careful review on this patch and
teaching me about these issues. =D
Differential Revision: http://reviews.llvm.org/D5756
llvm-svn: 220167
Before, ConstStructBuilder::AppendBytes would check packed constraints
prior to padding being added before the field's offset. However, adding
this padding might force our struct to be packed. Because we wouldn't
check *after* adding padding, ConstStructBuilder would be in an
inconsistent state leading to a crash.
This fixes PR21300.
llvm-svn: 220153
This commit changes the way we blacklist global variables in ASan.
Now the global is excluded from instrumentation (either regular
bounds checking, or initialization-order checking) if:
1) Global is explicitly blacklisted by its mangled name.
This part is left unchanged.
2) SourceLocation of a global is in blacklisted source file.
This changes the old behavior, where instead of looking at the
SourceLocation of a variable we simply considered the llvm::Module
identifier. This was wrong, as the identifier may not correspond to
the file name, and we incorrectly disabled instrumentation
for globals coming from #include'd files.
3) Global is blacklisted by type.
Now we build the type of a global variable using Clang machinery
(QualType::getAsString()), instead of llvm::StructType::getName().
After this commit, the active users of ASan blacklist files
may have to revisit them (this is a backwards-incompatible change).
llvm-svn: 220097
This commit changes the way we blacklist functions in ASan, TSan,
MSan and UBSan. We used to treat function as "blacklisted"
and turned off instrumentation in it in two cases:
1) Function is explicitly blacklisted by its mangled name.
This part is not changed.
2) Function is located in llvm::Module, whose identifier is
contained in the list of blacklisted sources. This is completely
wrong, as llvm::Module may not correspond to the actual source
file the function is defined in. Also, a function can be defined in
a header, in which case the user had to blacklist the .cpp file
this header was #include'd into, not the header itself.
Such functions could cause other problems - for instance, if the
header was included in multiple source files, compiled
separately and linked into a single executable, we could end up
with both instrumented and non-instrumented version of the same
function participating in the same link.
After this change we will make blacklisting decision based on
the SourceLocation of a function definition. If a function is
not explicitly defined in the source file, (for example, the
function is compiler-generated and responsible for
initialization/destruction of a global variable), then it will
be blacklisted if the corresponding global variable is defined
in blacklisted source file, and will be instrumented otherwise.
After this commit, the active users of blacklist files may have
to revisit them. This is a backwards-incompatible change, but
I don't think it's possible or makes sense to support the
old incorrect behavior.
I plan to make similar change for blacklisting GlobalVariables
(which is ASan-specific).
llvm-svn: 219997
Summary:
The general approach is to add extra paddings after every field
in AST/RecordLayoutBuilder.cpp, then add code to CTORs/DTORs that poisons the paddings
(CodeGen/CGClass.cpp).
Everything is done under the flag -fsanitize-address-field-padding.
The blacklist file (-fsanitize-blacklist) makes it possible to avoid the transformation
for given classes or source files.
See also https://code.google.com/p/address-sanitizer/wiki/IntraObjectOverflow
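A rough sketch of the effect (padding placement is illustrative):
// clang++ -fsanitize=address -fsanitize-address-field-padding=1 test.cc
class C {
  int a;
  // <- extra padding, poisoned in C's ctor and unpoisoned in its dtor
  int b;
};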
Test Plan: run SPEC2006 and some of the Chromium tests with -fsanitize-address-field-padding
Reviewers: samsonov, rnk, rsmith
Reviewed By: rsmith
Subscribers: majnemer, cfe-commits
Differential Revision: http://reviews.llvm.org/D5687
llvm-svn: 219961
They cannot be written to, so marking them const makes sense and may improve
optimisation.
As a side-effect, SectionInfos has to be moved from Sema to ASTContext.
It also fixes this problem, that occurs when compiling ATL:
warning LNK4254: section 'ATL' (C0000040) merged into '.rdata' (40000040) with different attributes
The ATL headers are putting variables in a special section that's marked
read-only. However, Clang currently can't model that read-onlyness in the IR.
But, by making the variables const, the section does become read-only, and
the linker warning is avoided.
Differential Revision: http://reviews.llvm.org/D5812
llvm-svn: 219960
CodeGen wouldn't mark the aliasee as thread_local if the aliasee was a
tentative definition.
Even if the definition was already emitted, it would never mark the
alias as thread_local.
This fixes PR21288.
llvm-svn: 219859
Thumb1 has legitimate reasons for preferring 32-bit alignment of types
i1/i8/i16, since the 16-bit encoding of "add rD, sp, #imm" requires #imm to be
a multiple of 4. However, this is a trade-off between code size and RAM usage;
the DataLayout string is not the best place to represent it even if desired.
So this patch removes the extra Thumb requirements, hopefully making ARM and
Thumb completely compatible in this respect.
llvm-svn: 219735
Before, ARM and Thumb mode code had different preferred alignments, which could
lead to some rather unexpected results. There's justification for reducing it
from the default 64-bits (wasted space), but I don't think there is for going
below 32-bits.
There's no actual ABI change here, just to reassure people.
llvm-svn: 219720
This addresses a regression introduced with SVN r219393. A block may be
contained within another block. In such a scenario, we would end up within a
BlockDecl, which is not a NamedDecl (as the names are synthesised). The cast to
a NamedDecl of the DeclContext would then assert as the types are unrelated.
Restore the mangling behaviour to that prior to SVN r219393. If the current
block is contained within a BlockDecl, walk up to the parent DeclContext,
recursively, until we have a non-BlockDecl. This is expected to be a NamedDecl.
Add a couple of asserts to ensure that the assumption that we only encounter
a block within a NamedDecl or a BlockDecl holds.
llvm-svn: 219696
Previously, loop hints such as #pragma loop vectorize_width(#) required a constant. This patch allows a constant expression to be used as well, such as a non-type template parameter or an expression (2 * c + 1).
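For example (a hypothetical sketch using a non-type template parameter):
template <int N>
void scale(int *a, int n) {
#pragma clang loop vectorize_width(N * 2)
  for (int i = 0; i < n; ++i)
    a[i] *= 2;
}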
Reviewed by Richard Smith
llvm-svn: 219589
and !=) to support mixed complex and real operand types.
This requires removing an assert from SemaChecking, and adding support
both to the constant evaluator and the code generator to synthesize the
imaginary part when needed. This seemed somewhat cleaner than having
just the comparison operators force real-to-complex conversions.
I've added test cases for these operations. I'm really terrified that
there were *no* tests in-tree which exercised this.
This turned up when trying to build R after my change to the complex
type lowering.
llvm-svn: 219570
for complex math.
This should fix the windows build bots that started having trouble here
and generally fix complex libcall emission on targets which use sret for
complex data types. It also makes the code a bit simpler (despite
calling into a much more complex bucket of code).
llvm-svn: 219565
operators where one type is a C complex type, and to emit both the
efficient and correct implementation for complex arithmetic according to
C11 Annex G using this extra information.
For both multiply and divide the old code was writing a long-hand
reduced version of the math without any of the special handling of inf
and NaN recommended by the standard here. Instead of putting more
complexity here, this change does what GCC does which is to emit
a libcall for the fully general case.
However, the old code also failed to do the proper minimization of the
set of operations when there was a mixed complex and real operation. In
those cases, C provides a spec for much more minimal operations that are
valid. Clang now emits the exact suggested operations. This change isn't
*just* about performance though, without minimizing these operations, we
again lose the correct handling of infinities and NaNs. It is critical
that this happen in the frontend based on asymmetric type operands to
complex math operations.
The performance implications of this change aren't trivial either. I've
run a set of benchmarks in Eigen, an open source mathematics library
that makes heavy use of complex. While a few have slowed down due to the
libcall being introduced, most sped up and some by a huge amount: up to
100% and 140%.
In order to make all of this work, also match the algorithm in the
constant evaluator to the one in the runtime library. Currently it is
a broken port of the simplifications from C's Annex G to the long-hand
formulation of the algorithm.
Splitting this patch up is very hard because none of this works without
the AST change to preserve non-complex operands. Sorry for the enormous
change.
Follow-up changes will include support for sinking the libcalls onto
cold paths in common cases and fastmath improvements to allow more
aggressive backend folding.
Differential Revision: http://reviews.llvm.org/D5698
llvm-svn: 219557
Make it possible to pass NULL through variadic functions on 64-bit
Windows targets. The Visual C++ headers define NULL to 0, when they
should define it to 0LL on Win64 so that NULL is a pointer-sized
integer.
Fixes PR20949.
Reviewers: thakis, rsmith
Differential Revision: http://reviews.llvm.org/D5480
llvm-svn: 219456
We already add the align parameter attribute for function parameters that have
the align_value attribute (or those with a typedef type having that attribute),
which is an important special case, but does not handle pointers with value
alignment assumptions that come into scope in any other way. To handle the
general case, emit an @llvm.assume-based alignment assumption whenever we load
the pointer-typed lvalue of an align_value-attributed variable (except for
function parameters, which we already deal with at entry).
I'll also note that this is more general than Intel's described support in:
https://software.intel.com/en-us/articles/data-alignment-to-assist-vectorization
which states that the compiler inserts __assume_aligned directives in response
to align_value-attributed variables only for function parameters and for the
initializers of local variables. I think that we can make the optimizer deal
with this more-general scheme (which could lead to a lot of calls to
@llvm.assume inside of loop bodies, for example), but if not, I'll rework this
to be less aggressive.
llvm-svn: 219052
This reverts commit r218917, effectively reapplying r218913. Original
commit message follows.
--
Update debug info testcases for an LLVM metadata schema change to fold
metadata constant operands into a single `MDString`.
Part of PR17891.
llvm-svn: 219011
This test includes stdint.h (via stdatomic.h), which might include system
headers (and that might not work, depending on the system configuration).
Attempting to fix llvm-clang-lld-x86_64-debian-fast.
llvm-svn: 218960
Adds a Clang-specific implementation of C11's stdatomic.h header. On systems,
such as FreeBSD, where a stdatomic.h header is already provided, we defer to
that header instead (using our __has_include_next technology). Otherwise, we
provide an implementation in terms of our __c11_atomic_* intrinsics (that were
created for this purpose).
C11 7.1.4p1 requires function declarations for atomic_thread_fence,
atomic_signal_fence, atomic_flag_test_and_set,
atomic_flag_test_and_set_explicit, and atomic_flag_clear, and requires that
they have external linkage. Accordingly, we provide these declarations, but if
a user elides the shadowing macros and uses them, then they must have a libc
(or similar) that actually provides definitions.
atomic_flag is implemented using _Bool as the underlying type. This is
consistent with the implementation provided by FreeBSD and also GCC 4.9 (at
least when __GCC_ATOMIC_TEST_AND_SET_TRUEVAL == 1).
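A minimal C11 usage sketch:
#include <stdatomic.h>
atomic_int counter = ATOMIC_VAR_INIT(0);
void bump(void) {
  atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
}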
Patch by Richard Smith (rebased and slightly edited by me -- Richard said I
should drive at this point).
llvm-svn: 218957
Update debug info testcases for an LLVM metadata schema change to fold
metadata constant operands into a single `MDString`.
Part of PR17891.
llvm-svn: 218913
This adds support for the align_value attribute. This attribute is supported by
Intel's compiler (versions 14.0+), and several of my HPC users have requested
support in Clang. It specifies an alignment assumption on the values to which a
pointer points, and is used by numerical libraries to encourage efficient
generation of vector code.
Of course, we already have an aligned attribute that can specify enhanced
alignment for a type, so why is this additional attribute important? The
problem is that if you want to specify that an input array of T is, say,
64-byte aligned, you could try this:
typedef double aligned_double __attribute__((aligned(64)));
void foo(aligned_double *P) {
  double x = P[0]; // This is fine.
  double y = P[1]; // What alignment did those doubles have again?
}
the access here to P[1] causes problems. P was specified as a pointer to type
aligned_double, and any object of type aligned_double must be 64-byte aligned.
But if P[0] is 64-byte aligned, then P[1] cannot be, and this access causes
undefined behavior. Getting round this problem requires a lot of awkward
casting and hand-unrolling of loops, all of which is bad.
With the align_value attribute, we can accomplish what we'd like in a well
defined way:
typedef double *aligned_double_ptr __attribute__((align_value(64)));
void foo(aligned_double_ptr P) {
  double x = P[0]; // This is fine.
  double y = P[1]; // This is fine too.
}
This attribute does not create a new type (and so is not part of the type
system), and so will only "propagate" through templates, auto, etc. by
optimizer deduction after inlining. This seems consistent with Intel's
implementation (thanks to Alexey for confirming the various Intel-compiler
behaviors).
As a final note, I would have chosen to call this aligned_value, not
align_value, for better naming consistency with the aligned attribute, but I
think it would be more useful to users to adopt Intel's name.
llvm-svn: 218910
Prior to GCC 4.4, __sync_fetch_and_nand was implemented as:
{ tmp = *ptr; *ptr = ~tmp & value; return tmp; }
but this was changed in GCC 4.4 to be:
{ tmp = *ptr; *ptr = ~(tmp & value); return tmp; }
In response to this change, support for __sync_fetch_and_nand (and
__sync_nand_and_fetch) was removed in r99522 in order to avoid miscompiling code
depending on the old semantics. However, at this point:
1. Many years have passed, and the amount of code relying on the old
semantics is likely smaller.
2. Through the work of many contributors, all LLVM backends have been updated
such that "atomicrmw nand" provides the newer GCC 4.4+ semantics (this process
was complete July of 2014 (added to the release notes in r212635).
3. The lack of this intrinsic is now a needless impediment to porting codes
from GCC to Clang (I've now seen several examples of this).
It is true, however, that we still set __GNUC_MINOR__ to 2 (corresponding to GCC
4.2). To compensate for this, and to address the original concern regarding
code relying on the old semantics, I've added a warning that specifically
details the fact that the semantics have changed and that we provide the newer
semantics.
Fixes PR8842.
llvm-svn: 218905
In addition to __builtin_assume_aligned, GCC also supports an assume_aligned
attribute which specifies the alignment (and optional offset) of a function's
return value. Here we implement support for the assume_aligned attribute by making
use of the @llvm.assume intrinsic.
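A minimal sketch (the allocator name is hypothetical):
void *my_alloc(unsigned size) __attribute__((assume_aligned(64)));
// The optimizer may now assume any pointer returned by my_alloc is
// 64-byte aligned; an optional second argument expresses an offset,
// e.g. __attribute__((assume_aligned(64, 16))).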
llvm-svn: 218500
AFAICT the semantics of frem match libm's fmod.
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 218488
Fixes PR21027. The MIDL compiler produces code that does this.
If we wanted to improve the warning, I think we could do this:
void __stdcall f(); // Don't warn without -Wstrict-prototypes.
void g() {
  f(); // Might warn, the user probably meant for f to take no args.
  f(1, 2, 3); // Warn, we have no idea what args f takes.
  f(1); // Error, this is insane, one of these calls is broken.
}
Reviewers: thakis
Differential Revision: http://reviews.llvm.org/D5481
llvm-svn: 218394
Summary:
Vectors are normally 16-byte aligned; however, the O32 ABI enforces a
maximum alignment of 8 bytes since the base of the stack is 8-byte aligned.
Previously, this was enforced on the caller side, but not on the callee side.
This fixes the output of OpenCL's printf when given vectors.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: llvm-commits, pekka.jaaskelainen
Differential Revision: http://reviews.llvm.org/D5433
llvm-svn: 218248
Summary:
This fixes PR20023. In order to implement this scoping rule, we piggy
back on the existing LabelDecl machinery, by creating LabelDecl's that
will carry the "internal" name of the inline assembly label, which we
will rewrite the asm label to.
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4589
llvm-svn: 218230
According to lore, we used to verifier-fail on:
void __thiscall f();
int main() { f(1); }
So that's fixed now. System headers use prototype-less __stdcall functions,
so make that a warning that's DefaultError -- then it fires on regular code
but is suppressed in system headers.
Since it's used in system headers, we have codegen tests for this; massage
them slightly so that they still compile.
llvm-svn: 218166
They are added to adxintrin.h but outside the __ADX__ block.
These intrinsics generate adc and sbb respectively, which were available before ADX.
llvm-svn: 218118
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull in runtime.
To implement this check, we pass FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable) and if function declaration has nonnull attribute specified
for a certain formal parameter, we compare the corresponding RValue to null as
soon as it's calculated.
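For illustration (the function is hypothetical; the corresponding sanitizer flag is -fsanitize=nonnull-attribute):
__attribute__((nonnull)) void use_ptr(void *p);
void call_it(void *maybe_null) {
  use_ptr(maybe_null); // checked at runtime; reports an error if maybe_null is null
}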
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM
now contains special logic to deal with assumptions of this form.
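A minimal sketch:
int div16(int x) {
  __builtin_assume(x % 16 == 0); // generates no code; feeds @llvm.assume
  return x / 16; // the optimizer can now turn this into a plain shift
}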
llvm-svn: 217349
made the 8-bit masks actually 8-bit arguments to these intrinsics.
These builtins are a mess. Many were missing the I qualifier which
I added where obviously correct. Most aren't tested, but I've updated
the relevant tests. I've tried to catch all the things that should
become 'c' in this round.
It's also frustrating because the set of these is really ad-hoc and
doesn't really map that cleanly to the set supported by either GCC or
LLVM. Oh well...
llvm-svn: 217311
This patch adds support for the 32bit numeric max/min and directed round-to-integral NEON intrinsics that were added as part of v8, along with unit tests.
Patch by Graham Hunter!
llvm-svn: 217242
For naked functions with parameters, Clang would still emit stores in the prologue
that would clobber the stack, because LLVM doesn't set up a stack frame. (This
shows up in -O0 compiles, because the stores are optimized away otherwise.)
For example:
__attribute__((naked)) int f(int x) {
  asm("movl $42, %eax");
  asm("retl");
}
Would result in:
_Z1fi:
movl 12(%esp), %eax
movl %eax, (%esp) <--- Oops.
movl $42, %eax
retl
Differential Revision: http://reviews.llvm.org/D5183
llvm-svn: 217198
If control falls off the end of a function after an __asm block, MSVC
assumes that the inline assembly filled the EAX and possibly EDX
registers with an appropriate return value. This functionality is used
in inline functions returning 64-bit integers in system headers, so we
need some amount of compatibility.
This is implemented in Clang by adding extra output constraints to every
inline asm block, and storing the resulting output registers into the
return value slot. If we see an asm block somewhere in the function
body, we emit a normal epilogue instead of marking the end of the
function with a return type unreachable.
Normal returns in functions not using this functionality will overwrite
the return value slot, and in most cases LLVM should be able to
eliminate the dead stores.
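A sketch of the pattern this supports (MS-compatibility mode):
unsigned __int64 get_val(void) {
  __asm {
    mov eax, 1
    mov edx, 2
  }
  // No return statement: EDX:EAX is assumed to hold the 64-bit result.
}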
Fixes PR17201.
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D5177
llvm-svn: 217187
Summary:
This allows us to easily find them in the backend after the aggregates have
been lowered to other types. This is important on big-endian targets using
the N32/N64 ABI's since these ABI's must shift small structures into the
upper bits of the register.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5005
llvm-svn: 217160
Summary:
They are returned indirectly which causes the other arguments to move to
the next argument slot.
With this, utils/ABITest does not discover any failing cases in the first
500 attempts on big/little endian for O32. Previously some of these failed.
Also tested N32/N64 little endian (big endian has other known issues) with
no issues.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: atanasyan, cfe-commits
Differential Revision: http://reviews.llvm.org/D4811
llvm-svn: 217147
Using the intrinsic allows the SelectionDAGBuilder to turn this call
into the FABS Node and also the intrinsic is something the vectorizer knows
how to vectorize.
This patch also sets the readnone attribute on this call, which should
enable additional optimizations.
llvm-svn: 217042
Previously, EnterStructPointerForCoercedAccess used the alloc size when
determining how to convert. This was problematic, because there were situations
where the alloc size was larger than the store size. For example, if the first
element of a structure were i24 and the destination type were i32, the old code
would generate a GEP and a load of i24. The code should compare store sizes to
ensure the whole object is loaded. I have attached a test case.
This patch modifies the output of arm64-be-bitfield.c test case, but the new IR seems to be equivalent, and after -O3, the compiler generates identical ARM assembly. (asr x0, x0, #54)
Patch by Thomas Jablin!
llvm-svn: 216722
Summary:
We did a great job getting this wrong:
- We messed up which LLVM IR types to use for arguments and return values.
The optimized libcalls use integer types for values.
Clang attempted to use the IR type which corresponds to the value
passed in instead of using an appropriately sized integer type. This
would result in violations of the ABI for, as an example, floating
point types.
- We didn't bother recording the result of the atomic libcall in the
destination memory.
Instead, call the functions with arguments matching the type of the
libcall prototype's parameters.
This fixes PR20780.
Differential Revision: http://reviews.llvm.org/D5098
llvm-svn: 216714
Summary:
The current implementation of the asan cookie is incorrect:
we add nosanitize metadata to the cookie load, but the metadata may be lost
and we will instrument the load from poisoned memory.
This change replaces the load with a call to __asan_load_cxx_array_cookie (r216692)
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5111
llvm-svn: 216702
For the following code:
__declspec(dllimport) int f(int x);
int user(int x) {
  return f(x);
}
int f(int x) { return 1; }
Clang will drop the dllimport attribute in the AST, but CodeGen would have
already put it on the LLVM::Function, and that would never get updated.
(The same thing happens for global variables.)
This makes Clang check the dropped DLL attribute case each time the LLVM object
is referenced.
This isn't perfect, because we will still get it wrong if the function is
never referenced by codegen after the attribute is dropped, but this handles
the common cases and makes us not fail in the verifier.
llvm-svn: 216699
Summary:
ACLE 2.0 section 9.2 defines the following "miscellaneous data processing intrinsics": `__clz`, `__cls`, `__ror`, `__rev`, `__rev16`, `__revsh` and `__rbit`.
`__clz` has already been implemented in the arm_acle.h header file. The rest are not supported yet. This patch completes ACLE data processing intrinsics.
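A minimal usage sketch:
#include <arm_acle.h>
unsigned int reverse_bits(unsigned int x) {
  return __rbit(x); // one of the newly supported intrinsics
}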
Reviewers: t.p.northover, rengolin
Reviewed By: rengolin
Subscribers: aemerson, mroth, llvm-commits
Differential Revision: http://reviews.llvm.org/D4983
llvm-svn: 216658
ACLE 2.0 allows __fp16 to be used as a function argument or return
type. This enables this for AArch64.
This also fixes an existing bug that causes clang to not allow
homogeneous floating-point aggregates with a base type of __fp16. This
is valid for AAPCS64, but not for AAPCS-VFP.
llvm-svn: 216558
This tidies up some ARM-specific code added by r208417 to move it out
of the target-independent parts of clang into TargetInfo.cpp. This
also has the advantage that we can now flatten struct arguments to
variadic AAPCS functions.
llvm-svn: 216535
This time though, preserve the extension for bool types since that's compatible
with what MSVC expects.
See http://reviews.llvm.org/D4380
llvm-svn: 216507
Summary:
MSVC doesn't extend integer types smaller than 64 bits, so to preserve
binary compatibility, clang shouldn't either.
For example, the following C code built with MSVC:
unsigned test(unsigned v);
unsigned foobar(unsigned short);
int main() { return test(0xffffffff) + foobar(28); }
Produces the following:
0000000000000004: B9 FF FF FF FF mov ecx,0FFFFFFFFh
0000000000000009: E8 00 00 00 00 call test
000000000000000E: 89 44 24 20 mov dword ptr [rsp+20h],eax
0000000000000012: 66 B9 1C 00 mov cx,1Ch
0000000000000016: E8 00 00 00 00 call foobar
And as you can see, when setting up the call to foobar, only cx is overwritten.
If foobar is compiled with clang, then the zero extension added by clang means
the rest of the register, which contains garbage, could be used.
For example if foobar is:
unsigned foobar(unsigned short v) {
  return v;
}
Compiled with clang -fomit-frame-pointer -O3 gives the following assembly:
foobar:
0000000000000000: 89 C8 mov eax,ecx
0000000000000002: C3 ret
And that function would return garbage because the 16 most significant bits of
ecx still contain garbage from the first call.
With this change, the code for that function is now:
foobar:
0000000000000000: 0F B7 C1 movzx eax,cx
0000000000000003: C3 ret
Reviewers: chapuni, rnk
Reviewed By: rnk
Subscribers: majnemer, cfe-commits
Differential Revision: http://reviews.llvm.org/D4380
llvm-svn: 216491
lowering of the intrinsics.
Prior to this commit, most of the copy-related intrinsics could be optimized
away. The situation is still not ideal as there are several possibilities to
lower a given intrinsic. Currently, we match LLVM behavior.
llvm-svn: 216474
feature is c11's requirement that nested struct declarations must have a
struct-declarator-list. Without this change, code
which was meant for c99 breaks. rdar://18125536
llvm-svn: 216469
Summary:
PR19838
When operator new[] is called and an array cookie is created
we want asan to detect buffer overflow bugs that touch the cookie.
For that we need to
a) poison the shadow for the array cookie (call __asan_poison_cxx_array_cookie).
b) ignore the legal accesses to the cookie generated by clang (add 'nosanitize' metadata)
Reviewers: timurrrr, samsonov, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4774
llvm-svn: 216434
PowerPC uses the special PPC_FP128 type for long double on Linux, which is
composed of two 64-bit doubles. The higher-order double (which contains the
overall sign) comes first, and so the __builtin_signbitl implementation
requires special handling to extract the sign bit.
Fixes PR20691.
llvm-svn: 216341
Moreover, rework some patterns to actually check the emitted instructions
instead of matching unrelated strings!
E.g.,
some of the "// CHECK: vmov" were matching stuff like ".globl
funcname_with_vmov" instead of actual instructions.
llvm-svn: 216275
It fits better with LLVM's memory model to try to do this in the
backend. Specifically, narrowing wide loads in the backends should be
relatively straightforward and is generally valuable, whereas widening
loads tends to be very constrained.
Discussion here:
http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20140811/112581.html
This reverts commit r215614.
llvm-svn: 215648
Currently when laying out bitfields that don't need any padding, we
represent them as a wide enough int to contain all of the bits. This
can be hard on the backend since we'll do things like represent stores
to a few bits as loading an i144, masking it with a large constant,
and storing it back.
This turns up in less pathological cases where we load and mask a 64-bit
word on a 32-bit platform when we actually only need to access 32 bits.
This leads to bad code being generated in most of our 32 bit backends.
In practice, there are often natural breaks in bitfields, and it's a
fairly simple and effective heuristic to split these fields into legal
integer sized chunks when it will be equivalent (ie, it won't force us
to add any extra padding).
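For illustration, a layout with a natural break (a sketch; the exact splitting is a heuristic):
struct S {
  unsigned lo : 16;
  unsigned hi : 16; // 32-bit boundary: the run can split into two i32 accesses
  unsigned x  : 20;
  unsigned y  : 12;
};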
llvm-svn: 215614
Similar approach to the set1 intrinsics is used: implement in terms of vector
initializers and then ensure with an LLVM test that a broadcast is generated
at the end.
Part of <rdar://problem/17688758>
llvm-svn: 215486
Summary:
This patch adds a runtime check verifying that functions
annotated with "returns_nonnull" attribute do in fact return nonnull pointers.
It is based on suggestion by Jakub Jelinek:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140623/223693.html.
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4849
llvm-svn: 215485
Due to the possible presence of return-by-out parameters, using the LLVM
argument number count when numbering debug info arguments can end up
off-by-one. This could produce two arguments with the same number, which
would in turn cause LLVM to emit only one of those arguments (whichever
it found last) or assert (r215157).
llvm-svn: 215227
Note that similar to palignr, we could further optimize these to emit
shufflevector when the shift count is <=64. This however does not
change the overall design: unlike palignr, we would still need the LLVM
intrinsic corresponding to this instruction to handle the >64 cases. (palignr
uses the psrldq intrinsic in this case.)
llvm-svn: 214891
My original LE implementation of the vsldoi instruction, with its
altivec.h interfaces vec_sld and vec_vsldoi, produces incorrect
shufflevector operations in the LLVM IR. Correct code is generated
because the back end handles the incorrect shufflevector in a
consistent manner.
This patch and a companion patch for LLVM correct this problem by
removing the fixup from altivec.h and the corresponding fixup from the
PowerPC back end. Several test cases are also modified to reflect the
now-correct LLVM IR.
The vec_sums and vec_vsumsws interfaces in altivec.h are also fixed,
because they used vec_perm calls intended to be recognized as vsldoi
instructions. These vec_perm calls are now replaced with code that
more clearly shows the intent of the transformation.
llvm-svn: 214801
Instead of creating global variables for source locations and global names,
just create metadata nodes and strings. They will be transformed into actual
globals in the instrumentation pass (if necessary). This approach is more
flexible:
1) we don't have to ensure that our custom globals survive all the optimizations
2) if globals are discarded for some reason, we will simply ignore metadata for them
and won't have to erase corresponding globals
3) metadata for source locations can be reused for other purposes: e.g. we may
attach source location metadata to alloca instructions and provide better descriptions
for stack variables in ASan error reports.
No functionality change.
llvm-svn: 214604
These tests seem like an exception to the rule against assembly emitting
tests in clang. I made an LLVM side change that can only be tested by
setting up the inline assembly machinery that is only implemented by
Clang.
llvm-svn: 214552
It appears that the backend does not handle all cases that were handled by clang.
In particular, it does not handle structs as used in
SingleSource/UnitTests/2003-05-07-VarArgs.
llvm-svn: 214512
Summary:
This patch causes clang to emit va_arg instructions to the backend instead of
expanding them into an implementation itself. The backend already implements
va_arg since this is necessary for NaCl so this patch is removing redundant
code.
Together with the llvm patch (D4556) that accounts for the effect of endianness
on the expansion of va_arg, this fixes PR19612.
Depends on D4556
Reviewers: sstankovic, dsanders
Reviewed By: dsanders
Subscribers: rnk, cfe-commits
Differential Revision: http://reviews.llvm.org/D4742
llvm-svn: 214497
Note that it's not clear whether this is the right behavior, please see
the review for the discussion.
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4577
llvm-svn: 214401
or a class derived from T. We already supported this when initializing
_Atomic(T) from T for most (and maybe all) other reasonable values of T.
llvm-svn: 214390
(Dropped the byte and word variants from the patch. Turns out these are not
part of AVX512F but only AVX512BW/VL.)
Part of <rdar://problem/17688758>
llvm-svn: 214314
This broke the following gdb tests:
gdb.base__annota1.exp
gdb.base__consecutive.exp
gdb.python__py-symtab.exp
gdb.reverse__consecutive-precsave.exp
gdb.reverse__consecutive-reverse.exp
I will look into this.
This reverts commit r214162.
llvm-svn: 214163
This allows us to give more precise diagnostics.
Diego kindly tested the impact on debug info size: "The increase on average
debug sizes is 0.1%. The total file size increase is ~0%."
llvm-svn: 214162
While Clang now supports both ELFv1 and ELFv2 ABIs, their use is currently
hard-coded via the target triple: powerpc64-linux is always ELFv1, while
powerpc64le-linux is always ELFv2.
These are of course the most common scenarios, but in principle it is
possible to support the ELFv2 ABI on big-endian or the ELFv1 ABI on
little-endian systems (and GCC does support that), and there are some
special use cases for that (e.g. certain Linux kernel versions could
only be built using ELFv1 on LE).
This patch implements the Clang side of supporting this, based on the
LLVM commit 214072. The command line options -mabi=elfv1 or -mabi=elfv2
select the desired ABI if present. (If not, Clang uses the same default
rules as now.)
Specifically, the patch implements the following changes based on the
presence of the -mabi= option:
In the driver:
- Pass the appropriate -target-abi flag to the back-end
- Select the correct dynamic loader version (/lib64/ld64.so.[12])
In the preprocessor:
- Define _CALL_ELF to the appropriate value (1 or 2)
In the compiler back-end:
- Select the correct ABI in TargetInfo.cpp
- Select the desired ABI for LLVM via feature (elfv1/elfv2)
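For example (invocations are illustrative):
clang -target powerpc64-linux-gnu -mabi=elfv2 -c test.c   # ELFv2 on big-endian
clang -target powerpc64le-linux-gnu -mabi=elfv1 -c test.c # ELFv1 on little-endian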
llvm-svn: 214074
Summary:
This patch extends the __asm parser to make it keep parsing input tokens
as inline assembly if a single-line __asm line is followed by another line
starting with __asm too. It also makes sure that we correctly keep
matching braces in such situations by separating the notions of how many
braces we are matching and whether we are in single-line asm block mode.
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4598
llvm-svn: 213916
arm64_be doesn't really exist; it was useful for testing while AArch64 and
ARM64 were separate, but now the only real way to refer to the system is
aarch64_be.
llvm-svn: 213747
In addition to enabling ELFv2 homogeneous aggregate handling,
LLVM support to pass array types directly also enables a performance
enhancement. We can now pass (non-homogeneous) aggregates that fit
fully in registers as direct integer arrays, using an element type
to encode the alignment requirement (that would otherwise go to the
"byval align" field).
This is preferable since "byval" forces the back-end to write the
aggregate out to the stack, even if it could be passed fully in
registers. This is particularly annoying on ELFv2, if there is
no parameter save area available, since we then need to allocate
space on the callee's stack just to hold those aggregates.
Note that to implement this optimization, this patch does not attempt
to fully anticipate register allocation rules as (defined in the
ABI and) implemented in the back-end. Instead, the patch is simply
passing *any* aggregate passed by value using the array mechanism
if its size is up to 64 bytes. This means that some of those will
end up being passed in stack slots anyway, but the generated code
shouldn't be any worse either. (*Large* aggregates remain passed
using "byval" to enable optimized copying via memcpy etc.)
llvm-svn: 213495
This patch implements clang support for the PowerPC ELFv2 ABI.
Together with a series of companion patches in LLVM, this makes
clang/LLVM fully usable on powerpc64le-linux.
Most of the ELFv2 ABI changes are fully implemented on the LLVM side.
On the clang side, we only need to implement some changes in how
aggregate types are passed by value. Specifically, we need to:
- pass (and return) "homogeneous" floating-point or vector aggregates in
FPRs and VRs (this is similar to the ARM homogeneous aggregate ABI)
- return aggregates of up to 16 bytes in one or two GPRs
The second piece is trivial to implement in any case. To implement
the first piece, this patch makes use of infrastructure recently
enabled in the LLVM PowerPC back-end to support passing array types
directly, where the array element type encodes properties needed to
handle homogeneous aggregates correctly.
Specifically, the array element type encodes:
- whether the parameter should be passed in FPRs, VRs, or just
GPRs/stack slots (for float / vector / integer element types,
respectively)
- what the alignment requirements of the parameter are when passed in
GPRs/stack slots (8 for float / 16 for vector / the element type
size for integer element types) -- this corresponds to the
"byval align" field
With this support in place, the clang part simply needs to *detect*
whether an aggregate type implements a float / vector homogeneous
aggregate as defined by the ELFv2 ABI, and if so, pass/return it
as array type using the appropriate float / vector element type.
llvm-svn: 213494
In C99, an array parameter declarator might have the form:
direct-declarator '[' 'static' type-qual-list[opt] assign-expr ']'
where the static keyword indicates that the caller will always provide a
pointer to the beginning of an array with at least the number of elements
specified by the assignment expression. For constant sizes, we can use the
new dereferenceable attribute to pass this information to the optimizer. For
VLAs, we don't know the size, but (for addrspace(0)) do know that the pointer
must be nonnull (and so we can use the nonnull attribute).
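For example:
// The caller guarantees at least 16 ints, so clang can attach
// dereferenceable(64) to the parameter (assuming 4-byte int).
void sum(const int a[static 16]);
// With a VLA size, only nonnull can be inferred:
void scale(int n, double v[static n]);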
llvm-svn: 213444
r211898 introduced a regression where a large struct, which would
normally be passed ByVal, was causing padding to be inserted to
prevent the backend from using some GPRs, in order to follow the
AAPCS. However, the type of the argument was not being set correctly,
so the backend cannot align 8-byte aligned struct types on the stack.
The fix is to not insert the padding arguments when the argument is
being passed ByVal.
llvm-svn: 213359
1. Revert "Add default feature for CPUs on AArch64 target in Clang"
at r210625. Then, all enabled features will be passed explicitly by
-target-feature in the -cc1 option.
2. Get "-mfpu" deprecated.
3. Implement support of "-march". Usage is:
-march=armv8-a+[no]feature
For instance, "-march=armv8-a+neon+crc+nocrypto". Here "armv8-a" is
necessary, and CPU names are not acceptable. Candidate features are
fp, neon, crc and crypto. Where conflicting feature modifiers are
specified, the right-most feature is used.
4. Implement support of "-mtune". Usage is:
-march=CPU_NAME
For instance, "-march=cortex-a57". This option will ONLY get
micro-architectural feature enabled specifying to target CPU,
like "+zcm" and "+zcz" for cyclone. Any architectural features
WON'T be modified.
5. Change usage of "-mcpu" to "-mcpu=CPU_NAME+[no]feature", which is
an alias to "-march={feature of CPU_NAME}+[no]feature" and
"-mtune=CPU_NAME" together. Where this option is used in conjunction
with -march or -mtune, those options take precedence over the
appropriate part of this option.
llvm-svn: 213353
This is used to mark the instructions emitted by Clang to implement a
variety of UBSan checks. Generally, we don't want to instrument these
instructions with other sanitizers (like ASan).
Reviewed in http://reviews.llvm.org/D4544
llvm-svn: 213291
Summary:
I'm planning on upstreaming some test cases for the inline assembly
usage in the Mozilla code base. A lot of these test cases test the
recent fixes to this code.
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4508
llvm-svn: 213255
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.
This patch ports ARM dmb, dsb, isb intrinsic to AArch64.
Requires LLVM r213247.
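A minimal sketch (the argument is the 4-bit barrier option; 0xF requests full-system scope):
void publish(int *p, int v) {
  *p = v;
  __builtin_arm_dmb(0xF); // data memory barrier
}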
Differential Revision: http://reviews.llvm.org/D4521
llvm-svn: 213250
Clang supports __assume, at least at the semantic level, when MS extensions are
enabled. Unfortunately, trying to actually compile code using __assume would
result in this error:
error: cannot compile this builtin function yet
__assume is an optimizer hint, and can be ignored at the IR level. Until LLVM
supports assumptions at the IR level, a noop lowering is valid, and that is
what is done here.
llvm-svn: 213206
An array showing up in an inline assembly input is accepted in ICC and
GCC 4.8
This fixes PR20201.
Differential Revision: http://reviews.llvm.org/D4382
llvm-svn: 212954
This patch implements __builtin_arm_nop intrinsic for AArch32 and AArch64,
which generates hint 0x0, the alias of NOP instruction.
This intrinsic is necessary to implement ACLE __nop intrinsic.
Differential Revision: http://reviews.llvm.org/D4495
llvm-svn: 212947
Currently the ASan instrumentation pass creates a string with the global name
for each instrumented global (to include global names in the error report). The
global name is already mangled at this point, and we may not be able to demangle it
at runtime (e.g. there is no __cxa_demangle on Android).
Instead, create a string with fully qualified global name in Clang, and pass it
to ASan instrumentation pass in llvm.asan.globals metadata. If there is no metadata
for some global, ASan will use the original algorithm.
This fixes https://code.google.com/p/address-sanitizer/issues/detail?id=264.
llvm-svn: 212872
MSVC accepts __noop without any trailing parens and treats it like a
literal zero. We don't treat __noop as an integer literal, but now at
least we can parse a naked __noop expression.
Reviewers: rsmith
Differential Revision: http://reviews.llvm.org/D4476
llvm-svn: 212860
We now have an LLVM-level nonnull attribute that can be applied to function
parameters, and we emit it for reference types (as of r209723), but did not
emit it when an __attribute__((nonnull)) was provided. Now we will.
llvm-svn: 212835
Teach UBSan vptr checker to ignore technically invalid down-casts on
blacklisted types.
Based on http://reviews.llvm.org/D4407 by Byoungyoung Lee!
llvm-svn: 212770
This patch adds support for respecting the ABI and type alignment
of aggregates passed by value. Currently, all aggregates are aligned
at 8 bytes in the parameter save area. This is incorrect for two
reasons:
- Aggregates that need alignment of 16 bytes or more should be aligned
at 16 bytes in the parameter save area. This is implemented by
using an appropriate "byval align" attribute in the IR.
- Aggregates that need alignment beyond 16 bytes need to be dynamically
realigned by the caller. This is implemented by setting the Realign
flag of the ABIArgInfo::getIndirect call.
In addition, when expanding a va_arg call accessing a type that is
aligned at 16 bytes in the argument save area (either one of the
aggregate types as above, or a vector type which is already aligned
at 16 bytes), code needs to align the va_list pointer accordingly.
Reviewed by Hal Finkel.
llvm-svn: 212743
This patch adds support for passing arguments of non-Altivec vector type
(i.e. defined via __attribute__((vector_size(...)))) on powerpc64-linux.
While such types are not mentioned in the formal ABI document, this
patch implements a calling convention compatible with GCC:
- Vectors of size < 16 bytes are passed in a GPR
- Vectors of size > 16 bytes are passed via reference
Note that vector types with a number of elements that is not a power
of 2 are not supported by GCC, so there is no pre-existing ABI to
follow. We choose to pass those (of size < 16) as if widened to the
next power of two, so they might end up in a vector register or
in a GPR. (Sizes > 16 are always passed via reference as well.)
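For illustration:
typedef float v2f __attribute__((vector_size(8)));  // 8 bytes  -> passed in a GPR
typedef float v8f __attribute__((vector_size(32))); // 32 bytes -> passed via reference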
Reviewed by Hal Finkel.
llvm-svn: 212734
Summary:
While debugging another issue, I noticed that Mips currently specifies that the
count leading zero builtins are undefined when the input is zero. The
architecture specifications say that the clz and dclz instructions write 32 or
64 respectively when given zero.
This doesn't fix any bugs that I'm aware of but it may improve optimisation in
some cases.
Differential Revision: http://reviews.llvm.org/D4431
llvm-svn: 212618
Having some kind of weird kernel-assisted ABI for these when the
native instructions are available appears to be (and should be) the
exception; OSs have been gradually opting in for years and the code
was getting silly.
So let LLVM decide whether it's possible/profitable to inline them by
default.
Patch by Phoebe Buckheister.
llvm-svn: 212598
Get rid of cached CodeGenModule::SanOpts, which was used to turn off
sanitizer codegen options if current LLVM Module is blacklisted, and use
plain LangOpts.Sanitize instead.
1) Some codegen decisions (turning TBAA or writable strings on/off)
shouldn't depend on the contents of blacklist.
2) llvm.asan.globals should *always* be created, even if the module
is blacklisted - soon Clang's CodeGen will be the only place where
we read sanitizer blacklist files, so we should properly report
which globals are blacklisted to the backend.
llvm-svn: 212499
This adds support for simple MSVC compatibility mode intrinsics. These
intrinsics are simple in that they are either directly passed through to the
annotated MSBuiltin intrinsic or they mirror existing GCC builtins.
llvm-svn: 212378