This is needed for the upcoming implementation of the
new 8x32x16 and 32x8x16 variants of WMMA instructions
introduced in CUDA 9.1.
Differential Revision: https://reviews.llvm.org/D44719
llvm-svn: 328158
The patch adds the nocf_check target-independent attribute for disabling checks that were enabled by the cf-protection flag.
The attribute can be applied to functions and function pointers.
The attribute name follows GCC's similar attribute.
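As a rough illustration (not taken from the patch's tests; names are made up), the attribute might be spelled like this in source, assuming the TU is built with -fcf-protection:
```c
// Hedged sketch: the attribute on a function declaration and on a
// function pointer type.
void no_track_target(void) __attribute__((nocf_check));

typedef void (*untracked_fn_t)(void) __attribute__((nocf_check));
```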
Differential Revision: https://reviews.llvm.org/D41880
llvm-svn: 327768
In C, we'll wait until the end of the scope to clean up aggregate
temporaries used for returns from calls. This means in cases like:
{
// Assuming that `Bar` is large enough to warrant indirect returns
struct Bar b = {};
b = foo(&b);
b = foo(&b);
b = foo(&b);
b = foo(&b);
}
...We'll allocate space for 5 Bars on the stack (`b`, and 4
temporaries). This becomes painful in things like large switch
statements.
If cleaning up sooner is trivial, we should do it.
llvm-svn: 327229
Simplify the dispatching for the personality routines. This really had
no test coverage previously, so add test coverage for the various cases.
This turns out to be pretty complicated as the various languages and
models interact to change personalities around.
You really should feel bad for the compiler if you are using exceptions.
There is no reason for this type of cruelty.
llvm-svn: 327105
EmitLifetimeStart returns a non-null `size` pointer if it actually
emits a lifetime.start. Later in this function, we use `tempSize`'s
nullness to determine whether or not we should emit a lifetime.end.
llvm-svn: 326844
Summary:
The _get_ssp intrinsic can be used to retrieve the
shadow stack pointer, independent of the current arch -- in
contrast with the rdsspd and rdsspq intrinsics.
Also, this intrinsic returns zero on CPUs which don't
support CET. The rdssp[d|q] instruction is decoded as nop,
essentially just returning the input operand, which is zero.
Example result of compilation:
```
xorl %eax, %eax
movl %eax, %ecx
rdsspq %rcx # NOP when CET is not supported
movq %rcx, %rax # return zero
```
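For illustration, a hedged usage sketch (assuming the shadow-stack feature is enabled, e.g. via -mshstk, so the header exposes the intrinsic; the function name is made up):
```c
#include <immintrin.h>

// Returns nonzero only when a CET shadow stack is active; per the text
// above, _get_ssp() yields zero on CPUs without CET support.
int shadow_stack_active(void) {
  return _get_ssp() != 0;
}
```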
Reviewers: craig.topper
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43814
llvm-svn: 326689
Summary:
Currently only calls to mcount were suppressed with
no_instrument_function attribute.
Linux kernel requires that calls to fentry should also not be
generated.
This is an extended fix for PR33515.
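A minimal sketch of the intended effect (hypothetical function name): with -pg/-mfentry style instrumentation enabled, the attribute should now suppress the fentry call as well as the mcount call.
```c
// Hedged example: neither mcount nor fentry calls should be emitted here.
__attribute__((no_instrument_function))
void early_trace_setup(void) {
  /* runs before the tracing infrastructure is ready */
}
```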
Reviewers: hfinkel, rengolin, srhines, rnk, rsmith, rjmccall, hans
Reviewed By: rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43995
llvm-svn: 326639
Make types with sizes that aren't a power of two an error (that can
be disabled) in structs with ms_struct layout, except on mingw where
the situation is quite likely to occur and GCC handles it silently.
Differential Revision: https://reviews.llvm.org/D43908
llvm-svn: 326476
When targeting GNU/MinGW for i386, the size of the "long double" data
type is 12 bytes (while it is 8 bytes in MSVC). When building
with -mms-bitfields to have struct layouts match MSVC, data types
are laid out in a struct with alignment according to their size.
However, this doesn't make sense for the long double type, since
it doesn't match MSVC at all, and aligning to a non-power-of-2
size triggers other asserts later.
This matches what GCC does, aligning a long double to 4 bytes
in structs on i386 even when -mms-bitfields is specified.
This fixes asserts when using the max_align_t data type when
building for MinGW/i386 with the -mms-bitfields flag.
Differential Revision: https://reviews.llvm.org/D43734
llvm-svn: 326173
In DWARF v5 the Line Number Program Header is extensible, allowing values with
new content types. This vendor extension to DWARF v5 allows source text to be
embedded directly in the line tables of the debug line section.
Add a new flag (-g[no-]embed-source) to the Driver and CC1 which indicates
that the source should be passed through to LLVM during CodeGen.
Differential Revision: https://reviews.llvm.org/D42766
llvm-svn: 326102
Summary:
If the flag -fforce-enable-int128 is passed, it will enable support for __int128_t and __uint128_t types.
This flag can then be used to build compiler-rt for RISCV32.
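A small hedged example of what the flag enables on a 32-bit target such as riscv32 (the function name is illustrative):
```c
// With -fforce-enable-int128, the 128-bit integer types become available
// even on targets that do not enable them by default.
__int128_t widening_mul(long long a, long long b) {
  return (__int128_t)a * b;
}
```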
Reviewers: asb, kito-cheng, apazos, efriedma
Reviewed By: asb, efriedma
Subscribers: shiva0217, efriedma, jfb, dschuff, sdardis, sbc100, jgravelle-google, aheejin, rbar, johnrusso, simoncook, jordy.potman.lists, sabuasal, niosHD, cfe-commits
Differential Revision: https://reviews.llvm.org/D43105
llvm-svn: 326045
The tests that failed on a windows host have been fixed.
Original message:
Start setting dso_local for COFF.
With this there are still some GVs where we don't set dso_local
because setGVProperties is never called. I intend to fix that in
followup commits. This is just the bare minimum to teach
shouldAssumeDSOLocal what it should do for COFF.
llvm-svn: 325940
With this there are still some GVs where we don't set dso_local
because setGVProperties is never called. I intend to fix that in
followup commits. This is just the bare minimum to teach
shouldAssumeDSOLocal what it should do for COFF.
llvm-svn: 325915
This test was previously in lldb, and was only checking that clang
was emitting the correct section. So, it belongs here and not
in the debugger.
llvm-svn: 325850
This patch fixes creating TBAA access descriptors for
may_alias-marked access types. Currently, for such types we
generate ordinary descriptors with char as its access type. The
patch changes this to produce proper may-alias descriptors.
Differential Revision: https://reviews.llvm.org/D42366
llvm-svn: 325575
Currently, clang compiles explicit initializers for array
elements into a series of store instructions. For large arrays of
built-in types this results in bloated output code and
significant amount of time spent on the instruction selection
phase. This patch fixes the issue by initializing such arrays
with global constants that store the binary image of the
initializer.
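A hedged sketch of the kind of code affected (names illustrative): after this change the explicit initializer can be emitted as a constant global that is copied into the local array rather than as one store per element.
```c
extern void use(const int *p);

void build_table(void) {
  // Large explicitly initialized local array; remaining elements are zeroed.
  int table[1024] = {1, 2, 3, 4, 5, 6, 7, 8};
  use(table);
}
```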
Differential Revision: https://reviews.llvm.org/D43181
llvm-svn: 325478
Summary:
Make clang accept `-msahf` (and `-mno-sahf`) flags to activate the
`+sahf` feature for the backend, for bug 36028 (Incorrect use of
pushf/popf enables/disables interrupts on amd64 kernels). This was
originally submitted in bug 36037 by Jonathan Looney
<jonlooney@gmail.com>.
As described there, GCC also uses `-msahf` for this feature, and the
backend already recognizes the `+sahf` feature. All that is needed is to
teach clang to pass this on to the backend.
The mapping of feature support onto CPUs may not be complete; rather, it
was chosen to match LLVM's idea of which CPUs support this feature (see
lib/Target/X86/X86.td).
I also updated the affected test case (CodeGen/attr-target-x86.c) to
match the emitted output.
Reviewers: craig.topper, coby, efriedma, rsmith
Reviewed By: craig.topper
Subscribers: emaste, cfe-commits
Differential Revision: https://reviews.llvm.org/D43394
llvm-svn: 325446
Summary:
The gold plugin does not add a pass to ThinLTO modules without useful symbols.
In this case ThinLTO can't create the corresponding index file, and some features, like CFI,
cannot be processed correctly by the backend without an index.
Given that we don't need the backend output, we can request that the module be skipped to avoid
processing it. This is implemented by this patch using the new
"SkipModuleByDistributedBackend" flag.
Reviewers: pcc, tejohnson
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D42995
llvm-svn: 325411
Summary:
ThinLTO compilation may decide not to split a module and keep it as regular LTO.
In this case the module has already been processed during indexing and is already part of
the merged object file, so here we can just skip it.
Reviewers: pcc, tejohnson
Reviewed By: tejohnson
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D42680
llvm-svn: 325410
This adds Sema and Codegen tests for the vcvtr builtins
(because they were missing).
Differential Revision: https://reviews.llvm.org/D43372
llvm-svn: 325351
Summary:
TypeID summaries are used by CFI and need to be serialized by ThinLTO
indexing for later use by LTO Backend.
Reviewers: tejohnson, pcc
Subscribers: mehdi_amini, inglorion, eraman, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D42611
llvm-svn: 325182
Added support in clang for the GCC function attribute 'artificial'. This attribute
is used to control the debugger's stepping behavior with respect to inline
functions.
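A hedged example of typical usage per GCC's documentation of the attribute (the helper name is made up): marking a tiny always-inlined wrapper as artificial so the debugger attributes its inlined instructions to the caller.
```c
__attribute__((artificial, always_inline)) static inline
int square(int x) {
  return x * x;  // stepping should not descend into this wrapper
}
```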
Patch By: Elizabeth Andrews (eandrews)
Differential Revision: https://reviews.llvm.org/D43259
llvm-svn: 325081
Summary:
This patch also adds the 'DW_AT_artificial' flag to the generated variable.
Addresses the issues mentioned in http://llvm.org/PR30553.
Reviewers: CarlosAlbertoEnciso, probinson, aprantl
Reviewed By: aprantl
Subscribers: JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D43189
llvm-svn: 324988
As reported here: https://bugs.llvm.org/show_bug.cgi?id=36301
The issue is that the 'use' causes the plain declaration to emit
the attributes to LLVM-IR. However, if the definition added them
later, they would silently disappear.
This commit extracts that logic to its own function in CodeGenModule,
and has the attribute-applications done during 'definition' update
the attributes properly.
Differential Revision: https://reviews.llvm.org/D43095
llvm-svn: 324907
Summary:
Right now clang is skipping array cookie poisoning for any operator
new[] which is not part of the set of replaceable global allocation
functions.
This commit adds a flag to tell clang to poison all operator new[]
cookies.
A previous review was poisoning all array cookies unconditionally, but
there is an edge case which would stop working under ASan (a custom
operator new[] saves whatever pointer it returned, and then accesses
it).
This newer revision adds a command line argument to toggle this feature.
Original revision: https://reviews.llvm.org/D41301
Compiler-rt test revision with an explanation of the edge case: https://reviews.llvm.org/D41664
Reviewers: rjmccall, kcc, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43013
llvm-svn: 324884
Summary:
This change avoids the overhead of storing, and later crawling,
an initializer list of all zeros for arrays. When LLVM
visits this in ConstantArray::getImpl() (llvm/IR/Constants.cpp),
it will scan the list looking for an array of all zeros.
We can avoid the store, and short-cut the scan, by detecting
all zeros when clang builds up the initialization representation.
This was brought to my attention when investigating PR36030.
Reviewers: majnemer, rjmccall
Reviewed By: rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42549
llvm-svn: 324776
Summary:
This patch is a fix for the following issue:
https://bugs.llvm.org/show_bug.cgi?id=31362
The problem was caused by the front end lowering C calling conventions
without taking into account calling conventions enforced by attribute.
In this case win64cc was not correctly lowered on targets
other than Windows.
Reviewed By: rnk (Reid Kleckner)
Differential Revision: https://reviews.llvm.org/D43016
Author: belickim <mateusz.belicki@intel.com>
llvm-svn: 324594
The difference from the previous try is that we no longer directly
access function declarations from position independent executables. It
should work, but currently doesn't with some linkers.
It now includes a fix to not mark available_externally definitions as
dso_local.
Original message:
Start setting dso_local in clang.
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324535
This reverts commit r324500.
The bots found two failures:
ThreadSanitizer-x86_64 :: Linux/pie_no_aslr.cc
ThreadSanitizer-x86_64 :: pie_test.cc
when using gold. The issue is a limitation in gold when building pie
binaries. I will investigate how to work around it.
llvm-svn: 324505
It now includes a fix to not mark available_externally definitions as
dso_local.
Original message:
Start setting dso_local in clang.
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324500
This patch:
* fixes an incorrect sign-extension of unsigned values, when emitting
debug info metadata for enumerators
* the enumerators metadata is created with a flag, which determines
interpretation of the value bits (signed or unsigned)
* the enumerations metadata contains the underlying integer type and a
flag, indicating whether this is a C++ "fixed enum"
Differential Revision: https://reviews.llvm.org/D42736
llvm-svn: 324490
This adds the frontend support required to support the use of the
comment pragma to enable auto linking on ELFish targets. This is a
generic ELF extension supported by LLVM. We need to change the handling
for the "dependentlib" in order to accommodate the previously discussed
encoding for the dependent library descriptor. Without the custom
handling of the PCK_Lib directive, the -l prefixed option would be
encoded into the resulting object (which is treated as a frontend
error).
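A hedged usage sketch (assuming the comment pragma is accepted for the target, e.g. under -fms-extensions): the pragma records a dependent-library entry in the object, and per the note above a "-l"-prefixed name is rejected in the frontend.
```c
// Request that the produced object record a dependency on libm.
#pragma comment(lib, "m")
```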
llvm-svn: 324438
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324107
When trying to track down a different bug, we discovered
that calling __builtin_va_arg on a vec3f type caused
the SROA pass to issue a warning that there was an illegal
access.
Further research showed that the vec3f type is
alloca'ed as size '12', but the __builtin_va_arg code
on x86_64 was always loading this out of registers as
{double, double}. Thus, the 2nd store into the vec3f
was storing in bytes 12-15!
This patch alters the original implementation which always
assumed {double, double} to use the actual coerced type
instead, so the LLVM-IR generated is a load/GEP/store of
a <2 x float> and a float, rather than a double and a double.
Tests were added for all combinations I could think of that
would fit in 2 FP registers, and all work exactly as expected.
Differential Revision: https://reviews.llvm.org/D42811
llvm-svn: 324098
Summary:
This patch enables debugging of C99 VLA types by generating more precise
LLVM Debug metadata, using the extended DISubrange 'count' field that
takes a DIVariable.
This should implement:
Bug 30553: Debug info generated for arrays is not what GDB expects (not as good as GCC's)
https://bugs.llvm.org/show_bug.cgi?id=30553
Reviewers: echristo, aprantl, dexonsmith, clayborg, pcc, kristof.beyls, dblaikie
Reviewed By: aprantl
Subscribers: jholewinski, schweitz, davide, fhahn, JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D41698
llvm-svn: 323952
This patch fixes a bug in CGRecordLowering::accumulateBitFields where it
unconditionally starts a new run and emits a storage field when it sees
a zero-sized bitfield, which causes an assertion in insertPadding to
fail when -fno-bitfield-type-align is used.
It shouldn't emit new storage if UseZeroLengthBitfieldAlignment and
UseBitFieldTypeAlignment are both false.
rdar://problem/36762205
llvm-svn: 323943
The patch ensures that a new storage unit is created when the new bitfield's
size is wider than the available bits.
rdar://36343145
Differential Revision: https://reviews.llvm.org/D42660
llvm-svn: 323921
Summary:
This change is step three in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API.
Step 4) Update Polly to use the new IRBuilder API.
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment()
and getSourceAlignment() instead.
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.
References:
http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
Reviewers: rjmccall
Subscribers: jyknight, nemanjai, nhaehnle, javed.absar, sbc100, aheejin, kbarton, fedor.sergeev, cfe-commits
Differential Revision: https://reviews.llvm.org/D41677
llvm-svn: 323617
The MSVC runtime library does not provide a definition of wmemcmp,
so we need an inline implementation.
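For context, a minimal sketch of what such an inline definition can look like; this is not the exact code added by the patch.
```c
#include <stddef.h>
#include <wchar.h>

// Compare n wide characters; returns <0, 0, or >0 like memcmp.
static inline int wmemcmp_sketch(const wchar_t *a, const wchar_t *b, size_t n) {
  for (; n != 0; --n, ++a, ++b) {
    if (*a != *b)
      return *a < *b ? -1 : 1;
  }
  return 0;
}
```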
Differential Revision: https://reviews.llvm.org/D42441
llvm-svn: 323362
Pass and return _Float16 as if it were an int or float for ARM, but with the
top 16 bits unspecified, similar to what we already do for __fp16.
We will implement proper half-precision function argument lowering in the ARM
backend soon, but want to use this workaround in the meantime.
Differential Revision: https://reviews.llvm.org/D42318
llvm-svn: 323185
When a function taking a transparent union is declared as taking one of
the union's members earlier in the translation unit, clang would hit an
"Invalid cast" assertion during EmitFunctionProlog. This case
corresponds to function f1 in test/CodeGen/transparent-union-redecl.c.
We decided to cast i32 to the union because, after merging the function
declarations, the function parameter type becomes int and the
CGFunctionInfo::ArgInfo type matches the ABIArgInfo type, so we decide
it is a trivial case. But these types should also be castable to the
parameter declaration type, which is not the case here.
The fix is to convert from the ABIArgInfo type to the VarDecl type, using
argument demotion when necessary.
Additional tests in Sema/transparent-union.c capture current behavior and make
sure there are no regressions.
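A hedged reduction of the f1 scenario described above (type and member names are illustrative, not the actual test):
```c
typedef union {
  int *ip;
  float *fp;
} TU __attribute__((transparent_union));

void f1(int *ip);   /* earlier declaration takes a union member */
void f1(TU tu) { }  /* later definition takes the transparent union */
```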
rdar://problem/34949329
Reviewers: rjmccall, rafael
Reviewed By: rjmccall
Subscribers: aemerson, cfe-commits, kristof.beyls, ahatanak
Differential Revision: https://reviews.llvm.org/D41311
llvm-svn: 323156
Summary:
Upstream LLVM is changing the prototypes of the @llvm.memcpy/memmove/memset
intrinsics. This change updates the Clang tests for this change.
The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
which is required to be a constant integer. It represents the alignment of the
dest (and source), and so must be the minimum of the actual alignment of the
two.
This change removes the alignment argument in favour of placing the alignment
attribute on the source and destination pointers of the memory intrinsic call.
For example, code which used to read:
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
will now read
call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)
At this time the source and destination alignments must be the same (Step 1).
Step 2 of the change, to be landed shortly, will relax that constraint and allow
the source and destination to have different alignments.
llvm-svn: 322964
Thanks to Eli Friedman, who suggested the reason these tests failed on a few
buildbots yet work fine locally is that non-assert builds don't emit value
labels.
llvm-svn: 322514
RISCVABIInfo is implemented in terms of XLen, supporting both RV32 and RV64.
Unfortunately we need to count argument registers in the frontend in order to
determine when to emit signext and zeroext attributes. Integer scalars are
extended according to their type up to 32-bits and then sign-extended to XLen
when passed in registers, but are anyext when passed on the stack. This patch
only implements the base integer (soft float) ABIs.
For more information on the RISC-V ABI, see [the ABI
doc](https://github.com/riscv/riscv-elf-psabi-doc/blob/master/riscv-elf.md),
my [golden model](https://github.com/lowRISC/riscv-calling-conv-model), and
the [LLVM RISC-V calling convention
patch](https://reviews.llvm.org/D39898#2d1595b4) (specifically the comment
documenting frontend expectations).
Differential Revision: https://reviews.llvm.org/D40023
llvm-svn: 322494
Summary:
kunpck intrinsics were removed in favor of native IR a few months ago. The implementation lowered them by operating on the integer types passed to the intrinsic and then just shifting, masking, and oring them together. A special X86 DAG combine was added to recognize this pattern and turn it into a concat_vector operation.
I think it makes more sense to keep the IR implementation closer to vector operations on vXi1, given that we expect these builtins to be used around other builtins that operate on k-registers, which we try to represent in IR with vXi1. InstCombine should be able to get rid of the bitcasts between integers and vXi1, leaving only the vector operations.
Reviewers: RKSimon, spatel, zvi, jina.nahias
Reviewed By: RKSimon
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42016
llvm-svn: 322461
GCOV in the old pass manager also strips debug info (if debug info is
disabled/only produced for profiling anyway) after the GCOV pass runs.
I think the strip pass hasn't been ported to the new pass manager, so it
might take me a little while to wire that up.
llvm-svn: 322126
Cf-protection is a target-independent flag that instructs the back-end to instrument control-flow mechanisms such as branch, return, etc.
For example in X86 this flag will be used to instrument Indirect Branch Tracking instructions.
Differential Revision: https://reviews.llvm.org/D40478
Change-Id: I5126e766c0e6b84118cae0ee8a20fe78cc373dea
llvm-svn: 322063
GCC's attribute 'target', in addition to being an optimization hint,
also allows function multiversioning. We currently have the former
implemented; this is the latter's implementation.
This works by enabling functions with the same name/signature to coexist,
so that they can all be emitted. Multiversion state is stored in the
FunctionDecl itself, and SemaDecl manages the definitions.
Note that it ends up having to permit redefinition of functions so
that they can all be emitted. Additionally, all versions of the function
must be emitted, so this also manages that.
Note that this includes some additional rules that GCC does not have, since
defining something as a multiversioned function after a use has been made is illegal.
The only 'history rewriting' that happens is if a function is emitted before
it has been converted to a multiversion'ed function, at which point its name
needs to be changed.
Function templates and virtual functions are NOT yet supported (not supported
in GCC either).
Additionally, constructors/destructors are disallowed, but the former is
planned.
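For illustration, a hedged sketch of the GCC-style usage this enables (feature strings are illustrative; the exact set of supported targets and language modes follows the implementation):
```c
__attribute__((target("default"))) int fast_path(void) { return 0; }
__attribute__((target("sse4.2"))) int fast_path(void) { return 1; }
__attribute__((target("avx2")))   int fast_path(void) { return 2; }

// All versions coexist under one name; the best one is selected at runtime.
int call_fast_path(void) { return fast_path(); }
```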
llvm-svn: 322028
Adds the -fstack-size-section flag to enable the .stack_sizes section. The flag defaults to on for the PS4 triple.
Differential Revision: https://reviews.llvm.org/D40712
llvm-svn: 321992
These are just overloads for _Float128. They're supported by GCC 7 and used
by glibc. APFloat support is already there so just add the overloads.
__builtin_copysignf128
__builtin_fabsf128
__builtin_huge_valf128
__builtin_inff128
__builtin_nanf128
__builtin_nansf128
This is the same support that GCC has, according to the documentation,
but limited to _Float128.
llvm-svn: 321948
r320902 fixed the IRGen for some types of checked multiplications. It
did not handle unsigned overflow correctly in the case where the signed
operand is negative (PR35750).
Eli pointed out that on overflow, the result must be equal to the unique
value that is equivalent to the mathematically-correct result modulo two
raised to the k power, where k is the number of bits in the result type.
This patch fixes the specialized IRGen from r320902 accordingly.
Testing: Apart from check-clang, I modified the test harness from
r320902 to validate the results of all multiplications -- not just the
ones which don't overflow:
https://gist.github.com/vedantk/3eb9c88f82e5c32f2e590555b4af5081
llvm.org/PR35750, rdar://34963321
Differential Revision: https://reviews.llvm.org/D41717
llvm-svn: 321771
Summary:
The C++ Itanium ABI says:
No cookie is required if the new operator being used is ::operator new[](size_t, void*).
We should only avoid poisoning the cookie if we're calling this
operator, not others. This is dealt with before the call to
InitializeArrayCookie.
Reviewers: rjmccall, kcc, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D41301
llvm-svn: 321645
added vbmi2 feature recognition
added intrinsics support for vbmi2 instructions
_mm[128,256,512]_mask[z]_compress_epi[16,32]
_mm[128,256,512]_mask_compressstoreu_epi[16,32]
_mm[128,256,512]_mask[z]_expand_epi[16,32]
_mm[128,256,512]_mask[z]_expandloadu_epi[16,32]
_mm[128,256,512]_mask[z]_sh[l,r]di_epi[16,32,64]
_mm[128,256,512]_mask_sh[l,r]dv_epi[16,32,64]
matching a similar work on the backend (D40206)
Differential Revision: https://reviews.llvm.org/D41557
llvm-svn: 321487
added vpclmulqdq feature recognition
added intrinsics support for vpclmulqdq instructions
_mm256_clmulepi64_epi128
_mm512_clmulepi64_epi128
matching a similar work on the backend (D40101)
Differential Revision: https://reviews.llvm.org/D41573
llvm-svn: 321480
added vaes feature recognition
added intrinsics support for vaes instructions, matching a similar work on the backend (D40078)
_mm256_aesenc_epi128
_mm512_aesenc_epi128
_mm256_aesenclast_epi128
_mm512_aesenclast_epi128
_mm256_aesdec_epi128
_mm512_aesdec_epi128
_mm256_aesdeclast_epi128
_mm512_aesdeclast_epi128
llvm-svn: 321474
Now that in the new TBAA format we allow access types to be of
any object types, including aggregate ones, it becomes critical
to specify types of all sub-objects such aggregates comprise as
their members. In order to meet this requirement, this patch
enables generation of field descriptors for members of array
types.
Differential Revision: https://reviews.llvm.org/D41399
llvm-svn: 321352
Now that the MDBuilder helpers generating TBAA type and access
descriptors in the new format are in place, we can teach clang to
use them when requested.
Differential Revision: https://reviews.llvm.org/D41394
llvm-svn: 321351
When a function taking a transparent union is declared as taking one of
the union's members earlier in the translation unit, clang would hit an
"Invalid cast" assertion during EmitFunctionProlog. This case
corresponds to function f1 in test/CodeGen/transparent-union-redecl.c.
We decided to cast i32 to the union because, after merging the function
declarations, the function parameter type becomes int and the
CGFunctionInfo::ArgInfo type matches the ABIArgInfo type, so we decide
it is a trivial case. But these types should also be castable to the
parameter declaration type, which is not the case here.
The fix is to check, for the trivial case, whether the ABIArgInfo type matches the
parameter declaration type. This exposed an inconsistency: we check
hasScalarEvaluationKind for different types in EmitParmDecl and
EmitFunctionProlog, even though a comment says they should match.
Additional tests in Sema/transparent-union.c capture current behavior and make
sure there are no regressions.
rdar://problem/34949329
Reviewers: rjmccall, rafael
Reviewed By: rjmccall
Subscribers: aemerson, cfe-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D41311
llvm-svn: 321296
Diagnose 'unreachable' UB when a noreturn function returns.
1. Insert a check at the end of functions marked noreturn.
2. A decl may be marked noreturn in the caller TU, but not marked in
the TU where it's defined. To diagnose this scenario, strip away the
noreturn attribute on the callee and insert check after calls to it.
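A hedged sketch of scenario 2 (names made up): the callee is declared noreturn in this TU, but its out-of-line definition might still return, so a check is inserted after the call.
```c
__attribute__((noreturn)) void fatal(const char *msg);

int must_be_positive(int x) {
  if (x <= 0)
    fatal("non-positive value");  // UBSan check emitted after this call
  return x;
}
```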
Testing: check-clang, check-ubsan, check-ubsan-minimal, D40700
rdar://33660464
Differential Revision: https://reviews.llvm.org/D40698
llvm-svn: 321231
Summary: Plant an inline version of "((ac+bd)/(cc+dd)) + i((bc-ad)/(cc+dd))" instead.
Patch by Paul Walker.
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D40299
llvm-svn: 321183
There are 2 parts to getting the -fassociative-math command-line flag translated to LLVM FMF:
1. In the driver/frontend, we accept the flag and its 'no' inverse and deal with the
interactions with other flags like -ffast-math -fno-signed-zeros -fno-trapping-math.
This was mostly already done - we just need to translate the flag as a codegen option.
The test file is complicated because there are many potential combinations of flags here.
Note that we are matching gcc's behavior that requires 'nsz' and no-trapping-math.
2. In codegen, we map the codegen option to FMF in the IR builder. This is simple code and
corresponding test.
For the motivating example from PR27372:
float foo(float a, float x) { return ((a + x) - x); }
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math -emit-llvm | egrep 'fadd|fsub'
%add = fadd nnan ninf nsz arcp contract float %0, %1
%sub = fsub nnan ninf nsz arcp contract float %add, %2
So 'reassoc' is off as expected (and so is the new 'afn' but that's a different patch).
This case now works as expected end-to-end although the underlying logic is still wrong:
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math | grep xmm
addss %xmm1, %xmm0
subss %xmm1, %xmm0
We're not done because the case where 'reassoc' is set is ignored by optimizer passes. Example:
$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math -emit-llvm | grep fadd
%add = fadd reassoc float %0, %1
$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math | grep xmm
addss %xmm1, %xmm0
subss %xmm1, %xmm0
Differential Revision: https://reviews.llvm.org/D39812
llvm-svn: 320920
This patch introduces a specialized way to lower overflow-checked
multiplications with mixed-sign operands. This fixes link failures and
ICEs on code like this:
void mul(int64_t a, uint64_t b) {
int64_t res;
__builtin_mul_overflow(a, b, &res);
}
The generic checked-binop irgen would use a 65-bit multiplication
intrinsic here, which requires runtime support for _muloti4 (128-bit
multiplication), and therefore fails to link on i386. To get an ICE
on x86_64, change the example to use __int128_t / __uint128_t.
Adding runtime and backend support for 65-bit or 129-bit checked
multiplication on all of our supported targets is infeasible.
This patch solves the problem by using simpler, specialized irgen for
the mixed-sign case.
llvm.org/PR34920, rdar://34963321
Testing: Apart from check-clang, I compared the output from this fairly
comprehensive test driver using unpatched & patched clangs:
https://gist.github.com/vedantk/3eb9c88f82e5c32f2e590555b4af5081
Differential Revision: https://reviews.llvm.org/D41149
llvm-svn: 320902
Summary:
InterlockedCompareExchange128 is a bit more complicated than the other
InterlockedCompareExchange functions, so it requires a bit more work. It
doesn't directly refer to 128bit ints, instead it takes pointers to
64bit ints for Destination and ComparandResult, and exchange is taken as
two 64bit ints (high & low). The previous value is written to
ComparandResult, and success is returned. This implementation does the
following in order to produce a cmpxchg instruction:
1. Cast everything to 128bit ints or int pointers, and glues together
the Exchange values
2. Reads from CompareandResult to get the comparand
3. Calls cmpxchg volatile (on X86 this will produce a lock cmpxchg16b
instruction)
1. Result 0 (previous value) is written back to ComparandResult
2. Result 1 (success bool) is zext'ed to a uchar and returned
Resolves bug https://llvm.org/PR35251
Patch by Colden Cullen!
Reviewers: rnk, agutowski
Reviewed By: rnk
Subscribers: majnemer, cfe-commits
Differential Revision: https://reviews.llvm.org/D41032
llvm-svn: 320730
This adds a new command line option -mprefer-vector-width to specify a preferred vector width for the vectorizers. Valid values are 'none' and unsigned integers. The driver will check that it meets those constraints. Specific supported integers will be managed by the targets in the backend.
Clang will take the value and add it as a new function attribute during CodeGen.
This represents the alternate direction proposed by Sanjay in this RFC: http://lists.llvm.org/pipermail/llvm-dev/2017-November/118734.html
The syntax here matches gcc, though gcc treats it as an x86 specific command line argument. gcc only allows values of 128, 256, and 512. I'm not having clang check any values.
Differential Revision: https://reviews.llvm.org/D40230
llvm-svn: 320419
This commit fixes a bug in IRGen where it generates completely broken
code for __fp16 vectors on X86. For example when the following code is
compiled:
half4 hv0, hv1, hv2; // these are vectors of __fp16.
void foo221() {
hv0 = hv1 + hv2;
}
clang generates the following IR, in which two i16 vectors are added:
@hv1 = common global <4 x i16> zeroinitializer, align 8
@hv2 = common global <4 x i16> zeroinitializer, align 8
@hv0 = common global <4 x i16> zeroinitializer, align 8
define void @foo221() {
%0 = load <4 x i16>, <4 x i16>* @hv1, align 8
%1 = load <4 x i16>, <4 x i16>* @hv2, align 8
%add = add <4 x i16> %0, %1
store <4 x i16> %add, <4 x i16>* @hv0, align 8
ret void
}
To fix the bug, this commit uses the code committed in r314056, which
modified clang to promote and truncate __fp16 vectors to and from float
vectors in the AST. It also fixes another IRGen bug where a short value
is assigned to an __fp16 variable without any integer-to-floating-point
conversion, as shown in the following example:
__fp16 a;
short b;
void foo1() {
a = b;
}
@b = common global i16 0, align 2
@a = common global i16 0, align 2
define void @foo1() #0 {
%0 = load i16, i16* @b, align 2
store i16 %0, i16* @a, align 2
ret void
}
rdar://problem/20625184
Differential Revision: https://reviews.llvm.org/D40112
llvm-svn: 320215
This is a follow-up to r320128. Eli pointed out that there is some gray
area in the language standard about whether the constant size is exact,
or a lower bound.
https://reviews.llvm.org/D40940
llvm-svn: 320185
This patch, together with a matching llvm patch (https://reviews.llvm.org/D39720), implements the lowering of X86 kunpack intrinsics to IR.
Differential Revision: https://reviews.llvm.org/D39719
Change-Id: Id5d3cb394ad33b98be79a6783d1d15569e2b798d
llvm-svn: 319777
There are 20 LLVM math intrinsics that correspond to mathlib calls according to the LangRef:
http://llvm.org/docs/LangRef.html#standard-c-library-intrinsics
We were only converting 3 mathlib calls (sqrt, fma, pow) and 12 builtin calls (ceil, copysign,
fabs, floor, fma, fmax, fmin, nearbyint, pow, rint, round, trunc) to their intrinsic-equivalents.
This patch pulls the transforms together and handles all 20 cases. The switch is guarded by a
check for const-ness to make sure we're not doing the transform if errno could possibly be set by
the libcall or builtin.
Differential Revision: https://reviews.llvm.org/D40044
llvm-svn: 319593
The basic idea behind this patch is that since in strict aliasing
mode all accesses to union members require their outermost
enclosing union objects to be specified explicitly, then for a
given pair of accesses to union members of the form
p->a.b.c...
q->x.y.z...
it is known they can only alias if both p and q point to the same
union type and offset ranges of members a.b.c... and x.y.z...
overlap. Note that the actual types of the members do not matter.
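A hedged illustration of the rule above (types made up): the two accesses below may alias only because both go through the same union type, regardless of the member types involved.
```c
union U {
  int i;
  float f[4];
};

int   read_int(union U *p)   { return p->i; }
float read_float(union U *q) { return q->f[0]; }
```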
Specifically, in this patch we do the following:
* Make unions to be valid TBAA base access types. This enables
generation of TBAA type descriptors for unions.
* Encode union types as structures with a single member of a
special "union member" type. Currently we do not encode
information about sizes of types, but conceptually such union
members are considered to be of the size of the whole union.
* Encode accesses to direct and indirect union members, including
member arrays, as accesses to these special members. All
accesses to members of a union thus get the same offset, which
is the offset of the union they are part of. This means the
existing LLVM TBAA machinery is able to handle such accesses
with no changes.
While this is already an improvement comparing to the current
situation, that is, representing all union accesses as may-alias
ones, there are further changes planned to complete the support
for unions. One of them is storing information about access sizes
so we can distinguish accesses to non-overlapping union members,
including accesses to different elements of member arrays.
Another change is encoding type sizes in order to make it
possible to compute offsets within constant-indexed array
elements. These enhancements will be addressed with separate
patches.
Differential Revision: https://reviews.llvm.org/D39455
llvm-svn: 319413
Summary:
The -fxray-always-emit-customevents flag instructs clang to always emit
the LLVM IR for calls to the `__xray_customevent(...)` built-in
function. The default behaviour currently respects whether the function
has an `[[clang::xray_never_instrument]]` attribute, and thus does not lower
the appropriate IR code for the custom event built-in.
This change allows users calling through to the
`__xray_customevent(...)` built-in to always see those calls lowered to
the corresponding LLVM IR to lay down instrumentation points for these
custom event calls.
Using this flag enables us to emit just the user-provided custom
events even while never instrumenting the start/end of the function
where they appear. This is useful in cases where "phase markers" using
__xray_customevent(...) can have very few instructions and must never be
instrumented when entered/exited.
Reviewers: rnk, dblaikie, kpw
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40601
llvm-svn: 319388
Shadow stack solution introduces a new stack for return addresses only.
The stack has a Shadow Stack Pointer (SSP) that points to the last address to which we expect to return.
If we return to a different address an exception is triggered.
This patch includes shadow stack intrinsics as well as the corresponding CET header.
It includes CET clang flags for shadow stack and Indirect Branch Tracking.
For more information, please see the following:
https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf
Differential Revision: https://reviews.llvm.org/D40224
Change-Id: I79ad0925a028bbc94c8ecad75f6daa2f214171f1
llvm-svn: 318995
fma4 instructions zero the upper bits of the xmm register. fma3 instructions leave the bits unmodified. This requires separate builtins for the different semantics.
While we're cleaning up the scalar builtins this also removes the fma3 fmsub/fnmadd/fnmsub builtins by using negates in the header file.
llvm-svn: 318985
This is an instrumentation flag that's similar to
-finstrument-functions, but it only inserts calls on function entry, the
calls are inserted post-inlining, and they don't take any arguments.
This is intended for users who want to instrument function entry with
minimal overhead.
(-pg would be another alternative, but forces frame pointer emission and
affects link flags, so is probably best left alone to be used for
generating gcov data.)
Differential revision: https://reviews.llvm.org/D40276
llvm-svn: 318785
On e.g. PPC the return value and argument were marked 'signext'. This
makes the test expectations a bit more flexible.
Follow-up to r318199.
llvm-svn: 318214
This updates -mcount to use the new attribute names (LLVM r318195), and
switches over -finstrument-functions to also use these attributes rather
than inserting instrumentation in the frontend.
It also adds a new flag, -finstrument-functions-after-inlining, which
makes the cygprofile instrumentation get inserted after inlining rather
than before.
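For reference, the cygprofile instrumentation calls the standard hooks that the user provides; a hedged sketch of their declarations (the no_instrument_function attribute keeps the hooks themselves uninstrumented):
```c
void __cyg_profile_func_enter(void *fn, void *call_site)
    __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *fn, void *call_site)
    __attribute__((no_instrument_function));
```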
Differential Revision: https://reviews.llvm.org/D39331
llvm-svn: 318199
Not much interesting here. Mostly wiring things together.
One thing worth noting is that the approach is substantially different
from the old PM. Here, the -O0 case works fundamentally differently in
that we just directly build the pipeline without any callbacks or other
cruft. In some ways, this is nice and clean. However, I don't like that
it causes the sanitizers to be enabled with different changes at
different times. =/ Suggestions for a better way to do this are welcome.
Differential Revision: https://reviews.llvm.org/D39085
llvm-svn: 318131
cbrt() is always constant because it can't overflow or underflow. Therefore, it can't set errno.
fma() is not always constant because it can overflow or underflow. Therefore, it can set errno.
But we know that it never sets errno on GNU / MSVC, so make it constant in those environments.
Differential Revision: https://reviews.llvm.org/D39641
llvm-svn: 318093
Recommit of r317951 and r317951 along with what I believe should fix
the remaining buildbot failures - the target triple should be specified
for both the ThinLTO pre-thinlink compile and backend (post-thinlink)
compile to ensure it is consistent.
Original description:
The LTO Config field wasn't being set when invoking a ThinLTO backend
via clang (i.e. for distributed builds).
llvm-svn: 318042
Change the header files of the test and testn intrinsics to lower them to IR code.
Removed the test and testn builtins from clang.
Differential Revision: https://reviews.llvm.org/D38737
llvm-svn: 318035
This patch, together with a matching llvm patch (https://reviews.llvm.org/D38671), implements the lowering of X86 shuffle i/f intrinsics to IR.
Differential Revision: https://reviews.llvm.org/D38672
Change-Id: I9b3c2f2b34323bd9ccb21d0c1832f848b88ec047
llvm-svn: 318025
Summary:
The LTO Config field wasn't being set when invoking a ThinLTO backend
via clang (i.e. for distributed builds).
Reviewers: danielcdh
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D39923
llvm-svn: 317951
The backend should be able to combine the negates to create fmsub, fnmadd, and fnmsub. faddsub converting to fsubadd still needs work I think, but should be very doable.
This matches what we already do for the masked builtins.
This only covers the packed builtins. Scalar builtins will be done after FMA4 is fixed.
llvm-svn: 317873
Summary:
This just seems to have been an oversight. We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.
Reviewers: tra
Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D39638
llvm-svn: 317623
The files are already large, and we may need to add even more RUNs to
distinguish differences based on OS, environment, or other platform things.
llvm-svn: 317583
This just makes const-ness of the builtins match const-ness of their lib function siblings.
We're deferring fixing some of these that are obviously wrong to follow-up patches.
Hopefully, the bugs are visible in the new test file (added at rL317220).
As the description in Builtins.def says: "e = const, but only when -fmath-errno=0".
This is step 2 of N to fix builtins and math calls as discussed in D39204.
Differential Revision: https://reviews.llvm.org/D39481
llvm-svn: 317265
Summary:
This change allows generalizing pointers in type signatures used for
cfi-icall by enabling the -fsanitize-cfi-icall-generalize-pointers flag.
This works by 1) emitting an additional generalized type signature
metadata node for functions and 2) llvm.type.test()ing for the
generalized type for translation units with the flag specified.
This flag is incompatible with -fsanitize-cfi-cross-dso because it would
require emitting twice as many type hashes which would increase artifact
size.
Reviewers: pcc, eugenis
Reviewed By: pcc
Subscribers: kcc
Differential Revision: https://reviews.llvm.org/D39358
llvm-svn: 317044
The LLVM sqrt intrinsic definition changed with:
D28797
...so we don't have to use any relaxed FP settings other than errno handling.
This patch sidesteps a question raised in PR27435:
https://bugs.llvm.org/show_bug.cgi?id=27435
Is a programmer using __builtin_sqrt() invoking the compiler's intrinsic definition of sqrt or the mathlib definition of sqrt?
But we have an answer now: the builtin should match the behavior of the libm function including errno handling.
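A tiny hedged example of the distinction: whether this lowers to the llvm.sqrt intrinsic or to a libm call depends on -fmath-errno, but either way it must now behave like the libm sqrt, including errno.
```c
double root(double x) {
  return __builtin_sqrt(x);
}
```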
Differential Revision: https://reviews.llvm.org/D39204
llvm-svn: 317031
Craig noticed that CodeGen wasn't properly ignoring the
values sent to the target attribute. This patch ignores
them.
This patch also sets the 'default' for this checking to
'supported', since only X86 has implemented the support
for checking valid CPU names and Feature Names.
One test was changed to i686, since it uses a lakemont,
which would otherwise be prohibited in x86_64.
Differential Revision: https://reviews.llvm.org/D39357
llvm-svn: 316783
The existing code goes out of its way to put block parameters into an
alloca only at -O0, and then describes the function argument with a
dbg.declare, which is undocumented in the LLVM-CFE contract and does
not actually behave as intended after LLVM r642022.
This patch just generates the alloca unconditionally, the mem2reg pass
will eliminate it at -O1 and up anyway and points the dbg.declare to
the alloca as intended (which mem2reg will then correctly rewrite into
a dbg.value).
This reapplies r316684 with some dead code removed.
rdar://problem/35043980
Differential Revision: https://reviews.llvm.org/D39305
llvm-svn: 316689
The existing code goes out of its way to put block parameters into an
alloca only at -O0, and then describes the function argument with a
dbg.declare, which is undocumented in the LLVM-CFE contract and does
not actually behave as intended after LLVM r642022.
This patch just generates the alloca unconditionally, the mem2reg pass
will eliminate it at -O1 and up anyway and points the dbg.declare to
the alloca as intended (which mem2reg will then correctly rewrite into
a dbg.value).
rdar://problem/35043980
Differential Revision: https://reviews.llvm.org/D39305
llvm-svn: 316684
Darwin uses char * for the variadic list type (va_list). We use the PPC
SVR4 ABI for PPC, which uses a structure type for the va_list. When
constructing the GEP, we would fail due to the incorrect handling for
the va_list. Correct this to use the right type.
llvm-svn: 316599
In the function GetIntrinsic, not all types are covered. The types double and long long are missing, and the type long is wrongly treated the same as int when it should be treated the same as long long. These problems cause compiler crashes when compiling the code in PR31161. This patch fixes the problem.
Differential Revision: https://reviews.llvm.org/D38820
llvm-svn: 316179
This patch has the following changes
A new flag "-mhvx-length={64B|128B}" is introduced to specify the length of the vector.
Previously we have used "-mhvx-double" for 128 Bytes. This adds the target-feature "+hvx-length{64|128}b"
The "-mhvx" flag must be provided on command line to enable HVX for Hexagon. If no -mhvx-length flag
is specified, a default length is picked from the arch mentioned in this priority order from either -mhvx=vxx
or -mcpu. For v60 and v62 the default length is 64 Byte. For unknown versions, the length is 128 Byte. The
-mhvx flag adds the target-feature "+hvxv{hvx_version}"
The 64 Byte mode is soon going to be deprecated. A warning is emitted if 64 Byte is enabled. A warning is
still emitted for the default 64 Byte as well. This warning can be suppressed with a -Wno flag.
The "-mhvx-double" and "-mno-hvx-double" flags are deprecated. A warning is emitted if the driver sees
them on commandline. "-mhvx-double" is an alias to "-mhvx-length=128B"
The compilation will error out if -mhvx-length is specified without an -mhvx/-mhvx= flag
The macro HVX_LENGTH is defined and is set to the length of the vector.
Eg: #define HVX_LENGTH 64
The macro HVX_ARCH is defined and is set to the version of the HVX.
Eg: #define HVX_ARCH 62
Differential Revision: https://reviews.llvm.org/D38852
llvm-svn: 316102
The nan family of math routines do not rely on global state. They do
however depend on their parameter. This fits the description of pure:
Functions which have no effects except the return value and their
return value depends only on the parameters and/or global variables.
Mark the family as `readonly`.
llvm-svn: 315968
The `nan` family of functions will inspect the contents of the parameter
that they are passed. As a result, the function cannot be annotated as
`const`. The documentation of the `const` attribute explicitly states
this:
Note that a function that has pointer arguments and examines the data
pointed to must not be declared const.
Adjust the annotations on this family of functions.
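A hedged illustration of the point above: nan() examines the data its parameter points to in order to build the NaN payload, so it can be readonly/pure but not const.
```c
#include <math.h>

double tagged_nan(const char *tag) {
  return nan(tag);  // result depends on pointed-to data, not just the pointer
}
```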
llvm-svn: 315741
Usually a compare expression should return an i1 type, so EmitScalarConversion is called before returning:
return EmitScalarConversion(Result, CGF.getContext().BoolTy, E->getType(), E->getExprLoc());
But when a ppc intrinsic is called to compare vectors, it can return i32 even when E->getType() is BoolTy. In this case EmitScalarConversion does nothing, an i32 result is returned, and it causes a crash later.
This patch detects this case and truncates the result before return.
Differential Revision: https://reviews.llvm.org/D38656
llvm-svn: 315358
Move the logic for determining the `wchar_t` type information into the
driver. Rather than passing the single bit of information of
`-fshort-wchar` indicate to the frontend the desired type of `wchar_t`
through a new `-cc1` option of `-fwchar-type` and indicate the
signedness through `-f{,no-}signed-wchar`. This replicates the current
logic which was spread throughout Basic into the
`RenderCharacterOptions`.
Most of the changes to the tests are to ensure that the frontend uses
the correct type. Add a new test set under `test/Driver/wchar_t.c` to
ensure that we calculate the proper types for the various cases.
llvm-svn: 315126
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
DecorateInstructionWithTBAA() is no more responsible for
converting types to tags. These implicit conversions not only
complicate reading the code, but also suggest assigning scalar
access tags while we generally prefer full-size struct-path tags.
TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.
Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).
We now check for valid base access types every time we
dereference a field. The original code only checks the top-level
base type. See isValidBaseType() / isTBAAPathStruct() calls.
Some entities have been renamed to sound more adequate and less
confusing/misleading in presence of path-aware TBAA information.
Now we do not lookup twice for the same cache entry in
getAccessTagInfo().
Refined relevant comments and descriptions.
Differential Revision: https://reviews.llvm.org/D37826
llvm-svn: 315048
Currently clang expands a call to __builtin_os_log_format into a long
sequence of instructions at the call site, causing code size to
increase in some cases.
This commit attempts to reduce code size by emitting a helper function
that can be shared by calls to __builtin_os_log_format with similar
formats and arguments. The helper function has linkonce_odr linkage to
enable the linker to merge identical functions across translation units.
Attribute 'noinline' is attached to the helper function at -Oz so that
the inliner doesn't inline functions that can potentially be merged.
This commit also fixes a bug where the generated IR writes past the end
of the buffer when "%m" is the last specifier appearing in the format
string passed to __builtin_os_log_format.
Original patch by Duncan Exon Smith.
rdar://problem/34065973
rdar://problem/34196543
Differential Revision: https://reviews.llvm.org/D38606
llvm-svn: 315045
Currently block is translated to a structure equivalent to
struct Block {
void *isa;
int flags;
int reserved;
void *invoke;
void *descriptor;
};
Except invoke, which is the pointer to the block invoke function,
all other fields are useless for OpenCL, which clutter the IR and
also waste memory since the block struct is passed to the block
invoke function as argument.
On the other hand, the size and alignment of the block struct is
not stored in the struct, which causes difficulty to implement
__enqueue_kernel as library function, since the library function
needs to know the size and alignment of the argument which needs
to be passed to the kernel.
This patch removes the useless fields from the block struct and adds
size and align fields. The equivalent block struct will become
struct Block {
int size;
int align;
generic void *invoke;
/* custom fields */
};
It also changes the pointer to the invoke function to be
a generic pointer since the address space of a function
may not be private on certain targets.
Differential Revision: https://reviews.llvm.org/D37822
llvm-svn: 314932
Allow the proper recognition of enum values and global variables inside MS inline-asm memory / immediate expressions, as they require some additional overhead and are treated incorrectly if not recognized early.
supersedes D33278, D35774
Differential Revision: https://reviews.llvm.org/D37413
llvm-svn: 314494
Allow the proper recognition of enum values and global variables inside MS inline-asm memory / immediate expressions, as they require some additional overhead and are treated incorrectly if not recognized early.
supersedes D33277, D35775
Corrsponds with D37412, D37413
llvm-svn: 314300
This patch fixes clang to decorate reference accesses as pointers
and not as "omnipotent chars".
Differential Revision: https://reviews.llvm.org/D38074
llvm-svn: 314209
Summary:
This is the follow-up patch to D37924.
This change refactors clang to use the the newly added section headers
in SpecialCaseList to specify which sanitizers blacklists entries
should apply to, like so:
[cfi-vcall]
fun:*bad_vcall*
[cfi-derived-cast|cfi-unrelated-cast]
fun:*bad_cast*
The SanitizerSpecialCaseList class has been added to allow querying by
SanitizerMask, and SanitizerBlacklist and its downstream users have been
updated to provide that information. Old blacklists not using sections
will continue to function identically since the blacklist entries will
be placed into a '[*]' section by default matching against all
sanitizers.
Reviewers: pcc, kcc, eugenis, vsk
Reviewed By: eugenis
Subscribers: dberris, cfe-commits, mgorny
Differential Revision: https://reviews.llvm.org/D37925
llvm-svn: 314171
Clang regression tests that depend on the optimizer can break
when there are changes to LLVM...as in:
https://reviews.llvm.org/rL314117
llvm-svn: 314144
This commit fixes a bug in the handling of storage-only __fp16 vectors
where clang didn't promote __fp16 vector operands to float vectors.
Conceptually, it performs the following transformation on the AST in
CreateBuiltinBinOp and CreateBuiltinUnaryOp:
(Before)
typedef __fp16 half4 __attribute__ ((vector_size (8)));
typedef float float4 __attribute__ ((vector_size (16)));
half4 hv0, hv1, hv2, hv3;
hv0 = hv1 + hv2 + hv3;
(After)
float4 t0 = (float4)hv1 + (float4)hv2;
float4 t1 = t0 + (float4)hv3;
hv0 = (half4)t1;
Note that this commit fixes the bug for targets that set
HalfArgsAndReturns to true (ARM and ARM64). Targets using intrinsics
such as llvm.convert.to.fp16 to handle __fp16 are still broken.
rdar://problem/20625184
Differential Revision: https://reviews.llvm.org/D32520
llvm-svn: 314056
This commit fixes an infinite loop in IRGen that occurs when compiling
the following code:
void FUNC2() {
static void (^const block1)(int) = ^(int a){
if (a--)
block1(a);
};
}
This is how IRGen gets stuck in the infinite loop:
1. GenerateBlockFunction is called to emit the body of "block1".
2. GetAddrOfGlobalBlock is called to get the address of "block1". The
function calls getAddrOfGlobalBlockIfEmitted to check whether the
global block has been emitted. If it hasn't been emitted, it then
tries to emit the body of the block function by calling
GenerateBlockFunction, which goes back to step 1.
This commit prevents the inifinite loop by building the global block in
GenerateBlockFunction before emitting the body of the block function.
rdar://problem/34541684
Differential Revision: https://reviews.llvm.org/D38118
llvm-svn: 314029
The recent behavior in the code that these tests were meant to be checking will be amended as soon as a suitable change can be properly reviewed.
llvm-svn: 313684
Summary:
Restore the `__builtin_wasm_rethrow` builtin deleted in D37931. On second
thought, it appears it can be used to implement `__cxa_rethrow`.
Reviewers: dschuff, sunfish
Reviewed By: dschuff
Subscribers: jfb, sbc100, jgravelle-google
Differential Revision: https://reviews.llvm.org/D37942
llvm-svn: 313430
This patch replaces the perm2f128 intrinsics with native shuffle vectors.
This uses a pretty simple approach to allocate source 0 to the lower half input and source 1 to the upper half input. Then it's just a matter of filling in the indices to use either the lower or upper half of that specific source. This can result in the same source being used by both operands. InstCombine or SelectionDAGBuilder should be able to clean that up.
Differential Revision: https://reviews.llvm.org/D37892
llvm-svn: 313418
Summary:
Remove `__builtin_wasm_rethrow` builtin. I thought it was required to implement
`__cxa_rethrow` function in libcxxabi, but it turned out it will be using
`__builtin_wasm_throw` instead.
Reviewers: dschuff, jgravelle-google
Reviewed By: jgravelle-google
Subscribers: jfb, sbc100, jgravelle-google
Differential Revision: https://reviews.llvm.org/D37931
llvm-svn: 313402
The __builtin_ia32_pbroadcastq512_mem_mask we were previously trying to use in 32-bit mode is not implemented in the x86 backend and causes isel to fail in release builds. In debug builds it fails even earlier during legalization with an llvm_unreachable.
While there, add the missing test case for this intrinsic in 64-bit mode.
This fixes PR34631. D37668 should be able to recover this for 32-bit mode soon. But I wanted to fix the crash ahead of that.
llvm-svn: 313392
Summary:
As attributed statements are considered simple statements, no
stoppoint was generated before emitting an attributed do/while/for/range
statement. This led to faulty debug locations.
Reviewers: echristo, aaron.ballman, dblaikie
Reviewed By: dblaikie
Subscribers: bjope, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D37428
llvm-svn: 312623
Based off the Intel Intrinsics guide, we should expect a void const* argument.
Prevents 'passing 'const void *' to parameter of type 'void *' discards qualifiers' warnings.
Differential Revision: https://reviews.llvm.org/D37449
llvm-svn: 312523
Because it is common to treat vector types as an array of their elements, or
even some other type that's not the element type, and thus index into them, we
can't use struct-path TBAA for these accesses. Even though we already treat all
vector types as equivalent to 'char', we were using field-offset information
for them with TBAA, and this renders undefined the intra-value indexing we
intend to allow. Note that, although 'char' is universally aliasing, with path
TBAA, we can still differentiate between access to s.a and s.b in
struct { char a, b; } s;. We can't use this capability as-is for vector types.
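A hedged example (not from the patch itself) of the indexing pattern in question; the typedef and function are illustrative only:
```
/* Treating a vector as an array of its elements; with field-offset TBAA
   info on the vector type, this intra-value access would be mis-modeled. */
typedef float float4 __attribute__((vector_size(16)));

float second_lane(float4 v) {
  float *p = (float *)&v;
  return p[1];
}
```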
Fixes PR33967.
llvm-svn: 312447
Tests fail on ARM targets due to the ABI name appearing between 'define' and 'void'. Added a regex to skip it.
Patch by Glenn Howe (and expanded on by Douglas Yung)!
Differential Revision: https://reviews.llvm.org/D33410
llvm-svn: 312181
This patch implements the broadcastf32x2/broadcasti32x2 intrinsics using __builtin_shufflevector.
Differential Revision: https://reviews.llvm.org/D37287
llvm-svn: 312135
Summary:
An implementation of ubsan runtime library suitable for use in production.
Minimal attack surface.
* No stack traces.
* Definitely no C++ demangling.
* No UBSAN_OPTIONS=log_file=/path (very suid-unfriendly). And no UBSAN_OPTIONS in general.
* as simple as possible
Minimal CPU and RAM overhead.
* Source locations unnecessary in the presence of (split) debug info.
* Values and types (as in A+B overflows T) can be reconstructed from register/stack dumps, once you know what type of error you are looking at.
* above two items save 3% binary size.
When UBSan is used with -ftrap-function=abort, sometimes it is hard to reason about failures. This library replaces abort with a slightly more informative message without much extra overhead. Since the ubsan interface is not stable, this code must reside in compiler-rt.
Reviewers: pcc, kcc
Subscribers: srhines, mgorny, aprantl, krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D36810
llvm-svn: 312029
This adds __builtin_cpu_init which will emit a call to __cpu_indicator_init in libgcc or compiler-rt.
This is needed to support __builtin_cpu_supports/__builtin_cpu_is in an ifunc resolver.
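A hedged sketch of the intended use in an ifunc resolver; the function names are hypothetical:
```
static void impl_avx2(void)    { /* AVX2 code path */ }
static void impl_generic(void) { /* fallback path  */ }

/* An ifunc resolver may run before the libgcc/compiler-rt constructors, so
   it must call __builtin_cpu_init() before __builtin_cpu_supports(). */
static void (*resolve_impl(void))(void) {
  __builtin_cpu_init();
  return __builtin_cpu_supports("avx2") ? impl_avx2 : impl_generic;
}
```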
Differential Revision: https://reviews.llvm.org/D36336
llvm-svn: 311874
Summary: With accurate sample profile, we can do more aggressive size optimization. For some size-critical application, this can reduce the text size by 20%
Reviewers: davidxl, rsmith
Reviewed By: davidxl, rsmith
Subscribers: mehdi_amini, eraman, sanjoy, cfe-commits
Differential Revision: https://reviews.llvm.org/D37091
llvm-svn: 311707
This patch is intended to enable the use of basic double letter constraints used in GCC extended inline asm {Yi Y2 Yz Y0 Ym Yt}.
Supersedes D35205
llvm counterpart: D36369
Differential Revision: https://reviews.llvm.org/D36371
llvm-svn: 311643
Summary:
Most DIExpressions are empty or very simple. When they are complex, they
tend to be unique, so checking them inline is reasonable.
This also avoids the need for CodeGen passes to append to the
llvm.dbg.mir named md node.
See also PR22780, for making DIExpression not be an MDNode.
Reviewers: aprantl, dexonsmith, dblaikie
Subscribers: qcolombet, javed.absar, eraman, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D37075
llvm-svn: 311594
Summary:
Even in the case where the input file is a preprocessed source, clang uses the file name of the preprocessed source for debug info (the DW_AT_name attribute of DW_TAG_compile_unit). However, gcc uses the file name specified in the first linemarker instead. This makes more sense because the one specified in the linemarker represents the "actual" source file name.
Clang already uses the file name specified in the first linemarker for the Module name (https://github.com/llvm-mirror/clang/blob/master/lib/Frontend/FrontendAction.cpp#L779) if the input is preprocessed. This patch makes clang use the same value for debug info as well.
Reviewers: compnerd, rnk, dblaikie, rsmith
Reviewed By: rnk
Subscribers: aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D36474
llvm-svn: 311037
This is causing failures when compiling clang with -O3
as one of the structures used by clang is passed by
value and uses the fastcc calling convention.
Failures manifest in the stage2 mips build.
llvm-svn: 310704
This patch adds support for __builtin_cpu_is. I've tried to match the strings supported to the latest version of gcc.
Differential Revision: https://reviews.llvm.org/D35449
llvm-svn: 310657
Currently, only a non-negative immediate is allowed prior to a bracketed expression (memory reference).
MASM / GAS have no problem coping with the negative half of the real line (i.e. negative immediates), so we should be able to as well.
llvm: D36229
Differential Revision: https://reviews.llvm.org/D36230
llvm-svn: 310529
The MS inline asm dot operator accepts "bases" such as "this" (C++) and class/struct pointer typedefs.
This patch enhances its implementation to support this behavior.
Differential Revision: https://reviews.llvm.org/D36450
llvm-svn: 310472
They still need to be implemented in the intrinsics, the command line, and the backend. But this change isn't dependent on any of that and resolves a TODO.
llvm-svn: 310386
Summary:
On older processors this instruction encoding is treated as a NOP.
MSVC doesn't disable intrinsics based on features the way clang/gcc does. Because the PAUSE instruction encoding doesn't crash older processors, some software out there uses these intrinsics without checking for SSE2.
This change also seems to be consistent with gcc behavior.
Fixes PR34079
Reviewers: RKSimon, zvi
Reviewed By: RKSimon
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D36362
llvm-svn: 310191
Summary:
Previously, STL allocators were blacklisted in compiler_rt's
cfi_blacklist.txt because they mandated a cast from void* to T* before
object initialization completed. This change moves that logic into the
front end because C++ name mangling supports a substitution compression
mechanism for symbols that makes it difficult to blacklist the mangled
symbol for allocate() using a regular expression.
Motivated by crbug.com/751385.
Reviewers: pcc, kcc
Reviewed By: pcc
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D36294
llvm-svn: 310097
This is causing failures when compiling clang with -O3
as one of the structures used by clang is passed by
value and uses the fastcc calling convention.
Failures manifest in the stage2 mips build.
llvm-svn: 310057
This option when combined with -mgpopt and -membedded-data places all
uninitialized constant variables in the read-only section.
Reviewers: atanasyan, nitesh.jain
Differential Revision: https://reviews.llvm.org/D35917
llvm-svn: 309940
A change to InstCombine broke this test, but we generally frown on running optimizations in clang tests anyway. So I've updated the checks to not depend on optimizations anymore.
llvm-svn: 309616
MS ignores the keyword "short" when used after a jc/jz instruction, LLVM ought to do the same.
llvm: D35892
Differential Revision: https://reviews.llvm.org/D35893
llvm-svn: 309510
Allows the incorporation of legitimate (x86) control registers within inline asm statements.
Differential Revision: https://reviews.llvm.org/D35903
llvm-svn: 309508
Clang specifies a max type alignment of 16 bytes on darwin targets (annoyingly in the driver not via cc1), meaning that the builtin nontemporal stores don't correctly align the loads/stores to 32 or 64 bytes when required, resulting in lowering to temporal unaligned loads/stores.
This patch casts the vectors to explicitly aligned types prior to the load/store to ensure that the required alignment is respected.
Differential Revision: https://reviews.llvm.org/D35996
llvm-svn: 309488
On some targets, passing zero to the clz() or ctz() builtins has undefined
behavior. I ran into this issue while debugging UB in __hash_table from libcxx:
the bug I was seeing manifested itself differently under -O0 vs -Os, due to a
UB call to clz() (see: libcxx/r304617).
This patch introduces a check which can detect UB calls to builtins.
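A hedged example of the kind of call the new check is meant to flag; the function is illustrative only:
```
/* __builtin_clz(0) is undefined behavior on targets whose underlying
   instruction leaves the zero case undefined; the new check catches such
   calls at runtime. */
unsigned log2_floor(unsigned x) {
  return 31u - __builtin_clz(x);   /* UB when x == 0 */
}
```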
llvm.org/PR26979
Differential Revision: https://reviews.llvm.org/D34590
llvm-svn: 309459
Clang specifies a max type alignment of 16 bytes on darwin targets, meaning that the builtin nontemporal stores don't correctly align the loads/stores to 32 or 64 bytes when required, resulting in lowering to temporal unaligned loads/stores.
llvm-svn: 309382
The ARM Runtime ABI document (IHI0043) defines the AEABI floating point
helper functions in 4.1.2 The floating-point helper functions. These
functions always use the base PCS (soft-fp). However helper functions
defined outside of this document such as the complex-number multiply and
divide helpers are not covered by this requirement and should use
hard-float PCS if the target is hard-float as both compiler-rt and libgcc
for a hard-float sysroot implement these functions with a hard-float PCS.
All of the floating point helper functions that are explicitly soft float
are expanded in the llvm ARM backend. This change makes clang not force the
BuiltinCC to AAPCS for AAPCS_VFP. With this change the ARM compiler-rt
tests involving _Complex pass with both hard-fp and soft-fp targets.
Differential Revision: https://reviews.llvm.org/D35538
llvm-svn: 309257
Notifying the author via Diffusion did not yield any answer. Therefore, I'm
adding the missing triple. I have no idea if this is the intended triple, but
it seems to fit the bill and should turn the bots back to green.
If the intended triple is a different one, please feel free to change it, but I
need to make this change to turn the bots back to green now.
llvm-svn: 308985
This reverts r308867 and r308866.
It broke the sanitizer-windows buildbot on C++ code similar to the
following:
namespace cl { }
void f() {
__asm {
mov al, cl
}
}
t.cpp(4,13): error: unexpected namespace name 'cl': expected expression
mov al, cl
^
In this case, MSVC parses 'cl' as a register, not a namespace.
llvm-svn: 308926
With MS-style inline asm, the following snippet:
int eax;
__asm mov eax, ebx
should store ebx into the memory location of the variable eax.
This patch sees to it.
Previously, a reg-to-reg move would have been emitted.
llvm: D34739
Differential Revision: https://reviews.llvm.org/D34740
llvm-svn: 308867
MIPS gcc supports the `long_call/far/near` attributes only, but other
targets have the `short_call` attribute, so let's support it for MIPS
for consistency.
llvm-svn: 308719
The patch adds support for lowering i128 params. The changes are quite trivial,
supporting i128 as a "special case" of integer type. With this patch, we lower i128
params the same way as aggregates of size 16 bytes: .param .b8 _ [16].
Currently, NVPTX can't deal with 128-bit integers:
* in some cases because of failed assertions like
ValVTs.size() == OutVals.size() && "Bad return value decomposition"
* in other cases emitting PTX with .i128 or .u128 types (which are not valid [1])
[1] http://docs.nvidia.com/cuda/parallel-thread-execution/index.html#fundamental-types
Differential Revision: https://reviews.llvm.org/D34555
Patch by: Denys Zariaiev (denys.zariaiev@gmail.com)
llvm-svn: 308675
This patch adds support for the `long_call`, `far`, and `near` attributes
for MIPS targets. The `long_call` and `far` attributes are synonyms. All
these attributes override `-mlong-calls` / `-mno-long-calls` command
line options for a particular function.
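A hedged usage example (declarations only; the function names are made up):
```
/* Force a far (long) call sequence for one function and a near call for
   another, overriding -mlong-calls / -mno-long-calls per function. */
void rarely_used(void) __attribute__((long_call));
void hot_helper(void)  __attribute__((near));
```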
Differential revision: https://reviews.llvm.org/D35479
llvm-svn: 308667
Summary: COFF ARM64 is LLP64 platform. So int is 4 bytes, long is 4 bytes and long long is 8 bytes.
Reviewers: compnerd, ruiu, rnk, efriedma
Reviewed By: compnerd, efriedma
Subscribers: efriedma, javed.absar, cfe-commits, aemerson, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D34859
llvm-svn: 308222
Move builtins from the x86 specific scope into the global
scope. Their use is still limited to x86_64 and aarch64 though.
This allows wine on aarch64 to properly handle variadic functions.
Differential Revision: https://reviews.llvm.org/D34475
llvm-svn: 308218
This patch updates the vecintrin.h header file to provide the new
set of high-level vector built-in functions. This matches the
updated definition implemented by other compilers for the platform,
indicated by the pre-defined macro __VEC__ == 10302.
Note that some of the new functions (notably those involving the
vector float data type) are only available with -march=z14
(indicated by __ARCH__ == 12).
llvm-svn: 308199
This patch extends the -fzvector language feature to enable the new
"vector float" data type when compiling at -march=z14. This matches
the updated extension definition implemented by other compilers for
the platform, which is indicated to applications by pre-defining
__VEC__ to 10302 (instead of 10301).
llvm-svn: 308198
This patch series adds support for the IBM z14 processor. This part includes:
- Basic support for the new processor and its features.
- Support for low-level builtins mapped to new LLVM intrinsics.
Support for the -fzvector extension to vector float and the new
high-level vector intrinsics is provided by separate patches.
llvm-svn: 308197
This is the clang part, adding support for
void __builtin_HEXAGON_Y2_dccleana(void*);
void __builtin_HEXAGON_Y2_dccleaninva(void*);
void __builtin_HEXAGON_Y2_dcinva(void*);
void __builtin_HEXAGON_Y2_dczeroa(void*);
void __builtin_HEXAGON_Y4_l2fetch(void*, unsigned);
void __builtin_HEXAGON_Y5_l2fetch(void*, unsigned long long);
Requires r308032.
llvm-svn: 308035
The pointer overflow check gives false negatives when dealing with
expressions in which an unsigned value is subtracted from a pointer.
This is summarized in PR33430 [1]: ubsan permits the result of the
subtraction to be greater than "p", but it should not.
To fix the issue, we should track whether or not the pointer expression
is a subtraction. If it is, and the indices are unsigned, we know to
expect "p - <unsigned> <= p".
I've tested this by running check-{llvm,clang} with a stage2
ubsan-enabled build. I've also added some tests to compiler-rt, which
are in D34122.
[1] https://bugs.llvm.org/show_bug.cgi?id=33430
Differential Revision: https://reviews.llvm.org/D34121
llvm-svn: 307955
devirtualized.
The code to detect devirtualized calls is already in IRGen, so move the
code to lib/AST and make it a shared utility between Sema and IRGen.
This commit fixes a linkage error I was seeing when compiling the
following code:
$ cat test1.cpp
struct Base {
virtual void operator()() {}
};
template<class T>
struct Derived final : Base {
void operator()() override {}
};
Derived<int> *d;
int main() {
if (d)
(*d)();
return 0;
}
rdar://problem/33195657
Differential Revision: https://reviews.llvm.org/D34301
llvm-svn: 307883
Summary:
The _bit_scan_forward and _bit_scan_reverse intrinsics were accidentally
masked under the preprocessor checks that prune intrinsics definitions for the
benefit of faster compile-time on Windows. This patch moves the
definitons out of that region.
Fixes pr33722
Reviewers: craig.topper, aaboud, thakis
Reviewed By: craig.topper
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D35184
llvm-svn: 307524
Certain targets (e.g. amdgcn) require global variables to stay in the global or constant address
space. In C or C++ global variables are emitted in the default (generic) address space.
This patch introduces virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace
and TargetInfo::getConstantAddressSpace to handle this in a general way.
It only affects IR generated for amdgcn target.
Differential Revision: https://reviews.llvm.org/D33842
llvm-svn: 307470
Summary: The patch makes the integration test cover major sample PGO components.
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: sanjoy, cfe-commits
Differential Revision: https://reviews.llvm.org/D34725
llvm-svn: 307445
problems in testing, see comments in D34161 for some more details.
A fix is in progress in D35011, but a revert seems better now as the fix will
probably take some more time to land.
llvm-svn: 307277
Summary: This implements the clang bits of https://reviews.llvm.org/D34720, and adds a corresponding test to verify that it works.
Reviewers: chandlerc, davidxl, davide, tejohnson
Reviewed By: chandlerc, tejohnson
Subscribers: tejohnson, sanjoy, mehdi_amini, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D34721
llvm-svn: 306764
This patch extends the `overloadable` attribute to allow for one
function with a given name to not be marked with the `overloadable`
attribute. The overload without the `overloadable` attribute will not
have its name mangled.
So, the following code is now legal:
void foo(void) __attribute__((overloadable));
void foo(int);
void foo(float) __attribute__((overloadable));
In addition, this patch fixes a bug where we'd accept code with
`__attribute__((overloadable))` inconsistently applied. In other words,
we used to accept:
void foo(void);
void foo(void) __attribute__((overloadable));
But we will do this no longer, since it defeats the original purpose of
requiring `__attribute__((overloadable))` on all redeclarations of a
function.
This breakage seems to not be an issue in practice, since the only code
I could find that had this pattern often looked like:
void foo(void);
void foo(void) __attribute__((overloadable)) __asm__("foo");
void foo(int) __attribute__((overloadable));
...Which can now be simplified by simply removing the asm label and
overloadable attribute from the redeclaration of `void foo(void);`
Differential Revision: https://reviews.llvm.org/D32332
llvm-svn: 306467
Summary: This is the test update patch for https://reviews.llvm.org/D34662
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: cfe-commits, sanjoy, mehdi_amini, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D34663
llvm-svn: 306430
Summary:
Change data layout string so it would be compatible with MSP430 EABI.
Depends on D34561
Reviewers: asl, awygle
Reviewed By: asl
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D34562
llvm-svn: 306161
We need to take type alignment padding into account when computing physical
layouts.
The layout must be compatible with the input layout: offsets are defined in
terms of offsets within a packed struct, which are computed in terms of the alloc
size of a type.
Using the store size we would insert padding for the following type, for example:
struct {
int3 v;
long long l;
} __attribute((packed))
On x86-64 int3 is padded to int4 alignment. The swiftcc type would be
<{ <3 x float>, [4 x i8], i64 }> which is not compatible with <{ <3 x float>,
i64 }>.
The latter has i64 at offset 16 and the former at offset 20.
rdar://32618125
llvm-svn: 305956
In running some internal vectorcall tests in 32 bit mode, we discovered that the
behavior I'd previously implemented for x64 (and applied to x32) regarding the
assignment of SSE registers was incorrect. See spec here:
https://msdn.microsoft.com/en-us/library/dn375768.aspx
My previous implementation applied register argument position from the x64
version to both. This isn't correct for x86, so this removes and refactors that
section. Additionally, it corrects the integer/int-pointer assignments. Unlike
x64, x86 permits integers to be assigned independent of position.
Finally, the code for 32 bit was cleaned up a little to clarify the intent,
as well as given a descriptive comment.
Differential Revision: https://reviews.llvm.org/D34455
llvm-svn: 305928
This allows for -fms-extensions to work the same on LP64. For example,
_BitScanReverse is expected to be 32-bit, matching Windows/LLP64, even
though long is 64-bit on x86_64 Darwin or Linux (LP64).
Implement this by adding a new character code 'N', which is 'int' if
the target is LP64 and the same as 'L' otherwise.
Differential Revision: https://reviews.llvm.org/D34377
rdar://problem/32599746
llvm-svn: 305875
Summary:
Disable generation of the counting-function attribute if the no_instrument_function
attribute is present on the function.
This interaction between -pg and no_instrument_function is the desired behavior
and matches gcc as well.
This is required for fixing a crash in Linux kernel when function tracing
is enabled.
Fixes PR33515.
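A hedged example of the intended effect; the function name is hypothetical:
```
/* With -pg, functions normally get a counting-function (mcount/fentry) call
   inserted; the attribute now suppresses it, as the kernel expects. */
__attribute__((no_instrument_function))
void traced_subsystem_entry(void) {
  /* no mcount/fentry call is emitted here */
}
```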
Reviewers: hfinkel, rengolin, srhines, hans
Reviewed By: hfinkel
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D34357
llvm-svn: 305728
In C++ all variables are in the default address space. A previous change
cast automatic variables to the default address space. However, that is
not sufficient, since all temporary variables need to be cast to the default
address space.
This patch casts all temporary variables to the default address space, except those
used for passing indirect arguments, since those are only used for loads/stores.
This patch only affects targets with a non-zero alloca address space.
Differential Revision: https://reviews.llvm.org/D33706
llvm-svn: 305711
Skip checks for null dereference, alignment violation, object size
violation, and dynamic type violation if the pointer points to volatile
data.
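A hedged illustration of the kind of access that is no longer instrumented; the function is made up:
```
/* Accesses through a pointer to volatile data (e.g. memory-mapped I/O) skip
   the null/alignment/object-size/dynamic-type checks after this change. */
int read_status(volatile int *reg) {
  return *reg;
}
```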
Differential Revision: https://reviews.llvm.org/D34262
llvm-svn: 305546
If a regular LTO module has a summary index, then instead of linking
it into the combined regular LTO module right away, add it to the
combined summary index and associate it with a special module that
represents the combined regular LTO module.
Any such modules are linked during LTO::run(), at which time we use
the results of summary-based dead stripping to control whether to
link prevailing symbols.
Differential Revision: https://reviews.llvm.org/D33922
llvm-svn: 305482
Add checking for the second parameter of the altivec conversion builtin to make sure
it is a compile-time constant int.
This patch fixes PR33212: PPC vec_cst useless at -O0
Differential Revision: https://reviews.llvm.org/D34092
llvm-svn: 305401
Summary:
It seems -flto must be either "thin" or "full". I think the use of
containValue is just a typo.
Reviewers: ruiu, tejohnson
Subscribers: mehdi_amini, inglorion
Differential Revision: https://reviews.llvm.org/D34055
llvm-svn: 305392
Summary:
The change "[CodeView] Implement support for bit fields in
Clang" (r274201, https://reviews.llvm.org/rL274201) broke the
calculation of bit offsets for the debug info describing bitfields on
big-endian targets.
Prior to commit r274201 the debug info for bitfields got their offsets
from the ASTRecordLayout in CGDebugInfo::CollectRecordFields(), the
current field offset was then passed on to
CGDebugInfo::CollectRecordNormalField() and used directly in the
DIDerivedType.
Since commit r274201, the bit offset ending up in the DIDerivedType no
longer comes directly from the ASTRecordLayout. Instead
CGDebugInfo::CollectRecordNormalField() calls the new method
CGDebugInfo::createBitFieldType(), which in turn calls
CodeGenTypes::getCGRecordLayout().getBitFieldInfo() to fetch a
CGBitFieldInfo describing the field. The 'Offset' member of
CGBitFieldInfo is then used to calculate the bit offset of the
DIDerivedType. Unfortunately the previous and current method of
calculating the bit offset are only equivalent for little endian
targets, as CGRecordLowering::setBitFieldInfo() reverses the bit
offsets for big endian targets as the last thing it does.
A simple reproducer for this error is the following module:
struct fields {
unsigned a : 4;
unsigned b : 4;
} flags = {0x0f, 0x1};
Compiled for Mips, with commit r274200 both the DIDerivedType bit
offsets on the IR-level and the DWARF information on the ELF-level
will have the expected values: the offsets of 'a' and 'b' are 0 and 4
respectively. With r274201 the offsets are switched to 4 and 0. By
noting that the static initialization of 'flags' in both cases is the
same, we can eliminate a change in record layout as the cause of the
change in the debug info. Also compiling this example with gcc,
produces the same record layout and debug info as commit r274200.
In order to restore the previous function we extend
CGDebugInfo::createBitFieldType() to compensate for the reversal done
in CGRecordLowering::setBitFieldInfo().
Patch by Frej Drejhammar!
Reviewers: cfe-commits, majnemer, rnk, aaboud, echristo, aprantl
Reviewed By: rnk, aprantl
Subscribers: aprantl, arichardson, frej
Differential Revision: https://reviews.llvm.org/D32745
llvm-svn: 305224
Adding an unsigned offset to a base pointer has undefined behavior if
the result of the expression would precede the base. An example from
@regehr:
int foo(char *p, unsigned offset) {
return p + offset >= p; // This may be optimized to '1'.
}
foo(p, -1); // UB.
This patch extends the pointer overflow check in ubsan to detect invalid
unsigned pointer index expressions. It changes the instrumentation to
only permit non-negative offsets in pointer index expressions when all
of the GEP indices are unsigned.
Testing: check-llvm, check-clang run on a stage2, ubsan-instrumented
build.
Differential Revision: https://reviews.llvm.org/D33910
llvm-svn: 305216
Summary:
If the first parameter of the function is an ImplicitParamDecl, codegen
automatically marks it as an implicit argument carrying the `this` or `self`
pointer. Added an internal kind to ImplicitParamDecl to separate
'this', 'self', 'vtt' and other implicit parameters from other kinds of
parameters.
Reviewers: rjmccall, aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D33735
llvm-svn: 305075
The test in r304929 broke multiple buildbots as it expected the mips target to
be registered and available (which is not necessarily true). Updating the
test to require this condition.
Original commit:
[mips] Add runtime options to enable/disable madd.fmt and msub.fmt
Add options to clang: -mmadd4 and -mno-madd4, use it to enable or disable
generation of madd.fmt and similar instructions respectively, as per GCC.
Patch by Stefan Maksimovic.
llvm-svn: 304953
Revert r304929 since the test broke buildbots.
Original commit:
[mips] Add runtime options to enable/disable madd.fmt and msub.fmt
Add options to clang: -mmadd4 and -mno-madd4, use it to enable or disable
generation of madd.fmt and similar instructions respectively, as per GCC.
Patch by Stefan Maksimovic.
llvm-svn: 304935
Add options to clang: -mmadd4 and -mno-madd4, use it to enable or disable
generation of madd.fmt and similar instructions respectively, as per GCC.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D33401
llvm-svn: 304929
Summary:
The thumb-mode target feature is used to force Thumb or ARM code
generation on a per-function basis. Explicitly adding +thumb-mode to
functions for thumbxx triples enables mixed ARM/Thumb code generation in
places where compilation units with thumbxx and armxx triples are merged
together (e.g. the IR linker or LTO).
For armxx triples, -thumb-mode is added in a similar fashion.
Reviewers: echristo, t.p.northover, kristof.beyls, rengolin
Reviewed By: echristo
Subscribers: rinon, aemerson, mehdi_amini, javed.absar, cfe-commits
Differential Revision: https://reviews.llvm.org/D33448
llvm-svn: 304897
This is a restricted version of the patch https://reviews.llvm.org/D33205
that I reverted, as it was leading to ABI breaks on darwin etc.
This patch restricts the fix to AAPCS (Android remains 128-bit).
Reviewed by: Renato Golin, Stephen Hines
Differential Revision: https://reviews.llvm.org/D33786
llvm-svn: 304889
Summary:
This patch adds support for the target("arm") and target("thumb")
attributes, which can be used to force the compiler to generate ARM or
Thumb code for a function.
In LLVM, ARM or Thumb code generation can be controlled by the
thumb-mode target feature. But GCC already uses target("arm") and
target("thumb"), so we have to substitute "arm" with -thumb-mode and
"thumb" with +thumb-mode.
Reviewers: echristo, pcc, kristof.beyls
Reviewed By: echristo
Subscribers: ahatanak, aemerson, javed.absar, kristof.beyls, cfe-commits
Differential Revision: https://reviews.llvm.org/D33721
llvm-svn: 304781
It already specifies the triples, so the intention was to test x86 for
now (or then).
Differential Revision: https://reviews.llvm.org/D33692
llvm-svn: 304501
Summary: This patch teaches clang to use and propagate new PM in ThinLTO.
Reviewers: davide, chandlerc, tejohnson
Subscribers: mehdi_amini, Prazek, inglorion, cfe-commits
Differential Revision: https://reviews.llvm.org/D33692
llvm-svn: 304496
I'm not sure why, but on some bots, the order of two instructions is
swapped (as compared to the output on my machine). Loosen up the
CHECK-NEXT directives to deal with this.
Failing bot: http://lab.llvm.org:8011/builders/clang-with-lto-ubuntu/builds/3097
llvm-svn: 304486
Check pointer arithmetic for overflow.
For some more background on this check, see:
https://wdtz.org/catching-pointer-overflow-bugs.html
https://reviews.llvm.org/D20322
Patch by Will Dietz and John Regehr!
This version of the patch is different from the original in a few ways:
- It introduces the EmitCheckedInBoundsGEP utility which inserts
checks when the pointer overflow check is enabled.
- It does some constant-folding to reduce instrumentation overhead.
- It does not check some GEPs in CGExprCXX. I'm not sure that
inserting checks here, or in CGClass, would catch many bugs.
Possible future directions for this check:
- Introduce CGF.EmitCheckedStructGEP, to detect overflows when
accessing structures.
Testing: Apart from the added lit test, I ran check-llvm and check-clang
with a stage2, ubsan-instrumented clang. Will and John have also done
extensive testing on numerous open source projects.
Differential Revision: https://reviews.llvm.org/D33305
llvm-svn: 304459
The test being marked 'REQUIRES: long-tests' doesn't make sense. It's not the
first time the test is broken without being noticed by the committer. If the
test is too long, it should be shortened, split in multiple ones or removed
altogether. Keeping it as is is actively harmful.
(BTW, on my machine `ninja check-clang` takes 90-92 seconds with and without
this test. The difference in times is below the spread caused by random
factors.)
llvm-svn: 304302
This reverts commit r304208, since r304201 has been reverted as well.
The test needs to be turned on by default to detect breakages earlier. Will
commit this change separately.
llvm-svn: 304301
The second argument must be a constant, otherwise instruction selection
will fail. always_inline is not enough for isel to always fold
everything away at -O0.
Sadly the overloading turned this into a big macro mess. Fixes PR33212.
llvm-svn: 304205
The maximum alignment for ARM NEON data types should be 64-bits as specified
in ARM procedure call standard document Sec. A.2 Notes.
This patch fixes it from its current larger natural default values, except
for Android (so as not to break existing ABI).
Reviewed by: Stephen Hines, Renato Golin.
Differential Revision: https://reviews.llvm.org/D33205
llvm-svn: 304201
Amongst other things, this will help LTO correctly handle/honor files
compiled with -O0, helping debug failures.
It also seems in line with how we handle other options, like how
-fnoinline adds the appropriate attribute as well.
Differential Revision: https://reviews.llvm.org/D28404
llvm-svn: 304127
AVX512_VPOPCNTDQ is a new feature set that was published by Intel.
The patch represents the Clang side of the addition of six intrinsics for two new machine instructions (vpopcntd and vpopcntq).
It also includes the addition of the new feature set.
Differential Revision: https://reviews.llvm.org/D33170
llvm-svn: 303857
It is clean when I build bootstrap and run make check-all on my machine. I guess
it could be that I only build bootstrap with assertions, while the buildbots may build
without assertions, which could cause the difference.
llvm-svn: 303786
Summary:
This change allows us to add arg1 logging support to functions through
the special case list provided through -fxray-always-instrument=. This
is useful for adding arg1 logging to functions that are either in
headers that users don't have control over (i.e. cannot change the
source) or would rather not change.
It only takes effect when the pattern is matched through the "fun:"
special case, as a category. As in:
fun:*pattern=arg1
Reviewers: pelikan, rnk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D33392
llvm-svn: 303719
This test was failing on our fork of clang because it was not capturing
[[ARG]] in the N32 case. Therefore it used the value from the last function
which does not always have to be the same. All other cases were already
capturing ARG so this appears to be an oversight.
The test now uses -enable-var-scope to prevent such errors in the future.
Reviewers: sdardis, atanasyan
Patch by: Alexander Richardson
Differential Revision: https://reviews.llvm.org/D32425
llvm-svn: 303619
This patch adds support for the `micromips` and `nomicromips` attributes
for MIPS targets.
Differential revision: https://reviews.llvm.org/D33363
llvm-svn: 303546
Re-commit r303463 now that LLVM is fixed and adjust some lit tests.
llvm::TargetLibraryInfo needs to know the size of wchar_t to work on
functions like `wcslen`. This patch changes clang to always emit the
wchar_size module flag (it would only do so for ARM previously).
This also adds an `assert()` to ensure the LLVM defaults based on the
target triple are in sync with clang.
Differential Revision: https://reviews.llvm.org/D32982
llvm-svn: 303478
Let's revert this for now (and with it the assert()) to get the bots
back to green until I have LLVM synced up properly.
This reverts commit r303463.
llvm-svn: 303474
llvm::TargetLibraryInfo needs to know the size of wchar_t to work on
functions like `wcslen`. This patch changes clang to always emit the
wchar_size module flag (it would only do so for ARM previously).
This also adds an `assert()` to ensure the LLVM defaults based on the
target triple are in sync with clang.
Differential Revision: https://reviews.llvm.org/D32982
llvm-svn: 303463
Alloca always returns a pointer in the alloca address space, which may
be different from the address space expected by the language. For example,
in C++ the auto variables are in the default address space. Therefore
cast alloca to the expected address space when necessary.
Differential Revision: https://reviews.llvm.org/D32248
llvm-svn: 303370
Summary:
Clang changes to remove this option and replace with a parameter
always set in the context of a ThinLTO distributed backend.
Depends on D33133.
Reviewers: pcc
Subscribers: mehdi_amini, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D33134
llvm-svn: 302940
Avoid using report_fatal_error, because it will ask the user to file a
bug. If the user attempts to disable SSE on x86_64 and then use floating
point, that's a bug in their code, not a bug in the compiler.
This is just a start. There are other ways to crash the backend in this
configuration, but they should be updated to follow this pattern.
Differential Revision: https://reviews.llvm.org/D27522
llvm-svn: 302835
Modified MipsABIInfo::classifyArgumentType so that it now coerces
aggregate structures only if the size of said aggregate is less than
16/64 bytes, depending on the ABI.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D32900
with minor changes to the test (use a regexp instead of the hardcoded values).
llvm-svn: 302670
Sanitizer instrumentation generally needs to be marked with !nosanitize,
but we're not doing this properly for ubsan's overflow checks.
r213291 has more information about why this is needed.
llvm-svn: 302598
This feature is subtly broken when the linker is gold 2.26 or
earlier. See the following bug for details:
https://sourceware.org/bugzilla/show_bug.cgi?id=19002
Since the decision needs to be made at compilation time, we cannot
test the linker version. The flag is off by default on ELF targets,
and on by default otherwise.
llvm-svn: 302591
Reverting
Modified MipsABIInfo::classifyArgumentType so that it now coerces
aggregate structures only if the size of said aggregate is less than 16/64
bytes, depending on the ABI.
as it broke clang-with-lto-ubuntu builder.
llvm-svn: 302555
Modified MipsABIInfo::classifyArgumentType so that it now coerces aggregate
structures only if the size of said aggregate is less than 16/64 bytes,
depending on the ABI.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D32900
llvm-svn: 302547
Summary:
We define the `__xray_customevent` builtin that gets translated to
IR calls to the correct intrinsic. The default implementation of this is
a no-op function. The codegen side of this follows the following logic:
- When `-fxray-instrument` is not provided in the driver, we elide all
calls to `__xray_customevent`.
- When `-fxray-instrument` is enabled and a function is marked as "never
instrumented", we elide all calls to `__xray_customevent` in that
function; if either marked as "always instrumented" or subject to
threshold-based instrumentation, we emit a call to the
`llvm.xray.customevent` intrinsic from LLVM for each
`__xray_customevent` occurrence in the function.
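A hedged usage sketch of the new builtin; the wrapper function is made up:
```
#include <stddef.h>

/* Emits the llvm.xray.customevent intrinsic when XRay instrumentation
   applies to the enclosing function; otherwise the call is elided. */
void record_phase(const char *tag, size_t len) {
  __xray_customevent(tag, len);
}
```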
This change depends on D27503 (to land in LLVM first).
Reviewers: echristo, rsmith
Subscribers: mehdi_amini, pelikan, lrl, cfe-commits
Differential Revision: https://reviews.llvm.org/D30018
llvm-svn: 302492
This patch adds support for the LightWeight Profiling (LWP) instructions which are available on all AMD Bulldozer class CPUs (bdver1 to bdver4).
Differential Revision: https://reviews.llvm.org/D32770
llvm-svn: 302418
It turns out there are some sort-of-but-not-quite empty structs that break all
the rules. For example:
struct SuperEmpty { int arr[0]; };
struct SortOfEmpty { struct SuperEmpty e; };
Both of these have sizeof == 0, even in C++ mode, for GCC compatibility. The
first one also doesn't occupy a register when passed by value in GNU C++ mode,
unlike everything else.
On Darwin, we want to ignore the lot (and especially don't want to try to use
an i0 as we were).
llvm-svn: 302313
This avoids problems on code like this:
char buf[16];
__asm {
movups xmm0, [buf]
mov [buf], eax
}
The frontend size in this case (1) is wrong, and the register makes the
instruction matching unambiguous. There are also enough bytes available
that we shouldn't complain to the user that they are potentially using
an incorrectly sized instruction to access the variable.
Supersedes D32636 and D26586 and fixes PR28266
llvm-svn: 302179
Implemented the remaining integer data processing intrinsics from
the ARM ACLE v2.1 spec, such as parallel arithmetic and DSP style
multiplications.
Differential Revision: https://reviews.llvm.org/D32282
llvm-svn: 302131
Currently, ubsan emits overflow checks for arithmetic that is known to
be safe at compile-time, e.g:
1 + 1 => CheckedAdd(1, 1)
This leads to breakage when using the __builtin_prefetch intrinsic. LLVM
expects the arguments to @llvm.prefetch to be constant integers, and
when ubsan inserts unnecessary checks on the operands to the intrinsic,
this contract is broken, leading to verifier failures (see PR32874).
Instead of special-casing __builtin_prefetch for ubsan, this patch fixes
the underlying problem, i.e that clang currently emits unnecessary
overflow checks.
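A hedged example of the affected pattern; the function is illustrative only:
```
/* The second and third arguments to __builtin_prefetch must reach LLVM as
   constant integers; the constant arithmetic below is provably safe, so no
   overflow check should be wrapped around it. */
void warm_cache(const char *p) {
  __builtin_prefetch(p, 0, 1 + 2);
}
```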
Testing: I ran the check-clang and check-ubsan targets with a stage2,
ubsan-enabled build of clang. I added a regression test for PR32874, and
some extra checking to make sure we don't regress runtime checking for
unsafe arithmetic. The existing ubsan-promoted-arithmetic.cpp test also
provides coverage for this change.
llvm-svn: 301988
The previous algorithm processed one character at a time, which is very
painful on a modern CPU. Replace it with xxHash64, which both already
exists in the codebase and is fairly fast.
Patch from Scott Smith!
Differential Revision: https://reviews.llvm.org/D32509
llvm-svn: 301487
It's possible to determine the alignment of an alloca at compile-time.
Use this information to skip emitting some runtime alignment checks.
Testing: check-clang, check-ubsan.
This significantly reduces the amount of alignment checks we emit when
compiling X86ISelLowering.cpp. Here are the numbers from patched/unpatched
clangs based on r301361.
-------------------------------------------
| Setup           | # of alignment checks |
-------------------------------------------
| unpatched, -O0  | 47195                 |
| patched, -O0    | 30876 (-34.6%)        |
-------------------------------------------
llvm-svn: 301377
This change restores pre-r301225 behavior, where linker GC compatible global
instrumentation was used on COFF targets disregarding -f(no-)data-sections and/or
/Gw flags.
This instrumentation puts each global in a COMDAT with an ASan descriptor for that global.
It effectively enables -fdata-sections, but limits it to ASan-instrumented globals.
llvm-svn: 301374
Since Split DWARF needs to name the actual .dwo file that is generated,
it can't be known at the time the llvm::Module is produced as it may be
merged with other Modules before the object is generated and that object
may be generated with any name.
By passing the Split DWARF file name when LLVM is producing object code
the .dwo file name in the object file can match correctly.
The support for Split DWARF for implicit modules remains the same -
using metadata to store the dwo name and dwo id so that potentially
multiple skeleton CUs referring to different dwo files can be generated
from one llvm::Module.
llvm-svn: 301063
This restores the behavior prior to D31167 where the code-gen default was
FPC_On which mapped to FPOpFusion::Standard. After merging the FE
state (on/off) and the code-gen state (on/fast/off), the default became off to
match the front-end.
In other words, the front-end controls when to fuse along the language
standards and the backend shouldn't override this by splitting fused
intrinsics as FPOpFusion::Strict would imply.
Differential Revision: https://reviews.llvm.org/D32301
llvm-svn: 300858
LLVM has changed the semantics of dbg.declare for describing function
arguments. After this patch a dbg.declare always takes the *address*
of a variable as the first argument, even if the argument is not an
alloca.
https://bugs.llvm.org/show_bug.cgi?id=32382
rdar://problem/31205000
llvm-svn: 300523
MOVNTDQA non-temporal aligned vector loads can be correctly represented using generic builtin loads, allowing us to remove the existing x86 intrinsics.
LLVM companion patch: D31767.
Differential Revision: https://reviews.llvm.org/D31766
llvm-svn: 300326
For OpenCL, the private address space qualifier is 0 in the AST. Before this change, address space qualifier 0
was always mapped to target address space 0. Since the target private address space is now specified by the
alloca address space in the data layout, address space qualifier 0 needs to be mapped to the alloca address space specified by the data layout.
This change has no impact on targets whose alloca addr space is 0.
With contributions from Matt Arsenault, Tony Tye and Wen-Heng (Jack) Chung
Differential Revision: https://reviews.llvm.org/D31404
llvm-svn: 299965
This re-lands r299875.
I introduced a bug in Clang code responsible for replacing K&R, no
prototype declarations with a real function definition with a prototype.
The bug was here:
// Collect any return attributes from the call.
- if (oldAttrs.hasAttributes(llvm::AttributeList::ReturnIndex))
- newAttrs.push_back(llvm::AttributeList::get(newFn->getContext(),
- oldAttrs.getRetAttributes()));
+ newAttrs.push_back(oldAttrs.getRetAttributes());
Previously getRetAttributes() carried AttributeList::ReturnIndex in its
AttributeList. Now that we return the AttributeSetNode* directly, it no
longer carries that index, and we call this overload with a single node:
AttributeList::get(LLVMContext&, ArrayRef<AttributeSetNode*>)
That aborted with an assertion on x86_32 targets. I added an explicit
triple to the test and added CHECKs to help find issues like this in the
future sooner.
llvm-svn: 299899
Previously __cfi_check was created in LTO optimization pipeline, which
means LLD has no way of knowing about the existence of this symbol
without rescanning the LTO output object. As a result, LLD fails to
export __cfi_check, even when given --export-dynamic-symbol flag.
llvm-svn: 299806
It's used by MS headers in VS 2017 without including intrin.h, so we
can't implement it in the header anymore.
Differential Revision: https://reviews.llvm.org/D31736
llvm-svn: 299782
This adds the new pragma and the first variant, contract(on/off/fast).
The pragma has the same block scope rules as STDC FP_CONTRACT, i.e. it can be
placed at the beginning of a compound statement or at file scope.
Similarly to STDC FP_CONTRACT there is no need to use attributes. First an
annotate token is inserted with the parsed details of the pragma. Then the
annotate token is parsed in the proper contexts and the Sema is updated with
the corresponding FPOptions using the shared ActOn function with STDC
FP_CONTRACT.
After this the FPOptions from the Sema is propagated into the AST expression
nodes. There is no change here.
I was going to add a 'default' option besides 'on/off/fast', similar to STDC
FP_CONTRACT, but then decided against it. I think that we'd have to make the option
uppercase then to avoid using the keyword 'default'. Also, because of the
scoped activation of the pragma, I am not sure there is really a need for this.
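A hedged usage example of the new pragma at block scope; the function is illustrative only:
```
float muladd(float a, float b, float c) {
#pragma clang fp contract(fast)
  return a * b + c;   /* may be contracted into a single fused multiply-add */
}
```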
Differential Revision: https://reviews.llvm.org/D31276
llvm-svn: 299470
With this, FMF(contract) becomes an alternative way to express the request to
contract.
These are currently only propagated for FMul, FAdd and FSub. The rest will be
added as more FMFs are hooked up for this.
This is toward fixing PR25721.
Differential Revision: https://reviews.llvm.org/D31168
llvm-svn: 299469
MS assembly syntax provides us with the 'EVEN' directive as a synonym for the AT&T '.even'.
This patch includes the (small, simple) changes needed to allow it.
llvm-side:
https://reviews.llvm.org/D27417
Differential Revision: https://reviews.llvm.org/D27418
llvm-svn: 299454
This patch is part two of two reviews, one for clang and the other for LLVM.
In this patch, I covered the clang side by introducing the intrinsic to the front end.
This is done by creating a generic replacement.
Differential Revision: https://reviews.llvm.org/D31394a
llvm-svn: 299431
Summary:
Use PreCodeGenModuleHook to invoke the correct writer when emitting LLVM
IR, returning false to skip codegen from within thinBackend.
Reviewers: pcc, mehdi_amini
Subscribers: Prazek, cfe-commits
Differential Revision: https://reviews.llvm.org/D31534
llvm-svn: 299274
Our _MM_HINT_T0/T1 constant values are 3/2 which matches gcc, but not icc or Intel documentation. Interestingly gcc had this same bug on their implementation of the gather/scatter builtins at one point too.
Fixes PR32411.
llvm-svn: 299233
The reasoning behind this change was to allow the function to accept all values
in the range [-128, 255], since all of them can be encoded in an 8-bit wide
value.
This differs from the prior state, where only the range [-128, 127] was accepted
and values were assumed to be signed; now the actual
interpretation of the immediate is deferred to the consumer as required.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D31082
llvm-svn: 299229
Sigh, another follow-on fix needed for the test in r299152 causing bot
failures. We also need the target-cpu for the ThinLTO backend clang
invocation.
llvm-svn: 299178
Third and hopefully final fix to the test for r299152 that is causing bot
failures: make sure the target triple is specified for the ThinLTO backend
clang invocations as well.
llvm-svn: 299176
Summary:
This involved refactoring out pieces of
EmitAssemblyHelper::CreateTargetMachine for use in runThinLTOBackend.
Subsumes D31114.
Reviewers: mehdi_amini, pcc
Subscribers: Prazek, cfe-commits
Differential Revision: https://reviews.llvm.org/D31508
llvm-svn: 299152
Summary:
The refactoring introduced a regression in the flag processing for
-fxray-instruction-threshold which causes it to not get passed properly.
This change should restore the previous behaviour.
Reviewers: rnk, pelikan
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D31491
llvm-svn: 299126
GCC has the alloc_align attribute, which is similar to assume_aligned, except the attribute's parameter is the index of the integer parameter that needs aligning to.
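A hedged declaration example; the allocator name is hypothetical:
```
/* The returned pointer is assumed to be aligned to the value passed as the
   second parameter (a 1-based index), as with GCC's alloc_align. */
void *my_aligned_alloc(unsigned long size, unsigned long alignment)
    __attribute__((alloc_align(2)));
```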
Differential Revision: https://reviews.llvm.org/D29599
llvm-svn: 299117
Summary:
The -fxray-always-instrument= and -fxray-never-instrument= flags take
filenames that are used to imbue the XRay instrumentation attributes
using a whitelist mechanism (similar to the sanitizer special cases
list). We use the same syntax and semantics as the sanitizer blacklist
files in the implementation.
As implemented, we respect the attributes that are already defined in
the source file (i.e. those that have the
[[clang::xray_{always,never}_instrument]] attributes) before applying
the always/never instrument lists.
Reviewers: rsmith, chandlerc
Subscribers: jfb, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D30388
llvm-svn: 299041
attributes.
These patches don't work because we can't currently access the parameter
information in a reliable way when building attributes. I thought this
would be relatively straightforward to fix, but it seems not to be the
case. Fixing this will require a substantial re-plumbing of machinery to
allow attributes to be handled in this location, and several other fixes
to the attribute machinery should probably be made at the same time. All
of this will make the patch .... substantially more complicated.
Reverting for now as there are active miscompiles caused by the current
version.
llvm-svn: 298695
Summary:
Clang companion patch to LLVM patch D31027, which adds support
for emitting minimized bitcode file for use in the thin link step.
Add a cc1 option -fthin-link-bitcode=<file> to trigger this behavior.
Depends on D31027.
Reviewers: mehdi_amini, pcc
Subscribers: cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31050
llvm-svn: 298639
It seems MS headers have started using __readgsqword, and since it's
used in a header that doesn't include intrin.h, we can't implement it as
an inline function anymore.
That was already the case for __readfsdword, which Saleem added support
for in r220859. This patch reuses that codegen to implement all of
__read[fg]s{byte,word,dword,qword}.
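A hedged usage example on x86-64 Windows (with MS extensions enabled); the wrapper is made up, and 0x30 is the conventional TEB self-pointer offset:
```
/* Read the TEB self-pointer from gs:[0x30] without including intrin.h. */
unsigned long long current_teb(void) {
  return __readgsqword(0x30);
}
```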
Differential Revision: https://reviews.llvm.org/D31248
llvm-svn: 298538
explaining why we have to ignore errors here even though in other parts
of codegen we can be more strict with builtins.
Also add a test case based on the code in a TSan test that found this
issue.
llvm-svn: 298494
declarations and calls instead of just definitions, and then teach it to
*not* attach such attributes even if the source code contains them.
This follows the design direction discussed on cfe-dev here:
http://lists.llvm.org/pipermail/cfe-dev/2017-January/052066.html
The idea is that for C standard library builtins, even if the library
vendor chooses to annotate their routines with __attribute__((nonnull)),
we will ignore those attributes which pertain to pointer arguments that
have an associated size. This allows the widespread (and seemingly
reasonable) pattern of calling these routines with a null pointer and
a zero size. I have only done this for the library builtins currently
recognized by Clang, but we can now trivially add to this set. This will
be controllable with -fno-builtin if anyone should care to do so.
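A hedged example of the pattern this keeps well-defined for codegen purposes; the wrapper is illustrative only:
```
#include <string.h>

/* Calling a recognized library builtin with a null pointer and a zero size;
   even if a vendor header adds __attribute__((nonnull)) to memcpy, the
   attribute is not inferred for the size-carrying pointer arguments here. */
void reset(void *dst) {
  memcpy(dst, NULL, 0);
}
```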
Note that this does *not* change the AST. As a consequence, warnings,
static analysis, and source code rewriting are not impacted.
This isn't even a regression on any platform as neither Clang nor LLVM
have ever put 'nonnull' onto these arguments for declarations. All this
patch does is enable it on other declarations while preventing us from
ever accidentally enabling it on these libc functions due to a library
vendor.
It will also allow any other libraries using this annotation to gain
optimizations based on the annotation even when only a declaration is
visible.
llvm-svn: 298491
The alias was only ever used on darwin and had some issues there,
and isn't used in practice much. Also fixes a problem with -mno-altivec
not turning off -maltivec.
Also add a diagnostic for faltivec/fno-altivec that directs users to use
maltivec options and include the altivec.h file explicitly.
llvm-svn: 298449
Summary:
Because SamplePGO passes will be invoked twice in a ThinLTO build (once in the compile phase, the other in the backend), we want to make sure the IR at the second phase matches the hot part in the
profile; thus we do not want to inline hot callsites in the first phase.
Reviewers: tejohnson, eraman
Reviewed By: tejohnson
Subscribers: mehdi_amini, cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31202
llvm-svn: 298429
This patch extends X86AsmParser with the ability to handle the aforementioned ops within compound "MS" arithmetic expressions.
Previously they were only supported as a stand-alone operand, e.g.:
"TYPE X"
Now also allowed:
"4 + TYPE X * 128"
LLVM side: https://reviews.llvm.org/D31173
Differential Revision: https://reviews.llvm.org/D31174
llvm-svn: 298426