Summary:
JavaScript automatic semicolon insertion can trigger before [ and (, so
avoid breaking before them if the previous token is likely to terminate
an expression.
Reviewers: djasper
Subscribers: cfe-commits, klimek
Differential Revision: https://reviews.llvm.org/D42570
llvm-svn: 323532
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry, and the cost of
this gather is counted as the cost of an InsertElementInst for each gathered
value. But we can instead consider these elements as a single shuffle with the
SK_PermuteSingleSrc shuffle kind.
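For illustration only, here is a sketch in terms of the generic TargetTransformInfo
cost hooks (not the actual SLP vectorizer code; the helper name and the exact
signatures used are assumptions) comparing the two cost models for a gather of
repeated values:

// Illustrative sketch only, not the actual SLP vectorizer code: compare the
// old per-element insert cost model with the single-shuffle model for a
// gather of NumElts copies of the same scalar value.
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Instruction.h"
#include <algorithm>

static int gatherCostForRepeatedValue(const llvm::TargetTransformInfo &TTI,
                                      llvm::Type *VecTy, unsigned NumElts) {
  // Old model: one InsertElement per gathered value.
  int InsertCost = 0;
  for (unsigned I = 0; I < NumElts; ++I)
    InsertCost += TTI.getVectorInstrCost(llvm::Instruction::InsertElement,
                                         VecTy, I);
  // New model: treat the gather as a single single-source permute
  // (broadcast-like shuffle); the initial insert is ignored in this sketch.
  int ShuffleCost = TTI.getShuffleCost(
      llvm::TargetTransformInfo::SK_PermuteSingleSrc, VecTy);
  return std::min(InsertCost, ShuffleCost);
}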
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323530
Use fuzzy return addresses in lock testcases so that these
testcases can also be run using the Intel Compiler.
Patch by Simon Convent!
Differential Revision: https://reviews.llvm.org/D41896
llvm-svn: 323529
We can stash the cached transparent tag bit in existing pointer padding.
Everything coming out of ASTContext is always aligned to a multiple of
8, so the low 3 bits of such pointers are spare.
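As a self-contained illustration of the general technique (not the actual Clang
change; the Node type and typedef below are hypothetical), a flag can live in
the spare low bits of an 8-byte-aligned pointer, e.g. via llvm::PointerIntPair:

// Illustrative sketch of the low-bit-tagging technique only, not the actual
// Clang data structure: an 8-byte-aligned pointee leaves the low 3 bits of
// the pointer free, so a flag can be stored there.
#include "llvm/ADT/PointerIntPair.h"

struct alignas(8) Node {}; // hypothetical 8-byte-aligned AST-style node

// One bool fits comfortably in the spare low bits of a Node*.
using TaggedNodePtr = llvm::PointerIntPair<Node *, 1, bool>;

static bool isTransparentTag(TaggedNodePtr P) { return P.getInt(); }
static Node *getNode(TaggedNodePtr P) { return P.getPointer(); }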
llvm-svn: 323528
Summary:
For the OpenCL 1.1 embedded profile, 64-bit integers, i.e. long and
ulong, including the corresponding vector data types and operations
on 64-bit integers, are optional. The "cles_khr_int64" extension
string will be reported if the embedded profile implementation
supports 64-bit integers.
Reviewers: Anastasia, bader
Reviewed By: Anastasia, bader
Subscribers: bader, yaxunl, Anastasia, cfe-commits
Differential Revision: https://reviews.llvm.org/D42532
llvm-svn: 323522
Add support for printing / parsing the addrspace of a MachineMemOperand.
Fixes PR35970.
Differential Revision: https://reviews.llvm.org/D42502
llvm-svn: 323521
TestLibcxxListLoop - fails because the evil "define private public"
trick does not work with gmodules. The purpose of the test is not to
test debug info parsing, so I just mark it as no_debug_info_testcase.
In the long term it may be interesting to write a mock std::list which
will allow us to test bad inputs to data formatters more easily.
TestGModules - seems to be a genuine bug. Filed pr36107 and xfailed.
llvm-svn: 323520
Also, a number of style and bug fixes were made:
* ASTImporterTest: added sanity check for source node
* ExternalASTMerger: better lookup for template specializations
* ASTImporter: don't add templated declarations into DeclContext
* ASTImporter: introduce a helper, ImportTemplateArgumentListInfo, that accepts SourceLocations
* ASTImporter: properly set ParmVarDecls for imported FunctionProtoTypeLocs
Differential Revision: https://reviews.llvm.org/D42301
llvm-svn: 323519
- used a qualified pointer addrspace in the intrinsics class to avoid .f32 mangling
- changed the overly generic "atomic" mangling to "ds"
- added missing intrinsics to AMDGPUTTIImpl::getTgtMemIntrinsic
Reviewed by: b-sumner
Differential Revision: https://reviews.llvm.org/D42383
llvm-svn: 323516
load instruction
The function `Thumb1InstrInfo::loadRegFromStackSlot` accepts only the `tGPR`
register class. The function serves to emit a `tLDRspi` instruction, and
certainly any subset of the `tGPR` register class is a valid destination of
that load.
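A hedged sketch of the register-class relationship the relaxed check can rely
on (illustrative only; the helper below is hypothetical and not the code in
this patch):

// Sketch only: a destination class RC is acceptable whenever it equals, or is
// a subclass of, the class the instruction can write (tGPR for tLDRspi).
#include "llvm/CodeGen/TargetRegisterInfo.h"

static bool isAcceptableLoadDest(const llvm::TargetRegisterClass &Required,
                                 const llvm::TargetRegisterClass *RC) {
  // In the Thumb1 case this would be queried as
  //   isAcceptableLoadDest(ARM::tGPRRegClass, RC)
  return Required.hasSubClassEq(RC);
}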
Differential revision: https://reviews.llvm.org/D42535
llvm-svn: 323514
This is the groundwork for Armv8.2-A FP16 code generation.
Clang passes and returns _Float16 values as floats, together with the required
bitconverts and truncs etc. to implement correct AAPCS behaviour, see D42318.
We will implement half-precision argument passing/returning lowering in the ARM
backend soon, but for now this means that this:
_Float16 sub(_Float16 a, _Float16 b) {
  return a + b;
}
gets lowered to this:
define float @sub(float %a.coerce, float %b.coerce) {
entry:
  %0 = bitcast float %a.coerce to i32
  %tmp.0.extract.trunc = trunc i32 %0 to i16
  %1 = bitcast i16 %tmp.0.extract.trunc to half
  <SNIP>
  %add = fadd half %1, %3
  <SNIP>
}
When FullFP16 is *not* supported, we don't make f16 a legal type, and we get
legalization for "free", i.e. nothing changes and everything works as before;
f16 argument passing/returning is handled this way as well.
When FullFP16 is supported, we do make f16 a legal type, and have 2 places that
we need to patch up: f16 argument passing and returning, which involves minor
tweaks to avoid unnecessary code generation for some bitcasts.
As a "demonstrator" that this works for the different FP16, FullFP16, softfp
modes, etc., I've added match rules to the VSUB instruction description showing
that we can codegen this instruction from IR, but more importantly, also to
some conversion instructions. These conversions were causing issue before in
the FP16 and FullFP16 cases.
I've also added match rules to the VLDRH and VSTRH descriptions, so that we can
actually compile the entire half-precision sub code example above. This showed
that these loads and stores had the wrong addressing mode specified: AddrMode5
instead of AddrMode5FP16, which turned out not to be implemented at all, so that
has also been added.
This is the minimal patch that shows all the different moving parts. In patch
2/3 I will add some efficient lowering of bitcasts, and in 3/3 I will add the
remaining Armv8.2-A FP16 instruction descriptions.
Thanks to Sam Parker and Oliver Stannard for their help and reviews!
Differential Revision: https://reviews.llvm.org/D38315
llvm-svn: 323512
Summary:
This is probably the right behavior for distributed tracers, and makes unpaired
begin-end events impossible without requiring Spans to be bound to a thread.
The API is conceptually clean but syntactically awkward. As discussed offline,
this is basically a naming problem and will go away if (when) we use TLS to
store the current context.
The apparently-unrelated change to onScopeExit is because its move semantics
have been broken since r322838 if Func is POD-like. This is true of function
pointers, and of the lambda I use here, which captures only two pointers.
I've raised this issue on llvm-dev and will revert this part if we fix it in
some other way.
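To illustrate the class of problem (independent of what exactly changed in
r322838, and not the real onScopeExit implementation), here is a minimal
scope-exit guard whose move constructor must explicitly disarm the source; a
byte-wise, POD-style "move" would leave both guards armed:

// Minimal, self-contained illustration of why a scope-exit helper needs real
// move semantics: if moving the guard degenerates into a byte-wise copy, the
// moved-from guard is never disarmed and the callback fires twice.
#include <utility>

template <typename Func> class ScopeExitGuard {
  Func F;
  bool Armed = true;

public:
  explicit ScopeExitGuard(Func F) : F(std::move(F)) {}
  ScopeExitGuard(ScopeExitGuard &&Other)
      : F(std::move(Other.F)), Armed(Other.Armed) {
    Other.Armed = false; // a POD-style "move" would skip this disarm step
  }
  ~ScopeExitGuard() {
    if (Armed)
      F();
  }
};

template <typename Func> ScopeExitGuard<Func> makeScopeExitGuard(Func F) {
  return ScopeExitGuard<Func>(std::move(F));
}

int main() {
  int Calls = 0;
  auto G1 = makeScopeExitGuard([&Calls] { ++Calls; });
  auto G2 = std::move(G1); // correct move: the callback will run exactly once
  (void)G2;
  return 0; // at scope exit, Calls == 1; a broken move would make it 2
}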
Reviewers: ilya-biryukov
Subscribers: klimek, jkorous-apple, ioeric, cfe-commits
Differential Revision: https://reviews.llvm.org/D42499
llvm-svn: 323511
Type legalization would prevent any i64 operands to the build_vector from existing before we get here. The coverage bots show this code as uncovered.
llvm-svn: 323506
The original autoupgrade for the kunpck intrinsics used a bitcast plus scalar shift/or/and instructions. This combine would turn that pattern into a concat_vectors. Now the kunpck intrinsics are autoupgraded to a vector shuffle that will become a concat_vectors.
llvm-svn: 323504
This listed all legal 128-bit integer types individually, but since we already know we have a legal type and it's an integer, we can just check is128BitVector().
llvm-svn: 323502
Otherwise, a shared library build with SJLJ APIs enabled would
end up with duplicate symbols.
This didn't occur for the apple && arm case due to specifically
checking for that in the surrounding ifdef.
Differential Revision: https://reviews.llvm.org/D42555
llvm-svn: 323499
When the pass creates a MOV instruction for the
lea (%base,%index,1), %dst => mov %base,%dst; add %index,%dst
transformation, it should clear the kill flag on the base register
if the base is equal to the index.
Otherwise the verifier complains about a use of a killed register in the add instruction.
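A hedged sketch of the idea (the helper and its parameters are illustrative,
not the exact pass code):

// Sketch only: when replacing the LEA, decide what kill state the
// base-register operand of the new MOV may carry.
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/MachineOperand.h"

static unsigned killStateForMovBase(const llvm::MachineOperand &LeaBaseOp,
                                    unsigned BaseReg, unsigned IndexReg) {
  // If base == index, the register is still read by the following ADD, so the
  // MOV must not mark it killed; otherwise the LEA's own kill flag carries over.
  bool Kill = BaseReg != IndexReg && LeaBaseOp.isKill();
  return llvm::getKillRegState(Kill);
}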
Reviewers: lsaba, zvi, zansari, aaboud
Reviewed By: lsaba
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42522
llvm-svn: 323497
Clang and llvm already use llvm_setup_rpath(), so this change will
help standardize rpath usage across all projects.
Differential Revision: https://reviews.llvm.org/D42461
llvm-svn: 323496
[cmake] [libcxxabi] Call llvm_setup_rpath() when adding shared libraries.
Clang and llvm already use llvm_setup_rpath(), so this change will
help standardize rpath usage across all projects.
Differential Revision: https://reviews.llvm.org/D42460
llvm-svn: 323495
Clang and llvm already use llvm_setup_rpath(), so this change will
help standardize rpath usage across all projects.
Differential Revision: https://reviews.llvm.org/D42459
llvm-svn: 323492
We need to use the vcruntime declarations on Windows to avoid an
ODR violation involving rtti.obj, which provides the definition of
the runtime function implementing dynamic_cast and depends on the
vcruntime implementations of bad_cast and bad_typeid.
Differential Revision: https://reviews.llvm.org/D42220
llvm-svn: 323491
Code on Windows expects to be able to do:
#define _USE_MATH_DEFINES
#include <math.h>
and receive the definitions of mathematical constants, even if <math.h>
has previously been included. To support this scenario, re-include
<math.h> every time the wrapper header is included.
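As an illustration of the mechanism (a hedged sketch, not the real libc++
wrapper; the guard name is made up), the include guard only needs to protect
the wrapper's own additions, while the forward to the underlying header stays
outside it:

// Hypothetical wrapper sketch, not libc++'s real <math.h>: the forward to the
// underlying header sits outside the include guard, so a TU that defines
// _USE_MATH_DEFINES after the first include still picks up M_PI and friends
// on a later include.
#include_next <math.h> // always forwarded, even on repeated inclusion

#ifndef SKETCH_WRAPPER_MATH_H // hypothetical guard name
#define SKETCH_WRAPPER_MATH_H
// ... wrapper-only additions (e.g. C++ overloads) would be guarded here ...
#endif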
Differential Revision: https://reviews.llvm.org/D42403
llvm-svn: 323490
The tests were working on my system because the old, correct output files were
left over, while the new bug was that the output files were not being written
at all. Consequently the tests work on my system but fail on any other system.
This reverts commit r323484.
llvm-svn: 323486