The motivation for this patch starts with the epic fail example in PR18007:
https://llvm.org/bugs/show_bug.cgi?id=18007
...unfortunately, this patch makes no difference for that case, but it solves some
simpler cases. We'll get there some day. :)
The current 'or' matching code was using computeKnownBits() via
isBaseWithConstantOffset() -> MaskedValueIsZero(), but that's an unnecessarily limited use.
We can do more by copying the logic in ValueTracking's haveNoCommonBitsSet(), so we can
treat the 'or' as if it was an 'add'.
There's a TODO comment here because we should lift the bit-checking logic into a helper
function, so it's not duplicated in DAGCombiner.
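A minimal sketch of the check (illustrative only; the names and exact API calls
differ from the real DAGCombiner code):

  bool haveNoCommonBits(SDValue N0, SDValue N1, SelectionDAG &DAG) {
    APInt LHSZero, LHSOne, RHSZero, RHSOne;
    DAG.computeKnownBits(N0, LHSZero, LHSOne);
    DAG.computeKnownBits(N1, RHSZero, RHSOne);
    // If every bit is known zero in at least one operand, the operands can
    // never carry into each other, so the 'or' behaves exactly like an 'add'.
    return (LHSZero | RHSZero).isAllOnesValue();
  }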
An example of the better LEA matching:
leal (%rdi,%rdi), %eax
andl $1, %esi
orl %esi, %eax
Becomes:
andl $1, %esi
leal (%rsi,%rdi,2), %eax
Differential Revision: http://reviews.llvm.org/D13956
llvm-svn: 252515
For some reason we'd never run MachineVerifier on WinEH code, and you
explicitly have to ask for it with llc. I added it to a few test cases
to get some coverage.
Fixes PR25461.
llvm-svn: 252512
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
llvm-svn: 252507
In this way, when a language needs to tell itself things that are not bound to a type but to a value (imagine a base-class relation: this is not about the type, but about the ValueObject), it can do so in a clean and general fashion.
The interpretation of the values of the flags is, of course, up to the language that owns the value (the value object's runtime language, that is).
llvm-svn: 252503
This patch makes ASAN for aarch64 use the same shadow offset for all
currently supported VMAs (39 and 42 bits). The shadow offset is the
same as the one already used for the 39-bit VMA (1ULL << 36).
llvm-svn: 252497
This patch makes ASAN for aarch64 use the same shadow offset for all
currently supported VMAs (39 and 42 bits). The shadow offset is the
same as the one already used for the 39-bit VMA (1ULL << 36). As with the
ppc64 port, the aarch64 transformation also requires using an add instead of
an 'or' for the 42-bit VMA.
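For reference, a minimal sketch of the mapping being described (the offset is
the aarch64 shadow offset; the real check is emitted by the instrumentation
pass, not written by hand like this):

  #include <cstdint>

  std::uintptr_t ShadowFor(std::uintptr_t Addr) {
    const std::uintptr_t kAArch64ShadowOffset = 1ULL << 36;
    // With a 42-bit VMA, (Addr >> 3) can overlap the offset bits, so the
    // combination must be an add rather than an 'or'.
    return (Addr >> 3) + kAArch64ShadowOffset;
  }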
llvm-svn: 252495
Summary: Use "auto" when the type name is redundant
Reviewers: aaron.ballman
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D14501
llvm-svn: 252494
Summary: Call instructions that are from the same line and the same basic block need to have separate discriminators to distinguish between different callsites.
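For example (hypothetical source), both calls below share a line and a basic
block, so without distinct discriminators their samples could not be attributed
to the right callsite:

  int Foo();
  int Bar() { return Foo() + Foo(); }  // two callsites on one line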
Reviewers: davidxl, dnovillo, dblaikie
Subscribers: dblaikie, probinson, llvm-commits
Differential Revision: http://reviews.llvm.org/D14464
llvm-svn: 252492
When GlobalOpt splits an internal, global variable with an aggregate type, it
should propagate the externally_initialized flag to the newly created globals.
This makes the pass safe for our downstream use of this flag, while still
allowing some useful optimisations (such as removing dead parts of the split
aggregate) to be performed.
Differential Revision: http://reviews.llvm.org/D13382
llvm-svn: 252490
1) Add get_ptr_type() method to all wait flag types.
2) The flag in sleep_loc may change type by the time the resume is called from
__kmp_null_resume_wrapper. We use get_ptr_type to obtain the real type
and compare it to the cast object we received. If they don't match, we know
the flag has changed (already resumed and replaced by another flag). If they
match, it doesn't hurt to go ahead and resume it.
Differential Revision: http://reviews.llvm.org/D14458
llvm-svn: 252487
1) When the number of threads in a team increases, new threads need to have all
their barrier struct fields initialized. We were missing the parent_bar and
team fields.
2) For non-forkjoin barriers, we now do the __kmp_task_team_setup before the
gather. The setup now sets up the task_team that all the threads will switch
to after the barrier, but it needs to be done before other threads do the
switch.
3) Remove an unneeded assignment of tt_found_tasks in task team free function.
Differential Revision: http://reviews.llvm.org/D14456
llvm-svn: 252486
These changes include:
1) Machine hierarchy now uses the base_num_threads field to indicate the
maximum number of threads the current hierarchy can handle without a resize.
2) In __kmp_get_hierarchy, we need to get depth after any potential resize
is done.
3) Cleanup of the hierarchy resize code to support (1) above.
Differential Revision: http://reviews.llvm.org/D14455
llvm-svn: 252475
Implemented as many of Michael's suggestions as possible:
* clang-format the added code while it is still fresh.
* tried to change Value* to Instruction* in many places in computeMinimumValueSizes - unfortunately there are several places where Constants need to be handled so this wasn't possible.
* Reduce the pass list on loop-vectorization-factors.ll.
* Fix a bug where we were querying MinBWs for I->getOperand(0) but then using MinBWs[I] (see the sketch below).
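The shape of that bug, as a hypothetical (simplified names, not the actual
vectorizer code):

  #include <map>

  struct Inst { Inst *Operand; };

  unsigned lookupWidth(std::map<Inst *, unsigned> &MinBWs, Inst *I) {
    // The key used for the lookup must match the key used for the access.
    if (MinBWs.count(I->Operand))   // asks about the operand...
      return MinBWs[I];             // BUG: ...but reads the entry for I itself
    return 0;
  }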
llvm-svn: 252469
Summary:
LAA currently generates a set of SCEV predicates that must be checked by users.
In the case of Loop Distribute/Loop Load Elimination, no such predicates could have
been emitted, since we don't allow stride versioning. However, in the future there
could be SCEV predicates that will need to be checked.
This change adds support for SCEV predicate versioning in Loop Distribute, Loop
Load Elimination and the loop versioning infrastructure.
Reviewers: anemet
Subscribers: mssimpso, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D14240
llvm-svn: 252467
Summary:
This matches the sum-of-absdiff patterns emitted by the vectoriser using log2 shuffles.
Relies on D14207 to be able to match the `extract_subvector(..., 0)` that these patterns use.
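The kind of source loop whose vectorised form produces these patterns
(illustrative only):

  #include <cstdlib>

  int SumAbsDiff(const unsigned char *A, const unsigned char *B, int N) {
    int Sum = 0;
    for (int I = 0; I < N; ++I)
      Sum += std::abs(A[I] - B[I]);
    return Sum;
  }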
Reviewers: t.p.northover, jmolloy
Subscribers: aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D14208
llvm-svn: 252465
Summary:
Lowering this pattern early to an `EXTRACT_SUBREG` was making it impossible to match larger patterns in tblgen that use `extract_subvector(..., 0)` as part of their input pattern.
There is presumably a better way of specifying this pattern over all relevant register value types, but I didn't manage to find it.
Reviewers: t.p.northover, jmolloy
Subscribers: aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D14207
llvm-svn: 252464
Add the -meabi flag to control the LLVM EABI version.
Without '-meabi', or with '-meabi default', the triple's default EABI is used.
'-meabi gnu' selects the GNU EABI.
'-meabi 4' and '-meabi 5' select EABI versions 4 and 5 respectively.
A similar patch was introduced in LLVM.
Patch by Vinicius Tinti.
llvm-svn: 252463
"GCC requires the freestanding environment provide memcpy, memmove, memset
and memcmp": https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/Standards.html
Hence, on GNUEABI targets LLVM should not convert 'memops' to their equivalent
'__aeabi_memops'; that conversion violates the GCC contract.
The -meabi flag controls whether or not LLVM will modify 'memops' in GNUEABI
targets.
Without -meabi: use the triple default EABI.
With -meabi=default: use the triple default EABI.
With -meabi=gnu: use 'memops'.
With -meabi=4 or -meabi=5: use '__aeabi_memops'.
With -meabi set to an unknown value: same as -meabi=default.
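As an illustration (hypothetical code, not from the test suite): LLVM can
recognise a loop like the one below as a memcpy, and with -meabi=gnu the call
must stay as plain 'memcpy' rather than being rewritten to '__aeabi_memcpy'.

  void Copy(char *Dst, const char *Src, unsigned N) {
    for (unsigned I = 0; I != N; ++I)
      Dst[I] = Src[I];  // loop-idiom recognition can turn this into memcpy
  }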
Patch by Vinicius Tinti.
llvm-svn: 252462
We don't currently have any runtime library functions for operations on
f16 values (other than conversions to and from f32 and f64), so we
should always promote it to f32, even if that is not a legal type. In
that case, the f32 values would be softened to f32 library calls.
SoftenFloatRes_FP_EXTEND now needs to check the promoted operand's type,
as it may be a no-op or require a different library call.
getCopyFromParts and getCopyToParts now need to cope with a
floating-point value stored in a larger integer part, as is the case for
any target that needs to store an f16 value in a 32-bit integer
register.
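A hypothetical example of the kind of code this affects (__fp16 is a Clang/GCC
extension type; whether library calls are needed depends on the target):

  __fp16 X, Y, Z;

  void Add() {
    // The operands are promoted to f32 for the add; on targets where f32 is
    // not legal either, the f32 operation is softened to a library call.
    Z = X + Y;
  }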
Differential Revision: http://reviews.llvm.org/D12856
llvm-svn: 252459
Summary:
This patch adds the LIBCXX_LIBC_IS_MUSL cmake option to allow the
building of libcxx with the Musl C library. The option is necessary as
Musl, unlike glibc, does not provide any predefined macro that can be used to
test for its presence. Most of the changes specify the correct path to
choose through the various #if/#else constructs in the locale code.
Depends on D13407.
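A minimal sketch of the kind of dispatch this enables (the musl-side macro name
here is what I'd expect the option to define in libc++'s __config; treat it as
an assumption):

  const char *LibcLocalePath() {
  #if defined(__GLIBC__) || defined(_LIBCPP_HAS_MUSL_LIBC)
    return "glibc/musl path";  // musl takes the same branch as glibc here
  #else
    return "generic path";
  #endif
  }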
Reviewers: mclow.lists, jroelofs, EricWF
Subscribers: jfb, tberghammer, danalbert, srhines, cfe-commits
Differential Revision: http://reviews.llvm.org/D13673
llvm-svn: 252457
The TSan-instrumented version of libcxx doesn't even build on OS X at this point. Let's leave it out of the OS X build for now, since most of TSan's functionality doesn't depend on it. This will enable `check-tsan` to be run.
Differential Revision: http://reviews.llvm.org/D14486
llvm-svn: 252455