This testing configuration links tests against one libc++ shared library,
but runs them against another libc++ shared library. This makes sure that
we can build applications against the libc++ provided in a recent SDK and
back-deploy them to platforms containing older libc++ dylibs.
It also switches the Apple CI script to using that new configuration
instead of the legacy one.
Differential Revision: https://reviews.llvm.org/D119195
Summary:
1. Added a helper function isSymbolDefined().
2. Split out the sorting code.
3. Refactored the symbol comparison function.
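As a rough illustration of (1), such a helper might have this shape (ELF types assumed; not necessarily the actual patch):
#include "llvm/BinaryFormat/ELF.h"

// Hedged sketch: a symbol is "defined" when it does not live in the
// undefined section (SHN_UNDEF).
static bool isSymbolDefined(const llvm::ELF::Elf64_Sym &Sym) {
  return Sym.st_shndx != llvm::ELF::SHN_UNDEF;
}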
Reviewers: James Henderson, Fangrui Song
Differential Revision: https://reviews.llvm.org/D119028
The test diffs are identical to D119111.
This only affects x86 currently because no other target
has an override for the TLI hook that controls this transform.
This patch adds custom lowering support for ISD::MUL with v1i64 and v2i64
types when SVE is enabled, regardless of the minimum SVE vector length. We
do this because NEON simply does not have 64-bit vector multiplies, so we
want to take advantage of these instructions in SVE.
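A hedged sketch of how such lowering is typically requested in the target's ISelLowering setup (the patch's exact guard may differ):
// Illustrative only: request custom lowering for 64-bit element vector
// multiplies whenever SVE is present, regardless of -msve-vector-bits.
if (Subtarget->hasSVE()) {
  setOperationAction(ISD::MUL, MVT::v1i64, Custom);
  setOperationAction(ISD::MUL, MVT::v2i64, Custom);
}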
I've updated the 128-bit min SVE vector bits tests here:
CodeGen/AArch64/sve-fixed-length-int-arith.ll
CodeGen/AArch64/sve-fixed-length-int-mulh.ll
CodeGen/AArch64/sve-fixed-length-int-rem.ll
Differential Revision: https://reviews.llvm.org/D118802
Add support for computing an overapproximation of the number of integer points
in a polyhedron. The returned result is actually the number of integer points
one gets by computing the "rational shadow" obtained by projecting out the
local IDs, finding the minimal axis-parallel hyperrectangular approximation
of the shadow, and returning the number of integer points in that. This does
not currently support symbols.
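Once the rational shadow's per-dimension bounds are known, the final count is the product over dimensions of the integer points in [lb_i, ub_i]. A minimal standalone sketch of that last step (names and types illustrative, not the MLIR API):
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Integer points in an axis-parallel hyperrectangle with rational
// per-dimension bounds: prod_i max(0, floor(ub_i) - ceil(lb_i) + 1).
uint64_t countBoxPoints(const std::vector<std::pair<double, double>> &bounds) {
  uint64_t count = 1;
  for (const auto &[lb, ub] : bounds) {
    double lo = std::ceil(lb), hi = std::floor(ub);
    if (hi < lo)
      return 0; // no integer points in this dimension
    count *= static_cast<uint64_t>(hi - lo) + 1;
  }
  return count;
}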
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D119228
a) Using a do...while loop in the number formatter means we do not
have to special-case zero (see the sketch below).
b) Let's use 'if (auto size = ...) {}' for appending to the output
buffer.
c) We should also be using memcpy there, not memmove -- the string
being appended is never part of the current buffer.
d) Let's put all the operator<< functions together.
e) I find 'if (cond) frob(..., true); else frob(..., false)'
somewhat confusing. Let's just use std::abs in the signed integer
printer and let CSE decide about the duplicate < 0 testing.
f) Let's have as many as possible return *this. That's both more
consistent, and allows tailcalls in some cases (the actual number
formatter has a local array though).
These changes removed around 100 bytes from the demangler's
instructions on x86_64.
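For instance, the do...while shape from (a) might look like this (buffer handling elided; illustrative, not the demangler's exact code):
// Writing digits backwards with do...while: the body always runs once,
// so the value zero produces "0" with no special case.
char *formatUnsigned(unsigned long long Val, char *BufferEnd) {
  char *P = BufferEnd;
  do {
    *--P = static_cast<char>('0' + Val % 10);
    Val /= 10;
  } while (Val != 0);
  return P; // points at the first digit
}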
Reviewed By: ChuanqiXu
Differential Revision: https://reviews.llvm.org/D119176
Following the discussion on D118229, this marks all pointer-typed
kernel arguments as having ABI alignment, per section 6.3.5 of
the OpenCL spec:
> For arguments to a __kernel function declared to be a pointer to
> a data type, the OpenCL compiler can assume that the pointee is
> always appropriately aligned as required by the data type.
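As a hypothetical illustration, for a kernel like the following the compiler may now assume the argument carries the ABI alignment of int (4 bytes on typical targets):
__kernel void scale(__global int *data) {
  // 'data' is assumed suitably aligned for int accesses per 6.3.5.
  data[get_global_id(0)] *= 2;
}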
Differential Revision: https://reviews.llvm.org/D118894
Instead of using the pointer element type, look at how the pointer
is actually being used in store instructions, while looking through
bitcasts. This makes the transform compatible with opaque pointers
and a bit more general.
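A hedged sketch of the store-driven type inference (illustrative helper, not the actual patch):
#include "llvm/IR/Instructions.h"

// Walk the pointer's users, looking through bitcasts, and return the
// type of a value actually stored through it (null if none is found).
static llvm::Type *findStoredType(llvm::Value *Ptr) {
  for (llvm::User *U : Ptr->users()) {
    if (auto *BC = llvm::dyn_cast<llvm::BitCastInst>(U))
      if (llvm::Type *T = findStoredType(BC))
        return T;
    if (auto *SI = llvm::dyn_cast<llvm::StoreInst>(U))
      if (SI->getPointerOperand() == Ptr)
        return SI->getValueOperand()->getType();
  }
  return nullptr;
}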
It's worth noting that I have dropped the 3-vector to 4-vector
shufflevector special case, because this is now handled in a
different way: If the value is actually used as a 4-vector, then
we're directly going to use that type, instead of shuffling to a
3-vector in between.
Differential Revision: https://reviews.llvm.org/D119237
As suggested by @craig.topper, relaxing LEA matching to only require the ADD to be fed from a single op with EFLAGS helps avoid duplication when the EFLAGS are consumed in a later, dependent instruction.
There was some concern about whether the heuristic is too simple, not taking into account load folding that is lost by using a LEA, but some basic tests (included in select-lea.ll) don't suggest that's really a problem.
Differential Revision: https://reviews.llvm.org/D118128
Major user-facing changes:
Many headers in llvm/DebugInfo/CodeView no longer include
llvm/Support/BinaryStreamReader.h or llvm/Support/BinaryStreamWriter.h;
those headers may need to be included manually.
Several headers in llvm/DebugInfo/CodeView no longer include
llvm/DebugInfo/CodeView/EnumTables.h or llvm/DebugInfo/CodeView/CodeView.h;
those headers may need to be included manually.
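Concretely, a file that previously compiled via these transitive includes may now need to add them directly, e.g.:
#include "llvm/DebugInfo/CodeView/CodeView.h"
#include "llvm/DebugInfo/CodeView/EnumTables.h"
#include "llvm/Support/BinaryStreamReader.h"
#include "llvm/Support/BinaryStreamWriter.h"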
Some statistics:
$ clang++ -E -Iinclude -I../llvm/include ../llvm/lib/DebugInfo/CodeView/*.cpp -std=c++14 -fno-rtti -fno-exceptions | wc -l
before: 2832765
after: 2794466
Discourse thread on the topic: https://discourse.llvm.org/t/include-what-you-use-include-cleanup/
Differential Revision: https://reviews.llvm.org/D119092
D117898 added the generic __builtin_elementwise_add_sat and __builtin_elementwise_sub_sat builtins with the same integer behaviour as the SSE/AVX instructions.
This patch removes the __builtin_ia32_padd/psub saturated intrinsics and just uses the generics; the existing tests see no changes:
__m256i test_mm256_adds_epi8(__m256i a, __m256i b) {
  // CHECK-LABEL: test_mm256_adds_epi8
  // CHECK: call <32 x i8> @llvm.sadd.sat.v32i8(<32 x i8> %{{.*}}, <32 x i8> %{{.*}})
  return _mm256_adds_epi8(a, b);
}
We currently don't have any specialized upgrades for intrinsics
that can be used in invokes, but they can still be subject to
a generic remangling upgrade. In particular, this happens when
upgrading statepoint intrinsics under -opaque-pointers.
This patch simply changes the upgrade code to work on CallBase
rather than CallInst specifically.
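A hedged sketch of the shape of the change (function name illustrative, not the actual diff):
#include "llvm/IR/Function.h"
#include "llvm/IR/InstrTypes.h"

// Before, the remangling upgrade took a CallInst; accepting CallBase
// covers CallInst and InvokeInst alike, so invokes get upgraded too.
static void replaceUpgradedCallee(llvm::CallBase &CB, llvm::Function &NewFn) {
  CB.setCalledFunction(&NewFn);
}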
This patch enables running the new driver tests for AMDGPU. Previously
this was disabled because some tests failed. This was only because the
new driver tests hadn't been listed as unsupported or expected to fail.
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D119240
This is no-functional-change-intended because only the
x86 target enables the TLI hook currently.
We can add fmul/fdiv opcodes to the switch similar to the
proposal D119111, but we don't need to make other changes
like enabling target-specific combines.
We can also add integer opcodes (add, or, shl, etc.) to
the switch because this function is called from all of the
generic binary opcodes.
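A hedged sketch of how the switch could grow (opcode set illustrative):
#include "llvm/CodeGen/ISDOpcodes.h"

// Illustrative only: opcodes the hook's switch could accept.
static bool isCandidateBinOp(unsigned Opcode) {
  switch (Opcode) {
  case llvm::ISD::FMUL: // like the proposal in D119111
  case llvm::ISD::FDIV:
  case llvm::ISD::ADD:  // integer opcodes reach this function as well
  case llvm::ISD::OR:
  case llvm::ISD::SHL:
    return true;
  default:
    return false;
  }
}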
The goal is to incrementally enable the profitable diffs
from D90113 while avoiding regressions.
Differential Revision: https://reviews.llvm.org/D119150
If the original invokes had uses, those uses must have been in PHIs,
but that immediately results in the incoming values being incompatible.
But we'll replace uses of the original invokes with the use of the
merged invoke, so as long as the incoming values become compatible
after that, we can merge.
Even if the invokes have a normal destination, iff it's the same block,
we can merge them. For now, require that there are no PHI nodes
and that the returned values of the invokes aren't used.
When splitting values, CallLowering assumes the Lo part goes first. But in a big-endian ISA such as M68k, the Hi part goes first.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D116877
The demangler treats ->* as a BinaryExpr, but .* as a MemberExpr.
That's inconsistent. This makes the former a MemberExpr too.
However, in order to not regress the paren output, MemberExpr::print
is modified to parenthesize the MemberExpr if the operator ends with
'*'. Printing is affected thusly:
Before:
obj.member
obj->member
obj.*member
(obj) ->* (member)
After:
obj.member # Unchanged
obj->member # Unchanged
obj.*(member) # Added paren member operand
obj->*(member) # Removed paren on object operand, less whitespace
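A hedged sketch of the printing rule this implies (names illustrative, not the demangler's actual interfaces):
#include <string>

// Print "obj OP member", parenthesizing the member operand whenever the
// operator ends in '*' (i.e. for ".*" and "->*").
std::string printMember(const std::string &Obj, const std::string &Op,
                        const std::string &Member) {
  bool Paren = !Op.empty() && Op.back() == '*';
  return Obj + Op + (Paren ? "(" + Member + ")" : Member);
}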
The right solution to the paren problem is to add some notion of
precedence (and associativity) to Nodes, but that's a larger change
that would become simpler once the refactoring I'm doing is completed.
FWIW, binutils' demangler's paren algorithm has a small idea of
precedence, and will generally not emit parens when the operand is
unary.
Reviewed By: bruno
Differential Revision: https://reviews.llvm.org/D118486
Instead of checking for a bitcast from a function type, check
whether the aliasee is a function after stripping bitcasts. This
is not strictly equivalent, but serves the same purpose.
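A hedged sketch of the new check (illustrative):
#include "llvm/IR/Function.h"
#include "llvm/IR/Value.h"

// Is the aliasee a function once pointer casts are stripped? This
// replaces matching "bitcast from a function type" directly.
static bool aliaseeIsFunction(const llvm::Value *Aliasee) {
  return llvm::isa<llvm::Function>(Aliasee->stripPointerCasts());
}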
Done in a manner similar to mutexinoutset
(see https://reviews.llvm.org/D57576)
Runtime support already exists in LLVM OpenMP runtime (see
https://reviews.llvm.org/D97085).
The value used to identify an inoutset dependency type in the LLVM
OpenMP runtime is 8.
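A small usage example of the new dependence type (illustrative; compile with -fopenmp):
#include <stdio.h>

void demo(int *x) {
  // An 'inoutset' dependence on x[0]; ordered against in/out/inout
  // dependences on the same location, per OpenMP 5.1.
  #pragma omp task depend(inoutset : x[0])
  { x[0] += 1; }
  #pragma omp task depend(in : x[0])
  { printf("%d\n", x[0]); }
  #pragma omp taskwait
}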
Some tests were updated due to a change in the dependency-type error
messages, which now include the new dependency type. Also updated
test/OpenMP/task_codegen.cpp to verify we emit the right code.
D108992 added KnownBits handling for 'Quadratic Reciprocity' self-multiplication patterns (bit[1] == 0: for any integer x, x*x mod 4 is either 0 or 1, so bit 1 of x*x is always clear), which can be used for non-undef values (poison is OK).
This patch adds noundef self-multiply handling to value tracking so that demanded-bits patterns can make use of it.
Differential Revision: https://reviews.llvm.org/D117995
The test invocation at the start of run-clang-tidy.py (line 257) prints
all enabled checks - meaning either the default set or anything
configured via the -checks option. If any checks were (un-)configured
via the -config option, these are not printed. This is confusing to the
user, since the list of checks that are printed may be different from
the list of checks that are used by the non-testing calls to clang-tidy,
where the -config option is passed correctly.
This patch adds the -config option to the test invocation of clang-tidy
at the start of the script. This means that checks (un-)configured via
the -config option (rather than the -checks option) are applied
correctly, when printing the list of enabled checks.
Use --include-generated-funcs when generating the check lines.
Unfortunately this places all the functions at the end of the file
rather than interleaving them, but it at least makes it feasible to
update these tests.
In many cases, calls to isShiftedMask are immediately followed by checks to determine the size and position of the bitmask.
This patch adds variants of APInt::isShiftedMask, isShiftedMask_32 and isShiftedMask_64 that return these values as additional arguments.
I've updated a number of cases that were either performing separate size/position calculations or had created their own local wrapper versions of these.
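A hedged sketch of a call site using the new variant (signature assumed from the summary above):
#include <cstdint>
#include "llvm/Support/MathExtras.h"

// The position and length of the contiguous mask come back directly,
// replacing hand-rolled countTrailingZeros/popcount pairs.
void useShiftedMask(uint64_t Value) {
  unsigned MaskIdx, MaskLen;
  if (llvm::isShiftedMask_64(Value, MaskIdx, MaskLen)) {
    // Value == ((1ULL << MaskLen) - 1) << MaskIdx
  }
}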
Differential Revision: https://reviews.llvm.org/D119019
This implementation relies on storing data in registers for sizes up to 128B.
Then depending on whether `dst` is less (resp. greater) than `src` we move data forward (resp. backward) by chunks of 32B.
We first make sure one of the pointers is aligned to increase performance on large move sizes.
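A much-simplified, byte-at-a-time illustration of the direction choice (the real implementation moves 32B chunks and aligns one pointer first):
#include <stddef.h>

// Copy forward when dst precedes src, backward otherwise, so that
// overlapping regions are handled correctly.
void *memmove_sketch(void *dst, const void *src, size_t count) {
  unsigned char *d = dst;
  const unsigned char *s = src;
  if (d < s)
    for (size_t i = 0; i < count; ++i) // forward
      d[i] = s[i];
  else
    while (count--) // backward
      d[count] = s[count];
  return dst;
}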
Differential Revision: https://reviews.llvm.org/D114637
We currently use emitConjunction to create CCMP conjunctions from the
conditions of selects, helping turn and/ors into more optimal ccmp
sequences that don't need to go through csels. This extends that to also
be used whilst lowering brcond, giving more opportunity for better
condition generation.
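The kind of source pattern whose branch can now lower through emitConjunction (illustrative):
extern void g(void);

// With this change, the compound condition feeding the branch can
// become a cmp + ccmp + b.cond sequence instead of routing through csels.
void f(int a, int b) {
  if (a > 0 && b < 10)
    g();
}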
Differential Revision: https://reviews.llvm.org/D118650
This patch implements `__builtin_elementwise_add_sat` and `__builtin_elementwise_sub_sat` builtins.
These map to the add/sub saturated math intrinsics described here:
https://llvm.org/docs/LangRef.html#saturation-arithmetic-intrinsics
With this in place we should then be able to replace the x86 SSE adds/subs intrinsics with these generic variants - it looks like other targets should be able to use these as well (arm/aarch64/webassembly all have similar examples in cgbuiltin).
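Example usage (types illustrative; operands must be matching integer scalars or vectors):
typedef short v8i16 __attribute__((ext_vector_type(8)));

short scalar_sat(short a, short b) {
  return __builtin_elementwise_add_sat(a, b); // clamps to [SHRT_MIN, SHRT_MAX]
}

v8i16 vector_sat(v8i16 a, v8i16 b) {
  return __builtin_elementwise_sub_sat(a, b); // lane-wise saturating subtract
}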
Differential Revision: https://reviews.llvm.org/D117898