The test was meant for Darwin anyway, so I'm not even sure it's supposed
to run on Linux. If it were, we would need time to investigate, but since
the test is new, there's no point in reverting the whole patch because
of it.
llvm-svn: 304010
On SPARC, .plt is both writable and executable. The current way
sections are sorted means that lld puts it after .data/.bss, but it
really needs to be close to .text to make sure branches into .plt
don't overflow. I'd argue that because .bss is supposed to come last
on all architectures, we should change the default sort order such
that writable and executable sections come before sections that are
only writable. Read-only executable sections should still come after
sections that are just read-only, of course. This diff makes that
change.
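As a standalone sketch of the intended ordering (this is not lld's actual sorting code; the SectionFlags struct and the rank values are invented purely for illustration):

// Illustrative only: rank sections by (writable, executable) flags.
// Lower rank sorts earlier in the output image.
struct SectionFlags {
  bool Write;
  bool Exec;
};

static int rank(SectionFlags F) {
  if (!F.Write && !F.Exec) return 0; // plain read-only data (.rodata)
  if (!F.Write &&  F.Exec) return 1; // read-only executable (.text)
  if ( F.Write &&  F.Exec) return 2; // writable+executable (SPARC .plt), kept near .text
  return 3;                          // plain writable (.data / .bss) last
}

int main() {
  SectionFlags Text{/*Write=*/false, /*Exec=*/true};
  SectionFlags Plt{/*Write=*/true, /*Exec=*/true};
  SectionFlags Data{/*Write=*/true, /*Exec=*/false};
  // .plt sorts before .data/.bss, and .text still sorts after .rodata.
  return (rank(Text) < rank(Plt) && rank(Plt) < rank(Data)) ? 0 : 1;
}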
llvm-svn: 304008
This produced 'strange' DWARF anyway - the CU would have no ranges (or
at least not a range including the inlined code) nor any subprogram or
inlined_subroutine - yet the line table would have entries for these
instructions.
(this actually becomes more relevant with changes coming after this,
where a CU without any contents will be omitted entirely - so there
would be no line table to put this on anyway)
llvm-svn: 304004
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses through the llvm::LoadedObjectInfo
interface: we assigned file offsets as addresses. Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused incorrect behavior when
sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well, which solves the problem mentioned above.
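To illustrate the shape of the data this enables (the type and field names below are purely illustrative, not the actual parser API):

#include <cstdint>

// Illustrative only. Each resolved range carries the index of the section it
// was relocated against, so a consumer like LLD can map a range to an output
// section directly instead of reverse-mapping fake addresses.
struct ResolvedRange {
  uint64_t LowPC;
  uint64_t HighPC;
  uint64_t SectionIndex; // index of the section the range belongs to
};

// A .gdb_index writer can then look up the owning section without any
// extra search over section offsets.
uint64_t sectionOf(const ResolvedRange &R) { return R.SectionIndex; }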
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 304002
Re-commit r303938 and r303954 with a fix for addLiveIns(): the internal
addPristines() function must be called on an empty set or it may
accidentally reset saved registers.
- addLiveOutsNoPristines() needs to add callee saved registers that are
actually saved and restored somewhere to the set (they are not
pristine).
- Cleanup/rewrite the code for addLiveOuts()/addLiveOutsNoPristines().
This fixes the problem from D32156.
Differential Revision: https://reviews.llvm.org/D32464
llvm-svn: 304001
Summary:
Currently we are not enforcing the success of `pthread_once` and
`pthread_setspecific`. Errors could lead to hard-to-debug issues later in
the thread's life. This adds checks for a 0 return value for both.
If `pthread_setspecific` fails in the teardown path, we opt for an immediate
teardown as opposed to a fatal failure.
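A minimal standalone illustration of the kind of checks being added; the error handling below is a stand-in for the runtime's CHECK machinery and the key setup is invented for the example, not the actual scudo code:

#include <cstdio>
#include <cstdlib>
#include <pthread.h>

static pthread_once_t InitOnce = PTHREAD_ONCE_INIT;
static pthread_key_t TsdKey;

static void initTsdKey() {
  if (pthread_key_create(&TsdKey, /*destructor=*/nullptr) != 0) {
    fprintf(stderr, "pthread_key_create failed\n");
    abort();
  }
}

int main() {
  // Both calls return 0 on success; a non-zero result is reported right away
  // instead of surfacing as a confusing failure later in the thread's life.
  if (pthread_once(&InitOnce, initTsdKey) != 0) {
    fprintf(stderr, "pthread_once failed\n");
    abort();
  }
  int ThreadState = 42;
  if (pthread_setspecific(TsdKey, &ThreadState) != 0) {
    fprintf(stderr, "pthread_setspecific failed\n");
    abort();
  }
  return 0;
}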
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33555
llvm-svn: 303998
In the best case:
extract (binop (concat X1, X2), (concat Y1, Y2)), N --> binop XN, YN
...we kill all of the extract/concat and just have narrow binops remaining.
If only one of the binop operands is amenable, this transform is still
worthwhile because we kill some of the extract/concat.
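A self-contained way to see the identity the transform relies on, using plain arrays rather than SelectionDAG nodes (an illustration of the algebra only, not the DAGCombiner code):

#include <array>
#include <cstdint>

// Model a "wide vector" as the concatenation of two narrow halves and check
// that extracting half N of (concat(X1,X2) & concat(Y1,Y2)) equals XN & YN.
using Narrow = std::array<uint32_t, 4>;

static Narrow andVec(const Narrow &A, const Narrow &B) {
  Narrow R{};
  for (size_t i = 0; i < A.size(); ++i)
    R[i] = A[i] & B[i];
  return R;
}

int main() {
  Narrow X1{1, 2, 3, 4}, X2{5, 6, 7, 8};
  Narrow Y1{0xF, 0xF0, 0xFF, 0x0F}, Y2{0x3, 0x30, 0x33, 0x03};

  // Wide path: concat, binop on the wide value, then extract the high half.
  std::array<uint32_t, 8> WideX, WideY, WideAnd;
  for (size_t i = 0; i < 4; ++i) {
    WideX[i] = X1[i]; WideX[i + 4] = X2[i];
    WideY[i] = Y1[i]; WideY[i + 4] = Y2[i];
  }
  for (size_t i = 0; i < 8; ++i)
    WideAnd[i] = WideX[i] & WideY[i];
  Narrow ExtractedHi{WideAnd[4], WideAnd[5], WideAnd[6], WideAnd[7]};

  // Narrow path: binop directly on the matching halves.
  Narrow NarrowHi = andVec(X2, Y2);
  return (ExtractedHi == NarrowHi) ? 0 : 1; // 0: the identity holds
}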
Optional bitcasting makes the code more complicated, but there doesn't
seem to be a way to avoid that.
The TODO about extending to more than bitwise logic is there because we really
will regress several x86 tests including madd, psad, and even a plain
integer-multiply-by-2 or shift-left-by-1. I don't think there's anything
fundamentally wrong with this patch that would cause those regressions; those
folds are just missing or brittle.
If we extend to more binops, I found that this patch will fire on at least one
non-x86 regression test. There's an ARM NEON test in
test/CodeGen/ARM/coalesce-subregs.ll with a pattern like:
t5: v2f32 = vector_shuffle<0,3> t2, t4
t6: v1i64 = bitcast t5
t8: v1i64 = BUILD_VECTOR Constant:i64<0>
t9: v2i64 = concat_vectors t6, t8
t10: v4f32 = bitcast t9
t12: v4f32 = fmul t11, t10
t13: v2i64 = bitcast t12
t16: v1i64 = extract_subvector t13, Constant:i32<0>
There was no functional change in the codegen from this transform, as far as I
could see, though.
For the x86 test changes:
1. PR32790() is the closest call. We don't reduce the AVX1 instruction count in that case,
but we improve throughput. Also, on a core like Jaguar that double-pumps 256-bit ops,
there's an unseen win because two 128-bit ops have the same cost as the wider 256-bit op.
SSE/AVX2/AVX512 are not affected, which is expected because only AVX1 has the extract/concat
ops to match the pattern.
2. do_not_use_256bit_op() is the best case. Everyone wins by avoiding the concat/extract.
Related bug for IR filed as: https://bugs.llvm.org/show_bug.cgi?id=33026
3. The SSE diffs in vector-trunc-math.ll are just scheduling/RA, so nothing real AFAICT.
4. The AVX1 diffs in vector-tzcnt-256.ll are all the same pattern: we reduced the instruction
count by one in each case by eliminating two insert/extract while adding one narrower logic op.
https://bugs.llvm.org/show_bug.cgi?id=32790
Differential Revision: https://reviews.llvm.org/D33137
llvm-svn: 303997
Summary:
D33521 addressed a memory ordering issue in BlockingMutex, which seems
to have been the cause of the flakiness of a few ASan tests on PowerPC.
Reviewers: eugenis
Subscribers: kubamracek, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D33569
llvm-svn: 303995
Currently getOptimalMemOpType returns i32 for large enough sizes without
checking for alignment, leading to poor code generation when misaligned accesses
aren't permitted, as we generate a word store and then later split it up into byte
stores. This means we inadvertently go over the MaxStoresPerMemcpy limit, and for
memset we splat the memset value into a word and then immediately split it up
again.
Fix this by leaving it up to FindOptimalMemOpLowering to figure out which type
to use, but also fix a bug there where it wasn't correctly checking if
misaligned memory accesses are allowed.
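A simplified sketch of the kind of decision this leaves to FindOptimalMemOpLowering; the helper below is invented for illustration and is not the actual TargetLowering interface:

enum class StoreTy { I8, I32 };

// Invented helper, illustration only. A wide store type is only a good choice
// when the destination is aligned enough or the target tolerates misaligned
// accesses; otherwise byte stores avoid being split up again later (and
// blowing the MaxStoresPerMemcpy budget).
static StoreTy pickMemsetStoreType(unsigned Size, unsigned DstAlign,
                                   bool AllowsMisaligned) {
  if (Size >= 4 && (AllowsMisaligned || DstAlign >= 4))
    return StoreTy::I32; // splat the byte into a word and store words
  return StoreTy::I8;    // misaligned and not permitted: stick to byte stores
}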
Differential Revision: https://reviews.llvm.org/D33442
llvm-svn: 303990
r303972 used GetValueForKeyAsInteger with mismatched types (e.g.
instantiating with uint64_t, but passing a size_t argument), which
manifested itself on 32-bit architectures.
The intended usage of these functions is not to specify the type explicitly,
but to let the compiler deduce it, so switch to that kind of usage instead.
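A reduced analogy of the pitfall; the template below is a stand-in, not the actual GetValueForKeyAsInteger signature:

#include <cstddef>
#include <cstdint>

// Stand-in template, illustration only. With an explicit template argument
// the destination type is pinned to uint64_t even though the caller's
// variable is a size_t; the mismatch only shows up on targets where the two
// types differ, such as 32-bit architectures.
template <typename T> void readInteger(uint64_t stored, T &out) {
  out = static_cast<T>(stored);
}

int main() {
  size_t num_bytes = 0;
  // Fragile: readInteger<uint64_t>(512, num_bytes) assumes size_t and
  // uint64_t match, which need not hold on 32-bit architectures.
  // Preferred: omit the template argument and let deduction pick T = size_t.
  readInteger(512, num_bytes);
  return num_bytes == 512 ? 0 : 1;
}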
llvm-svn: 303988
With a fix for the test compilation.
Initial commit message:
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface: we assigned file offsets as addresses.
Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused
incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 303983
Running unittests/Support/DynamicLibrary/DynamicLibraryTests fails when LLVM is
configured with LLVM_EXPORT_SYMBOLS_FOR_PLUGINS=ON, because the test's version
script only contains symbols extracted from the static libraries that the test
links with, but not those from the main object/executable itself. The patch
explicitly exports the one symbol needed by the test.
This change fixes https://bugs.llvm.org/show_bug.cgi?id=32893
Patch authored by Momchil Velikov.
Differential Revision: https://reviews.llvm.org/D33490
llvm-svn: 303979
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface: we assigned file offsets as addresses.
Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused
incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 303978
Summary:
A custom vfs::FileSystem is currently used for the unit tests.
This revision depends on https://reviews.llvm.org/D33397.
Reviewers: bkramer, krasimir
Reviewed By: bkramer, krasimir
Subscribers: klimek, cfe-commits, mgorny
Differential Revision: https://reviews.llvm.org/D33416
llvm-svn: 303977
I found this when building the llc binary using gcc 5.4.1 + LLD.
gcc produces duplicate entries in the .debug_gnu_pubtypes section, e.g.
UnifyFunctionExitNodes.cpp.o has:
0x0000ac07 EXTERNAL TYPE "std::success_type<void*>"
0x0000ac07 EXTERNAL TYPE "std::success_type<void*>"
clang produces a single entry here:
0x0000d291 EXTERNAL TYPE "std::__success_type<void *>"
If we link the output from gcc with LLD, that produces excessive duplicate
entries in the .gdb_index constant pool area. That does not seem to affect how
gdb works, but it makes .gdb_index larger than it needs to be.
I also checked that gold filters out such duplicates. This patch does the same for LLD.
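A simplified standalone sketch of the deduplication idea; LLD's real .gdb_index writer uses its own data structures, so the types here are illustrative:

#include <cstdint>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Each pubname/pubtype entry is keyed by (CU index, symbol name) so that a
// row duplicated in a compiler's .debug_gnu_pubtypes section is only emitted
// into the constant pool once.
struct NameEntry {
  uint32_t CuIndex;
  std::string Name;
};

std::vector<NameEntry> dedupEntries(const std::vector<NameEntry> &In) {
  std::set<std::pair<uint32_t, std::string>> Seen;
  std::vector<NameEntry> Out;
  for (const NameEntry &E : In)
    if (Seen.insert({E.CuIndex, E.Name}).second) // keep only the first occurrence
      Out.push_back(E);
  return Out;
}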
Differential revision: https://reviews.llvm.org/D32647
llvm-svn: 303975
https://sourceware.org/gdb/onlinedocs/gdb/Index-Section-Format.html says:
"A CU vector in the constant pool is a sequence of offset_type values.
The first value is the number of CU indices in the vector.
Each subsequent value is the index and symbol attributes of a CU in the CU list."
Previously we kept 2 values until the end, which was useless.
This was initially part of D32647, though it was possible to split it out;
this patch does that.
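As a standalone sketch of the layout quoted above (an illustration of the format only, not LLD's writer code):

#include <cstdint>
#include <vector>

// offset_type is a 32-bit value in .gdb_index. A CU vector in the constant
// pool is a count followed by one (CU index + symbol attributes) word per CU
// that defines the symbol.
using offset_type = uint32_t;

void writeCuVector(std::vector<offset_type> &ConstantPool,
                   const std::vector<offset_type> &CuIndicesAndAttrs) {
  ConstantPool.push_back(
      static_cast<offset_type>(CuIndicesAndAttrs.size())); // number of CU indices
  for (offset_type V : CuIndicesAndAttrs)
    ConstantPool.push_back(V); // index and symbol attributes of a CU
}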
Differential revision: https://reviews.llvm.org/D33551
llvm-svn: 303973
Summary:
The changes consist of new packets for trace manipulation and
trace collection. The new packets are also documented. The packets
are capable of providing custom trace-specific parameters to start
tracing and also of retrieving such configuration from the server.
Reviewers: clayborg, lldb-commits, tberghammer, labath, zturner
Reviewed By: clayborg, labath
Subscribers: krytarowski, lldb-commits
Differential Revision: https://reviews.llvm.org/D32585
llvm-svn: 303972
The patch rL303730 was reverted because the test lsr-expand-quadratic.ll failed on
many non-X86 configs with this patch. The reason for this is that the patch makes a
correct fix that changes the optimizer's behavior for this test.
Without the change, LSR was making an overconfident simplification based on a
wrong SCEV. Apparently it did not need the IV analysis to do this. With the
change, it chose a different way to simplify (that wasn't so confident), and
this way required the IV analysis. Now, following the right execution path,
LSR tries to make a transformation relying on the IV Users analysis. This analysis
is target-dependent due to this code:
// LSR is not APInt clean, do not touch integers bigger than 64-bits.
// Also avoid creating IVs of non-native types. For example, we don't want a
// 64-bit IV in 32-bit code just because the loop has one 64-bit cast.
uint64_t Width = SE->getTypeSizeInBits(I->getType());
if (Width > 64 || !DL.isLegalInteger(Width))
return false;
To make a proper transformation in this test case, the type i32 needs to be
legal for the specified data layout. When the test runs on some non-X86
configuration (e.g. pure ARM 64), opt gets confused by the specified target
and does not use it, rejecting the specified data layout as well. Instead,
it uses some default layout that does not treat i32 as a legal type
(currently the layout that is used when it is not specified does not have
legal types at all). As result, the transformation we expect to happen does
not happen for this test.
This re-enabling patch does not have any source code changes compared to the
original patch rL303730. The only difference is that the failing test has been
moved to the X86 directory and is now required to run on x86 only, to comply
with the specified target triple and data layout.
Differential Revision: https://reviews.llvm.org/D33543
llvm-svn: 303971
Re-commit r303937 + r303949 as they were not the cause for the build
failures.
We do not track liveness of reserved registers so adding them to the
liveins list in computeLiveIns() was completely unnecessary.
llvm-svn: 303970
In the absence of a more specific handler for TRAP_CAP (generated by
ENOTCAPABLE or ECAPMODE while in capability mode), treat it as a trace
trap. Obtained from FreeBSD r318884.
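A simplified sketch of that si_code mapping (not LLDB's actual stop-reason code; TRAP_CAP is guarded because it is FreeBSD-specific):

#include <signal.h>

// Illustration only: fold the capability-violation trap into the ordinary
// trace-trap path when no dedicated handling exists.
bool isTraceLikeTrap(const siginfo_t &Info) {
  switch (Info.si_code) {
  case TRAP_TRACE: // single-step / trace trap
#ifdef TRAP_CAP    // FreeBSD: ENOTCAPABLE/ECAPMODE raised in capability mode
  case TRAP_CAP:
#endif
    return true;
  default:
    return false;
  }
}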
We should later add an option to have LLDB control the trapcap procctl
(as with ASLR), as well as report a specific stop reason. For now this
change eliminates an assertion failure from LLDB.
llvm-svn: 303965
Add an iterator and range accessor for the PHI nodes of a basic block.
This allows writing much more natural and readable range based for loops
directly over the PHI nodes. It also takes advantage of the same tricks
for terminating the sequence as the hand coded versions.
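Assuming the range accessor ends up spelled something like BB.phis(), a before/after comparison looks roughly like this (illustrative only, not code from the patch):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hand-coded style: spell out the "stop at the first non-PHI" termination.
unsigned countPHIOperandsOld(BasicBlock &BB) {
  unsigned N = 0;
  for (BasicBlock::iterator I = BB.begin(); isa<PHINode>(&*I); ++I)
    N += cast<PHINode>(&*I)->getNumIncomingValues();
  return N;
}

// Range-based style: iterate directly over the PHI nodes of the block.
unsigned countPHIOperandsNew(BasicBlock &BB) {
  unsigned N = 0;
  for (PHINode &PN : BB.phis())
    N += PN.getNumIncomingValues();
  return N;
}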
I've replaced one example of this mostly to showcase the difference and
I've added a unit test to make sure the facilities really work the way
they're intended. I want to use this inside of SimpleLoopUnswitch but it
seems generally nice.
Differential Revision: https://reviews.llvm.org/D33533
llvm-svn: 303964
Clang supports coroutines in all dialects; therefore libc++ should too,
otherwise the Clang extension is unusable.
I'm not convinced extending support to C++03 is a feasible long term
plan, since as the library grows to offer things like generators it
will become increasingly difficult to limit the implementation to C++03.
However, for the time being, supporting C++03 isn't a big deal.
llvm-svn: 303963