Commit Graph

390452 Commits

Author SHA1 Message Date
Craig Topper 7a105b5768 [RISCV] Use AVL Operand instead of GPR for tied mask pseudo for vwadd.wv and similar.
I mistakenly copied this from an older version of our internal
repo.
2021-06-07 21:16:50 -07:00
Yonghong Song 8ce45f9728 BPF: fix relocation types in lib/Object/RelocationResolver.cpp
Commit 6a2ea84600 ("BPF: Add more relocation kinds")
added new relocations R_BPF_64_ABS64 and R_BPF_64_ABS32
for normal 64-bit and 32-bit data relocations.
These replace some of the functionality of
R_BPF_64_64 and R_BPF_64_32, so that the new R_BPF_64_64
and R_BPF_64_32 semantics apply to ld_imm64 and
call instructions only.

The BPF support in lib/Object/RelocationResolver.cpp
is used to perform normal data relocations for
the case like DWARFObjInMemory with an object file
(search function getRelocationResolver() in file
DebugInfo/DWARF/DWARFContext.cpp) or llvm-readobj
to dump ".stack_sizes" section data.
In all these cases, normal 64-bit and 32-bit relocations
are performed, and that resolution is exactly what is
implemented in RelocationResolver.cpp.
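As a rough sketch of what that resolution amounts to (illustrative helpers, not the actual RelocationResolver.cpp code):

    // Hedged sketch: absolute data relocations resolve to SymbolValue + Addend,
    // truncated to 32 bits for the 32-bit variant. Names are illustrative only.
    #include <cstdint>

    static uint64_t resolveAbs64(uint64_t SymbolValue, int64_t Addend) {
      return SymbolValue + Addend;
    }

    static uint64_t resolveAbs32(uint64_t SymbolValue, int64_t Addend) {
      return (SymbolValue + Addend) & 0xFFFFFFFFu;
    }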

But Commit 6a2ea84600 did not change
R_BPF_64_64/R_BPF_64_32 to R_BPF_64_ABS64/R_BPF_64_ABS32 there.
This patch fixes the issue and adds a test for it
that dumps the ".stack_sizes" section with llvm-readobj.

Differential Revision: https://reviews.llvm.org/D103864
2021-06-07 20:59:21 -07:00
Jez Ng 447dfbe005 [lld-macho] Implement -force_load_swift_libs
It causes libraries whose names start with "swift" to be force-loaded.
Note that unlike the more general `-force_load`, this flag only applies
to libraries specified via LC_LINKER_OPTIONS, and not those passed on
the command-line. This is what ld64 does.

Reviewed By: #lld-macho, thakis

Differential Revision: https://reviews.llvm.org/D103709
2021-06-07 23:48:35 -04:00
Jez Ng 04259cde15 [lld-macho] Implement cstring deduplication
Our implementation draws heavily from LLD-ELF's, which in turn delegates
its string deduplication to llvm-mc's StringTableBuilder. The messiness of
this diff is largely due to the fact that we've previously assumed that
all InputSections get concatenated together to form the output. This is
no longer true with CStringInputSections, which split their contents into
StringPieces. StringPieces are much more lightweight than InputSections,
which is important as we create a lot of them. They may also overlap in
the output, which makes it possible for strings to be tail-merged. In
fact, the initial version of this diff implemented tail merging, but
I've dropped it for reasons I'll explain later.

**Alignment Issues**

Mergeable cstring literals are found under the `__TEXT,__cstring`
section. In contrast to ELF, which puts strings that need different
alignments into different sections, clang's Mach-O backend puts them all
in one section. Strings that need to be aligned have the `.p2align`
directive emitted before them, which simply translates into zero padding
in the object file.

I *think* ld64 extracts the desired per-string alignment from this data
by preserving each string's offset from the last section-aligned
address. I'm not entirely certain since it doesn't seem consistent about
doing this; but perhaps this can be chalked up to cases where ld64 has
to deduplicate strings with different offset/alignment combos -- it
seems to pick one of their alignments to preserve. This doesn't seem
correct in general; we can in fact induce ld64 to produce a crashing
binary just by linking in an additional object file that only contains
cstrings and no code. See PR50563 for details.

Moreover, this scheme seems rather inefficient: since unaligned and
aligned strings are all put in the same section, which has a single
alignment value, it doesn't seem possible to tell whether a given string
doesn't have any alignment requirements. Preserving offset+alignments
for strings that don't need it is wasteful.

In practice, the crashes seen so far seem to stem from x86_64 SIMD
operations on cstrings. X86_64 requires SIMD accesses to be
16-byte-aligned. So for now, I'm thinking of just aligning all strings
to 16 bytes on x86_64. This is indeed wasteful, but implementation-wise
it's simpler than preserving per-string alignment+offsets. It also
avoids the aforementioned crash after deduplication of
differently-aligned strings. Finally, the overhead is not huge: using
16-byte alignment (vs no alignment) is only a 0.5% size overhead when
linking chromium_framework.
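A minimal sketch of the dedup-plus-16-byte-alignment idea (illustrative only; the actual implementation works through StringPieces and llvm-mc's StringTableBuilder):

    // Hedged sketch: deduplicate C strings and give each unique string a
    // 16-byte-aligned offset in the output section. Not lld's real code.
    #include <cstdint>
    #include <string>
    #include <unordered_map>

    uint64_t assignOffset(const std::string &S,
                          std::unordered_map<std::string, uint64_t> &Offsets,
                          uint64_t &NextOffset) {
      auto It = Offsets.find(S);
      if (It != Offsets.end())
        return It->second;                 // duplicate: reuse the existing copy
      uint64_t Off = (NextOffset + 15) & ~uint64_t(15); // align to 16 bytes
      Offsets.emplace(S, Off);
      NextOffset = Off + S.size() + 1;     // account for the NUL terminator
      return Off;
    }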

With these alignment requirements, it doesn't make sense to attempt tail
merging -- most strings will not be eligible since their overlaps aren't
likely to start at a 16-byte boundary. Tail-merging (with alignment) for
chromium_framework only improves size by 0.3%.

It's worth noting that LLD-ELF only does tail merging at `-O2`. By
default (at `-O1`), it just deduplicates w/o tail merging. @thakis has
also mentioned that they saw it regress compressed size in some cases
and therefore turned it off. `ld64` does not seem to do tail merging at
all.

**Performance Numbers**

CString deduplication reduces chromium_framework from 250MB to 242MB, or
about a 3.2% reduction.

Numbers for linking chromium_framework on my 3.2 GHz 16-Core Intel Xeon W:

      N           Min           Max        Median           Avg        Stddev
  x  20          3.91          4.03         3.935          3.95   0.034641016
  +  20          3.99          4.14         4.015        4.0365     0.0492336
  Difference at 95.0% confidence
          0.0865 +/- 0.027245
          2.18987% +/- 0.689746%
          (Student's t, pooled s = 0.0425673)

As expected, cstring merging incurs some non-trivial overhead.

When passing `--no-literal-merge`, it seems that performance is the
same, i.e. the refactoring in this diff didn't cost us.

      N           Min           Max        Median           Avg        Stddev
  x  20          3.91          4.03         3.935          3.95   0.034641016
  +  20          3.89          4.02         3.935        3.9435   0.043197831
  No difference proven at 95.0% confidence

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D102964
2021-06-07 23:48:35 -04:00
Esme-Yi 310d2b4957 [yaml2obj] Fix buildbot-issue-4886
XCOFFEmitter.cpp:67:16: runtime error: null pointer passed as argument 2,
which is declared to never be null
2021-06-08 03:00:52 +00:00
Carl Ritson c8bbfb8cf5 [AMDGPU] Allow oversize vaddr in GFX10 MIMG assembly
As a follow up to D103672, we should allow vaddr to be larger than
required when assembling GFX10 MIMG instructions.

Reviewed By: dp

Differential Revision: https://reviews.llvm.org/D103733
2021-06-08 11:57:07 +09:00
Jake.Egan f38eff777e [AIX] Define __STDC_NO_ATOMICS__ and __STDC_NO_THREADS__
Revert/reapply to fix Git authorship metadata

Differential Revision: https://reviews.llvm.org/D103707
2021-06-07 22:45:41 -04:00
Chris Bowler f97e01e61a Revert "[AIX] Define __STDC_NO_ATOMICS__ and __STDC_NO_THREADS__ predefined macros"
This reverts commit e6629be31e.
2021-06-07 22:45:41 -04:00
Carl Ritson f8816c7400 [AMDGPU] Add v5f32/VReg_160 support for MIMG instructions
Avoid having to round up to v8f32/VReg_256 when only 5 VGPRs are
required for a MIMG address operand.

Maintain _V8 instruction variants of pseudo instructions allowing
assembly prior to GFX10 to work as-is.  Currently the validator
can tell for GFX10 what the correct size is, so it will disallow
oversize address registers.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D103672
2021-06-08 11:11:40 +09:00
=Jake Egan e6629be31e [AIX] Define __STDC_NO_ATOMICS__ and __STDC_NO_THREADS__ predefined macros
Differential Revision: https://reviews.llvm.org/D103707
2021-06-07 22:04:18 -04:00
Vitaly Buka b41b76b303 [NFC][scudo] Print errno of fork failure
This fork sometimes fails on the sanitizer-x86_64-linux-qemu bot.
2021-06-07 18:59:35 -07:00
Craig Topper 0aa941654f [RISCV] Use bitfields to shrink the size of the vector load/store intrinsics to pseudo instruction lookup tables. 2021-06-07 17:57:51 -07:00
Vitaly Buka 11539edf52 [NFC][LSAN] Limit the number of concurrent threads in the test
Test still fails with D88184 reverted.

The test was flaky on https://bugs.chromium.org/p/chromium/issues/detail?id=1206745 and
https://lab.llvm.org/buildbot/#/builders/sanitizer-x86_64-linux

Reviewed By: morehouse

Differential Revision: https://reviews.llvm.org/D102218
2021-06-07 17:38:12 -07:00
George Balatsouras 5b4dda550e [dfsan] Add full fast8 support
Complete support for fast8:
- amend shadow size and mapping in runtime
- remove fast16 mode and -dfsan-fast-16-labels flag
- remove legacy mode and make fast8 mode the default
- remove dfsan-fast-8-labels flag
- remove functions in dfsan interface only applicable to legacy
- remove legacy-related instrumentation code and tests
- update documentation.

Reviewed By: stephan.yichao.zhao, browneee

Differential Revision: https://reviews.llvm.org/D103745
2021-06-07 17:20:54 -07:00
LLVM GN Syncbot 3b69318eef [gn build] Port 692d7166f7 2021-06-08 00:16:13 +00:00
Petr Hosek 692d7166f7 Revert "[libcxx][gardening] Move all algorithms into their own headers."
This reverts commit 7ed7d4ccb8 as it
uncovered a Clang bug PR50592.
2021-06-07 17:15:20 -07:00
Petr Hosek d9633f229c Revert "[libcxx][module-map] creates submodules for private headers"
This reverts commit f1417eb9b1 as it
uncovered a Clang bug PR50592.
2021-06-07 17:10:05 -07:00
Ben Shi c705b7b04d [RISCV] Optimize bitwise and with constant for the Zbs extension
This patch optimizes (and r i) to
(BCLRI (BCLRI r, i0), i1), in which i = ~((1<<i0) | (1<<i1)),
or to
(BCLRI (ANDI r, i0), i1), in which i = i0 & ~(1<<i1).
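As a hedged sketch (not the actual ISel code), the first form applies exactly when the AND mask has two clear bits, which two BCLRI steps can then clear one at a time:

    // Hedged sketch: detect a mask of the form ~((1<<i0) | (1<<i1)) and recover
    // i0 and i1. Requires C++20 <bit>; illustrative only.
    #include <bit>
    #include <cstdint>

    bool matchTwoClearBits(uint64_t Mask, unsigned &I0, unsigned &I1) {
      uint64_t Cleared = ~Mask;                 // the bits the AND clears
      if (std::popcount(Cleared) != 2)
        return false;
      I0 = std::countr_zero(Cleared);           // lower cleared bit
      I1 = 63 - std::countl_zero(Cleared);      // upper cleared bit
      return Mask == ~((uint64_t(1) << I0) | (uint64_t(1) << I1));
    }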

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D103743
2021-06-08 07:26:00 +08:00
Arthur Eubanks 47211fa889 Revert "[TargetLowering] Only inspect attributes in the arguments for ArgListEntry"
Needs to be discussed more.

This reverts commit 255a5c1baa6020c009934b4fa342f9f6dbbcc46
This reverts commit df2056ff3730316f376f29d9986c9913b95ceb1
This reverts commit faff79b7ca144e505da6bc74aa2b2f7cffbbf23
This reverts commit d2a9020785c6e02afebc876aa2778fa64c5cafd
2021-06-07 16:07:44 -07:00
Craig Topper 9b92ae01ee [RISCV] Store Log2 of EEW in the vector load/store intrinsic to pseudo lookup tables. NFCI
This uses 3 bits of data instead of 7. I'm wondering if we can use
bitfields for the lookup table key where this would matter.

I also rename the shift_amount template to log2 since it is used
with more than just an srl now.
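A hedged sketch of the packing idea (field names and widths are illustrative, not the real tablegen'd tables):

    // Hedged sketch: store log2(EEW) in 3 bits of the lookup-table key instead
    // of the EEW itself (8..64 needs 7 bits; its log2 fits in 3).
    #include <cstdint>

    struct PseudoLookupKey {
      uint32_t IntrinsicID : 20; // illustrative width
      uint32_t Masked      : 1;  // illustrative field
      uint32_t Log2EEW     : 3;  // 3 = log2(8) ... 6 = log2(64)
    };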
2021-06-07 15:47:45 -07:00
Stanislav Mekhanoshin 05289dfb62 [AMDGPU] Handle constant LDS uses from different kernels
This allows an LDS variable to be lowered into a kernel structure
even if there is a constant expression using it from different
kernels.

Differential Revision: https://reviews.llvm.org/D103655
2021-06-07 15:39:08 -07:00
hsmahesha 713ca2f360 [AMDGPU] Introduce command line switch to control super aligning of LDS.
Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D103817
2021-06-08 03:58:13 +05:30
hsmahesha 3af5f3e692 [IR] Add utility to convert constant expression operands (of an instruction) to instructions.
In the situation where we need to replace a constant operand C of a constant expression CE
with an instruction NI, this is not possible without converting CE itself into an instruction. This
utility helps convert a given set of constant expression operands of an instruction I
into a corresponding set of instructions.
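A hedged sketch of the general idea for a single operand (not the exact utility added here, which handles a set of operands):

    // Hedged sketch: materialize a ConstantExpr operand of instruction I as a
    // real instruction inserted before I, then repoint the operand at it.
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instruction.h"

    using namespace llvm;

    static void expandConstantExprOperand(Instruction *I, unsigned OpIdx) {
      auto *CE = cast<ConstantExpr>(I->getOperand(OpIdx));
      Instruction *NI = CE->getAsInstruction(); // clone CE as an instruction
      NI->insertBefore(I);
      I->setOperand(OpIdx, NI);
    }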

The current use-case for this utility is from the patches - https://reviews.llvm.org/D103225
and https://reviews.llvm.org/D103655.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D103661
2021-06-08 03:22:32 +05:30
Philip Reames 3c6e419198 [SCEV] Properly guard reasoning about infinite loops being UB on mustprogress
Noticed via code inspection. We changed the semantics of the IR when we added mustprogress, and we appear to have not updated this location.

Differential Revision: https://reviews.llvm.org/D103834
2021-06-07 14:47:36 -07:00
Daniil Suchkov d32cc150fe [BasicAA] Handle PHIs without incoming values gracefully
Fix a bug introduced by f6f6f6375d.
Now for empty PHIs, instead of crashing on assert(hasVal()) in
Optional's internals, we'll return NoAlias, as we did before that patch.
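The shape of the fix, as a hedged standalone sketch (types and names are illustrative, not the BasicAA internals):

    // Hedged sketch: bail out to NoAlias before dereferencing an optional that
    // is only populated when the PHI has incoming values.
    #include <optional>

    enum AliasResult { NoAlias, MayAlias };

    AliasResult aliasPHI(unsigned NumIncomingValues,
                         const std::optional<AliasResult> &Merged) {
      if (NumIncomingValues == 0 || !Merged)
        return NoAlias; // empty PHI: nothing merged, avoid the empty deref
      return *Merged;
    }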

Differential Revision: https://reviews.llvm.org/D103831
2021-06-07 21:39:01 +00:00
Daniil Suchkov e72f16b7e6 [Test] Add a JumpThreading test exposing a bug in BasicAA. 2021-06-07 21:39:00 +00:00
Jian Cai 9145a3d4ab Revert "[AArch64] handle -Wa,-march="
This reverts commit fd11a26d36.
2021-06-07 14:31:07 -07:00
River Riddle 2db4701caf [mlir-lsp-server] Fix bug in symbol use/def tracking
We were accidentally only using the first found reference, instead of all of them. This revision fixes this by properly tracking all references to a symbol.

Differential Revision: https://reviews.llvm.org/D103730
2021-06-07 14:07:41 -07:00
River Riddle 4c3adea7a4 [mlir-lsp-server] Add support for hover on symbol references
For now the hover simply shows the same information as hovering on the operation
name. If necessary this can be tweaked to something symbol specific later.

Differential Revision: https://reviews.llvm.org/D103728
2021-06-07 14:07:41 -07:00
River Riddle f492c35965 [mlir-lsp-server] Add support for hover on region operations
This revision adds support for hover on region operations, by temporarily removing the regions during printing. This revision also tweaks the hover format for operations to include symbol information, now that FuncOp can be shown in the hover.

Differential Revision: https://reviews.llvm.org/D103727
2021-06-07 14:07:41 -07:00
Nico Weber 17c43c4045 [lld/mac] Add reexports after reexporter to inputFiles
When a library "host"'s reexports change their installName with
`$ld$os10.11$install_name$host`, we used to write a load command for "host" but
write the version numbers of the reexport instead of "host". This fixes that.

I first thought that the rule is to take the version numbers from the library
that originally had that install name (implemented in D103819), but that's not
what ld64 seems to be doing: It takes the version number from the first dylib
with that install name it loads, and it loads the reexporting library before
the reexports. We already did most of that; we just added reexports before the
reexporter. After this change, we add the reexporter before the reexports.

Addresses https://bugs.llvm.org/show_bug.cgi?id=49800#c11 part 1.

(ld64 seems to add reexports after processing _all_ files on the command line,
while we add them right after the reexporter. For the common case of reexport +
$ld$ symbol changing back to the exporter name, this doesn't make a difference,
but you can construct a case where it does. I expect this to not make a
difference in practice though.)

Differential Revision: https://reviews.llvm.org/D103821
2021-06-07 17:04:03 -04:00
Amir Ayupov 8ec73e96b7 [ELF] getRelocatedSection: remove the check for ET_REL object file
The getRelocatedSection interface should not check that the object file is
relocatable, as executable files may have relocations preserved with
`--emit-relocs` linker flag. The relocations are useful in context of post-link
binary analysis for function reference identification. For example, BOLT relies
on relocations to perform function reordering.

Reviewed By: MaskRay, jhenderson

Differential Revision: https://reviews.llvm.org/D102296
2021-06-07 13:17:00 -07:00
Harald van Dijk 75521bd9d8 [X32] Add Triple::isX32(), use it.
So far, support for x86_64-linux-gnux32 has been handled by explicit
comparisons of Triple.getEnvironment() to GNUX32. This worked as long as
x86_64-linux-gnux32 was the only X32 environment to worry about, but we
now have x86_64-linux-muslx32 as well. To support this, this change adds
an isX32() function and uses it. It replaces all checks for GNUX32 or
MuslX32 by isX32(), except for the following:

- Triple::isGNUEnvironment() and Triple::isMusl() are supposed to treat
  GNUX32 and MuslX32 differently.
- computeTargetTriple() needs to be able to transform triples to add or
  remove X32 from the environment and needs to map GNU to GNUX32, and
  Musl to MuslX32.
- getMultiarchTriple() completely lacks any Musl support and retains the
  explicit check for GNUX32 as it can only return x86_64-linux-gnux32.
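A hedged sketch of what such a predicate plausibly reduces to (the actual Triple method may differ in detail):

    // Hedged sketch: collapse the two X32 environments into one check.
    #include "llvm/ADT/Triple.h"

    bool isX32(const llvm::Triple &T) {
      return T.getEnvironment() == llvm::Triple::GNUX32 ||
             T.getEnvironment() == llvm::Triple::MuslX32;
    }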

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D103777
2021-06-07 20:48:39 +01:00
Martin Storsjö 6de45b9e6a [clang] Fix reading long doubles with va_arg on x86_64 mingw
On x86_64 mingw, long doubles are always passed indirectly as
arguments (see an existing case in WinX86_64ABIInfo::classify);
generalize the existing code for reading varargs - any non-aggregate
type that is larger than 64 bits (which would be both long double
in mingw, and __int128) are passed indirectly too.

This makes reading varargs consistent with how they're passed,
fixing interop with both gcc and clang callers, for long double
and __int128.
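A hedged example of the affected pattern (plain C++; the point is only that the va_arg read must match the indirect passing):

    // Reading long double (and, by the same rule, __int128) varargs on
    // x86_64 mingw, where such values are passed indirectly.
    #include <cstdarg>

    long double sumLongDoubles(int Count, ...) {
      va_list Args;
      va_start(Args, Count);
      long double Sum = 0.0L;
      for (int I = 0; I < Count; ++I)
        Sum += va_arg(Args, long double); // read must match how callers pass it
      va_end(Args);
      return Sum;
    }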

Differential Revision: https://reviews.llvm.org/D103452
2021-06-07 22:34:10 +03:00
Nikita Popov 8fdd7c2ff1 [LoopUnroll] Clamp unroll count to MaxTripCount
Unrolling with more iterations than MaxTripCount is pointless, as
those iterations can never be executed. As such, we clamp ULO.Count
to MaxTripCount if it is known. This means we no longer need to
consider iterations after MaxTripCount for exit folding, and the
CompletelyUnroll flag becomes independent of ULO.TripCount.
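The clamping step, as a hedged sketch (names are illustrative, not the exact LoopUnroll fields):

    // Hedged sketch: clamp the requested unroll count to the known maximum trip
    // count, since iterations past MaxTripCount can never execute.
    #include <algorithm>

    unsigned clampUnrollCount(unsigned Count, unsigned MaxTripCount) {
      if (MaxTripCount == 0)          // 0 = trip count unknown; leave Count alone
        return Count;
      return std::min(Count, MaxTripCount);
    }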

Differential Revision: https://reviews.llvm.org/D103748
2021-06-07 21:08:42 +02:00
Peyton, Jonathan L d70e1f1276 [OpenMP][runtime] add .clang-tidy file
Use the same checks as compiler-rt, which removes the checks for readability-*
and llvm-header style.

Differential Revision: https://reviews.llvm.org/D103711
2021-06-07 13:56:39 -05:00
AndreyChurbanov a1f550e052 [OpenMP] libomp: implement OpenMP 5.1 inoutset task dependence type
Refactored the dependence processing code and added the new inoutset dependence type.
The compiler can set the dependence flag to 0x8 when calling __kmpc_omp_task_with_deps.
The size of the dependence flag type changed from 1 to 4 bytes in clang.
All dependence flags the library accepts so far, with the corresponding dependence types:
1 - IN, 2 - OUT, 3 - INOUT, 4 - MUTEXINOUTSET, 8 - INOUTSET.
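The same mapping as a hedged sketch (names are illustrative; the actual libomp definitions may differ, but note the flag type is now 4 bytes):

    #include <cstdint>

    enum DepFlags : int32_t {
      DEP_IN            = 1,
      DEP_OUT           = 2,
      DEP_INOUT         = 3, // IN | OUT
      DEP_MUTEXINOUTSET = 4,
      DEP_INOUTSET      = 8, // new in OpenMP 5.1
    };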

Differential Revision: https://reviews.llvm.org/D97085
2021-06-07 21:42:51 +03:00
Matt Arsenault ccf28ea800 AMDGPU: Move codegen test out of MIR test directory
This is testing an actual pass, not the MIR parser/printer.
2021-06-07 14:26:48 -04:00
Matt Arsenault dc98adfb44 GlobalISel: Use MMO helper for getting the size in bits 2021-06-07 14:26:48 -04:00
Matt Arsenault f6555b917b GlobalISel: Remove unnecessary .getReg(0)s 2021-06-07 14:26:48 -04:00
Philip Reames 38540d71c7 [SCEV] Compute exit counts for unsigned IVs using mustprogress semantics
The motivation here is simple loops with unsigned induction variables w/non-one steps. A toy example would be:
for (unsigned i = 0; i < N; i += 2) { body; }

Given C/C++ semantics, we do not get the nuw flag on the induction variable. Given that lack, we currently can't compute a bound for this loop. We can do better for many cases, depending on the contents of "body".

The basic intuition behind this patch is as follows:
* A step which evenly divides the iteration space must wrap through the same numbers repeatedly. Thus, we can ignore potential corner cases where we exit after the n-th wrap through uint32_max.
* Per C++ rules, infinite loops without side effects are UB. We already have code in SCEV which relies on this.  In LLVM, this is tied to the mustprogress attribute.

Together, these let us conclude that the trip count of this loop must come before unsigned overflow unless the body would form a well defined infinite loop.
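A hedged worked version of the toy loop above:

    // With a step of 2 on a 32-bit unsigned IV, i only ever takes even values,
    // and 2 evenly divides 2^32, so wrapping would revisit exactly the same
    // values. If the body has no side effects and the loop is mustprogress, an
    // infinite loop would be UB, so the exit must be taken before overflow,
    // giving a computable trip count of about N / 2.
    void example(unsigned N) {
      for (unsigned i = 0; i < N; i += 2) {
        /* body */
      }
    }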

A couple notes for those reading along:
* I reused the loop properties code which is overly conservative for this case. I may follow up in another patch to generalize it for the actual UB rules.
* We could cache the n(s/u)w facts. I left that out because doing a pre-patch which cached existing inference showed a lot of diffs I had trouble fully explaining. I plan to get back to this, but I don't want it on the critical path.

Differential Revision: https://reviews.llvm.org/D103118
2021-06-07 11:24:00 -07:00
William S. Moses 00b6463b26 [MLIR][GPU] Simplify memcpy of cast
Introduce a simplification that allows a memcpy of a cast to simply use the underlying op.

Differential Revision: https://reviews.llvm.org/D103830
2021-06-07 14:00:13 -04:00
Louis Dionne 85966df3aa [libc++] Rename 'and' to '&&' 2021-06-07 13:48:51 -04:00
Nico Weber 422544414b [lld/mac] Add a test for -reexport_library + -dead_strip_dylibs
Our behavior here already matched ld64, now we have a test for it.

(ld64 even strips the library here if you also pass -needed_library bar.dylib.
That seems wrong to me, and lld honors needed_library in that case.)

Differential Revision: https://reviews.llvm.org/D103812
2021-06-07 13:44:58 -04:00
William S. Moses 854d0edce6 [MLIR] Conditional Branch Argument Propagation
For an operation in the true/false dest of a branch,
one can assume that the branch condition itself was true/false if
only that edge can reach the operation.

Differential Revision: https://reviews.llvm.org/D101709
2021-06-07 13:33:10 -04:00
Craig Topper f30f8b4f12 [RISCV] Lower i8/i16 bswap/bitreverse to grevi/greviw with Zbp.
Include known bits support so we know we don't need to zext the
output if the input was already zero extended.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D103757
2021-06-07 10:31:51 -07:00
Philip Reames c880d5e583 [RS4GC] Treat inttoptr as base pointer
This is a modified version of a patch by tolziplohu with a style change, and most importantly, a revised commit message.

inttoptr for a non-integral address space is currently ill defined in the LangRef.  Figuring out exactly what the dynamic semantics of such a cast would be is hard, and not yet settled.  Despite that, we still need to go ahead and implement something in RS4GC for a couple of reasons.

First, as a simple consistency argument.  We apparently added support for constexpr inttoptrs a while back, and even have tests which exercised them.  Having a lack of constant folding trigger a crash during lowering is non-ideal.

Second, and more fundamentally, the optimizer is allowed to insert undefined constructs in unreachable code.  At the same time, we can't assume that dynamically dead code is always pruned before lowering.  As a result, we must assume that inttoptrs can occur (even if completely ill defined) along dead paths.  We need the lowering to not crash.  The stackmaps produced can be garbage (as the assumption is the code is dynamically dead), but the lowering itself can't crash.

Differential Revision: https://reviews.llvm.org/D103492
2021-06-07 10:27:23 -07:00
jasonliu 8e84311a84 [XCOFF][AIX] Enable tooling support for 64 bit symbol table parsing
Add in the ability of parsing symbol table for 64 bit object.

Reviewed By: jhenderson, DiggerLin

Differential Revision: https://reviews.llvm.org/D85774
2021-06-07 17:24:13 +00:00
Sanjay Patel 4675beaa21 [InstCombine] intersect nsz and ninf fast-math-flags (FMF) for fneg(fdiv) fold
https://alive2.llvm.org/ce/z/3KPvih

https://llvm.org/PR49654
2021-06-07 13:22:49 -04:00
Sanjay Patel 519e98cd9a [InstCombine] refactor match clauses; NFC
We need to adjust the FMF propagation on at least
one of these transforms as discussed in:
https://llvm.org/PR49654
...so this should make it easier to intersect flags.
2021-06-07 13:22:49 -04:00