Commit Graph

66275 Commits

Steven Wu 6da58e7e0f [Object][MachO] Rewrite macho-invalid-fat-arch-size into YAML
Summary:
Rewrite one of the invalid macho test input files as a YAML file. The
original invalid macho is breaking our internal test infrastructure
because it is too broken to be copied around.

Need to relax an assertion in the YAML/MachoEmitter to allow yaml2obj to
write an invalid object like this.

rdar://problem/56879982

Reviewers: beanz, mtrent

Reviewed By: beanz

Subscribers: hiraditya, jkorous, dexonsmith, ributzka, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69856
2019-11-06 11:26:25 -08:00
Simon Pilgrim 1786047b91 [X86] Fix SLM v2i64 ADD/SUB/CMPEQ instruction schedules
Noticed while fixing the reduction costs for D59710 - the SLM model doesn't account for the poor throughput of v2i64 ops.

Numbers taken from Intel AOM (+ checked against Agner)
2019-11-06 19:08:15 +00:00
Simon Pilgrim ad70d5f39a [X86] Fix SLM v2f64 ADD/MUL + FP BLEND/HADD instruction schedules
Noticed while fixing the reduction costs for D59710 - the SLM model doesn't account for the poor throughput of v2f64/v2i64 ops.
2019-11-06 19:08:15 +00:00
Simon Pilgrim a091f70610 [CostModel][X86] Improve add vXi64 + fadd vXf64 reduction tests for SLM
As noted on D59710 we weren't handling the high costs of these operations on SLM.
2019-11-06 17:55:38 +00:00
Simon Pilgrim 1b986b41ac [CostModel][X86] Add add/fadd reduction tests for SLM 2019-11-06 17:04:22 +00:00
Pavel Labath e1f8c8a16f DWARFDebugLoclists: Move to an incremental parsing model
Summary:
This patch stems from the discussion on D68270 (including some offline
talks). The idea is to provide an "incremental" API for parsing location
lists, which avoids caching or materializing parsed data. An
additional goal is to provide a high-level location list API, which
abstracts the differences between the encoding schemes and can be
used by users who don't care about those (such as LLDB).

This patch implements the first part: a callback-based
"visitLocationList" API. This function parses a single location list,
calling a user-specified callback for each entry. This is going to be
the base API on which the other location list functions (right now, just
the dumping code) are built.
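
A minimal sketch of the shape of such an interface (standalone C++; the names and types are illustrative assumptions, not the actual LLVM API):

```
#include <cstdint>
#include <functional>
#include <vector>

// One raw, unabstracted entry of a location list.
struct LocListEntry {
  uint8_t Kind;   // a DW_LLE_*-style opcode
  uint64_t Op0;   // first raw operand (e.g. begin address/offset)
  uint64_t Op1;   // second raw operand (e.g. end address/offset)
};

// Parses a single location list, handing each entry to the callback as
// soon as it is decoded; nothing is cached or materialized. Returns
// false if the callback requested an early stop.
bool visitLocationList(const std::vector<LocListEntry> &Stream,
                       const std::function<bool(const LocListEntry &)> &Cb) {
  for (const LocListEntry &E : Stream) {
    if (!Cb(E))
      return false;
    if (E.Kind == 0) // a DW_LLE_end_of_list-style terminator
      break;
  }
  return true;
}
```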

Future patches will do something similar for the v4 location lists, and
add a mechanism to translate raw entries into concrete address ranges.

Reviewers: dblaikie, probinson, JDevlieghere, aprantl, SouraVX

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69672
2019-11-06 16:25:06 +01:00
Sanjay Patel 8e34dd941c [x86] avoid crashing when splitting AVX stores with non-simple type (PR43916)
The store splitting transform was assuming a simple type (MVT),
but that's not necessarily the case as shown in the test.
2019-11-06 09:28:41 -05:00
Momchil Velikov d91ea7fc6f [AArch64] Move the branch relaxation pass after BTI insertion
Summary:
Inserting BTI instructions can push branch destinations out of range.

The branch relaxation pass itself cannot insert indirect branches since `TargetInstrInfo::insertIndirectBranch` is not implemented for AArch64 (presumably the +/-128 MB direct branch range is more than enough in practice).

Testing this is a bit tricky.

The original test case we have is 155kloc/6.1M. I've generated a test case using this program:
```
#include <iostream>

int main() {
  std::cout << R"src(int test();
void g0(), g1(), g2(), g3(), g4(), e();

void f(int v) {
  if ((test() & 2) == 0) {
  switch (v) {
  case 0:
    g0();
  case 1:
    g1();
  case 2:
    g2();
  case 3:
    g3();
  }
)src";

  const int N = 8176;

  for (int i = 0; i < N; ++i)
    std::cout << "    void h" << i << "();\n";
  for (int i = 0; i < N; ++i)
    std::cout << "    h" << i << "();\n";

  std::cout << R"src(
  } else {
    e();
  }
}
)src";
}
```
which is still a bit too much to commit as a regression test, IMHO.

Reviewers: t.p.northover, ostannard

Reviewed By: ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69118

Change-Id: Ide5c922bcde08ff4cf635da5e52365525a997a0a
2019-11-06 12:46:50 +00:00
Roman Lebedev 4fe94d0331
[LoopUnroll] countToEliminateCompares(): fix handling of [in]equality predicates (PR43840)
Summary:
I believe this bisects to https://reviews.llvm.org/D44983
(`[LoopUnroll] Only peel if a predicate becomes known in the loop body.`)

While that revision did contain tests showing arguably subpar peeling
for [in]equality predicates that do [not] happen in the middle of the loop,
it also disabled peeling for the *first* loop iteration,
because the latch would be canonicalized to an [in]equality comparison.

That was intentional, as per https://reviews.llvm.org/D44983#1059583.
I'm not 100% sure that I'm using the correct checks here,
but this fix appears to be going in the right direction.

Let me know if I'm missing some checks.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=43840 | PR43840 ]].

Reviewers: fhahn, mkazantsev, efriedma

Reviewed By: fhahn

Subscribers: xbolva00, hiraditya, zzheng, llvm-commits, fhahn

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69617
2019-11-06 15:08:59 +03:00
Roman Lebedev 432a12c803
[NFC][LoopUnroll] Update test coverage for peeling w/ inequality predicates 2019-11-06 15:08:59 +03:00
dfukalov 47a5c36b37 [AMDGPU] Improve code size cost model (part 2)
Summary: Added cost estimates for ShuffleVector and for some cast and arithmetic instructions

Reviewers: rampitec

Reviewed By: rampitec

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, zzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69629
2019-11-06 13:55:48 +03:00
Sjoerd Meijer 6c2a4f5ff9 [TTI][LV] preferPredicateOverEpilogue
We have two ways to steer the vectoriser towards creating a predicated
vector body instead of a scalar epilogue. To force this, we have 1) a
command-line option and 2) a pragma. This adds a third: a target hook in
TargetTransformInfo that can be queried as to whether predication is
preferred, which allows the vectoriser to make the decision without it
being forced.
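
Schematically, the decision now has three inputs, something like this (a hypothetical C++ sketch; all names are made up, not the real TTI interface):

```
enum class BodyStyle { ScalarEpilogue, PredicatedBody };

struct TTISketch {
  virtual ~TTISketch() = default;
  // Default: keep the current behavior (scalar epilogue); a target like
  // ARM MVE can override this to prefer a predicated, tail-folded body.
  virtual bool preferPredicateOverEpilogue() const { return false; }
};

BodyStyle pickBodyStyle(bool ForcedByOption, bool ForcedByPragma,
                        const TTISketch &TTI) {
  if (ForcedByOption || ForcedByPragma)     // ways 1) and 2): forced
    return BodyStyle::PredicatedBody;
  return TTI.preferPredicateOverEpilogue()  // way 3): target preference
             ? BodyStyle::PredicatedBody
             : BodyStyle::ScalarEpilogue;
}
```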

While this change behaves as a non-functional change for now, it shows the
required TTI plumbing, usage of this new hook in the vectoriser, and the
beginning of an ARM MVE implementation. I will follow up on this with:
- a complete MVE implementation, see D69845.
- a patch to disable this, i.e. we should respect "vector_predicate(disable)"
  and its corresponding loop hint.

Differential Revision: https://reviews.llvm.org/D69040
2019-11-06 10:14:20 +00:00
Simon Tatham 6c3fee47a6 [ARM,MVE] Add intrinsics for gather/scatter load/stores.
This patch adds two new families of intrinsics, both of which are
memory accesses taking a vector of locations to load from / store to.

The vldrq_gather_base / vstrq_scatter_base intrinsics take a vector of
base addresses, and an immediate offset to be added consistently to
each one. vldrq_gather_offset / vstrq_scatter_offset take a scalar
base address, and a vector of offsets to add to it. The
'shifted_offset' variants also multiply each offset by the element
size, so that the vector effectively contains array indices.
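
A usage sketch in C (the exact intrinsic spellings, e.g. vldrwq_gather_offset_u32 with the size letter, are assumptions based on the naming described here):

```
#include <arm_mve.h>

// Gather four 32-bit elements from base[offsets[i]], increment them,
// then scatter them back; the 'w' variants are the word-sized forms.
void bump(uint32_t *base, uint32x4_t offsets) {
  uint32x4_t v = vldrwq_gather_offset_u32(base, offsets);
  v = vaddq_n_u32(v, 1);
  vstrwq_scatter_offset_u32(base, offsets, v);
}
```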

At the IR level, these operations are represented by a single set of
four IR intrinsics: {gather,scatter} × {base,offset}. The other
details (signed/unsigned, shift, and memory element size as opposed to
vector element size) are all specified by IR intrinsic polymorphism
and immediate operands, because that made the selection job easier
than making a huge family of similarly named intrinsics.

I considered using the standard IR representations such as
llvm.masked.gather, but they're not a good fit. In order to use
llvm.masked.gather to represent a gather_offset load with element size
smaller than a pointer, you'd have to expand the <8 x i16> vector of
offsets into an <8 x i16*> vector of pointers, which would be split up
during legalization, so you'd spend most of your time undoing the mess
it had made. Also, ISel support for llvm.masked.gather would be easy
enough in a trivial way (you can expand it into a gather-base load
with a zero immediate offset), but instruction-selecting lots of
fiddly idioms back into all the _other_ MVE load instructions would be
much more work. So I think dedicated IR intrinsics are the more
sensible approach, at least for the moment.

On the clang tablegen side, I've added two new features to the
Tablegen source accepted by MveEmitter: a 'CopyKind' type node for
defining a type that varies with the parameter type (it lets you ask
for an unsigned integer type of the same width as the parameter), and
an 'unsignedflag' value node for passing an immediate IR operand which
is 0 for a signed integer type or 1 for an unsigned one. That lets me
write each kind of intrinsic just once and get all its subtypes and
immediate arguments generated automatically.

Also I've tweaked the handling of pointer-typed values in the code
generation part of MveEmitter: they're generated as Address rather
than Value (i.e. including an alignment) so that they can be given to
the ordinary IR load and store operations, but I'd omitted the code to
convert them back to Value when they're going to be used as an
argument to an IR intrinsic.

On the MC side, I've enhanced MVEVectorVTInfo so that it can tell you
not only the full assembly-language suffix for a given vector type
(like 's32' or 'u16') but also the numeric-only one used by store
instructions (just '32' or '16').

Reviewers: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69791
2019-11-06 09:01:42 +00:00
QingShan Zhang 529bb8a980 [PowerPC] Fix the incorrect 'RM' flag set on load/store instr
The 'RM' flag models the "Rounding Mode"; it has nothing to do with the load/store instructions.

Differential Revision: https://reviews.llvm.org/D69551
2019-11-06 02:46:37 +00:00
Vladimir Vereschaka bcbb121ff6 Fixed profdata file size detection on Windows systems.
Space characters are allowed in group names on Windows systems (for
example: "Domain Users"). In that case the test extracts the wrong field
from the output when getting the size of the profdata file.

This patch avoids printing the group names in the test output and
extracts the proper field as the file size.

Differential Revision: https://reviews.llvm.org/D69317
2019-11-05 17:09:50 -08:00
Teresa Johnson b36e3a8bac [IRMover] Set Address Space for moved global values
Summary:
Set Address Space when creating a new function (from another).

Fix PR41154.

Patch by Ehud Katz <ehudkatz@gmail.com>

Reviewers: tejohnson, chandlerc

Reviewed By: tejohnson

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69361
2019-11-05 16:32:48 -08:00
Sanjay Patel 7effd37b00 [SLP] add tests for 2-wide reductions; NFC 2019-11-05 17:18:37 -05:00
Simon Atanasyan 37f4955c9b [mips] Fix `getRegForInlineAsmConstraint` to not crash on an empty Constraint 2019-11-06 00:50:39 +03:00
Amy Huang a078c77d72 [MIR] Add MIR parsing for heap alloc site instruction markers
Summary:
This patch adds MIR parsing and printing for heap alloc markers, which were
added in D69136. They are printed as an operand similar to pre-/post-instr
symbols, with a heap-alloc-marker token and a metadata node.
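
The printed form might look roughly like this (an illustrative MIR line; the instruction, operands, and metadata id are hypothetical):

```
CALL64pcrel32 @alloc_fn, csr_64, implicit $rsp, implicit-def $rax, heap-alloc-marker !1
```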

Reviewers: rnk

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69864
2019-11-05 12:57:45 -08:00
Benjamin Kramer 5f158d8e21 [X86] Gate select->fmin/fmax transform on NoSignedZeros instead of UnsafeFPMath 2019-11-05 21:28:41 +01:00
Philip Reames 027aa27d95 [X86/Atomics] (Semantically) revert G246098, switch back to the old atomic example
When writing an email for a follow up proposal, I realized one of the diffs in the committed change was incorrect.  Digging into it revealed that the fix is complicated enough to require some thought, so reverting in the meantime.

The problem is visible in this diff (from the revert):
 ; X64-SSE-LABEL: store_fp128:
 ; X64-SSE:       # %bb.0:
-; X64-SSE-NEXT:    movaps %xmm0, (%rdi)
+; X64-SSE-NEXT:    subq $24, %rsp
+; X64-SSE-NEXT:    .cfi_def_cfa_offset 32
+; X64-SSE-NEXT:    movaps %xmm0, (%rsp)
+; X64-SSE-NEXT:    movq (%rsp), %rsi
+; X64-SSE-NEXT:    movq {{[0-9]+}}(%rsp), %rdx
+; X64-SSE-NEXT:    callq __sync_lock_test_and_set_16
+; X64-SSE-NEXT:    addq $24, %rsp
+; X64-SSE-NEXT:    .cfi_def_cfa_offset 8
 ; X64-SSE-NEXT:    retq
   store atomic fp128 %v, fp128* %fptr unordered, align 16
   ret void

The problem here is threefold:
1) x86-64 doesn't guarantee atomicity of anything larger than 8 bytes.  Some platforms observably break this guarantee, others don't, but the codegen isn't considering this, so it's wrong on at least some platforms.
2) When I started to track down the problem, I discovered that DAGCombiner had stripped the atomicity off the store entirely.  This comes down to idiomatic usage of DAG.getStore passing all MMO components separately, as opposed to just passing the MMO.
3) On x86 (not -64), there are cases where 8-byte atomicity is supported, but only for floating point operations.  This would seem to imply that operation typing matters for correctness, and DAGCombine happily folds away bitcasts.  I'm not 100% sure there's a problem here, but I'm not entirely sure there isn't either.

I plan on returning to each issue in turn;  sorry for the churn here.
2019-11-05 11:24:27 -08:00
Sid Manning 6cd47f9dd7 [llvm-objdump] Fix spurious "The end of the file was unexpectedly encountered" if a SHT_NOBITS sh_offset is larger than the file size
Running llvm-objdump -D on this file:

  int a[100000];
  int main() { return 0; }

will produce the error: "The end of the file was unexpectedly encountered".

This happens because of the checkOffset check in Binary.h: (Addr + Size > M.getBufferEnd()).

The sh_offset and sh_size fields can be ignored for SHT_NOBITS sections.
Fix the error by changing ELFObjectFile<ELFT>::getSectionContents to use
the file base for SHT_NOBITS sections.
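
A simplified standalone sketch of the idea (hypothetical types, not the actual ELFObjectFile code):

```
#include <cstddef>
#include <cstdint>

struct SectionView { const uint8_t *Data; size_t Size; };

// A SHT_NOBITS section occupies no bytes in the file, so its view must
// not be derived from sh_offset, which may legally point past EOF.
SectionView getSectionContents(uint32_t ShType, uint64_t ShOffset,
                               uint64_t ShSize, const uint8_t *FileBase) {
  const uint32_t SHT_NOBITS = 8; // ELF section type for .bss-like sections
  if (ShType == SHT_NOBITS)
    return {FileBase, 0}; // empty view anchored in-bounds; no EOF error
  return {FileBase + ShOffset, static_cast<size_t>(ShSize)};
}
```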

Reviewed By: grimar, MaskRay

Differential Revision: https://reviews.llvm.org/D69192
2019-11-05 11:14:12 -08:00
Benjamin Kramer 00e53d912d [X86] Specifically limit fmin/fmax commutativity to NoNaNs + NoSignedZeros
The backend UnsafeFPMath flag is not a superset of all the others, so
limit it to the exact bits needed.
2019-11-05 19:34:06 +01:00
Daniel Sanders e74c5b9661 [globalisel] Rename G_GEP to G_PTR_ADD
Summary:
G_GEP is rather poorly named. It's a simple pointer+scalar addition and
doesn't support any of the complexities of getelementptr. I therefore
propose that we rename it. There's a G_PTR_MASK so let's follow that
convention and go with G_PTR_ADD
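
In generic MIR the renamed operation looks like this (illustrative virtual registers):

```
%2:_(p0) = G_PTR_ADD %0, %1(s64)
```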

Reviewers: volkan, aditya_nandakumar, bogner, rovka, arsenm

Subscribers: sdardis, jvesely, wdng, nhaehnle, hiraditya, jrtc27, atanasyan, arphaman, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69734
2019-11-05 10:31:17 -08:00
Stanislav Mekhanoshin de56a89072 [AMDGPU] return Fail instead of SoftFail from addOperand()
The addOperand() method of the AMDGPU disassembler returns SoftFail
on error. All instances which may lead to that point are
impossible encodings, not something which is possible to
encode but semantically incorrect, as SoftFail is described.

Then tablegen generates a check of the following form:

if (Decode...(..) == MCDisassembler::Fail) { return MCDisassembler::Fail; }

Since we can only return Success and SoftFail, that check is dead
code, as detected by the static code analyzer.

Solution: return Fail as it should be.

See https://bugs.llvm.org/show_bug.cgi?id=43886

Differential Revision: https://reviews.llvm.org/D69819
2019-11-05 10:25:27 -08:00
Steven Wu e64f7bfefe Revert "[Object][MachO] Rewrite macho-invalid-fat-arch-size into YAML"
Trying to construct the invalid binary triggers an assertion.
2019-11-05 09:34:26 -08:00
Steven Wu bc496677d0 [Object][MachO] Rewrite macho-invalid-fat-arch-size into YAML
Rewrite one of the invalid macho test input files as a YAML file. The
original invalid macho is breaking our internal test infrastructure
because it is too broken to be copied around.

rdar://problem/56879982
2019-11-05 09:07:06 -08:00
Fangrui Song 5ad0103d8a [llvm-objcopy][ELF] Implement --only-keep-debug
--only-keep-debug produces a debug file as the output that only
preserves contents of sections useful for debugging purposes (the
binutils implementation preserves SHT_NOTE and non-SHF_ALLOC sections),
by changing their section types to SHT_NOBITS and rewriting file
offsets.

See https://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html

The intended use case is:

```
llvm-objcopy --only-keep-debug a a.dbg
llvm-objcopy --strip-debug a b
llvm-objcopy --add-gnu-debuglink=a.dbg b
```

The current layout algorithm is incapable of deleting contents and
shrinking segments, so it is not suitable for implementing the
functionality.

This patch adds a new algorithm which assigns sh_offset to sections
first, then modifies p_offset/p_filesz of program headers. It bears a
resemblance to lld/ELF/Writer.cpp.

Reviewed By: jhenderson, jakehehrlich

Differential Revision: https://reviews.llvm.org/D67137
2019-11-05 08:56:15 -08:00
David Green 03bf229bd4 [ARM] Multi-vector MVE spill test
This is a test from D67169 that can now be added, since the vld2
intrinsics have been committed upstream.
2019-11-05 16:17:25 +00:00
Francis Visoiu Mistrih 47d1029788 [ObjC][ARC] Ignore lifetime markers between *ReturnValue calls
When eliminating a pair of

`llvm.objc.autoreleaseReturnValue`

followed by

`llvm.objc.retainAutoreleasedReturnValue`

we need to make sure that the instructions in between are safe to
ignore.

Other than bitcasts and useless GEPs, it's also safe to ignore lifetime
markers for both static allocas (lifetime.start/lifetime.end) and dynamic
allocas (stacksave/stackrestore).

These get added by the inliner as part of the return sequence and can
prevent the transformation from happening in practice.
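
The post-inlining shape this unblocks looks roughly like the following hand-written IR (illustrative, not a test from the patch):

```
declare i8* @llvm.objc.autoreleaseReturnValue(i8*)
declare i8* @llvm.objc.retainAutoreleasedReturnValue(i8*)
declare void @llvm.lifetime.end.p0i8(i64 immarg, i8* nocapture)

define i8* @caller(i8* %obj, i8* %alloca.cast) {
  %r = call i8* @llvm.objc.autoreleaseReturnValue(i8* %obj)
  ; inliner-inserted return-sequence marker sitting between the pair;
  ; it is now treated as safe to look through
  call void @llvm.lifetime.end.p0i8(i64 8, i8* %alloca.cast)
  %r2 = call i8* @llvm.objc.retainAutoreleasedReturnValue(i8* %r)
  ret i8* %r2
}
```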

Differential Revision: https://reviews.llvm.org/D69833
2019-11-05 06:39:22 -08:00
Francis Visoiu Mistrih 68f39de042 [NFC][ObjC][ARC] Add tests for OptimizeRetainRVCall
Add tests for bitcasts + zero GEPs, and pre-commit tests for lifetime
markers.
2019-11-05 06:39:22 -08:00
Sanjay Patel 3ce0c78501 [InstCombine] add tests for shift-logic-shift; NFC
This is based on existing CodeGen test files for x86 and AArch64.
The corresponding potential transform is shown in:
rL370617
2019-11-05 08:18:50 -05:00
David Green f01b9aa89e [MachineScheduler] Enable AA in PostRA Machine scheduler
This adds AA to Post-RA Machine Scheduling, allowing the pass more
freedom when handling memory operations.

My understanding is that this was just never done, not that it is
inherently incorrect to do so. The older PostRA List scheduler already
makes use of AA, it's just that the MI PostRA Scheduler was never taught
to use it.

Differential Revision: https://reviews.llvm.org/D69814
2019-11-05 11:58:50 +00:00
David Green cf581d7977 [ARM] Always enable UseAA in the arm backend
This feature controls whether AA is used in the backend, and was
previously turned on for certain subtargets to help create less
constrained scheduling graphs. This patch turns it on for all
subtargets, so that they can all make use of the extra information to
produce better code.

Differential Revision: https://reviews.llvm.org/D69796
2019-11-05 10:46:56 +00:00
David Green 7d9af03ff7 [Scheduling][ARM] Consistently enable PostRA Machine scheduling
In the ARM backend, for historical reasons only some targets
use machine scheduling. The rest use the old list scheduler, as they
are using itineraries and the list scheduler seems to produce better code
(and not crash by running out of registers on v6m code). So whether to use
the MIScheduler or not is checked at runtime from the subtarget
features.

This is fine, except for post-RA scheduling. Whether to use the old
post-RA list scheduler or the post-RA machine scheduler is decided as the
pass manager is set up, in ARM's case from a newly constructed subtarget.
In some situations, like LTO, this won't include the correct CPU, so it
can pick the wrong option. This can have a surprising effect on
performance.

To fix that, this patch overrides targetSchedulesPostRAScheduling and
addPreSched2 in the ARM backend, adding _both_ post-ra schedulers and
picking at runtime which to execute. To pick between the two I've had to
add an enablePostRAMachineScheduler() method that normally returns
enableMachineScheduler() && enablePostRAScheduler(), which can be
overridden to enable just one of PostRAMachineScheduler vs
PostRAScheduler.
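
A hypothetical sketch of that arbitration (simplified, standalone; not the actual TargetSubtargetInfo code):

```
// Both post-RA schedulers are added to the pass pipeline; which one
// actually runs is decided per-function from the subtarget.
struct SubtargetSketch {
  bool MISched = false;     // stands in for enableMachineScheduler()
  bool PostRASched = false; // stands in for enablePostRAScheduler()
  virtual ~SubtargetSketch() = default;
  // Default policy; a target can override to force exactly one of the
  // PostRAMachineScheduler / PostRAScheduler passes.
  virtual bool enablePostRAMachineScheduler() const {
    return MISched && PostRASched;
  }
};

void schedulePostRA(const SubtargetSketch &ST) {
  if (ST.enablePostRAMachineScheduler()) {
    // run the post-RA MachineScheduler
  } else if (ST.PostRASched) {
    // run the old post-RA list scheduler
  }
}
```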

Thanks to David Penry for identifying this problem.

Differential Revision: https://reviews.llvm.org/D69775
2019-11-05 10:44:55 +00:00
Roman Lebedev 12c4a71ca9
[LoopUnroll] peel-loop-conditions.ll: add some 'is even/odd' peeling tests 2019-11-05 13:02:57 +03:00
Roman Lebedev ccf1a5f4bb
[InstCombine] dropRedundantMaskingOfLeftShiftInput(): truncation (PR42563)
Summary:
That fold keeps growing and growing :(
I think this may be one of the last pieces for it.

Since D67677/D67725, the fold knows the general form
of the pattern - where some masking is needed:
https://rise4fun.com/Alive/F5R
https://rise4fun.com/Alive/gslRa

But there is one more huge piece missing: if you are extracting some bits,
it is entirely possible that the origin is wider than the extraction,
i.e. there may be a truncation. And we don't deal with that yet.

But we can, and the generalization remains fully identical:
https://rise4fun.com/Alive/Uar
https://rise4fun.com/Alive/5SW
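
As a hand-written instance of the truncating variant (illustrative, not one of the patch's tests):

```
define i32 @pat(i64 %x) {
  ; keep the low 31 bits ...
  %masked = and i64 %x, 2147483647
  %t = trunc i64 %masked to i32
  ; ... but the shl then discards bit 31 anyway, so the masking is
  ; redundant and the 'and' can be dropped entirely:
  %r = shl i32 %t, 1
  ret i32 %r
}
```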

After a preparatory cleanup I think the diff looks rather clean.

One missing piece is that in some patterns (especially pat. b),
`-1` only needs to be `-1` in the final type, but that is for later.

https://bugs.llvm.org/show_bug.cgi?id=42563

Reviewers: spatel, nikic

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69125
2019-11-05 12:41:26 +03:00
Luís Marques 0d47c7aba3 [RISCV] Add InstrInfo areMemAccessesTriviallyDisjoint hook
Summary: Introduces the `InstrInfo::areMemAccessesTriviallyDisjoint`
hook. The test could check for instruction reorderings, but to avoid
being brittle it just checks instruction dependencies.

Reviewers: asb, lenary
Reviewed By: lenary
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67046
2019-11-05 09:39:06 +00:00
Pavel Labath b4c5b8f3f5 DWARFDebugLoclists: Make it possible to read relocated addresses
Summary:
Handling relocations was not needed when the loclists section was a
DWO-only thing. But since DWARF5, it is possible to use it in regular
objects too, and the standard permits embedding addresses into the
section directly. These addresses need to be relocated in unlinked
files.

Reviewers: JDevlieghere, dblaikie, probinson

Subscribers: aprantl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68271
2019-11-05 10:21:39 +01:00
Sjoerd Meijer 92164cf25d Recommit "[HardwareLoops] Optimisation remarks"
With a few things fixed:
- initialisation of the optimisation remark pass (this was causing the buildbot
  failures on PPC),
- a test case.

Differential Revision: https://reviews.llvm.org/D69660
2019-11-05 09:06:22 +00:00
David Green edfb8eea57 [AArch64] Update test checks on merge-store-dependency.ll. NFC 2019-11-05 09:01:47 +00:00
Craig Topper 103968d147 [X86] Lower the cost of avx512 horizontal bool and/or reductions to 2*log2(bitwidth)+1 for legal types.
This better represents the kshift+binop we'd get for each stage
before the final extract. It's likely we'll do even better by
doing a kmov and a cmp with a GPR, but this is a good start.
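
For example, an OR-reduction of a 16-bit mask costs 2*log2(16)+1 = 9 operations, along these lines (a sketch of the assumed lowering, not actual compiler output):

```
kshiftrw $8, %k0, %k1      # stage 1: shift ...
korw     %k1, %k0, %k0     #          ... and binop
kshiftrw $4, %k0, %k1      # stage 2
korw     %k1, %k0, %k0
kshiftrw $2, %k0, %k1      # stage 3
korw     %k1, %k0, %k0
kshiftrw $1, %k0, %k1      # stage 4
korw     %k1, %k0, %k0
kmovd    %k0, %eax         # final extract
```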

The default handling was costing a worst-case single-source
permute shuffle of the vector before the binop. This worst
case assumes the shuffle might have to be emulated with
extracts and inserts. But since we know we're doing a reduction
we can assume we'll get kshift lowering.

There's still some room for improvement here, but this is
much better than it was.
2019-11-04 22:58:04 -08:00
aqjune 58acbce3de [IR] Add Freeze instruction
Summary:
- Define Instruction::Freeze, let it be UnaryOperator
- Add support for freeze to LLLexer/LLParser/BitcodeReader/BitcodeWriter
  The format is `%x = freeze <ty> %v` (see the example after this list)
- Add support for freeze instruction to llvm-c interface.
- Add m_Freeze in PatternMatch.
- Erase freeze when lowering IR to SelDag.
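
A minimal usage example with the syntax above (hand-written, not from the patch):

```
define i32 @f(i32 %v) {
  ; freeze pins a potentially poison/undef %v to some arbitrary but
  ; fixed value, so both uses of %x below observe the same value
  %x = freeze i32 %v
  %c = icmp ult i32 %x, 10
  %r = select i1 %c, i32 %x, i32 0
  ret i32 %r
}
```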

Reviewers: deadalnix, hfinkel, efriedma, lebedev.ri, nlopes, jdoerfert, regehr, filcab, delcypher, whitequark

Reviewed By: lebedev.ri, jdoerfert

Subscribers: jfb, kristof.beyls, hiraditya, lebedev.ri, steven_wu, dexonsmith, xbolva00, delcypher, spatel, regehr, trentxintong, vsk, filcab, nlopes, mehdi_amini, deadalnix, llvm-commits

Differential Revision: https://reviews.llvm.org/D29011
2019-11-05 15:54:56 +09:00
Craig Topper f65493a83e [X86] Teach X86MCInstLower to swap operands of commutable instructions to enable 2-byte VEX encoding.
Summary:
The two source operands of commutable instructions are encoded in the
VEX.VVVV field and in the r/m field of the ModRM byte plus the VEX.B
field.

The VEX.B field is missing from the 2-byte VEX encoding. If the
VEX.VVVV source is 0-7 and the other register is 8-15 we can
swap them to avoid needing the VEX.B field. This works as long as
the VEX.W, VEX.mmmmm, and VEX.X fields are also not needed.
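
For instance (illustrative instruction choice; encodings abbreviated to their prefixes):

```
# %xmm8 in the r/m slot needs VEX.B, forcing the 3-byte C4 prefix:
vpaddd %xmm8, %xmm1, %xmm2   # 3-byte VEX
# swapping the commutable sources puts %xmm8 in VEX.VVVV, which can
# encode registers 8-15 by itself, so the 2-byte C5 prefix suffices:
vpaddd %xmm1, %xmm8, %xmm2   # 2-byte VEX
```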

Fixes PR36706.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68550
2019-11-04 22:07:46 -08:00
aqjune 31be9f3f7d Fix clone_constant_impl to correctly deal with null pointers
Summary:
This patch resolves the following llvm-c-test error

```
LLVM ERROR: LLVMGetValueKind returned incorrect type
```

which arises when the input bitcode contains a null pointer.

Reviewers: jdoerfert, CodaFi, deadalnix

Reviewed By: jdoerfert

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68928
2019-11-05 13:53:52 +09:00
Yonghong Song fff2721286 [BPF] Fix CO-RE bugs with bitfields
Bitfield handling is not robust in the current implementation.
I have seen the two issues described below.

Issue 1:
  struct s {
    long long f1;
    char f2;
    char b1:1;
  } *p;
  The current approach will generate an access bit size
  56 (from b1 to the end of structure) which will be
  rejected as it is not power of 2.

Issue 2:
  struct s {
    char f1;
    char b1:3;
    char b2:5;
    char b3:6;
    char b4:2;
    char f2;
  };
  LLVM will group the 4 bitfields together into 2 bytes. But
  loading 2 bytes is not correct, as it violates the alignment
  requirement. Note that sometimes LLVM breaks a large
  bitfield group into multiple groups, but not in this case.

To resolve the above two issues, this patch takes a
different approach. The alignment for the structure is used
to construct the offset of the bitfield access. The bitfield
incurred memory access is an aligned memory access with alignment/size
equal to the alignment of the structure.
This also simplified the code.
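
A worked illustration of the new scheme on the first struct above (the byte and bit positions assume a typical little-endian layout and are illustrative only):

```
/* struct s has alignment 8 (from f1), so accessing b1 becomes one
 * aligned 8-byte load of the quadword holding f2 and b1, followed by
 * shift and mask: */
unsigned long long w = *(unsigned long long *)((char *)p + 8);
unsigned char b1 = (w >> 8) & 0x1; /* f2 occupies bits 0-7, b1 is bit 8 */
```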

This may not be the optimal memory access in terms of memory access
width. But this should be okay, since extracting the bitfield value
takes the same amount of work regardless of the
memory access width.

Differential Revision: https://reviews.llvm.org/D69837
2019-11-04 20:08:05 -08:00
Vedant Kumar a5c8ec4baa [CGDebugInfo] Emit subprograms for decls when AT_tail_call is understood
Currently, clang emits subprograms for declared functions when the
target debugger or DWARF standard is known to support entry values
(DW_OP_entry_value & the GNU equivalent).

Treat DW_AT_tail_call the same way to allow debuggers to follow cross-TU
tail calls.

Pre-patch debug session with a cross-TU tail call:

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

Post-patch (note that the tail-calling frame, "helper", is visible):

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f80 main`helper [opt] [artificial]
    frame #2: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

rdar://46577651

Differential Revision: https://reviews.llvm.org/D69743
2019-11-04 15:14:24 -08:00
Craig Topper b2b6a54f84 [X86] Add support for -mvzeroupper and -mno-vzeroupper to match gcc
-mvzeroupper will force the vzeroupper insertion pass to run on
CPUs that normally wouldn't. -mno-vzeroupper disables it on CPUs
where it normally runs.

To support this with the default feature handling in clang, we
need a vzeroupper feature flag in X86.td. Since this flag has
the opposite polarity of the fast-partial-ymm-or-zmm-write we
used to use to disable the pass, we now need to add this new
flag to every CPU except KNL/KNM and BTVER2 to keep identical
behavior.

Remove -fast-partial-ymm-or-zmm-write, which is no longer used.
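
Example usage (the CPU choice is just for illustration):

```
clang -O2 -mavx -mno-vzeroupper a.c    # suppress vzeroupper insertion
clang -O2 -march=knl -mvzeroupper a.c  # force it on a CPU where it is off by default
```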

Differential Revision: https://reviews.llvm.org/D69786
2019-11-04 11:03:54 -08:00
Philip Reames 6ff439b57f [SimplifyCFG] Use a (trivially) dominanting widenable branch to remove later slow path blocks
This transformation is a variation on the GuardWidening transformation we have checked in as its own pass. Instead of focusing on merging (i.e. hoisting and simplifying) two widenable branches, this transform makes the observation that simply removing a second slow-path block (by reusing an existing one) is often a very useful canonicalization. This may lead to later merging, or may not. This is a useful generalization when the intermediate block has loads whose dereferenceability is hard to establish.
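
The shape of the pattern looks roughly like this hand-written IR sketch (block and value names are illustrative):

```
  %wc1 = call i1 @llvm.experimental.widenable.condition()
  %g1 = and i1 %c1, %wc1
  br i1 %g1, label %guarded, label %slowpath1
guarded:
  %wc2 = call i1 @llvm.experimental.widenable.condition()
  %g2 = and i1 %c2, %wc2
  ; the failure edge below can be retargeted at the dominating
  ; %slowpath1 block, making %slowpath2 dead:
  br i1 %g2, label %fastpath, label %slowpath2
```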

As noted in the patch, this can be generalized further, and will be.

Differential Revision: https://reviews.llvm.org/D69689
2019-11-04 11:03:28 -08:00
Sanjay Patel 113181e9bd [DAGCombine][MSP430] use shift amount threshold in DAGCombine (2/2)
Continuation of:
D69116

Contributes to a fix for PR43559:
https://bugs.llvm.org/show_bug.cgi?id=43559

See also D69099 and D69116

Use the TLI hook in DAGCombine.cpp to guard against creating
shift nodes that are not optimal for a target.

Patch by: @joanlluch (Joan LLuch)

Differential Revision: https://reviews.llvm.org/D69120
2019-11-04 13:41:41 -05:00