Commit Graph

50732 Commits

Sjoerd Meijer 5ea465ded7 [AArch64] Don't materialize 0 with "fmov h0, .." when FullFP16 is not supported
We were generating "fmov h0, wzr" instructions when FullFP16 is not enabled.
I've not added any tests because the problem was visible in
test/CodeGen/AArch64/arm64-zero-cycle-zeroing.ll, which I had to change:
I don't think Cyclone has FullFP16 enabled by default, so it shouldn't
be using this v8.2a instruction.

I've also removed these rdar tags; please shout if there are any objections.

Differential Revision: https://reviews.llvm.org/D43020

llvm-svn: 324581
2018-02-08 08:39:05 +00:00
Craig Topper 8d0c8c9be1 [X86] Support folding in a k-register OR when creating KORTEST from scalar compare of a bitcast from vXi1.
This should allow us to remove the kortest intrinsic from IR and use compare+bitcast+or in IR instead.
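
A minimal IR sketch (function and value names are illustrative) of the
compare+bitcast+or form this enables, with the OR folded into KORTEST:

    define i1 @any_bit_set(<16 x i1> %a, <16 x i1> %b) {
      %or = or <16 x i1> %a, %b          ; k-register OR, foldable into KORTEST
      %bc = bitcast <16 x i1> %or to i16
      %cmp = icmp ne i16 %bc, 0          ; Z flag is set when the OR is all 0s
      ret i1 %cmp
    }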

llvm-svn: 324580
2018-02-08 08:29:43 +00:00
Craig Topper 93505707b6 [X86] Allow KORTEST instruction to be used for testing if a mask is all ones
The KORTEST instruction sets the C flag if the result of ORing both operands together is all 1s. We can use this to lower (icmp eq/ne (bitcast (vXi1 X)), -1).
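
A sketch of that pattern in IR (names illustrative):

    define i1 @mask_all_ones(<16 x i1> %mask) {
      %bc = bitcast <16 x i1> %mask to i16
      %cmp = icmp eq i16 %bc, -1         ; C flag is set when the result is all 1s
      ret i1 %cmp
    }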

Differential Revision: https://reviews.llvm.org/D42772

llvm-svn: 324577
2018-02-08 07:54:16 +00:00
Craig Topper f5465f98d2 [X86] Don't emit KTEST instructions unless only the Z flag is being used
Summary:
KTEST has weird flag behavior. The Z flag is set if all bits in the AND of the k-registers are 0, and the C flag is set if all bits are 1. All other flags are cleared.

We currently emit this instruction in EmitTEST and don't check the condition code. This can lead to strange things like using the S flag after a KTEST for a signed compare.

The domain reassignment pass can also transform TEST instructions into KTEST and is not protected against the flag usage either. For now I've disabled this part of the domain reassignment pass. I tried to comment out the checks in the mir test so that we could recover them later, but I couldn't figure out how to get that to work.

This patch moves the KTEST handling into LowerSETCC and now creates a ktest+x86setcc. I've chosen this approach because I'd like to add support for the C flag for all ones in a followup patch. To do that requires that I can rewrite the condition code going in the x86setcc to be different than the original SETCC condition code.

This fixes PR36182. I'll file a PR to fix domain reassignment once this goes in. Should this be merged to 6.0?

Reviewers: spatel, guyblank, RKSimon, zvi

Reviewed By: guyblank

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D42770

llvm-svn: 324576
2018-02-08 07:45:55 +00:00
Serguei Katkov 66182d6c38 [SimplifyCFG] Re-apply Relax restriction for folding unconditional branches
The commit rL308422 introduced a restriction on folding unconditional
branches: if an empty block with an unconditional branch leads to the
header of a loop, then elimination of that basic block is prohibited.
However, this condition is unnecessarily strict: if eliminating the
basic block does not introduce more back edges, then we can eliminate
it.

The patch implements this relaxation of the restriction.
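
For illustration, a minimal sketch of the shape that is now foldable
(names made up): %forward is an empty block whose unconditional branch
leads to the loop header, and folding it adds no new back edge:

    define void @sketch(i1 %cond) {
    entry:
      br label %forward
    forward:                                  ; empty, unconditional branch
      br label %header
    header:                                   ; loop header
      br i1 %cond, label %header, label %exit
    exit:
      ret void
    }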

The test profile/Linux/counter_promo_nest.c in compiler-rt project
is updated to meet this change.

Reviewers: efriedma, mcrosier, pacxx, hsung, davidxl
Reviewed By: pacxx
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42691

llvm-svn: 324572
2018-02-08 07:16:29 +00:00
Francis Visoiu Mistrih da89d1812a [CodeGen] Print MachineBasicBlock labels using MIR syntax in -debug output
Instead of:

%bb.1: derived from LLVM BB %for.body

print:

bb.1.for.body:

Also use MIR syntax for MBB attributes like "align", "landing-pad", etc.

llvm-svn: 324563
2018-02-08 05:02:00 +00:00
Yonghong Song f2075aef68 bpf: Improve expanding logic in LowerSELECT_CC
LowerSELECT_CC is not generating the optimal Select_Ri pattern at the
moment: it is not guaranteed to place the ConstantNode on the RHS, which
makes the Select_Ri pattern fail to match.

A new testcase is added to the existing select_ri.ll. There is also an
existing case in cmp.ll that would be improved to use Select_Ri after
this patch; it is adjusted accordingly.
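
A sketch of the kind of IR affected (names illustrative): once the
constant ends up on the RHS of the comparison, the select can match
Select_Ri (register-immediate) instead of the register-register form:

    define i64 @sel(i64 %a, i64 %x, i64 %y) {
      %c = icmp sgt i64 %a, 100           ; immediate operand on the RHS
      %r = select i1 %c, i64 %x, i64 %y   ; can now match Select_Ri
      ret i64 %r
    }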

Reported-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 324560
2018-02-08 04:37:49 +00:00
Matt Arsenault b02cebf552 AMDGPU: Fix incorrect reordering when inline asm defines LDS address
Defs of operands outside of the instruction's explicit defs need
to be checked.

llvm-svn: 324554
2018-02-08 01:56:14 +00:00
Matt Arsenault c908e3f77a AMDGPU: Don't crash when trying to fold implicit operands
llvm-svn: 324550
2018-02-08 01:12:46 +00:00
Chandler Carruth 0be0cfa65b [x86] Fix nasty bug in the x86 backend that is essentially impossible to
hit from IR but creates a minefield for MI passes.

The x86 backend has fairly powerful logic to try and fold loads that
feed register operands to instructions into a memory operand on the
instruction. This is almost always a good thing, but there are specific
relocated loads that are only allowed to appear in specific
instructions. Notably, R_X86_64_GOTTPOFF is only allowed in `movq` and
`addq`. This patch blocks folding of memory operands using this
relocation unless the target is in fact `addq`.

The particular relocation indicates why we simply don't hit this under
normal circumstances. This relocation is only used for TLS, and it gets
used in very specific ways in conjunction with %fs-relative addressing.
The result is that loads using this relocation are essentially never
eligible for folding into an instruction's memory operands. Unless, of
course, you have an MI pass that inserts usage of such a load. I have
exactly such an MI pass and was greeted by truly mysterious miscompiles
where the linker replaced my instruction with a completely garbage byte
sequence. Go team.

This is the only such relocation I'm aware of in x86, but there may be
others that need to be similarly restricted.

Fixes PR36165.

Differential Revision: https://reviews.llvm.org/D42732

llvm-svn: 324546
2018-02-07 23:59:14 +00:00
Mircea Trofin 06ac8cfbd1 Verify profile data confirms large loop trip counts.
Summary:
Loops with inequality comparisons, such as:

   // unsigned bound
   for (unsigned i = 1; i < bound; ++i) {...}

have getSmallConstantMaxTripCount report a large maximum static
trip count - in this case, 0xfffffffe. However, profiling info
may show that the trip count is much smaller, and thus
counter-recommend vectorization.

This change:
- flips loop-vectorize-with-block-frequency on by default.
- validates that profiled loop frequency data supports vectorization
  when static info does not appear to counter-recommend it. Absence
  of profile data means we rely on static data, just as we've done
  so far.

Reviewers: twoh, mkuper, davidxl, tejohnson, Ayal

Reviewed By: davidxl

Subscribers: bkramer, llvm-commits

Differential Revision: https://reviews.llvm.org/D42946

llvm-svn: 324543
2018-02-07 23:29:52 +00:00
Craig Topper 8baa9c77e3 [X86] When doing callee save/restore for k-registers make sure we don't use KMOVQ on non-BWI targets
If we are saving/restoring k-registers, the default behavior of getMinimalRegisterClass will find the VK64 class with a spill size of 64 bits. This will cause the KMOVQ opcode to be used for save/restore. If we don't have BWI instructions, we need to constrain the returned class to give us VK16 with a 16-bit spill size. We can do this by passing either v16i1 or v64i1 into getMinimalRegisterClass.

Also add asserts to make sure BWI is enabled anytime we use KMOVD/KMOVQ. These are what caught this bug.

Fixes PR36256

Differential Revision: https://reviews.llvm.org/D42989

llvm-svn: 324533
2018-02-07 21:41:50 +00:00
Craig Topper ce26819f9e [X86] Auto-generate complete checks. NFC
llvm-svn: 324530
2018-02-07 21:29:30 +00:00
Momchil Velikov 74906a467c Revert "[DebugInfo] Improvements to representation of enumeration types (PR36168)"
Revert commit r324489; it broke LLDB tests.

llvm-svn: 324511
2018-02-07 20:28:47 +00:00
Alexey Bataev cd8d6de381 [SLP] Add a test for PR36280, NFC.
llvm-svn: 324510
2018-02-07 20:11:37 +00:00
Craig Topper d18430018d [X86] Regenerate test using update_mir_test_checks.py. NFC
llvm-svn: 324497
2018-02-07 18:32:15 +00:00
Rafael Espindola f4e3f3e31c Revert "AMDGPU: Add 32-bit constant address space"
This reverts commit r324487.

It broke clang tests.

llvm-svn: 324494
2018-02-07 18:09:35 +00:00
Jonas Devlieghere 36df7631b4 Revert dsymutil -update commits
Revert "[dsymutil][test] Check the updated dSYM instead of companion file."
Revert "[dsymutil] Upstream update feature."

llvm-svn: 324493
2018-02-07 17:35:27 +00:00
Momchil Velikov c502027efd [DebugInfo] Improvements to representation of enumeration types (PR36168)
This patch is the LLVM part of fixing the issues described in
https://bugs.llvm.org/show_bug.cgi?id=36168

* The representation of enumerator values in the debug info metadata now
  contains a boolean flag isUnsigned, which determines how the bits of
  the value are interpreted.
* The DW_TAG_enumeration_type DIE now always (for DWARF version >= 3)
  includes a DW_AT_type attribute, which refers to the underlying
  integer type, as suggested in DWARFv4 (5.7 Enumeration Type Entries).
* The debug info metadata for an enumeration type now contains (in its
  flags) an indication of whether this is a C++11 "fixed enum".
* For C++11 enumeration with a fixed underlying type, the DIE also
  includes the DW_AT_enum_class attribute (for DWARF version >= 4).
* Encoding of enumerator constants uses DW_FORM_sdata for signed values
  and DW_FORM_udata for unsigned values, as suggested by DWARFv4 (7.5.4
  Attribute Encodings).
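
For example, enumerator metadata with the new flag might look like this
(a sketch; only the isUnsigned field is new here):

    !0 = !DIEnumerator(name: "kMask", value: 255, isUnsigned: true)
    !1 = !DIEnumerator(name: "kNeg", value: -1)  ; isUnsigned defaults to false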

The changes should be backwards compatible:

* the isUnsigned attribute is optional and defaults to false.
* if the underlying type for the enumeration is not available, the
  enumerator values are considered signed.
* the FixedEnum flag defaults to clear.
* the bitcode format for DIEnumerator stores the unsigned flag in bit #1
  of the first record element, so the format does not change and the
  zero previously stored there is consistent with the false default for
  isUnsigned.

Differential Revision: https://reviews.llvm.org/D42734

llvm-svn: 324489
2018-02-07 16:46:33 +00:00
Marek Olsak 871c30e540 AMDGPU: Add 32-bit constant address space
Note: This is a candidate for LLVM 6.0, because it was planned to be
      in that release but was delayed due to a long review period.

Merge conflict in release_60 - resolution:
    Add "-p6:32:32" into the second (non-amdgiz) string.

Only scalar loads support 32-bit pointers. An address in a VGPR will
fail to compile. That's OK because the results of loads will only be used
in places where VGPRs are forbidden.

Updated AMDGPUAliasAnalysis and used SReg_64_XEXEC.
The tests cover all use cases we need for Mesa.
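
A sketch of a qualifying load, assuming address space 6 is the new
32-bit constant space (hence the "-p6:32:32" datalayout addition):

    @tbl = external addrspace(6) constant [16 x i32]

    define i32 @ld(i32 %i) {
      %p = getelementptr [16 x i32], [16 x i32] addrspace(6)* @tbl, i32 0, i32 %i
      %v = load i32, i32 addrspace(6)* %p, align 4  ; must be a scalar (SMEM) load
      ret i32 %v
    }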

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D41651

llvm-svn: 324487
2018-02-07 16:01:00 +00:00
Marek Olsak b2cc77985b AMDGPU: Remove the s_buffer workaround for GFX9 chips
Summary:
I checked the AMD closed source compiler and the workaround is only
needed when x3 is emulated as x4, which we don't do in LLVM.

SMEM x3 opcodes don't exist; instead, it is possible to use x4 with the
last component unused. If the last component is out of buffer bounds and
falls on the next 4K page, the hardware hangs.

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D42756

llvm-svn: 324486
2018-02-07 16:00:40 +00:00
Simon Pilgrim b4e789e8f6 [X86][AVX] Add PACKSSDW/PACKUSDW support for truncation of clamped values
SSE and shorter vector sizes will have to wait until we can add support for general SMIN/SMAX matching.
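
A sketch of the clamped-truncation pattern this matches (a signed clamp
to the i16 range followed by a truncate; names illustrative):

    define <8 x i16> @clamp_trunc(<8 x i32> %x) {
      %c1  = icmp slt <8 x i32> %x, <i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767>
      %min = select <8 x i1> %c1, <8 x i32> %x, <8 x i32> <i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767, i32 32767>
      %c2  = icmp sgt <8 x i32> %min, <i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768>
      %max = select <8 x i1> %c2, <8 x i32> %min, <8 x i32> <i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768, i32 -32768>
      %t   = trunc <8 x i32> %max to <8 x i16>  ; can now select to PACKSSDW
      ret <8 x i16> %t
    }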

llvm-svn: 324485
2018-02-07 15:48:44 +00:00
Jonas Devlieghere b419108203 [dsymutil][test] Check the updated dSYM instead of companion file.
This patch has llvm-dwarfdump check the whole dSYM, rather than the
hard-coded path to the Mach-O companion file. The hard-coded path might
be what's causing the Windows bot to fail.

llvm-svn: 324483
2018-02-07 15:18:21 +00:00
Jonas Devlieghere a4b9417b52 [dsymutil] Upstream update feature.
Now that dsymutil can generate accelerator tables, we can upstream the
update logic that, as the name implies, updates the accelerator tables
in an existing dSYM bundle. In combination with `-minimize` this can be
used to remove redundant .debug_(inlines|pubtypes|pubnames).

Differential revision: https://reviews.llvm.org/D42880

llvm-svn: 324480
2018-02-07 13:51:29 +00:00
Simon Pilgrim c90d79f80a [X86] Regenerate atomic i32 tests
llvm-svn: 324479
2018-02-07 13:28:23 +00:00
Simon Atanasyan 70498f81de [mips] Support 'y' operand code to print exact log2 of the operand
llvm-svn: 324477
2018-02-07 12:36:39 +00:00
Simon Atanasyan 737bec38d0 [mips] Handle 'M' and 'L' operand codes for memory operands
Both operand codes now work the same way for register and memory
operands: they print the high-order or low-order word of a double-word
register or memory location.
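
A sketch of how the codes are reached from IR-level inline asm (operand
modifiers and register names are illustrative):

    define void @lohi(i64* %p) {
      ; ${0:L} prints the low-order word, ${0:M} the high-order word
      call void asm sideeffect "lw $$t0, ${0:L}\0Alw $$t1, ${0:M}", "*m"(i64* %p)
      ret void
    }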

llvm-svn: 324476
2018-02-07 12:36:33 +00:00
Max Kazantsev b299ade2c5 Re-enable "[SCEV] Make isLoopEntryGuardedByCond a bit smarter"
The failures happened because of an assert which was overconfident
about SCEV's proving capabilities and is generally not valid.

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324473
2018-02-07 11:16:29 +00:00
Clement Courbet 10003e31f4 [MergeICmps] Re-commit rL324317 "Enable the MergeICmps Pass by default."
With fixes from rL324341.

Original commit message:

[MergeICmps] Enable the MergeICmps Pass by default.

Summary: Now that PR33325 is fixed, this should always improve the generated code.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D42793

llvm-svn: 324465
2018-02-07 09:58:55 +00:00
Serguei Katkov 69246ca787 Revert [SCEV] Make isLoopEntryGuardedByCond a bit smarter
Revert rL324453 commit which causes buildbot failures.

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324462
2018-02-07 09:10:08 +00:00
Sjoerd Meijer 8c0739347c [ARM] FP16 mov imm pattern
This is a follow-up to r324321, adding a match pattern for mov with an
FP16 immediate (and fixing the vfp_f16imm operand, which wasn't even
compiling).

Differential Revision: https://reviews.llvm.org/D42973

llvm-svn: 324456
2018-02-07 08:37:17 +00:00
Max Kazantsev dd5ee6f5d9 [SCEV] Make isLoopEntryGuardedByCond a bit smarter
Sometimes `isLoopEntryGuardedByCond` cannot prove the predicate `a > b`
directly. But it is a common situation when `a >= b` is known from ranges
and `a != b` is known from a dominating condition. This patch teaches SCEV
to combine these facts and prove the strict comparison via the non-strict
one.
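
A sketch of the situation, assuming `%a u>= %b` is already known from
ranges (names illustrative):

    define void @guarded(i64 %a, i64 %b) {
    entry:
      %ne = icmp ne i64 %a, %b            ; dominating condition: a != b
      br i1 %ne, label %loop, label %exit
    loop:                                 ; SCEV can now conclude a u> b here
      %iv = phi i64 [ %b, %entry ], [ %iv.next, %loop ]
      %iv.next = add i64 %iv, 1
      %cmp = icmp ult i64 %iv.next, %a
      br i1 %cmp, label %loop, label %exit
    exit:
      ret void
    }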

Differential Revision: https://reviews.llvm.org/D42835

llvm-svn: 324453
2018-02-07 07:56:26 +00:00
Michael Zolotukhin cae66ba5f8 The xfailed test from r324448 passed on one of the bots: remove it entirely for now.
llvm-svn: 324451
2018-02-07 06:54:11 +00:00
Chandler Carruth 282ae1632a [x86/retpoline] Make the external thunk names exactly match the names
that happened to end up in GCC.

This is really unfortunate, as the names don't have much rhyme or reason
to them. Originally in the discussions it seemed fine to rely on aliases
to map different names to whatever external thunk code developers wished
to use, but it turns out there are practical problems with that in the
kernel. And since we're discovering these practical problems late, and
since GCC has already shipped a release with one set of names, we are
forced, yet again, to blindly match what is there.

Somewhat rushing this patch out for the Linux kernel folks to test and
so we can get it patched into our releases.

Differential Revision: https://reviews.llvm.org/D42998

llvm-svn: 324449
2018-02-07 06:16:24 +00:00
Michael Zolotukhin 1713dd5b8d Xfail the test added in r324445 until the underlying issue in LoopSink is fixed.
llvm-svn: 324448
2018-02-07 06:11:50 +00:00
Eugene Leviant 25347ea895 [LegalizeDAG] Truncate condition operand of ISD::SELECT
Differential revision: https://reviews.llvm.org/D42737

llvm-svn: 324447
2018-02-07 05:38:29 +00:00
Tom Stellard 33445765dd AMDGPU/GlobalISel: Mark 32-bit G_FPTOUI as legal
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D42152

llvm-svn: 324446
2018-02-07 04:47:59 +00:00
Michael Zolotukhin e82e83fcce Follow-up for r324429: "[LCSSAVerification] Run verification only when asserts are enabled."
Before r324429 we essentially didn't have verification of LCSSA, so it's
no wonder that it has been broken: currently loop-sink breaks it (the
attached test illustrates the failure).

It was detected during a stage2 RA build, so to unbreak it I'm disabling
the check for now.

llvm-svn: 324445
2018-02-07 04:24:44 +00:00
Teresa Johnson f368101567 [ThinLTO] Serialize WithGlobalValueDeadStripping index flag for distributed backends
Summary:
A recent fix to drop dead symbols (r323633) did not work for ThinLTO
distributed backends because we lose the WithGlobalValueDeadStripping
set on the index during the thin link. This patch adds a new flags
record to the bitcode format for the index, and serializes this flag
for the combined index (it would always be 0 for the per-module index
generated by the compile step, so no need to serialize the new flags
record there until/unless we add another flag that applies to the
per-module indexes).

Generally this flag should always be set for the distributed backends,
which are necessarily performed after the thin link. However, if we were
to simply set this flag on the index applied to the distributed backends
(invoked via clang), we would lose the ability to disable dead stripping
via -compute-dead=false for debugging purposes.

Reviewers: grimar, pcc

Subscribers: mehdi_amini, inglorion, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D42799

llvm-svn: 324444
2018-02-07 04:05:59 +00:00
Volkan Keles 5838f7c013 GlobalISel: Always check operand types when executing match table
Summary:
Some of the commands try to get the register without checking
whether the specified operand is a register, causing a crash. All
commands should check the type of the operand first and reject it if
the type is not expected.

Reviewers: dsanders, qcolombet

Reviewed By: qcolombet

Subscribers: qcolombet, rovka, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D42984

llvm-svn: 324442
2018-02-07 02:44:51 +00:00
Mark Searles 24c92eeb83 [AMDGPU] Suppress redundant waitcnt instrs.
1. Run the memory legalizer prior to the waitcnt pass; keep the policy that the waitcnt pass does not remove any waitcnts within the incoming IR.

2. The waitcnt pass doesn't (yet) track waitcnts that exist prior to the waitcnt pass (it just skips over them); because the waitcnt pass is ignorant of them, it may insert a redundant waitcnt. To avoid this, check the previous instruction: if it and the to-be-inserted waitcnt are the same, suppress the insertion. We keep the existing waitcnt under the assumption that whoever inserted it, e.g., the memory legalizer, knew what they were doing.

3. Follow-on work: teach the waitcnt pass to record the pre-existing waitcnts for better waitcnt production.

Differential Revision: https://reviews.llvm.org/D42854

llvm-svn: 324440
2018-02-07 02:21:21 +00:00
Craig Topper 6d9e090a64 [Mips][AMDGPU] Update test cases to not use vector lt/gt compares that can be simplified to an equality/inequality or to always true/false.
For example, 'ugt X, 0' can be simplified to 'ne X, 0', and 'uge X, 0' is always true.

We already simplify this for scalars in SimplifySetCC, but we don't currently do so for vectors. D42948 proposes to change that.
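
Two vector examples of the simplification (a sketch):

    define <4 x i1> @ugt_zero(<4 x i32> %x) {
      %c = icmp ugt <4 x i32> %x, zeroinitializer  ; equivalent to icmp ne %x, 0
      ret <4 x i1> %c
    }

    define <4 x i1> @uge_zero(<4 x i32> %x) {
      %c = icmp uge <4 x i32> %x, zeroinitializer  ; always true
      ret <4 x i1> %c
    }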

llvm-svn: 324436
2018-02-07 00:51:37 +00:00
Matt Arsenault a18b3bcf51 AMDGPU: Select BFI patterns with 64-bit ints
llvm-svn: 324431
2018-02-07 00:21:34 +00:00
Craig Topper 58ecffd857 [DAGCombiner][AMDGPU][X86] Turn cttz/ctlz into cttz_zero_undef/ctlz_zero_undef if we can prove the input is never zero
X86 currently has a late DAG combine after cttz/ctlz are turned into BSR+BSF+CMOV to detect this and remove the CMOV. But we should be able to do this much earlier and avoid creating the cmov altogether.

For the changed AMDGPU test case, it appears that previously the i8 cttz was type-legalized to i16, which introduced an OR with 256 in order to limit the result to 8 on the widened type. At that point the result is known to never be zero, but nothing checked that. Then operation legalization is told to promote all i16 cttz to i32. This introduces an extend and a truncate and another OR with 65536 to limit the result to 16. With the DAG combiner change we are able to prevent the creation of the second OR, since the opcode will have been changed to cttz_zero_undef after the first OR. I think the lack of the OR caused the instruction to change to v_ffbl_b32_sdwa.
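
An IR-level sketch of the idea (the i1 flag on the intrinsic is what
distinguishes cttz from cttz_zero_undef; names illustrative):

    declare i32 @llvm.cttz.i32(i32, i1)

    define i32 @cttz_nonzero(i32 %x) {
      %nz = or i32 %x, 256                             ; provably non-zero
      %z = call i32 @llvm.cttz.i32(i32 %nz, i1 true)   ; zero input is undef
      ret i32 %z
    }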

Differential Revision: https://reviews.llvm.org/D42985

llvm-svn: 324427
2018-02-06 23:54:37 +00:00
Adrian Prantl 8c59921ca3 Add DWARF for discriminated unions
In Rust, an enum that carries data in the variants is, essentially, a
discriminated union. Furthermore, the Rust compiler will perform
space optimizations on such enums in some situations. Previously,
DWARF for these constructs was emitted using a hack (a magic field
name); but this approach stopped working when more space optimizations
were added in https://github.com/rust-lang/rust/pull/45225.

This patch changes LLVM to allow discriminated unions to be
represented in DWARF. It adds createDiscriminatedUnionType and
createDiscriminatedMemberType to DIBuilder and then arranges for this
to be emitted using DWARF's DW_TAG_variant_part and DW_TAG_variant.

Note that DWARF requires that a discriminated union be represented as
a structure with a variant part. However, as Rust only needs to emit
pure discriminated unions, this is what I chose to expose on
DIBuilder.

Patch by Tom Tromey!

Differential Revision: https://reviews.llvm.org/D42082

llvm-svn: 324426
2018-02-06 23:45:59 +00:00
Eli Friedman cd07a3e2f9 Place undefined globals in .bss instead of .data
Following up on the discussion from
http://lists.llvm.org/pipermail/llvm-dev/2017-April/112305.html, undef
values are now placed in the .bss as well as null values. This prevents
undef global values taking up potentially huge amounts of space in the
.data section.

The following two lines now both generate equivalent .bss data:

@vals1 = internal unnamed_addr global [20000000 x i32] zeroinitializer, align 4
@vals2 = internal unnamed_addr global [20000000 x i32] undef, align 4 ; previously unaccounted for

This is primarily motivated by the corresponding issue in the Rust
compiler (https://github.com/rust-lang/rust/issues/41315).

Differential Revision: https://reviews.llvm.org/D41705

Patch by varkor!

llvm-svn: 324424
2018-02-06 23:22:14 +00:00
Eli Friedman 98f8bba283 [LivePhysRegs] Fix handling of return instructions.
See D42509 for the original version of this.

Basically, there are two significant changes to behavior here:

- addLiveOuts always adds all pristine registers (even if a block has
no successors).
- addLiveOuts and addLiveOutsNoPristines always add all callee-saved
registers for return blocks (including conditional return blocks).

I cleaned up the functions a bit to make it clear these properties hold.

Differential Revision: https://reviews.llvm.org/D42655

llvm-svn: 324422
2018-02-06 23:00:17 +00:00
Adrian Prantl c929f7ad42 Fix a crash when emitting DIEs for variable-length arrays
VLAs may refer to a previous DIE to express the DW_AT_count of their
type. Clang generates an artificial "vla_expr" variable for this. If
this DIE hasn't been created yet, LLVM asserts. This patch fixes this
by sorting the local variables so that dependencies come before they
are needed. It also replaces the linear scan in DWARFFile with a
std::map, which can be faster.

Differential Revision: https://reviews.llvm.org/D42940

llvm-svn: 324412
2018-02-06 22:17:45 +00:00
Craig Topper dfea544c84 [X86] Add test cases that exercise the BSR/BSF optimization in combineCMov.
combineCMov tries to remove compares against BSR/BSF if we can prove the input to the BSR/BSF is never zero.

As far as I can tell, most of the time codegenprepare despeculates ctlz/cttz and gives us cttz_zero_undef/ctlz_zero_undef, which don't use a cmov.

So the only way I found to trigger this code is to show codegenprepare an illegal type, which it won't despeculate.

I think we should be turning ctlz/cttz into ctlz_zero_undef/cttz_zero_undef for these cases before we ever get to operation legalization, where the cmov is created. But I wanted to add these tests so we don't regress.
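
A sketch of the kind of test input (i8 is an illegal type on x86, so
codegenprepare leaves the plain cttz in place and legalization later
creates the BSF+CMOV sequence that combineCMov has to clean up):

    declare i8 @llvm.cttz.i8(i8, i1)

    define i8 @cttz_i8(i8 %x) {
      %z = call i8 @llvm.cttz.i8(i8 %x, i1 false)
      ret i8 %z
    }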

llvm-svn: 324409
2018-02-06 21:47:04 +00:00
Sanjay Patel 0cdc273ada [x86] add tests to show demanded bits shortcoming; NFC
llvm-svn: 324408
2018-02-06 21:43:57 +00:00