The pattern may look more obviously like a sext if written as:
define i32 @g(i16 %x) {
%zext = zext i16 %x to i32
%xor = xor i32 %zext, 32768
%add = add i32 %xor, -32768
ret i32 %add
}
We already have that fold in visitAdd().
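To double-check the arithmetic, here is a small standalone C++ reproduction (illustrative only, not part of the patch): flipping bit 15 of the zero-extended value and then adding -32768 yields exactly the sign-extended result for every 16-bit input.
```
#include <cassert>
#include <cstdint>

// Illustrative only: the xor/add pair above is a 16->32-bit sign extension.
static uint32_t foldInput(uint16_t x) {
  uint32_t zext = x;                 // zext i16 %x to i32
  uint32_t flipped = zext ^ 0x8000u; // xor i32 %zext, 32768
  return flipped + 0xFFFF8000u;      // add i32 %xor, -32768
}

int main() {
  for (uint32_t i = 0; i <= 0xFFFF; ++i) {
    uint16_t x = static_cast<uint16_t>(i);
    uint32_t sext =
        static_cast<uint32_t>(static_cast<int32_t>(static_cast<int16_t>(x)));
    assert(foldInput(x) == sext);    // matches sext i16 %x to i32
  }
  return 0;
}
```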
Differential Revision: https://reviews.llvm.org/D22477
llvm-svn: 276035
This patch adds function summary support to CFLAnders. It also comes
with a lot of tests! Woohoo!
Patch by Jia Chen.
Differential Revision: https://reviews.llvm.org/D22450
llvm-svn: 276026
This step builds on Lang Hames' work to change Archive::child_iterator
for better interoperation with Error/Expected. Building on that, it is now
possible to return an error message when the size field of an archive
contains non-decimal characters.
llvm-svn: 276025
This patch adds more specific edges to CFLAndersAliasAnalysis. The goal
of these edges is to give us more information about *how* two values
that MayAlias actually alias. With this, we can now tell cases like
a = b; // ergo, a may alias b
apart from
a = c;
b = c;
// so, a may alias b, but only because c was assigned to both of them.
...And others.
Patch by Jia Chen.
Differential Revision: https://reviews.llvm.org/D22429
llvm-svn: 276023
Sema actions on ObjCDictionaryLiteral and ObjCArrayLiteral are currently
done as a side-effect of Sema on their parent expressions, which causes
delayed typo corrections for such literals to be performed by TypoTransforms
on the ObjCDictionaryLiteral and ObjCArrayLiteral themselves instead of on
their elements individually.
This is especially bad because it was not designed to act on several
elements; searching through all possible combinations of corrections for
several elements is very expensive. Additionally, when one of the
elements has no correction candidate, we still explore all options and
at the end emit no typo corrections whatsoever.
Perform the proper Sema actions by acting on each element individually at
literal parsing time, to get proper diagnostics and decent compile-time
behavior.
Differential Revision: http://reviews.llvm.org/D22183
rdar://problem/21046678
llvm-svn: 276020
This makes sure that space is actually available. With this change,
running lld on a full file system causes it to exit with
failed to open foo: No space left on device
instead of crashing with SIGBUS.
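A minimal sketch of the underlying idea (not lld's actual code; posix_fallocate availability and the message format are assumptions): when the output is written through a memory mapping, merely extending the file produces a sparse file, so a store can fault with SIGBUS once the disk is full. Reserving the blocks up front turns that into an ordinary, reportable error.
```
#include <sys/types.h>
#include <fcntl.h>
#include <cstdio>
#include <cstring>

// Sketch: reserve `size` bytes for `fd` before mmap'ing it, so running out of
// disk space is reported as an error instead of faulting on a later store.
static bool reserveSpace(int fd, off_t size, const char *path) {
  if (int err = posix_fallocate(fd, 0, size)) { // allocates real disk blocks
    std::fprintf(stderr, "failed to open %s: %s\n", path, std::strerror(err));
    return false;
  }
  return true;
}
```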
llvm-svn: 276017
When catching an exception of type nullptr_t as a pointer-to-member type,
produce a null value of the right type.
This fixes a bug where throwing an exception of type nullptr_t and catching it
as a pointer-to-member was not guaranteed to produce a null value in the catch
handler. The fix is pretty simple: we statically allocate a constant null
pointer-to-data-member representation and a constant null
pointer-to-member-function representation, and produce the address of the
relevant value as the adjusted pointer for the exception.
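For illustration, a small test of the now-guaranteed behavior (the struct and variable names are made up for this example):
```
#include <cassert>

struct S { int field; };

int main() {
  try {
    throw nullptr;          // exception object of type std::nullptr_t
  } catch (int S::*pm) {    // caught as a pointer-to-data-member
    assert(pm == nullptr);  // previously not guaranteed to hold
  }
  return 0;
}
```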
llvm-svn: 276016
r274801 did not go far enough to allow gcov+tsan to cooperate. With this
commit it's possible to run the following code without false positives:
std::thread T1(fib), T2(fib);
T1.join(); T2.join();
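For reference, a self-contained version of that snippet might look as follows (the body of fib and the build flags, e.g. --coverage -fsanitize=thread, are assumptions rather than part of the commit):
```
#include <thread>

// Any instrumented function will do; gcov bumps its counters from both threads.
static void fib() {
  volatile long a = 0, b = 1;
  for (int i = 0; i < 40; ++i) {
    long t = a + b;
    a = b;
    b = t;
  }
}

int main() {
  std::thread T1(fib), T2(fib);
  T1.join();
  T2.join();
  return 0;
}
```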
llvm-svn: 276015
There's not much functional change, but it really is an architectural feature
(on v6T2, v7A, v7R and v7EM) rather than something each CPU implements
individually.
The main functional change is the default behaviour you get when specifying
only "-triple".
llvm-svn: 276013
Also verify that we never try to set the size of a vreg associated
with a register class.
Report an error when we encounter that in MIR. Fix a testcase that
hit that error and had a size for no reason.
llvm-svn: 276012
Added the opencl.ocl.version metadata to be emitted for amdgcn. Created a static function emitOCLVerMD which is shared between the spir triple and the amdgcn target.
Also added new test cases to the existing test file spir_version.cl in test/CodeGenOpenCL.
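As a rough sketch (not the actual clang code; the exact signature of emitOCLVerMD is an assumption here), emitting such a named metadata node with the LLVM C++ API could look like:
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

// Rough sketch: attach a named metadata node "opencl.ocl.version" whose single
// operand is !{i32 Major, i32 Minor}. The real helper lives in clang CodeGen.
static void emitOCLVerMD(llvm::Module &M, unsigned Major, unsigned Minor) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::Type *Int32Ty = llvm::Type::getInt32Ty(Ctx);
  llvm::Metadata *Ops[] = {
      llvm::ConstantAsMetadata::get(llvm::ConstantInt::get(Int32Ty, Major)),
      llvm::ConstantAsMetadata::get(llvm::ConstantInt::get(Int32Ty, Minor))};
  M.getOrInsertNamedMetadata("opencl.ocl.version")
      ->addOperand(llvm::MDNode::get(Ctx, Ops));
}
```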
Patch by Aaron En Ye Shi.
Differential Revision: https://reviews.llvm.org/D22424
llvm-svn: 276010
We skipped over ReturnInsts which didn't return an argument, which would
lead us to incorrectly conclude that an argument returned by another
ReturnInst was 'returned'.
This reverts commit r275756.
This fixes PR28610.
llvm-svn: 276008
Summary: This flag can be used to disable the check at runtime.
Subscribers: kubabrecka
Differential Revision: https://reviews.llvm.org/D22495
llvm-svn: 276004
Summary:
This patch attempts to fix the undefined behavior in __tree by changing the node pointer types used throughout. The pointer types are changed for raw pointers in the current ABI and for fancy pointers in ABI V2 (since the fancy pointer types may not be ABI compatible).
The UB in `__tree` arises because `__tree` downcasts the embedded end node and then dereferences that pointer. Currently there are 3 node types in `__tree`.
* `__tree_end_node` which contains the `__left_` pointer. This node is embedded within the container.
* `__tree_node_base` which contains `__right_`, `__parent_` and `__is_black`. This node is used throughout the tree rebalancing algorithms.
* `__tree_node` which contains `__value_`.
Currently `__tree` stores the start of the tree, `__begin_node_`, as a pointer to a `__tree_node`. Additionally the iterators store their position as a pointer to a `__tree_node`. In both of these cases the pointee can be the end node. This is fixed by changing them to store `__tree_end_node` pointers instead.
To make this change I introduced an `__iter_pointer` typedef which is defined to be a pointer to either `__tree_end_node` in the new ABI or `__tree_node` in the current one.
Both `__tree::__begin_node_` and iterator pointers are now stored as `__iter_pointers`.
The other situation where `__tree_end_node` is stored as the wrong type is in `__tree_node_base::__parent_`. Currently `__left_`, `__right_`, and `__parent_` are all `__tree_node_base` pointers. Since the end node will only be stored in `__parent_` the fix is to change `__parent_` to be a pointer to `__tree_end_node`.
To make this change I introduced a `__parent_pointer` typedef which is defined to be a pointer to either `__tree_end_node` in the new ABI or `__tree_node_base` in the current one.
Note that in the new ABI `__iter_pointer` and `__parent_pointer` are the same type (but not in the old one). The confusion between these two types is unfortunate but it was the best solution I could come up with that maintains the ABI.
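A stripped-down sketch of the resulting typedefs (illustrative only; the real code rebinds fancy pointers through the allocator, and the ABI guard name below is made up):
```
// Illustrative only: in the new ABI both typedefs point at the end-node type;
// in the current ABI they keep the historical downcast pointer types.
struct __tree_end_node  { void *__left_; };
struct __tree_node_base;   // adds __right_, __parent_, __is_black_
struct __tree_node;        // adds __value_

#ifdef _LIBCPP_ABI_VERSION_2          // hypothetical guard for ABI V2
typedef __tree_end_node  *__iter_pointer;    // iterators and __begin_node_
typedef __tree_end_node  *__parent_pointer;  // __tree_node_base::__parent_
#else
typedef __tree_node      *__iter_pointer;
typedef __tree_node_base *__parent_pointer;
#endif
```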
The typedef changes force a ton of explicit type casts to correct pointer types and to make current code compatible with both the old and new pointer typedefs. This is the bulk of the change and it's really messy. Unfortunately I don't know how to avoid it.
Please let me know what you think.
Reviewers: howard.hinnant, mclow.lists
Subscribers: howard.hinnant, bbannier, cfe-commits
Differential Revision: https://reviews.llvm.org/D20786
llvm-svn: 276003
NetBSD's system unwinder is a modified version of LLVM's libunwind.
Slightly reduce diffs by updating comments to match theirs where
appropriate.
llvm-svn: 275997
Summary:
Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions. The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:
> Transform (zext (or (icmp), (icmp))) to (or (zext (icmp), (zext icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.
Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions which is mirrored in the responsible check:
`if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`
This check seems to sort out more cases than necessary because:
- the reverse transformation is obviously done for `or` instructions only
- and also not every `zext icmp` pair is necessarily the result of this reverse transformation
Therefore we now remove this check and replace it with a more fine-grained one in `shouldOptimizeCast()` that rejects only those `logic(zext(icmp), zext(icmp))` expressions that could be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now able to also simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).
As an example, consider the following IR snippet
```
%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4
```
which would now be transformed to
```
%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8
```
This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. As shown above, this may introduce `zext` operations, casting between `i1` and `i8`, which could for example hinder ScalarEvolution and Polly on certain code.
Reviewers: grosser, vtjnash, majnemer
Subscribers: majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22511
Contributed-by: Matthias Reisinger
llvm-svn: 275989