- test with python API
- test with command interpreter
- test stepping a single (selected) thread
- test stepping all threads in the program
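Purely as an illustration of the two stepping modes these tests exercise (the tests themselves drive this through the Python API and the command interpreter; the binary name and breakpoint below are placeholders), the equivalent C++ SB API calls look roughly like this:

  // Illustrative sketch only, not part of the tests.
  #include "lldb/API/LLDB.h"

  int main() {
    lldb::SBDebugger::Initialize();
    lldb::SBDebugger Debugger = lldb::SBDebugger::Create();
    lldb::SBTarget Target = Debugger.CreateTarget("a.out");  // placeholder
    Target.BreakpointCreateByName("main");                   // placeholder
    lldb::SBProcess Process = Target.LaunchSimple(nullptr, nullptr, ".");

    lldb::SBThread Thread = Process.GetSelectedThread();
    Thread.StepOver(lldb::eOnlyThisThread); // step only the selected thread
    Thread.StepOver(lldb::eAllThreads);     // step while all threads run
    lldb::SBDebugger::Terminate();
    return 0;
  }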
llvm-svn: 186446
These floats all represented block frequencies anyway, so just use the
BlockFrequency class directly.
Some floating point computations remain in tryLocalSplit(). They estimate
spill weights, which are still floats.
llvm-svn: 186435
Original commit message:
Remove floating point computations from SpillPlacement.cpp.
Patch by Benjamin Kramer!
Use the BlockFrequency class instead of floats in the Hopfield network
computations. This rescales the node Bias field from a [-2;2] float
range to two block frequencies BiasN and BiasP pulling in opposite
directions. This construct has a more predictable behavior when block
frequencies saturate.
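As a rough sketch of the construct (not the actual SpillPlacement code; the names below are stand-ins), the signed float bias is replaced by two unsigned, saturating frequencies that are compared when the node's preference is queried:

  // Illustrative stand-ins only; llvm::BlockFrequency provides the
  // saturating arithmetic the real code relies on.
  #include <cstdint>
  #include <limits>

  struct Freq {                  // stand-in for llvm::BlockFrequency
    uint64_t F = 0;
    Freq &operator+=(Freq O) {   // saturating add: never wraps around
      uint64_t Max = std::numeric_limits<uint64_t>::max();
      F = (O.F > Max - F) ? Max : F + O.F;
      return *this;
    }
  };

  struct Node {
    Freq BiasP;                  // pulls the node towards the positive side
    Freq BiasN;                  // pulls the node towards the negative side
    bool preferPositive() const { return BiasP.F > BiasN.F; }
  };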
The per-node scaling factors are no longer necessary, assuming the block
frequencies around a bundle are consistent.
This patch can cause the register allocator to make different spilling
decisions. The differences should be small.
llvm-svn: 186434
The fundamental concept is:
Format as if the braced init list were a function call (with parentheses
replaced by braces). If there is no name/type before the opening brace
(e.g. if the braced list is nested), assume a zero-length identifier
just before the opening brace.
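For instance (an illustrative example, not taken from the patch), a braced list now wraps and aligns exactly like the corresponding function call, and a nested list is treated as a call through a zero-length name:

  doSomething(aaaaaaaaaaaaaaaaaaaa,
              bbbbbbbbbbbbbbbbbbbb);
  SomeType value{aaaaaaaaaaaaaaaaaaaa,
                 bbbbbbbbbbbbbbbbbbbb};
  std::vector<std::pair<int, int>> pairs{{1, 2}, {3, 4}};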
This behavior is gated on a new style flag, which for now replaces the
SpacesInBracedLists style flag. Activate this style flag for Google
style to reflect recent style guide changes.
llvm-svn: 186433
Use PMIN/PMAX for UGE/ULE vector comparisons to reduce the number of required
instructions. This trick also works for UGT/ULT, but there is no advantage in
doing so. It wouldn't reduce the number of instructions and it would actually
reduce performance.
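The underlying identity, sketched here with SSE intrinsics purely for illustration (the actual change is in the X86 lowering, not client code): for unsigned lanes, a <= b exactly when min(a, b) == a, so a ULE compare becomes a PMINU followed by a PCMPEQ, and UGE works the same way with PMAXU:

  // Illustrative sketch of the PMIN/PMAX + PCMPEQ idiom on 16 x i8 lanes.
  #include <emmintrin.h>  // SSE2: _mm_min_epu8, _mm_max_epu8, _mm_cmpeq_epi8

  // All-ones in every lane where a[i] <= b[i] (unsigned).
  static inline __m128i cmple_epu8(__m128i a, __m128i b) {
    return _mm_cmpeq_epi8(_mm_min_epu8(a, b), a);  // min(a,b) == a  <=>  a <= b
  }

  // All-ones in every lane where a[i] >= b[i] (unsigned).
  static inline __m128i cmpge_epu8(__m128i a, __m128i b) {
    return _mm_cmpeq_epi8(_mm_max_epu8(a, b), a);  // max(a,b) == a  <=>  a >= b
  }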
Reviewer: Ben
radar:5972691
llvm-svn: 186432
This is to support parsing UTF16 response files in LLVM/lib/Option for
lld and clang.
Reviewers: hans
Differential Revision: http://llvm-reviews.chandlerc.com/D1138
llvm-svn: 186426
For safety, the inliner cannot decrease the alignment on an alloca when
merging it with another.
I've included two variants of the test case for this: one with DataLayout
available, and one without. When DataLayout is not available, if only one of
the allocas uses the default alignment (getAlignment() == 0), then they cannot
be safely merged.
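A minimal sketch of that rule (not the actual inliner code; the helper name is made up): an alignment of 0 is unknown without DataLayout, so a known and an unknown alignment cannot be merged, and otherwise the merged alloca keeps the larger of the two alignments:

  // Illustrative sketch only; 0 follows the getAlignment() convention of
  // "default alignment of the allocated type".
  #include <algorithm>
  #include <optional>

  std::optional<unsigned> mergedAllocaAlign(unsigned Align1, unsigned Align2) {
    bool Unknown1 = (Align1 == 0), Unknown2 = (Align2 == 0);
    if (Unknown1 != Unknown2)
      return std::nullopt;            // exactly one is unknown: do not merge
    return std::max(Align1, Align2);  // never decrease the alignment
  }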
llvm-svn: 186425
Summary:
Add support for CXXCtorInitializer and TemplateArgument types to ASTNodeKind.
This is needed to support more matchers from clang/ASTMatchers/ASTMatchers.h in the dynamic layer (clang/ASTMatchers/Dynamic).
Reviewers: klimek
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1143
llvm-svn: 186422
Ensure that the scalar write access corresponding to the result of a load
instruction appears after the generic read access corresponding to that
load instruction.
llvm-svn: 186419
The assembly optimizations were making unsafe assumptions about which address
spaces had which identifiers.
Also, fix vload/vstore with 64-bit pointers. This was broken previously on
Radeon SI.
This version still only has assembly versions of int/uint 2/4/8/16 for global
loads and stores on R600, but it does it in a way that would be very easily
extended to private/local/constant and could also be handled easily on other
architectures.
v2: 1) Leave v[load|store]_impl.ll in generic/lib
2) Remove vload_if.ll and vstore_if.ll interfaces
3) Fix address+offset calculations
4) Remove offset from assembly arg list
llvm-svn: 186416
This commit gets us back to pure CLC and fixes offset calculations.
The next commit will re-enable the assembly implementation for R600,
fix bugs related to 64-bit address spaces, and also fix the
incorrect assumption that address space identifiers are the same in
all architectures.
llvm-svn: 186415
Every match call can recursively call back into the memoized match via a
nested traversal matcher (for example:
stmt(hasAncestor(stmt(hasDescendant(stmt(hasDescendant(stmt()))))))), and
every memoization step might clear the cache. We must therefore not store
iterators into the result cache when calling match on a submatcher.
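The pattern being avoided can be sketched as follows (an illustrative example, not the actual MatchFinder code; all names are made up):

  // Holding an iterator into the cache across a recursive match is unsafe,
  // because the recursion may clear the cache and invalidate the iterator;
  // copying the cached value out first is safe.
  #include <map>
  #include <string>

  struct MemoizedResult { bool Matched = false; };
  static std::map<std::string, MemoizedResult> ResultCache;

  bool matchSubMatcher(const std::string &SubKey) {
    ResultCache.clear();              // nested traversal matchers can do this
    return !SubKey.empty();
  }

  bool memoizedMatch(const std::string &Key, const std::string &SubKey) {
    // Unsafe: keeping ResultCache.find(Key) alive across the call below.
    MemoizedResult Cached = ResultCache[Key];  // copy; keep no iterator
    bool Sub = matchSubMatcher(SubKey);        // may recurse and clear the cache
    return Cached.Matched && Sub;
  }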
llvm-svn: 186411
When truncating to a format with fewer mantissa bits, APFloat::convert
will perform a right shift of the mantissa by the difference of the
precision of the two formats. Usually, this will result in just the
mantissa bits needed for the target format.
One special situation is if the input number is denormal. In this case,
the right shift may discard significant bits. This is usually not a
problem: truncating a denormal typically results in zero (underflow) after
normalization anyway, because the target format's exponent range is usually
smaller than the source format's.
However, there is one case where this property does not hold:
when truncating from ppc_fp128 to double. In particular, truncating
a ppc_fp128 whose first double of the pair is denormal should result
in just that first double, not zero. The current code however
performs an excessive right shift, resulting in lost result bits.
This is then caught in the APFloat::normalize call performed by
APFloat::convert and causes an assertion failure.
This patch checks for the scenario of truncating a denormal, and
attempts to (possibly partially) replace the initial mantissa
right shift by decrementing the exponent, if doing so will still
result in a valid *target format* exponent.
Index: test/CodeGen/PowerPC/pr16573.ll
===================================================================
--- test/CodeGen/PowerPC/pr16573.ll (revision 0)
+++ test/CodeGen/PowerPC/pr16573.ll (revision 0)
@@ -0,0 +1,11 @@
+; RUN: llc < %s | FileCheck %s
+
+target triple = "powerpc64-unknown-linux-gnu"
+
+define double @test() {
+ %1 = fptrunc ppc_fp128 0xM818F2887B9295809800000000032D000 to double
+ ret double %1
+}
+
+; CHECK: .quad -9111018957755033591
+
Index: lib/Support/APFloat.cpp
===================================================================
--- lib/Support/APFloat.cpp (revision 185817)
+++ lib/Support/APFloat.cpp (working copy)
@@ -1956,6 +1956,23 @@
X86SpecialNan = true;
}
+ // If this is a truncation of a denormal number, and the target semantics
+ // has larger exponent range than the source semantics (this can happen
+ // when truncating from PowerPC double-double to double format), the
+ // right shift could lose result mantissa bits. Adjust exponent instead
+ // of performing excessive shift.
+ if (shift < 0 && isFiniteNonZero()) {
+ int exponentChange = significandMSB() + 1 - fromSemantics.precision;
+ if (exponent + exponentChange < toSemantics.minExponent)
+ exponentChange = toSemantics.minExponent - exponent;
+ if (exponentChange < shift)
+ exponentChange = shift;
+ if (exponentChange < 0) {
+ shift -= exponentChange;
+ exponent += exponentChange;
+ }
+ }
+
// If this is a truncation, perform the shift before we narrow the storage.
if (shift < 0 && (isFiniteNonZero() || category==fcNaN))
lostFraction = shiftRight(significandParts(), oldPartCount, -shift);
llvm-svn: 186409
We'd forgotten to provide string representations for the special ARMISD atomic
nodes; this adds them in. No effect on CodeGen, just makes the output of
"-view-whatever-dags" slightly more readable.
llvm-svn: 186406
This fixes an incorrect detection that led to a formatting error.
Before:
some_var = function (*some_pointer_var)[0];
After:
some_var = function(*some_pointer_var)[0];
llvm-svn: 186402