-Wc++11-narrowing warning on Darwin
The internal CI produced the following diagnostic:
error: non-constant-expression cannot be narrowed from type 'long long' to '__darwin_suseconds_t' (aka 'int') in initializer list [-Wc++11-narrowing]
struct ::timeval ConvertedTS[2] = {{TS[0].tv_sec, Convert(TS[0].tv_nsec)},
^~~~~~~~~~~~~~~~~~~~~~
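For illustration, the kind of change that silences this diagnostic (not necessarily the exact fix that landed) is an explicit narrowing conversion in the initializer; the helper below is hypothetical and only mirrors the shape of the code in the diagnostic:
```
#include <sys/time.h>  // timeval, suseconds_t
#include <ctime>       // timespec, time_t

// Narrow the 64-bit values explicitly instead of letting the braced
// initializer do it implicitly, which is what triggers -Wc++11-narrowing.
static timeval toTimeval(const timespec &TS) {
  return {static_cast<time_t>(TS.tv_sec),
          static_cast<suseconds_t>(TS.tv_nsec / 1000)};
}
```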
llvm-svn: 337968
We are already ICF'ing these sections as a unit with their dependent
sections, so they don't need to be considered for ICF individually.
This change also "fixes" slowness caused by our quadratic-in-group-size
relocation segregation algorithm on 32-bit ARM platforms with unwind
data and ICF on rodata. In this scenario almost every function's
.ARM.exidx is identical except for the targets of the relocations
that refer to the function and its .ARM.extab. This causes almost
all of the program's .ARM.exidx sections to be initially added to the
same class, and we then end up comparing every such section with every
other such section.
Differential Revision: https://reviews.llvm.org/D49716
llvm-svn: 337967
If the DAGCombiner's rotate matching were working as expected,
I don't think we'd see any test diffs here.
This sidesteps the issue of custom lowering for rotates raised in PR38243:
https://bugs.llvm.org/show_bug.cgi?id=38243
...by only dealing with legal operations.
llvm-svn: 337966
In some cases LSV sees (load/store _ (select _ <pointer expression>
<pointer expression>)) patterns in input IR, often due to sinking and
other forms of CFG simplification, sometimes interspersed with
bitcasts and all-constant-indices GEPs. With this patch, the
`areConsecutivePointers` method attempts to handle select
instructions. This leads to an increased number of successful
vectorizations.
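For illustration, a reduced C-level case of the pattern (made up for this note, not taken from the test suites):
```
// The ternary below typically becomes (select %cond, %b, %a) in IR, so the
// two adjacent loads are of the form (load (gep (select ...))) - the shape
// this patch teaches areConsecutivePointers to look through.
void scale(float *a, float *b, bool use_b, float k, float *out) {
  float *p = use_b ? b : a;
  out[0] = k * p[0];
  out[1] = k * p[1];
}
```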
Technically, select instructions could appear in index arithmetic as
well; however, we don't see those in our test suites / benchmarks.
Also, there is a lot more freedom in the IR shapes that compute integral
indices in general than in what's common in pointer computations, and
it appears that it's quite unreliable to do anything short of making
select instructions first-class citizens of Scalar Evolution, which
for the purposes of this patch is definitely overkill.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D49428
llvm-svn: 337965
Patch by Victor Zverovich.
This fixes an error when compiling `<experimental/filesystem>` with gcc 4.8.5:
```
.../libcxx/src/experimental/filesystem/filesystem_common.h:137:34:
error: redeclaration ‘T
std::experimental::filesystem::v1::detail::{anonymous}::error_value() [with T =
bool]’ differs in ‘constexpr’
constexpr bool error_value<bool>() {
^
.../libcxx/src/experimental/filesystem/filesystem_common.h:133:3:
error: from previous declaration ‘T
std::experimental::filesystem::v1::detail::{anonymous}::error_value() [with T
= bool]’
T error_value();
^
```
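A reduced illustration of the incompatibility and one workaround (generic code, not the exact libc++ source; whether this matches the committed fix is an assumption): gcc 4.8.5 rejects an explicit specialization that adds `constexpr` to a primary template declared without it, so declaring the primary `constexpr` as well keeps it happy.
```
// Primary template declared constexpr so that the constexpr explicit
// specialization below no longer "differs in 'constexpr'" for gcc 4.8.5.
template <class T>
constexpr T error_value();

template <>
constexpr bool error_value<bool>() {
  return false;
}

static_assert(!error_value<bool>(), "still usable in constant expressions");
```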
Reviewed as https://reviews.llvm.org/D49813
llvm-svn: 337962
Instead of depending on implicit padding from the structure layout code,
use a packed struct and emit the padding explicitly.
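For illustration only (a generic record, not the one this change touches): with a packed struct the padding bytes are spelled out rather than inserted silently by the layout code, so the overall layout stays the same but every byte is explicit.
```
#include <cstdint>

struct __attribute__((packed)) Descriptor {
  uint8_t kind;
  uint8_t padding[3];  // previously implicit alignment padding
  uint32_t size;
};

static_assert(sizeof(Descriptor) == 8,
              "explicit padding reproduces the old implicit layout");
```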
Differential Revision: https://reviews.llvm.org/D49710
llvm-svn: 337961
Summary:
The ``file_time_type`` time point is used to represent the write times for files.
Its job is to act as part of a C++ wrapper for less ideal system interfaces. The
underlying filesystem uses the ``timespec`` struct for the same purpose.
However, the initial implementation of ``file_time_type`` could not represent
either the range or resolution of ``timespec``, making it unsuitable. Fixing
this requires an implementation which uses more than 64 bits to store the
time point.
I primarily considered two solutions: using ``__int128_t`` and using an
arithmetic emulation of ``timespec``. Each has its pros and cons, and both
come with more than one complication.
However, after a lot of consideration, I decided on using `__int128_t`. This patch implements that change.
Please see the [FileTimeType Design Document](http://libcxx.llvm.org/docs/DesignDocs/FileTimeType.html) for more information.
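For illustration, a minimal sketch of the chosen representation (the names below are made up, not libc++'s internals): with ``__int128_t`` as the rep, a nanosecond-resolution duration covers the full range and resolution of ``timespec``.
```
#include <chrono>
#include <ctime>

using fs_duration    = std::chrono::duration<__int128_t, std::nano>;
using file_time_type = std::chrono::time_point<std::chrono::system_clock,
                                               fs_duration>;

// A timespec converts without losing range or resolution: widen the
// seconds to the 128-bit nanosecond rep first, then add the nanoseconds.
inline file_time_type from_timespec(const timespec &ts) {
  fs_duration secs = std::chrono::seconds(ts.tv_sec);
  return file_time_type(secs + std::chrono::nanoseconds(ts.tv_nsec));
}
```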
Reviewers: mclow.lists, ldionne, joerg, arthur.j.odwyer, EricWF
Reviewed By: EricWF
Subscribers: christof, K-ballo, cfe-commits, BillyONeal
Differential Revision: https://reviews.llvm.org/D49774
llvm-svn: 337960
The first argument of the outlined functions for parallel regions, when
they are called as serialized parallel regions, should be a pointer to the
global thread id, which is always 0.
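For illustration (generic names, not the actual runtime entry points or generated code): in the serialized case the caller still passes a pointer to a global thread id, and that id is always 0.
```
// Stand-in for an outlined parallel region body.
static void outlined_region(int *global_tid, int *bound_tid) {
  (void)global_tid;  // *global_tid == 0 when the region is serialized
  (void)bound_tid;
}

void run_serialized_parallel() {
  int zero = 0;
  outlined_region(&zero, &zero);  // first argument: pointer to gtid 0
}
```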
llvm-svn: 337957
Summary:
The exact same code was replicated 11 times for implementing the basic_istream
input operators (those that don't use numeric_limits). The same code was also
duplicated twice for implementing the basic_istream input operators that take
numeric_limits into account.
This commit factors the common code into function templates to avoid
the duplication.
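For illustration, a sketch of the shape of such a helper (hypothetical name and simplified error handling, not the exact libc++ code): the sentry/num_get/setstate sequence that each operator>> repeated is written once as a function template.
```
#include <istream>
#include <iterator>
#include <locale>

template <class CharT, class Traits, class Tp>
std::basic_istream<CharT, Traits> &
extract_arithmetic(std::basic_istream<CharT, Traits> &is, Tp &value) {
  using Iter = std::istreambuf_iterator<CharT, Traits>;
  std::ios_base::iostate err = std::ios_base::goodbit;
  typename std::basic_istream<CharT, Traits>::sentry s(is);
  if (s)
    std::use_facet<std::num_get<CharT, Iter> >(is.getloc())
        .get(Iter(is), Iter(), is, err, value);
  is.setstate(err);
  return is;
}
```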
Reviewers: mclow.lists, EricWF
Subscribers: christof, dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D49808
llvm-svn: 337955
Summary:
Update the documentation of all the classes introduced with the new generic SMT API; most of them were still referencing Z3 and how operations used to be done (like passing the context as a parameter in a few methods).
Renamed the following methods so it's clear that they operate on bitvectors:
* `mkSignExt` -> `mkBVSignExt`
* `mkZeroExt` -> `mkBVZeroExt`
* `mkExtract` -> `mkBVExtract`
* `mkConcat` -> `mkBVConcat`
Removed the unnecessary methods:
* `getDataExpr`: it was a one-line method that called `fromData`
* `mkBitvector(const llvm::APSInt Int)`: it was not being used anywhere
Reviewers: NoQ, george.karpenkov
Reviewed By: george.karpenkov
Subscribers: xazax.hun, szepet, a.sidorin
Differential Revision: https://reviews.llvm.org/D49799
llvm-svn: 337954
Fixes a problem when we have multiple inclusion cycles and try to
enumerate all possible ways to reach the max inclusion depth.
rdar://problem/38871876
Reviewers: bruno, rsmith, jkorous, aaron.ballman
Reviewed By: bruno, jkorous, aaron.ballman
Subscribers: dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D48786
llvm-svn: 337953
GNU binutils tools have no problems with this kind of shared constant,
provided that we actually hook it up completely in AsmPrinter and
produce a global symbol.
This effectively reverts SVN r335918 by hooking the rest of it up
properly.
This feature was originally implemented in SVN r213006, with no reason
given for why it can't be used for MinGW other than the fact that GCC
doesn't do it while MSVC does.
Differential Revision: https://reviews.llvm.org/D49646
llvm-svn: 337951
In SVN r334523, the first half of comdat constant pool handling was
hoisted from X86WindowsTargetObjectFile (which, despite the name, was
only used for MSVC targets) into the arch-independent
TargetLoweringObjectFileCOFF, but the other half of the handling was
left behind in X86AsmPrinter::GetCPISymbol.
With only half of the handling in place, inconsistent comdat
sections/symbols are created, causing issues with both GNU binutils
(avoided for X86 in SVN r335918) and with the MS linker, which
would complain like this:
fatal error LNK1143: invalid or corrupt file: no symbol for COMDAT section 0x4
Differential Revision: https://reviews.llvm.org/D49644
llvm-svn: 337950
Violating the invariants specified by attributes is undefined behavior.
Maybe we could use poison instead for some of the parameter attributes,
but I don't think it's worthwhile.
Differential Revision: https://reviews.llvm.org/D49041
llvm-svn: 337947
This fixes a warning like this:
warning: comparison of integers of different signs:
'std::__1::__libcpp_tls_key' (aka 'long') and 'DWORD'
(aka 'unsigned long') [-Wsign-compare]
if (*__key == FLS_OUT_OF_INDEXES)
~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
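For illustration, a self-contained version of the pattern and one way to silence it (generic stand-ins, not the exact libc++ change): cast one side so both operands have the same signedness.
```
using tls_key = long;                                   // stands in for __libcpp_tls_key
constexpr unsigned long out_of_indexes = 0xFFFFFFFFul;  // stands in for FLS_OUT_OF_INDEXES

bool key_is_unallocated(tls_key key) {
  return static_cast<unsigned long>(key) == out_of_indexes;
}
```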
Differential Revision: https://reviews.llvm.org/D49782
llvm-svn: 337946
Saves materializing the immediate for the "ands".
Corresponding patterns exist for lsrs+lsls, but that seems less common
in practice.
Now implemented as a DAGCombine.
Differential Revision: https://reviews.llvm.org/D49585
llvm-svn: 337945
as well as sext(C + x + ...) -> (D + sext(C-D + x + ...))<nuw><nsw>
similar to the equivalent transformation for zext's
if the top level addition in (D + (C-D + x * n)) could be proven to
not wrap, where the choice of D also maximizes the number of trailing
zeroes of (C-D + x * n), ensuring homogeneous behaviour of the
transformation and better canonicalization of such AddRec's
(indeed, there are 2^(2w) different expressions in `B1 + ext(B2 + Y)` form for
the same Y, but only 2^(2w - k) different expressions in the resulting `B3 +
ext((B4 * 2^k) + Y)` form, where w is the bit width of the integral type)
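For illustration, the choice of D can be written down as a tiny standalone helper (plain C++, not SCEV code; the name is made up): D keeps only the low K bits of C, where K is the number of trailing zero bits guaranteed for the non-constant part, so C - D has at least K trailing zeros and D can be moved out of the extension without affecting wrapping.
```
#include <cstdint>

// D = the low K bits of C; then C == D + (C - D) and (C - D) has at least
// K trailing zeros, matching the alignment of the rest of the sum.
static uint64_t lowBitsOf(uint64_t C, unsigned K) {
  if (K >= 64)
    return C;                              // every bit of C counts as "low"
  return C & ((uint64_t{1} << K) - 1);
}
```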
This patch generalizes sext(C1 + C2*X) --> sext(C1) + sext(C2*X) and
sext{C1,+,C2} --> sext(C1) + sext{0,+,C2} transformations added in
r209568 relaxing the requirements the following way:
1. C2 doesn't have to be a power of 2, it's enough if it's divisible by 2
a sufficient number of times;
2. C1 doesn't have to be less than C2, instead of extracting the entire
C1 we can split it into 2 terms: (00...0XXX + YY...Y000), keep the
second one that may cause wrapping within the extension operator, and
move the first one that doesn't affect wrapping out of the extension
operator, enabling further simplifications;
3. C1 and C2 don't have to be positive, splitting C1 like shown above
produces a sum that is guaranteed to not wrap, signed or unsigned;
4. in the AddExpr case there can be more than 2 terms; the 2nd and
   following terms of an AddExpr, and the Step component of an
   AddRecExpr, don't have to be of the C2*X form or constant
   (respectively), they just need to have enough trailing zeros,
   which in turn can be guaranteed by means other than arithmetic,
   e.g. by a pointer alignment;
5. the extension operator doesn't have to be a sext; the same
   transformation works and is profitable for zext's as well.
Apparently, optimizations like SLPVectorizer currently fail to
vectorize even rather trivial cases like the following:
double bar(double *a, unsigned n) {
  double x = 0.0;
  double y = 0.0;
  for (unsigned i = 0; i < n; i += 2) {
    x += a[i];
    y += a[i + 1];
  }
  return x * y;
}
If compiled with `clang -std=c11 -Wpedantic -Wall -O3 main.c -S -o - -emit-llvm`
(!{!"clang version 7.0.0 (trunk 337339) (llvm/trunk 337344)"})
it produces scalar code with the loop not unrolled when `n` and `i` are
unsigned (as shown above), but a vectorized and unrolled loop when they are
signed. With the changes made in this commit the unsigned version will be
vectorized as well (though not unrolled, for unclear reasons).
How it all works:
Let's say we have an AddExpr that looks like (C + x + y + ...), where C
is a constant and x, y, ... are arbitrary SCEVs. Let's compute the
minimum number of trailing zeroes guaranteed for that sum without the
constant term: (x + y + ...). If, for example, those terms look as
follows:
i
XXXX...X000
YYYY...YY00
...
ZZZZ...0000
then the rightmost non-guaranteed-zero bit (a potential one at the i-th
position above) can change the bits of the sum to the left of (and at)
the i-th position itself, but it cannot possibly change the bits to the
right. So we can compute the number of trailing zeroes by taking the
minimum over the numbers of trailing zeroes of the terms.
Now let's say that our original sum with the constant is effectively
just C + X, where X = x + y + .... Let's also say that we've got 2
guaranteed trailing zeros for X:
j
CCCC...CCCC
XXXX...XX00 // this is X = (x + y + ...)
Any bit of C to the left of j may in the end cause the C + X sum to
wrap, but the rightmost 2 bits of C (at positions j and j - 1) do not
affect wrapping in any way. If the upper bits cause a wrap, it will be
a wrap regardless of the values of the 2 least significant bits of C.
If the upper bits do not cause a wrap, it won't be a wrap regardless
of the values of the 2 bits on the right (again).
So let's split C into 2 constants as follows:
0000...00CC = D
CCCC...CC00 = (C - D)
and represent the whole sum as D + (C - D + X). The second term of
this new sum looks like this:
CCCC...CC00
XXXX...XX00
----------- // let's add them up
YYYY...YY00
The sum above (let's call it Y) may or may not wrap; we don't know,
so we need to keep it under a sext/zext. Adding D to that sum, though,
will never wrap, signed or unsigned, whether performed on the original bit
width or the extended one, because all that final add does is
set the 2 least significant bits of Y to the bits of D:
YYYY...YY00 = Y
0000...00CC = D
----------- <nuw><nsw>
YYYY...YYCC
Which means we can safely move that D out of the sext or zext and
claim that the top-level sum neither sign wraps nor unsigned wraps.
Let's run an example: say we're working in i8's and the original
expression (the zext's or sext's operand) is 21 + 12x + 8y. It goes
like this:
0001 0101 // 21
XXXX XX00 // 12x
YYYY Y000 // 8y
0001 0101 // 21
ZZZZ ZZ00 // 12x + 8y
0000 0001 // D
0001 0100 // 21 - D = 20
ZZZZ ZZ00 // 12x + 8y
0000 0001 // D
WWWW WW00 // 21 - D + 12x + 8y = 20 + 12x + 8y
therefore zext(21 + 12x + 8y) = (1 + zext(20 + 12x + 8y))<nuw><nsw>
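For illustration, a standalone brute-force check of this i8 example (plain C++, not part of the patch): for every x and y, zext(21 + 12x + 8y) equals 1 + zext(20 + 12x + 8y), and the final +1 never carries because the low 2 bits of the inner sum are zero.
```
#include <cassert>
#include <cstdint>

int main() {
  for (int x = 0; x < 256; ++x) {
    for (int y = 0; y < 256; ++y) {
      uint8_t full  = static_cast<uint8_t>(21 + 12 * x + 8 * y); // the i8 value
      uint8_t split = static_cast<uint8_t>(20 + 12 * x + 8 * y); // low 2 bits are 00
      uint16_t lhs = full;                                // zext to i16
      uint16_t rhs = static_cast<uint16_t>(split) + 1;    // D = 1 moved outside
      assert(lhs == rhs);
    }
  }
  return 0;
}
```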
This approach could be improved if we move away from using trailing
zeroes and use KnownBits instead. For instance, with KnownBits we could
have the following picture:
i
10 1110...0011 // this is C
XX X1XX...XX00 // this is X = (x + y + ...)
Notice that some of the bits of X are known ones; also notice that the
known bits of X are interspersed with unknown bits and not grouped on
the right or left.
We can see at the position i that C(i) and X(i) are both known ones,
therefore the (i + 1)th carry bit is guaranteed to be 1 regardless of
the bits of C to the right of i. For instance, the C(i - 1) bit only
affects the bits of the sum at positions i - 1 and i, and does not
influence if the sum is going to wrap or not. Therefore we could split
the constant C the following way:
i
00 0010...0011 = D
10 1100...0000 = (C - D)
Let's compute the KnownBits of (C - D) + X:
XX1 1 = carry bit, blanks stand for known zeroes
10 1100...0000 = (C - D)
XX X1XX...XX00 = X
--- -----------
XX X0XX...XX00
Whether this add wraps or not essentially depends on the bits of X. Adding D
to this sum, however, is guaranteed not to wrap:
0 X
00 0010...0011 = D
sX X0XX...XX00 = (C - D) + X
--- -----------
sX XXXX XX11
As can be seen above, adding D preserves the sign bit of (C - D) +
X, if any, and has a guaranteed 0 carry out, as expected.
The more bits of (C - D) we constrain, the better the transformations
introduced here canonicalize expressions, as it leaves less freedom for
what values the constant part of ((C - D) + x + y + ...) can take.
Reviewed By: mzolotukhin, efriedma
Differential Revision: https://reviews.llvm.org/D48853
llvm-svn: 337943
Summary:
The VS compiler (on Windows) has a bug which results in fieldFromInstruction being optimized out in some circumstances. This only happens in *release no debug info* builds that have assertions *turned off* - in all other situations the function is not inlined, so the functionality is correct. All of the bots have assertions turned on, so this path is not regularly tested. The workaround is to not inline the function on Windows - if the bug is fixed in a later release of the VS compiler, the noinline specification can be removed.
The test that consistently reproduces this is the Lanai v11.txt test.
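For illustration, the shape of the workaround (the guard macro and the exact signature below are illustrative, not the generated decoder code): keep the accessor out of line when building with the VS compiler.
```
#include <cstdint>

#if defined(_MSC_VER) && !defined(__clang__)
#define DECODER_NOINLINE __declspec(noinline)
#else
#define DECODER_NOINLINE
#endif

// Extract NumBits bits starting at StartBit from an encoded instruction.
DECODER_NOINLINE
static uint64_t fieldFromInstruction(uint64_t Insn, unsigned StartBit,
                                     unsigned NumBits) {
  uint64_t Mask = (NumBits == 64) ? ~0ULL : ((1ULL << NumBits) - 1);
  return (Insn >> StartBit) & Mask;
}
```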
Reviewers: asmith, labath, zturner
Subscribers: dblaikie, stella.stamenova, aprantl, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D49753
llvm-svn: 337942
the children.
Special internal helper expressions/statements for the OpenMP directives
should not be exposed as children; only the main substatement must be
represented as the child.
llvm-svn: 337941
When VectorLegalizer::LegalizeOp creates a new SDValue after iterating
over its arguments, we need to refer to the same result number of the
new node that the original value used.
Reviewed by: cameron.mcinally
Differential Revision: https://reviews.llvm.org/D49805
llvm-svn: 337939
Currently ComputeNumSignBits does early exit while processing some
of the operations (add, sub, mul, and select). This prevents the
function from using AssumptionCacheTracker if passed.
Differential Revision: https://reviews.llvm.org/D49759
llvm-svn: 337936
For example v = <2 x i1> is represented as bbbbaaaa in a predicate register,
where b = v[1], a = v[0]. Extracting v[1] is equivalent to extracting bit 4
from the predicate register.
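For illustration, the extraction written out (plain C++, following the bbbbaaaa layout described above; not the actual lowering code): element i of the two-element predicate lives at bit 4*i.
```
#include <cstdint>

static inline unsigned extract_i1(uint8_t pred, unsigned i) {
  return (pred >> (4 * i)) & 1u;  // v[0] -> bit 0, v[1] -> bit 4
}
```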
llvm-svn: 337934
This recommits r337910 after fixing an "ambiguous call to addAttribute"
error with some compilers (gcc circa 4.9 and MSVC). It seems that these
compilers will consider a "false -> pointer" conversion during overload
resolution. This creates ambiguity because I added an overload which
takes an MCExpr * as an argument.
I fix this by making the new overload take MCExpr&, which avoids the
conversion. It also documents the fact that we expect a valid MCExpr
object.
Original commit message follows:
The motivation for this is D49493, where we'd like to test details of
debug_str_offsets behavior which is difficult to trigger from a
traditional test.
This adds the plumbing necessary for dwarfgen to generate this section.
The more interesting changes are:
- I've moved emitStringOffsetsTableHeader function from DwarfFile to
DwarfStringPool, so I can generate the section header more easily from
the unit test.
- added a new addAttribute overload taking an MCExpr*. This is used to
generate the DW_AT_str_offsets_base, which links a compile unit to the
offset table.
I've also added a basic test for reading and writing DW_FORM_strx forms.
Reviewers: dblaikie, JDevlieghere, probinson
Subscribers: llvm-commits, aprantl
Differential Revision: https://reviews.llvm.org/D49670
llvm-svn: 337933
Summary:
Replace the existing combination of FastDemangle and the fallback to llvm::itaniumDemangle() with LLVM's new ItaniumPartialDemangler. It slightly reduces complexity and slightly improves performance, but doesn't introduce conceptual changes. This patch is preparing for more fundamental improvements on LLDB's demangling approach.
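For illustration, the sort of direct use this enables (a minimal sketch, not LLDB's actual Mangled/demangler plumbing; the mangled name is made up):
```
#include "llvm/Demangle/Demangle.h"
#include <cstdio>
#include <cstdlib>

int main() {
  llvm::ItaniumPartialDemangler IPD;
  const char *Mangled = "_ZN4llvm3fooEv";  // illustrative mangled name
  if (IPD.partialDemangle(Mangled))        // returns true on failure
    return 1;
  char *Demangled = IPD.finishDemangle(nullptr, nullptr);
  std::printf("%s\n", Demangled);
  std::free(Demangled);
  return 0;
}
```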
Reviewers: friss, jingham, erik.pilkington, labath, clayborg, mgorny, davide, JDevlieghere
Reviewed By: JDevlieghere
Subscribers: teemperor, JDevlieghere, labath, clayborg, davide, lldb-commits, mgorny, erik.pilkington
Differential Revision: https://reviews.llvm.org/D49612
llvm-svn: 337931
Initially, in https://reviews.llvm.org/D44890, I had these defined as
empty functions inside the header when the respective event listener
was not built in. As done in that commit, that wasn't correct, because
it was an ODR violation. Krasimir hot-fixed that in r333265, but that
wasn't quite right either, because it'd lead to the symbol not being
available.
Instead, just move the fallbacks to ExecutionEngineBindings.cpp. We could
define them as static inlines in the header too, but I don't think it
matters.
Reviewers: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D49654
llvm-svn: 337930
By default, XRay filters out events that take less than 5us from its log.
In this existing test, should printf complete very quickly, this will
lead to test-critical function calls being filtered out (i.e. print_parent_tid).
Given that we're not testing the filtering feature, disable it for this
test.
llvm-svn: 337929
This reverts commit r337910 as it's generating "ambiguous call to
addAttribute" errors on some bots.
Will resubmit once I get a chance to look into the problem.
llvm-svn: 337924
Summary:
The macro was manually expanded in the Z3 backend and this patch adds it back.
Adding the expanded code is dangerous as the macro may change in the future and the expanded code might be left outdated.
Reviewers: NoQ, george.karpenkov
Reviewed By: george.karpenkov
Subscribers: xazax.hun, szepet, a.sidorin
Differential Revision: https://reviews.llvm.org/D49769
llvm-svn: 337923
Summary:
Third patch in the refactoring series, to decouple the SMT Solver from the Refutation Manager (1st: D49668, 2nd: D49767).
The refutation API in the `SMTConstraintManager` was a hack to allow us to create an SMT solver and verify the constraints; it was conceptually wrong from the start. Now, we don't actually need to use the `SMTConstraintManager` and can create an SMT object directly, add the constraints and check them.
While updating the Falsification visitor, I inlined the two functions that were used to collect the constraints and add them to the solver.
As a result of this patch, we could move the SMT API elsewhere, as it's not really dependent on the CSA anymore. Maybe we can create a new dir (utils/smt) for Z3 and future solvers?
Reviewers: NoQ, george.karpenkov
Reviewed By: george.karpenkov
Subscribers: xazax.hun, szepet, a.sidorin
Differential Revision: https://reviews.llvm.org/D49768
llvm-svn: 337922
Summary:
This is the second part of D49668, and moves all the code that's not specific to a ConstraintManager to SMTSolver.
No functional change intended.
Reviewers: NoQ, george.karpenkov
Reviewed By: george.karpenkov
Subscribers: xazax.hun, szepet, a.sidorin
Differential Revision: https://reviews.llvm.org/D49767
llvm-svn: 337921
Summary:
This patch changes how the SMT bug refutation runs in an equivalent bug report class.
Now, all other visitors are executed until they find a valid bug or mark all bugs as invalid. When a valid bug is found (and crosscheck is enabled), the SMT refutation checks the satisfiability of this single bug.
If the bug is still valid after checking with Z3, it is returned and a bug report is created. If the bug is found to be invalid, the next bug report in the equivalent class goes through the same process, until we find a valid bug or all bugs are marked as invalid.
Massive speedups when verifying redis/src/rax.c, from 1500s to 10s.
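For illustration, the flow in sketch form (generic names, not the actual BugReporter classes; the two helpers are hypothetical stand-ins for existing machinery):
```
#include <vector>

struct Report {};                 // stand-in for a bug report

// Stand-ins for existing machinery (hypothetical names).
bool runVisitors(Report &R);      // false if some visitor invalidates R
bool refuteWithSMT(Report &R);    // true if Z3 proves the path infeasible

const Report *findValidReport(const std::vector<Report *> &EquivClass) {
  for (Report *R : EquivClass) {
    if (!runVisitors(*R))
      continue;                   // invalidated by the ordinary visitors
    if (refuteWithSMT(*R))
      continue;                   // crosscheck failed, try the next report
    return R;                     // first report that survives both checks
  }
  return nullptr;                 // the whole equivalence class is invalid
}
```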
Reviewers: NoQ, george.karpenkov
Reviewed By: george.karpenkov
Subscribers: xazax.hun, szepet, a.sidorin
Differential Revision: https://reviews.llvm.org/D49693
llvm-svn: 337920