Flags are clang's default UI.
We can have an env var in addition to that, but in D69825 nobody has yet
mentioned why this needs an env var, so omit it for now. If someone
needs to set the flag via env var, the existing CCC_OVERRIDE_OPTIONS
mechanism works for it (set CCC_OVERRIDE_OPTIONS=+-fno-integrated-cc1
for example).
Also mention the cc1-in-process change in the release notes.
Also spruce up the test a bit so it actually tests something :)
Differential Revision: https://reviews.llvm.org/D72769
D39317 made clang use .init_array when no gcc installation is found.
This change switches all gcc installations to use .init_array as well.
GCC 4.7 by default stopped providing .ctors/.dtors-compatible crt files
and stopped emitting .ctors entries for __attribute__((constructor)).
.init_array should always work.
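For illustration, a minimal sketch (not from the patch) of the construct this affects; with this change, the constructor below is registered via an .init_array entry on all gcc installations instead of a .ctors entry:
```
// Sketch: __attribute__((constructor)) now lowers to an .init_array entry,
// matching what GCC >= 4.7 itself emits and what its crt files process.
#include <cstdio>

__attribute__((constructor)) static void setup() {
  std::puts("runs before main");
}

int main() { std::puts("main"); }
```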
FreeBSD rules are moved to FreeBSD.cpp to make Generic_ELF rules clean.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D71434
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left;
the two major ones are increment/decrement (this patch) and bit fields.
As discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
`BinaryOperator` (for which we always have promotion/demotion in the AST),
or parts of `UnaryOperator` (we have promotion/demotion, but only for
certain operations), for inc/dec clang omits promotion/demotion
altogether, under the as-if rule.
This is technically correct: https://rise4fun.com/Alive/zPgD
As can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)
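For concreteness, a minimal sketch (not from this patch) of an increment whose demotion is lossy, and which `-fsanitize=implicit-conversion` should now be able to flag:
```
// Conceptually the increment is c = (unsigned char)((int)c + 1); the
// demotion of 256 back to unsigned char is lossy. Previously inc/dec
// skipped the promotion/demotion modelling entirely, so the sanitizer
// had nothing to hook into.
#include <cstdio>

int main() {
  unsigned char c = 255;
  c++;                     // 255 + 1 == 256, truncated to 0 on the store
  std::printf("%d\n", c);  // prints 0
}
```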
But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]
Now, I'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means the sanitization-less codegen must not be altered.
It was also suggested not to change the sanitized codegen
to the one with demotion, but I quite strongly believe
that would not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
(i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
Which kinda defeats the point of
using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
we'd have two `add`s with basically the same arguments,
one of which is used for the check+error-less codepath and the other one
for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
Which means we'll need a whole new handler.
But then what about `CompoundAssignOperator`?
It would also be applicable there.
So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
(due to partial register stalls), so even if we avoid promotion+demotion,
the computations will //likely// be performed in `i32` anyway.
So I'm not really seeing much benefit in
not doing the straightforward thing.
While looking into this, I noticed a few more missed LLVM middle-end
canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].
Those are not specific to inc/dec; we also have them for
`CompoundAssignOperator`, and they can happen for normal arithmetic, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.
TLDR: the front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is the middle-end's job to make it optimal.
I'm really hoping reviewers agree with my personal assessment
of the path this patch should take.
This originally landed in 9872ea4ed1
but was immediately reverted in cbfa237892
because the assertion was faulty. That fault turned out to be caused
by the enum case: while there will be promotion, both types are unsigned
and have the same width, so we still don't need to sanitize non-signed cases.
So far. Maybe the assert will tell us this isn't so.
Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].
Refs. https://github.com/google/sanitizers/issues/940
Reviewers: rjmccall, erichkeane, rsmith, vsk
Reviewed By: erichkeane
Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr
Tags: #llvm, #clang, #sanitizers
Differential Revision: https://reviews.llvm.org/D70539
The assertion that was added does not hold;
it breaks on test-suite/MultiSource/Applications/SPASS/analyze.c.
Will reduce the testcase and revisit.
This reverts commits 9872ea4ed1 and 870f3542d3.
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left;
the two major ones are increment/decrement (this patch) and bit fields.
As discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
`BinaryOperator` (for which we always have promotion/demotion in the AST),
or parts of `UnaryOperator` (we have promotion/demotion, but only for
certain operations), for inc/dec clang omits promotion/demotion
altogether, under the as-if rule.
This is technically correct: https://rise4fun.com/Alive/zPgD
As can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)
But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]
Now, I'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means the sanitization-less codegen must not be altered.
It was also suggested not to change the sanitized codegen
to the one with demotion, but I quite strongly believe
that would not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
(i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
Which kinda defeats the point of
using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
we'd have two `add`s with basically the same arguments,
one of which is used for the check+error-less codepath and the other one
for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
Which means we'll need a whole new handler.
But then what about `CompoundAssignOperator`?
It would also be applicable there.
So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
(due to partial register stalls), so even if we avoid promotion+demotion,
the computations will //likely// be performed in `i32` anyway.
So I'm not really seeing much benefit in
not doing the straightforward thing.
While looking into this, I noticed a few more missed LLVM middle-end
canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].
Those are not specific to inc/dec; we also have them for
`CompoundAssignOperator`, and they can happen for normal arithmetic, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.
TLDR: the front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is the middle-end's job to make it optimal.
I'm really hoping reviewers agree with my personal assessment
of the path this patch should take.
Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].
Reviewers: rjmccall, erichkeane, rsmith, vsk
Reviewed By: erichkeane
Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr
Tags: #llvm, #clang, #sanitizers
Differential Revision: https://reviews.llvm.org/D70539
For RISC-V the value provided to -march should determine whether to
compile for 32- or 64-bit RISC-V irrespective of the target provided to
the Clang driver. This adds a test for this flag for RISC-V and sets the
target architecture correctly in these cases.
Differential Revision: https://reviews.llvm.org/D54214
Summary:
Clang/LLVM is a cross-compiler, and so we don't have to make a choice
about `-march`/`-mabi` at build-time, but we may have to compute a
default `-march`/`-mabi` when compiling a program. Until now, each
place that has needed a default `-march` has calculated one itself.
This patch adds a single place where a default `-march` is calculated,
in order to avoid calculating different defaults in different places.
This patch adds a new function, `riscv::getRISCVArch`, which encapsulates
the logic for computing a default `-march` value when none is provided,
based on GCC's. This patch also updates the logic in
`riscv::getRISCVABI` to match the logic in GCC's build system for
computing a default `-mabi`.
This patch also updates anywhere that `-march` is used to now use the
new function which can compute a default. In particular, we now
explicitly pass a `-march` value down to the gnu assembler.
GCC has convoluted logic in its build system to choose a default
`-march`/`-mabi` based on build options, which would be good to match.
This patch is based on the logic in GCC 9.2.0. This commit's logic is
different to GCC's only for baremetal targets, where we default
to rv32imac/ilp32 or rv64imac/lp64 depending on the target triple.
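As a rough sketch (illustrative names only, not the actual driver code), the baremetal defaulting rule described above amounts to:
```
// Illustrative only: pick a default -march/-mabi for baremetal RISC-V
// targets from the width of the target triple.
#include <string>

struct RISCVDefaults { std::string March, MABI; };

RISCVDefaults baremetalDefaults(bool Is64Bit) {
  return Is64Bit ? RISCVDefaults{"rv64imac", "lp64"}
                 : RISCVDefaults{"rv32imac", "ilp32"};
}
```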
Tests have been updated to match the new logic.
Reviewers: asb, luismarques, rogfer01, kito-cheng, khchen
Reviewed By: asb, luismarques
Subscribers: sameer.abuasal, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69383
Summary:
With an additional regex match, grouping of the main include can be enabled in files that are not normally considered C/C++ source code.
For example, this might be useful in templated code, where template implementations are kept in *Impl.hpp files.
While at it, the 'assume-filename' option description was reworded, as it was misleading: it has nothing to do with the `style=file` option and does not influence the sourced style filename.
Reviewers: rsmith, ioeric, krasimir, sylvestre.ledru, MyDeveloperDay
Reviewed By: MyDeveloperDay
Subscribers: MyDeveloperDay, cfe-commits
Patch by: furdyna
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67750
-mvzeroupper will force the vzeroupper insertion pass to run on
CPUs that normally wouldn't. -mno-vzeroupper disables it on CPUs
where it normally runs.
To support this with the default feature handling in clang, we
need a vzeroupper feature flag in X86.td. Since this flag has
the opposite polarity of the fast-partial-ymm-or-zmm-write feature we
used to use to disable the pass, we now need to add this new
flag to every CPU except KNL/KNM and BTVER2 to keep identical
behavior.
Remove -fast-partial-ymm-or-zmm-write which is no longer used.
Differential Revision: https://reviews.llvm.org/D69786
Taking a value and bitwise-or'ing it with a non-zero constant will always
result in a non-zero value. In a boolean context, this is always true.
if (x | 0x4) {} // always true, intended '&'
This patch creates a new warning group -Wtautological-bitwise-compare for this
warning. It also moves the existing tautological bitwise comparisons into
this group. A few other changes were needed to the CFGBuilder so that all bool
contexts would be checked. The warnings in -Wtautological-bitwise-compare will
be off by default due to using the CFG.
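A few illustrative examples (assumed, not taken from the tests) of the patterns this group is meant to flag:
```
// All three conditions are decided by the constants alone, independently
// of x, so each one is a likely bug.
void f(int x) {
  if (x | 0x4) { }          // always true; '&' was probably intended
  if ((x | 0x4) == 0) { }   // always false
  if ((x & 0x4) == 0x5) { } // always false: 0x5 has a bit outside the mask
}
```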
Fixes: https://bugs.llvm.org/show_bug.cgi?id=42666
Differential Revision: https://reviews.llvm.org/D66046
llvm-svn: 375318
I noticed that compiling on Windows with -fno-ms-compatibility had the
side effect of defining __GNUC__, along with __GNUG__, __GXX_RTTI, and
a number of other macros for GCC compatibility. This is undesirable and
causes Chromium to do things like mix __attribute__ and __declspec,
which doesn't work. We should have a positive language option to enable
GCC compatibility features so that we can experiment with
-fno-ms-compatibility on Windows. This change adds -fgnuc-version= to be
that option.
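For context, a sketch (with a hypothetical macro name) of the kind of code that goes wrong when __GNUC__ is defined alongside _MSC_VER:
```
// Hypothetical export macro: code like this keys GCC-only constructs off
// __GNUC__ and MSVC-only ones off _MSC_VER, so defining __GNUC__ on
// Windows can end up mixing __attribute__ and __declspec.
#if defined(__GNUC__)
#define MYLIB_EXPORT __attribute__((visibility("default")))
#elif defined(_MSC_VER)
#define MYLIB_EXPORT __declspec(dllexport)
#else
#define MYLIB_EXPORT
#endif

MYLIB_EXPORT void do_work();
```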
My issue aside, users have, for a long time, reported that __GNUC__
doesn't match their expectations in one way or another. We have
encouraged users to migrate code away from this macro, but new code
continues to be written assuming a GCC-only environment. There's really
nothing we can do to stop that. By adding this flag, we can allow them
to choose their own adventure with __GNUC__.
This overlaps a bit with the "GNUMode" language option from -std=gnu*.
The gnu language mode tends to enable non-conforming behaviors that we'd
rather not enable by default, but we do want to set things like
__GXX_RTTI by default, so I've kept these separate.
Helps address PR42817
Reviewed By: hans, nickdesaulniers, MaskRay
Differential Revision: https://reviews.llvm.org/D68055
llvm-svn: 374449
Summary:
Quote from http://eel.is/c++draft/expr.add#4:
```
4 When an expression J that has integral type is added to or subtracted
from an expression P of pointer type, the result has the type of P.
(4.1) If P evaluates to a null pointer value and J evaluates to 0,
the result is a null pointer value.
(4.2) Otherwise, if P points to an array element i of an array object x with n
elements ([dcl.array]), the expressions P + J and J + P
(where J has the value j) point to the (possibly-hypothetical) array
element i+j of x if 0≤i+j≤n and the expression P - J points to the
(possibly-hypothetical) array element i−j of x if 0≤i−j≤n.
(4.3) Otherwise, the behavior is undefined.
```
Therefore, as per the standard, applying a non-zero offset to `nullptr`
(or making a non-`nullptr` a `nullptr` by subtracting the pointer's integral value
from the pointer itself) is undefined behavior (*if* the null pointer is not valid,
i.e. e.g. `-fno-delete-null-pointer-checks` was *not* specified).
To make things more fun, in C (6.5.6p8) applying *any* offset to a null pointer
is undefined, although the Clang front-end pessimizes the code by not lowering
that info, so this UB is "harmless".
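Two hedged illustrations (not from the patch) of the patterns in question:
```
// UB per [expr.add]: offsetting a null pointer by a non-zero amount, and
// turning a non-null pointer into a null one via pointer arithmetic.
#include <cstdint>

int *null_plus_offset(int *p, long j) {
  return p + j;                                  // UB if p is null and j != 0
}

char *make_null(char *p) {
  return p - reinterpret_cast<std::intptr_t>(p); // non-null becomes null: UB
}
```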
Since rL369789 (D66608 `[InstCombine] icmp eq/ne (gep inbounds P, Idx..), null -> icmp eq/ne P, null`)
LLVM middle-end uses those guarantees for transformations.
If the source contains such UB, said code may now be miscompiled.
Such miscompilations were already observed:
* https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190826/687838.html
* https://github.com/google/filament/pull/1566
Surprisingly, UBSan does not catch those issues
... until now. This diff teaches UBSan about these UBs.
`getelementpointer inbounds` is a pretty frequent instruction,
so this does have a measurable impact on performance;
I've addressed most of the obvious missing folds (and thus decreased the performance impact by ~5%),
and then re-performed some performance measurements using my [[ https://github.com/darktable-org/rawspeed | RawSpeed ]] benchmark:
(all measurements done with LLVM ToT, the sanitizer never fired.)
* no sanitization vs. existing check: average `+21.62%` slowdown
* existing check vs. check after this patch: average `22.04%` slowdown
* no sanitization vs. this patch: average `48.42%` slowdown
Reviewers: vsk, filcab, rsmith, aaron.ballman, vitalybuka, rjmccall, #sanitizers
Reviewed By: rsmith
Subscribers: kristof.beyls, nickdesaulniers, nikic, ychen, dtzWill, xbolva00, dberris, arphaman, rupprecht, reames, regehr, llvm-commits, cfe-commits
Tags: #clang, #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D67122
llvm-svn: 374293
This matches how GCC handles it, see e.g. https://gcc.godbolt.org/z/HPplnl.
GCC documents the gnu_inline attribute with "In C++, this attribute does
not depend on extern in any way, but it still requires the inline keyword
to enable its special behavior."
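A minimal sketch (assumed, not from the patch) of what now matches GCC in C++: the attribute takes effect with just the inline keyword, no extern required.
```
// With gnu_inline + inline, this definition gets GNU89 "extern inline"
// semantics: it is available for inlining, but no out-of-line definition
// is emitted in this translation unit.
__attribute__((gnu_inline)) inline int twice(int x) { return 2 * x; }

// If a call is not inlined, an out-of-line definition of twice() must be
// provided by some other translation unit.
int use(int x) { return twice(x); }
```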
The previous behaviour of gnu_inline in C++, without the extern
keyword, can be traced back to the original commit that added
support for gnu_inline, SVN r69045.
Differential Revision: https://reviews.llvm.org/D67414
llvm-svn: 373078
-Wtautological-overlap-compare and the self-comparison check from -Wtautological-compare
rely on detecting the same operand in different locations. Previously, each
warning had its own operand checker. Now both are merged into
one function that each can call. The function also now looks through member
accesses and array accesses.
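Assumed examples of self-comparisons the merged checker can now recognize through member and array accesses:
```
struct S { int a[4]; int b; };

bool checks(const S &s, int i) {
  return s.b == s.b        // self-comparison through a member access
      || s.a[i] < s.a[i];  // self-comparison through an array access
}
```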
Differential Revision: https://reviews.llvm.org/D66045
llvm-svn: 372453
Allow this warning to detect a larger number of constant values, including
negative numbers, and handle non-int types better.
Differential Revision: https://reviews.llvm.org/D66044
llvm-svn: 372448
AVX512 instructions can cause a frequency drop on these CPUs. This
can negate the performance gains from using wider vectors. Enabling
prefer-vector-width=256 will prevent generation of zmm registers
unless explicit 512 bit operations are used in the original source
code.
I believe gcc and icc both do something similar to this by default.
Differential Revision: https://reviews.llvm.org/D67259
llvm-svn: 371694
As far as I can tell, gcc passes 256/512-bit vectors of __int128 in memory, and passes a vector of 1 __int128 in an xmm register. The backend considers <X x i128> an illegal type and will scalarize any arguments with that type. So we need to coerce the argument types in the frontend to match, to avoid the illegal type.
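For illustration (the typedef names are made up), the argument types in question look like this:
```
// A 256-bit vector of two __int128 is now passed in memory to match GCC,
// while a 16-byte vector of one __int128 is passed in an XMM register.
typedef __int128 v2i128 __attribute__((vector_size(32)));
typedef __int128 v1i128 __attribute__((vector_size(16)));

__int128 sum(v2i128 a, v1i128 b) { return a[0] + a[1] + b[0]; }
```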
I'm restricting this change to Linux and NetBSD based on
how similar ABI changes have been handled in the past.
PS4, FreeBSD, and Darwin are unaffected. I've also added a
new -fclang-abi-compat version to restore the old behavior.
This issue was identified in PR42607. Though even with the types changed, we still seem to be doing some unnecessary stack realignment.
llvm-svn: 371169
-Deprecate -mmpx and -mno-mpx command line options
-Remove CPUID detection of mpx for -march=native
-Remove MPX from all CPUs
-Remove MPX preprocessor define
I've left the "mpx" string in the backend so we don't fail on old IR, but it's not connected to anything.
gcc has also deprecated these command line options. https://www.phoronix.com/scan.php?page=news_item&px=GCC-Patch-To-Drop-MPX
Differential Revision: https://reviews.llvm.org/D66669
llvm-svn: 370393
This broke compiling some ASan tests with newer versions of MSVC/the Win
SDK, see https://crbug.com/996675
> MSVC 2017 update 3 (_MSC_VER 1911) enables /Zc:twoPhase by default, and
> so should clang-cl:
> https://docs.microsoft.com/en-us/cpp/build/reference/zc-twophase
>
> clang-cl takes the MSVC version it emulates from the -fmsc-version flag,
> or if that's not passed it tries to check what the installed version of
> MSVC is and uses that, and failing that it uses a default version that's
> currently 1911. So this changes the default if no -fmsc-version flag is
> passed and no installed MSVC is detected. (It also changes the default
> if -fmsc-version is passed or MSVC is detected, and either indicates
> _MSC_VER >= 1911.)
>
> As mentioned in the MSDN article, the Windows SDK header files in
> version 10.0.15063.0 (Creators Update or Redstone 2) and earlier
> versions do not work correctly with /Zc:twoPhase. If you need to use
> these old SDKs with a new clang-cl, explicitly pass /Zc:twoPhase- to get
> the old behavior.
>
> Fixes PR43032.
>
> Differential Revision: https://reviews.llvm.org/D66394
llvm-svn: 369647
MSVC 2017 update 3 (_MSC_VER 1911) enables /Zc:twoPhase by default, and
so should clang-cl:
https://docs.microsoft.com/en-us/cpp/build/reference/zc-twophase
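For reference, a small sketch (not from the patch) of what two-phase lookup changes: non-dependent names used inside a template must be visible at the template's definition, not just at the point of instantiation.
```
void helper(int) {}              // must be visible before the template
                                 // under two-phase lookup

template <typename T>
void run(T value) {
  helper(42);                    // non-dependent: bound at definition time
                                 // under /Zc:twoPhase; old MSVC deferred
                                 // this lookup to instantiation
  helper(value);                 // dependent: bound at instantiation
}

template void run<int>(int);
```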
clang-cl takes the MSVC version it emulates from the -fmsc-version flag,
or if that's not passed it tries to check what the installed version of
MSVC is and uses that, and failing that it uses a default version that's
currently 1911. So this changes the default if no -fmsc-version flag is
passed and no installed MSVC is detected. (It also changes the default
if -fmsc-version is passed or MSVC is detected, and either indicates
_MSC_VER >= 1911.)
As mentioned in the MSDN article, the Windows SDK header files in
version 10.0.15063.0 (Creators Update or Redstone 2) and earlier
versions do not work correctly with /Zc:twoPhase. If you need to use
these old SDKs with a new clang-cl, explicitly pass /Zc:twoPhase- to get
the old behavior.
Fixes PR43032.
Differential Revision: https://reviews.llvm.org/D66394
llvm-svn: 369402
Some targets such as Python 2.7.16 still use VERSION in
their builds. Without VERSION defined, the source code
has syntax errors.
Reverting as it will probably break many other things.
Noticed by Sterling Augustine
llvm-svn: 365992
Summary:
It was introduced in 2011 for gcc compat:
ad1a4c6e89
It is probably time to remove it.
Reviewers: rnk, dexonsmith
Reviewed By: rnk
Subscribers: dschuff, aheejin, fedor.sergeev, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64062
llvm-svn: 365962
This affects users of older (pre-2.26) binutils in such a way that they can't necessarily
work around it, as those versions don't support the compress option on the command line. Reverting
to unblock them; we can revisit whether to make this change now or fix how we want
to express the option.
This reverts commit bdb21337e6e1732c9895966449c33c408336d295/r360403.
llvm-svn: 360703
Since July 15, 2015 (binutils-gdb commit
19a7fe52ae3d0971e67a134bcb1648899e21ae1c, included in 2.26), gas
--compress-debug-sections=zlib (gcc -gz) means zlib-gabi:
SHF_COMPRESSED. Before that it meant zlib-gnu (.zdebug).
clang's -gz was introduced in rC306115 (Jun 2017) to indicate zlib-gnu. It
is 2019 now, and it is not unreasonable to assume that users of the new
feature have new linkers (ld.bfd/gold >= 2.26, lld >= rLLD273661).
Change clang's default accordingly to improve standard conformance.
zlib-gnu has fallen out of fashion and gets poorer toolchain support.
Its mangled section names confuse tools and are more likely to cause problems.
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D61689
llvm-svn: 360403
Summary:
This revision adds basic support for formatting C# files with clang-format. I know the barrier to entry is high here, so I'm sending this revision in to test the water as to whether this might be something we'd consider landing.
Tracking in Bugzilla as:
https://bugs.llvm.org/show_bug.cgi?id=40850
Justification:
C# code just looks ugly in comparison to the C++ code in our source tree, which is clang-formatted.
I've struggled with Visual Studio reformatting to get a clean and consistent style. I want to format our C# code on save, like I do now for C++, and I want it to have the same style as defined in our .clang-format file, so it is as consistent as it can be with C++ (braces/breaking/spaces/indent, etc.).
Using clang-format without this patch leaves the code in a bad state; sometimes, when BreakStringLiterals is set, it fails to compile.
Mostly the C# support is similar to Java, except that instead of JavaAnnotations I try to reuse TT_AttributeSquare.
Perhaps the most valuable portion is having a new Language in order to partition the configuration for C# within a common .clang-format file, with auto-detection on the .cs extension. But there are other C#-specific styles that could be added later if this is accepted, in particular how `{ set;get }` is formatted.
Reviewers: djasper, klimek, krasimir, benhamilton, JonasToth
Reviewed By: klimek
Subscribers: llvm-commits, mgorny, jdoerfert, cfe-commits
Tags: #clang, #clang-tools-extra
Differential Revision: https://reviews.llvm.org/D58404
llvm-svn: 356662