In the standard C library, both rint and nearbyint return the result of
rounding in the current rounding mode, but nearbyint never raises the
inexact exception.
On PowerPC, x(v|s)r(d|s)pic may modify FPSCR XX, raising the inexact
exception, so we can't select constrained fnearbyint into xvrdpic.
The one instruction that behaves differently is xsrqpi, which does not
raise the inexact exception, so fnearbyint f128 is okay here.
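For illustration, a minimal sketch (not from the patch) of the user-visible
difference that forbids the xvrdpic selection; it assumes strict FP-exception
semantics (e.g. building with -ffp-exception-behavior=strict):
```
#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main() {
  feclearexcept(FE_INEXACT);
  (void)nearbyint(1.5);  // must leave FE_INEXACT untouched
  printf("after nearbyint: %d\n", fetestexcept(FE_INEXACT) != 0); // expected 0
  (void)rint(1.5);       // may raise FE_INEXACT
  printf("after rint:      %d\n", fetestexcept(FE_INEXACT) != 0); // expected 1
}
```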
Reviewed By: uweigand
Differential Revision: https://reviews.llvm.org/D87220
Extract a simple helper that checks whether a `RecordDecl` is a `CFError` Decl.
This is a simple refactoring to prepare for an upcoming change. NFC.
Patch is extracted from
8afaf3aad2.
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In such a case:
```
struct S1 {
  short a : 1;
};
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas currently the compiler only loads
the minimum required width (char in this case).
However, as discussed in D16586,
widening the access could overwrite adjacent non-volatile members,
which conflicts with the C and C++ object models by creating
data races on memory locations that are not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int b : 16;
};
```
Accessing `S2.b` would also access `S2.a`.
The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 Data Types, page 36, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
The note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```
Patch D16586 was updated to follow this behaviour by verifying that we
only widen a volatile bit-field access when:
- it does not overlap with any non-bit-field member;
- the access stays within the bounds of the record;
- it does not overlap any zero-length bit-field.
Preserving the number of memory accesses will be implemented
separately in D67399.
Differential Revision: https://reviews.llvm.org/D72932
The following people contributed to this patch:
- Diogo Sampaio
- Ties Stuij
In some situations a shift can be mistaken for a template and is thus formatted as one. Adding a couple of extra checks to ensure that the expression does not contain a template, and is in fact a bit shift, solves this problem.
This is a fix for [[ https://bugs.llvm.org/show_bug.cgi?id=46969 | bug 46969 ]]
Reviewed By: MyDeveloperDay
Patch By: Saldivarcher
Differential Revision: https://reviews.llvm.org/D86581
Change the capitalization of some names to follow the LLVM naming rules.
Rename some variables to make them more descriptive.
Rework similar bug reports into one common function.
Prepare the code for the next patches to reduce unrelated changes.
Differential Revision: https://reviews.llvm.org/D87138
LLVM has bumped the minimum required CMake version to 3.13.4, so this has become dead code.
Reviewed By: #libc, ldionne
Differential Revision: https://reviews.llvm.org/D87189
Discussed with @craig.topper and @spatel - this is to try and tidy up the codegen folder and move the x86-specific tests (as opposed to general tests that just happen to use x86 triples) into subfolders. It's up to other targets whether they follow suit.
It also helps speed up test iteration, as using wildcards in lit commands often misses some filenames.
Fixes an issue noticed by static analysis where we had a copy-and-paste typo, testing ScheduleKind.M1 twice instead of ScheduleKind.M2.
Differential Revision: https://reviews.llvm.org/D87250
* Do not visit `CXXDefaultArgExpr`
* To build `CallArguments` nodes, just go through non-default arguments
Differential Revision: https://reviews.llvm.org/D87249
MSVC's cl.exe has a few command-line arguments that start with -M, such
as "-MD", "-MDd", "-MT", "-MTd", and "-MP".
These arguments are not related to dependency-file generation, yet they
were being removed by getClangStripDependencyFileAdjuster(), which was
wrong.
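A minimal sketch (assumed usage, not from the patch) of how the adjuster is
applied through the clang::tooling API; with this fix, MSVC flags such as
"-MD" survive the adjustment while GCC-style dependency-file flags are still
stripped:
```
#include "clang/Tooling/ArgumentsAdjusters.h"
using namespace clang::tooling;

CommandLineArguments stripDepFlags(const CommandLineArguments &Args) {
  ArgumentsAdjuster Strip = getClangStripDependencyFileAdjuster();
  // e.g. {"clang-cl", "-MD", "/c", "a.cpp"} is now returned unchanged.
  return Strip(Args, /*Filename=*/"");
}
```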
Differential revision: https://reviews.llvm.org/D86999
We want the generic StdLibraryFunctionsChecker to report only if there
are no specific checkers that would handle the argument constraint for a
function.
Note, the assumptions are still evaluated, even if the argument
constraint checker is set to not report. This means that the assumptions
made in the generic StdLibraryFunctionsChecker should be an
over-approximation of the assumptions made in the specific checkers. Most
importantly, the assumptions must not contradict each other.
Differential Revision: https://reviews.llvm.org/D87240
We're now getting close to having the necessary analysis/combines etc. for the new generic llvm.abs.* intrinsics.
This patch updates the SSE/AVX ABS vector intrinsics to emit the generic equivalents instead of the icmp+sub+select code pattern.
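A hedged illustration (not from the patch) of the effect on one of the
affected intrinsics, assuming an SSSE3-enabled build (e.g. -mssse3):
```
#include <immintrin.h>

__m128i abs32(__m128i v) {
  // Previously emitted as an icmp+sub+select IR pattern; now emitted as a
  // call to the generic @llvm.abs.v4i32 intrinsic.
  return _mm_abs_epi32(v);
}
```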
Differential Revision: https://reviews.llvm.org/D87101
Decl::dump is primarily used for debugging to visualise the current state of a
declaration. Usually Decl::dump just displays the current state of the Decl and
doesn't actually change any of its state; however, since commit
457226e02a the method actually started loading
additional declarations from the ExternalASTSource. As a result, calling
Decl::dump during a debugging session now makes permanent changes to the
AST and causes the debugged program run to deviate from the original run.
The change that caused this behaviour is the addition of
`hasConstexprDestructor` (which is called from the TextNodeDumper), which
performs a lookup into the current CXXRecordDecl to find the destructor. All
other similar methods just return their respective bit in the DefinitionData
(which obviously doesn't have such side effects).
This patch changes the node printer to emit "unknown_constexpr" when a
CXXRecordDecl is dumped that could potentially call into the ExternalASTSource,
instead of the usual empty string/"constexpr". For CXXRecordDecls that can
safely be dumped, the old behaviour is preserved.
Reviewed By: bruno
Differential Revision: https://reviews.llvm.org/D80878
This change groups the following:
* Rename: `ignoreParenBaseCasts` -> `IgnoreParenBaseCasts` for uniformity
* Rename: `IgnoreConversionOperator` -> `IgnoreConversionOperatorSingleStep` for uniformity
* Inline `IgnoreNoopCastsSingleStep` into a lambda inside `IgnoreNoopCasts`
* Refactor `IgnoreUnlessSpelledInSource` to make adequate use of `IgnoreExprNodes`
Differential Revision: https://reviews.llvm.org/D86880
Rationale:
This allows users to use `IgnoreExprNodes` and `Ignore*SingleStep` outside of
`clang/AST/Expr.cpp`.
Minor:
Rename `IgnoreImp...SingleStep` to `IgnoreImplicit...SingleStep`.
Differential Revision: https://reviews.llvm.org/D86778
When using the always break after return type setting:
Before:
SomeType funcdecl(LIST(uint64_t));
After:
SomeType
funcdecl(LIST(uint64_t));
Reviewed By: MyDeveloperDay
Differential Revision: https://reviews.llvm.org/D87007
Before: _Atomic(uint64_t) * a;
After: _Atomic(uint64_t) *a;
This treats _Atomic the same as the TypenameMacros and decltype. It
also allows some cleanup by removing checks whether the token before a
paren is kw_decltype and instead checking for TT_TypeDeclarationParen.
While touching this code, also extend the decltype test cases to check
for typeof() and _Atomic(T).
Reviewed By: MyDeveloperDay
Differential Revision: https://reviews.llvm.org/D86959
This adds an `AttributeMacros` configuration option that causes certain
identifiers to be parsed like a __attribute__((foo)) annotation.
This is motivated by our CHERI C/C++ fork, which adds a __capability
qualifier for pointers/references. Without this change clang-format parses
many type declarations as multiplications/bitwise-ands instead.
I initially considered adding "__capability" as a new clang-format keyword,
but having a list of macros that should be treated as attributes is more
flexible since it can be used e.g. for static analyzer annotations or other
language extensions.
Example: std::vector<foo * __capability> -> std::vector<foo *__capability>
Depends on D86775 (to apply cleanly)
Reviewed By: MyDeveloperDay, jrtc27
Differential Revision: https://reviews.llvm.org/D86782
It seems '${cmake_2_8_12_PRIVATE}' was removed a long time ago, so it should
just be the PRIVATE keyword here.
Reviewed By: john.brawn
Differential Revision: https://reviews.llvm.org/D86091
send_patched_file decodes with UTF-8, but the default encoding in
Python 2 is ASCII, so it is necessary to also change send_string to use
UTF-8.
Differential Revision: https://reviews.llvm.org/D83984
This patch implements the vec_expandm function prototypes in altivec.h in order
to utilize the vector expand with mask instructions introduced in Power10.
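A hedged usage sketch (not from the patch; the exact overload and the selected
instruction are assumptions) for the word variant, built with -mcpu=power10:
```
#include <altivec.h>

__vector unsigned int expand_word_masks(__vector unsigned int a) {
  // Each element becomes all-ones if its most significant bit is set,
  // all-zeros otherwise (expected to select the Power10 vexpandwm).
  return vec_expandm(a);
}
```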
Differential Revision: https://reviews.llvm.org/D82727
They are far more powerful than the current documentation implies. This
adds documentation for
* adopting a lock,
* deferring a lock,
* manually unlocking the scoped capability,
* relocking the scoped capability, possibly in a different mode,
* try-relocking the scoped capability.
Also there is now a generic explanation of how attributes on scoped
capabilities work. There has been confusion in the past about how to
annotate them (see e.g. PR33504); hopefully this clears things up.
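As a condensed sketch of the kind of annotations now documented (declarations
only, loosely following the docs' MutexLocker example; the Mutex class, the
tag types, and the macros are assumed to come from the docs' mutex.h header):
```
class SCOPED_CAPABILITY MutexLocker {
public:
  MutexLocker(Mutex *mu) ACQUIRE(mu);                 // acquire on construction
  MutexLocker(Mutex *mu, adopt_lock_t) REQUIRES(mu);  // adopt an already-held lock
  MutexLocker(Mutex *mu, defer_lock_t) EXCLUDES(mu);  // defer: start unlocked
  void Lock() ACQUIRE();                              // (re)lock, possibly later
  void Unlock() RELEASE();                            // manual unlock before scope exit
  ~MutexLocker() RELEASE();                           // release if still held
};
```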
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D87066
The old locking attributes had a generic release, but as it turns out
the capability-based attributes have it as well.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D87064
Recent versions of Python 3's multiprocessing module will blow up with
a RuntimeError from this code, saying:
  An attempt has been made to start a new process before the
  current process has finished its bootstrapping phase
This is because the wrappers in bin/ are not using the `__name__ == "__main__"` idiom correctly.
Reviewed By: ldionne
Differential Revision: https://reviews.llvm.org/D87051
After adding a field of one bit, the bitfield members would take
30+1+1+1 = 33 bits, causing the size of TemplateParameterList to
increase from 16 to 24 bytes on 64-bit systems.
With 29 bits for NumParams we can encode up to half a billion template
parameters, which is almost certainly still enough for anybody.
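A hedged sketch (field names approximate) of why 29 bits are enough: the
count and the flag bits pack back into one 32-bit word, so the object stays
at 16 bytes on 64-bit systems:
```
struct ParamListBits {
  unsigned NumParams : 29;                      // up to ~536 million parameters
  unsigned ContainsUnexpandedParameterPack : 1;
  unsigned HasRequiresClause : 1;
  unsigned HasConstrainedParameters : 1;        // the newly added one-bit field
};
static_assert(sizeof(ParamListBits) == 4, "still a single 32-bit word");
```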
Instead of just mutex members we also consider mutex globals.
Unsurprisingly they are always in scope. Now the paper [1] says that
> The scope of a class member is assumed to be its enclosing class,
> while the scope of a global variable is the translation unit in
> which it is defined.
But I don't think we should limit this to TUs where a definition is
available - a declaration is enough to acquire the mutex, and if a mutex
is really limited in scope to a translation unit, it should probably be
only declared there.
[1] https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42958.pdf
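A hedged illustration (not from the patch) of the case this enables, assuming
the Mutex class and macros from the thread safety docs and
-Wthread-safety-negative:
```
extern Mutex gMu;   // a declaration alone now puts the global mutex in scope

// The analysis can now suggest annotating this function REQUIRES(!gMu),
// just as it already could for mutex members.
void doWork() {
  gMu.Lock();
  // ... critical section ...
  gMu.Unlock();
}
```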
Fixes PR46354.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D84604