Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made in a separate patch: D108453
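As a rough, hypothetical illustration (the function and mangled name below are made up, not taken from the patch), enabling the analysis lets clang annotate well-defined parameters and return values so later passes need not insert freeze instructions:

```cpp
// Hypothetical example: with noundef analysis enabled, clang marks the
// parameter and return value of a function like this as `noundef` in the
// emitted IR, roughly:
//   define noundef i32 @_Z4plusi(i32 noundef %x)
// Optimizations can then treat the value as well defined instead of having
// to insert freeze instructions.
int plus(int x) { return x + 1; }
```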
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
[Clang/Test]: Rename enable_noundef_analysis to disable-noundef-analysis and turn it off by default (2)
This patch updates test files after D105169.
Autogenerated tests are updated by `utils/update_cc_test_checks.py`, and non-autogenerated tests are updated as follows:
(1) I wrote a Python script that (partially) updates the tests using regexes: {F18594904} The script is not perfect, but I believe it gives hints about which patterns are updated to have `noundef` attached.
(2) The remaining tests are updated manually.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D108453
Resolve lit failures in clang after 8ca4b3e landed
Fix lit test failures in clang-ppc* and clang-x64-windows-msvc
Fix previously missed failures in clang-ppc64be* and retry fixing clang-x64-windows-msvc
Fix internal_clone(aarch64) inline assembly
For a definition (of most linkage types), dso_local is set for ELF -fno-pic/-fpie
and COFF, but not for Mach-O. This nuance causes unneeded differences between binary formats.
This patch replaces (function) `define ` with `define{{.*}} `,
(variable/constant/alias) `= ` with `={{.*}} `, or inserts appropriate `{{.*}} `
if there is an explicit linkage.
* Clang will set dso_local for Mach-O, which is currently implied by TargetMachine.cpp. This will make COFF/Mach-O and executable ELF similar.
* Eventually I hope we can make dso_local the textual LLVM IR default (write explicit "dso_preemptable" when applicable) and -fpic ELF will be similar to everything else. This patch helps move toward that goal.
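A hedged sketch of the check-line loosening (the function and CHECK lines below are made up for illustration):

```cpp
// For ELF with -fno-pic, clang emits the definition roughly as
//   define dso_local void @_Z3foov()
// while other configurations may omit dso_local, so an exact pattern such as
//   CHECK: define void @_Z3foov()
// is relaxed to
//   CHECK: define{{.*}} void @_Z3foov()
void foo() {}
```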
arguments.
* Adds 'nonnull' and 'dereferenceable(N)' to 'this' pointer arguments (see the sketch after this list)
* Gates 'nonnull' on -f(no-)delete-null-pointer-checks
* Introduces this-nonnull.cpp and microsoft-abi-this-nullable.cpp tests to
explicitly test the behavior of this change
* Refactors hundreds of over-constrained clang tests to permit these
attributes, where needed
* Updates the Clang 12 release notes to mention this change
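A hedged sketch of the effect (the type below is hypothetical, and the exact IR spelling varies by target):

```cpp
struct Widget {
  int n;
  int get() const { return n; }
};

int read(Widget &w) {
  // In the emitted IR, the implicit `this` argument of Widget::get is now
  // annotated roughly as `nonnull dereferenceable(4)` (4 == sizeof(Widget));
  // the `nonnull` part is dropped under -fno-delete-null-pointer-checks.
  return w.get();
}
```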
Reviewed-by: rsmith, jdoerfert
Differential Revision: https://reviews.llvm.org/D17993
This reverts commit 55c4ff91bd.
Issues were introduced, as discussed in https://reviews.llvm.org/D88241: this
change made pre-existing bugs in the linker and BitCodeWriter visible.
Make the corresponding change that was made for byval in
b7141207a4. Like byval, this requires a
bulk update of the IR tests to include the type before it can be made
mandatory.
copy constructors of classes with array members, instead using
ArrayInitLoopExpr to represent the initialization loop.
This exposed a bug in the static analyzer where it was unable to differentiate
between zero-initialized and unknown array values, which has also been fixed
here.
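A minimal sketch of the kind of copy this affects (hypothetical types):

```cpp
struct Tracked {
  Tracked() {}
  Tracked(const Tracked &) {}  // user-provided, so element copies are non-trivial
};

struct Holder {
  Tracked items[4];
};

void copyHolder(const Holder &src) {
  // Holder's implicit copy constructor copies `items` element by element;
  // that loop is now represented in the AST with an ArrayInitLoopExpr rather
  // than an unrolled sequence of per-element copies.
  Holder dst = src;
  (void)dst;
}
```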
llvm-svn: 289618
Previously, VLAs and types with arrays of runtime bound could be captured only inside the first lambda/captured statement on the stack. This patch allows such typedefs to be captured implicitly in chains of nested lambdas/captured statements.
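A minimal sketch, assuming clang's variable-length-array extension in C++:

```cpp
void process(int n) {
  typedef int Row[n];   // array type with a runtime bound
  [&] {
    [&] {
      // Previously only the outermost lambda could implicitly capture the
      // runtime bound needed by Row; now nested lambdas and captured
      // statements can as well.
      auto bytes = sizeof(Row);
      (void)bytes;
    }();
  }();
}
```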
llvm-svn: 258669
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
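A simplified sketch of the idea, not the actual clang::CodeGen class (the names here are invented):

```cpp
#include <cassert>
#include <cstdint>

namespace llvm { class Value; }  // used only as an opaque pointer here

// Stand-in for clang's CharUnits: an alignment measured in bytes.
struct ByteAlignment {
  int64_t Quantity;
};

// An Address-like value: a pointer bundled with the alignment that code
// generation may assume when accessing it.
class AddressSketch {
  llvm::Value *Pointer;
  ByteAlignment Alignment;

public:
  AddressSketch(llvm::Value *P, ByteAlignment A) : Pointer(P), Alignment(A) {
    // Alignments must be non-zero; IRGen now asserts rather than silently
    // accepting a zero alignment.
    assert(A.Quantity != 0 && "Address must have a non-zero alignment");
  }
  llvm::Value *getPointer() const { return Pointer; }
  ByteAlignment getAlignment() const { return Alignment; }
};
```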
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
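A hedged example of a locally under-aligned object (hypothetical type, using the GNU packed attribute):

```cpp
struct __attribute__((packed)) Packet {
  char tag;
  int values[4];   // only 1-byte aligned inside the packed struct
};

int firstValue(Packet &p) {
  // Array-to-pointer decay on an under-aligned array: the resulting pointer
  // must not be assumed to carry int's natural 4-byte alignment.
  const int *q = p.values;
  return *q;
}
```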
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
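A hedged worked example of the field-access point (hypothetical type):

```cpp
struct alignas(16) Padded {
  char pad[8];
  int field;   // at offset 8 within a 16-byte-aligned object
};
// For an access to `field` through a Padded known to be 16-byte aligned,
// alignmentAtOffset(16, 8) yields 8, so the access can be given align 8,
// which is stronger than int's natural 4-byte alignment alone would justify.
```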
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Currently we emit DeferredDeclsToEmit in reverse order. This patch changes that.
The advantages of the change are:
* The output order is a bit closer to the source order. The change to
test/CodeGenCXX/pod-member-memcpys.cpp is a good example.
* If we decide to defer more, it will not cause changes in the
testcases as large as it would without this patch.
llvm-svn: 226751
The DeclRefExpr might be for a variable initialized by a constant
expression which hasn't been ODR used.
Emit the initializer for the variable instead of trying to capture the
variable itself.
This fixes PR22071.
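A minimal sketch of the pattern being fixed:

```cpp
int f() {
  const int N = 4;             // initialized by a constant expression
  auto g = [] { return N; };   // reading N is not an odr-use, so it is not
                               // (and cannot be) captured; CodeGen now emits
                               // N's initializer, the constant 4, instead.
  return g();
}
```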
llvm-svn: 225060
CodeGen assumed that a compound literal with array type should have a
corresponding LLVM IR array type.
We had two bugs in this area:
- Zero-sized arrays in compound literals would lead to the creation of
an opaque type. This is unnecessary; we should just create an array
type with a bound of zero.
- Funny record types (like unions) lead to exotic IR types for compound
literals. In this case, CodeGen must be prepared to deal with the
possibility that it might not have an array IR type.
This fixes PR21912.
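A hedged sketch of the zero-sized case (compound literals and zero-length arrays are C99/GNU features that clang also accepts in C++; the names are made up):

```cpp
void consume(int *p);

void zeroSizedLiteral() {
  // Previously the literal below got an opaque IR type; it is now lowered as
  // an array type with a bound of zero.
  consume((int[0]){});
}
```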
llvm-svn: 224219
initialized by a reference constant expression.
Our odr-use modeling still needs work here: we don't yet implement the 'set of
potential results of an expression' DR.
llvm-svn: 166361
was mistakenly classifying dynamic_casts which might throw as having no side
effects.
Switch it from a visitor to a switch, so it is kept up-to-date as future Expr
nodes are added. Move it from ExprConstant.cpp to Expr.cpp, since it's not
really related to constant expression evaluation.
Since we use HasSideEffect to determine whether to emit an unused global with
internal linkage, this has the effect of suppressing emission of globals in
some cases.
I've left many of the Objective-C cases conservatively assuming that the
expression has side-effects. I'll leave it to someone with better knowledge
of Objective-C than mine to improve them.
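A hedged sketch of the case this matters for, an otherwise-unused internal-linkage global whose initializer contains a dynamic_cast that can throw:

```cpp
struct Base { virtual ~Base() {} };
struct Derived : Base {};

inline Base &someBase() {
  static Derived d;
  return d;
}

namespace {
// A dynamic_cast to a reference type throws std::bad_cast on failure, so this
// initializer has side effects and the global must still be emitted even
// though nothing references it.
Derived &unusedRef = dynamic_cast<Derived &>(someBase());
}
```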
llvm-svn: 161388
stable mangling, since these lambdas can end up in multiple
translation units. Sema is responsible for deciding when this is the
case, because it's already responsible for choosing the mangling
number.
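A minimal sketch of when this matters, a lambda whose closure type can appear in more than one translation unit:

```cpp
// Because callTwice is inline, the closure type of `doubler` can be emitted
// in every translation unit that uses it, so its mangling number must be
// chosen consistently; Sema owns that decision.
inline int callTwice(int x) {
  auto doubler = [](int v) { return 2 * v; };
  return doubler(doubler(x));
}
```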
llvm-svn: 151029
and introducing the lambda closure type and its function call
operator. Previously, we assumed that the lambda closure type would
land directly in the current context, and not some parent context (as
occurs with linkage specifications). Thanks to Richard for the test case.
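A minimal sketch of the situation described, a lambda introduced inside a linkage specification:

```cpp
extern "C" {
// The closure type created for this lambda is introduced into an enclosing
// context rather than directly into the linkage-specification context, which
// is the case this fix handles.
int value = [] { return 42; }();
}
```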
llvm-svn: 150987
name mangling in the Itanium C++ ABI for lambda expressions is so
dependent on context, we encode the number used to encode each lambda
as part of the lambda closure type, and maintain this value within
Sema.
Note that there are several pieces still missing:
- We still get the linkage of lambda expressions wrong
- We aren't properly numbering or mangling lambda expressions that
occur in default function arguments or in data member initializers
(sketched below)
- We aren't (de-)serializing the lambda numbering tables
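The contexts called out in the second item look roughly like this (a minimal sketch):

```cpp
struct S {
  // A lambda in a data member initializer.
  int member = [] { return 1; }();
};

// A lambda in a default function argument.
void g(int x = [] { return 2; }());
```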
llvm-svn: 150982
conversion to function pointer. Rather than having IRgen synthesize
the body of this function, we instead introduce a static member
function "__invoke" with the same signature as the lambda's
operator() in the AST. Sema then generates a body for the conversion
to function pointer which simply returns the address of __invoke. This
approach makes it easier to evaluate a call to the conversion function
as a constant, makes the linkage of the __invoke function follow the
normal rules for member functions, and may make life easier down the
road if we ever want to constexpr'ify some lambdas.
Note that IR generation is responsible for filling in the body of
__invoke (Sema just adds a dummy body), because the body can't
generally be expressed in C++.
Eli, please review!
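A minimal sketch of the conversion in question:

```cpp
// Converting a capture-less lambda to a function pointer: under this scheme
// the conversion operator simply returns the address of the synthesized
// static member (__invoke) whose signature matches the lambda's operator().
int (*fp)(int) = [](int x) { return x + 1; };

int callThroughPointer() { return fp(41); }
```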
llvm-svn: 150783