Merge definition visibility the same way we do for other decls. Without
the fix the added test emits `-Wobjc-method-access` because it cannot find a
visible protocol. Make this warning `-Werror` so the test fails if
protocol visibility regresses.
rdar://83600696
Differential Revision: https://reviews.llvm.org/D111860
For each selector encountered in the source code, we need to load
selectors from the imported modules and check that we are calling a
selector with compatible types.
At the moment, for each module we are storing methods declared in the
headers belonging to this module and methods from the transitive closure
of imported modules. When a module is imported by several other modules,
methods from the shared module are duplicated in each importer. As a
result, we can end up with lots of identical methods that we try to add
to the global method pool. Doing this duplicate work is useless and
relatively expensive.
Avoid processing duplicate methods by storing in each module only its
own methods and not storing methods from dependencies. Collect methods
from dependencies by walking the graph of module dependencies.
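To illustrate the idea (a simplified sketch, not the actual ASTReader code; the names here are invented for illustration):

```
#include <set>
#include <string>
#include <vector>

// Each module stores only its own methods; methods of dependencies are gathered
// once by walking the import graph, so a module shared by many importers is
// visited a single time instead of having its methods copied into each importer.
struct Module {
  std::vector<std::string> OwnMethods;
  std::vector<Module *> Imports;
};

void collectMethods(Module *M, std::set<Module *> &Visited,
                    std::vector<std::string> &Out) {
  if (!Visited.insert(M).second)
    return; // already handled through another importer
  Out.insert(Out.end(), M->OwnMethods.begin(), M->OwnMethods.end());
  for (Module *Dep : M->Imports)
    collectMethods(Dep, Visited, Out);
}
```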
The issue was discovered and reported by Richard Howell. He did the
hard work for this fix by investigating and providing a detailed
explanation of the performance problem.
Differential Revision: https://reviews.llvm.org/D110123
We keep using the first encountered definition and need to take into
account visibility from subsequent definitions. For example, if the
first definition is hidden and the second is visible, we need to make
the first one visible too.
rdar://82263843
Differential Revision: https://reviews.llvm.org/D110453
While working on https://reviews.llvm.org/D110280 I tried to merge
decl contexts, as it seems correct and matches our handling of
decl contexts from different modules. It's not required for the fix in
https://reviews.llvm.org/D110280, but it revealed a missing diagnostic,
so this change is separated into its own commit.
Renamed some variables to distinguish diagnostics like "declaration of
'x' does not match" for different cases.
Differential Revision: https://reviews.llvm.org/D110287
At this point, `F.ImportLoc` has not been initialized by the `ASTReader` yet and using it leads to an assertion failure.
Introduced in 638c673a8c and 4445135109.
To reduce the number of explicit builds of a single module, we can try to squash multiple occurrences of the module with different command-lines (and context hashes) by removing benign command-line options. The greatest contributors to benign differences between command-lines are the header search paths.
In this patch, the lookup cache in `HeaderSearch` is used to identify paths that were actually used when implicitly building the module during scanning. This information is serialized into the unhashed control block of the implicitly-built PCM. The dependency scanner then loads this and may use it to prune the header search paths before computing the context hash of the module and generating the command-line.
We could also prune the header search paths when serializing `HeaderSearchOptions` into the PCM. That way, we could do it only once instead of on every load of the PCM file by the dependency scanner. However, that would result in a PCM file whose contents don't produce the same context hash as the original build, which would probably be highly surprising.
There is an alternative approach to storing extra information into the PCM: wire up preprocessor callbacks to capture the used header search paths on-the-fly during preprocessing of modularized headers (similar to what we currently do for the main source file and textual headers). Right now, that's not compatible with the fact that we do an actual implicit build producing PCM files during dependency scanning. The second run of dependency scanner loads the PCM from the first run, skipping the preprocessing altogether, which would result in different results between runs. We can revisit this approach when we stop building implicitly during dependency scanning.
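As a rough illustration of the pruning step (a sketch only; `UsedEntries` and `pruneSearchPaths` are hypothetical names, not the actual dependency-scanner API):

```
#include "clang/Lex/HeaderSearchOptions.h"
#include "llvm/ADT/BitVector.h"
#include <utility>
#include <vector>

// Drop the search path entries that were never used when the module was built
// implicitly, so equivalent invocations hash to the same context.
void pruneSearchPaths(clang::HeaderSearchOptions &HSOpts,
                      const llvm::BitVector &UsedEntries) {
  std::vector<clang::HeaderSearchOptions::Entry> Kept;
  for (unsigned I = 0, E = HSOpts.UserEntries.size(); I != E; ++I)
    if (I < UsedEntries.size() && UsedEntries[I])
      Kept.push_back(HSOpts.UserEntries[I]);
  HSOpts.UserEntries = std::move(Kept);
}
```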
Depends on D102923.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D102488
This patch propagates the import `SourceLocation` into `HeaderSearch::lookupModule`. This enables remarks on search path usage (implemented in D102923) to point to the source code that initiated header search.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D111557
When we have an ObjCInterfaceDecl with the same name in two different modules,
we hit the assertion
> Assertion failed: (Index < RL->getFieldCount() && "Ivar is not inside record layout!"),
> function lookupFieldBitOffset, file llvm-project/clang/lib/AST/RecordLayoutBuilder.cpp, line 3434.
on accessing an ivar inside a method. The assertion happens because the
ivar belongs to one module while its containing interface belongs to
another module, so we fail to find the ivar inside the containing
interface. We already keep a single ObjCInterfaceDecl definition in the
redeclaration chain, and in this case the containing interface was correct.
The issue is with ObjCIvarDecl. The ivar decl for IRGen is taken from the
ObjCIvarRefExpr that is created in `Sema::BuildIvarRefExpr` using the ivar
decl returned from `Sema::LookupIvarInObjCMethod`. The ivar lookup
returns the wrong decl because we basically take the first ObjCIvarDecl
found in `ASTReader::FindExternalVisibleDeclsByName` (called by
`DeclContext::lookup`), and in `ASTReader.Lookups` the lookup table for the
wrong module comes first because `ASTReader::finishPendingActions`
processes `PendingUpdateRecords` in reverse order, so the first
encountered ObjCIvarDecl ends up last in `ASTReader.Lookups`.
Fix by merging ObjCIvarDecls from different modules correctly and by
using the canonical one in IRGen.
rdar://82854574
Differential Revision: https://reviews.llvm.org/D110280
Clang would reject
#pragma omp for
#pragma omp tile sizes(P)
for (int i = 0; i < 128; ++i) {}
where P is a template parameter, but the loop itself is not
template-dependent. Because P is context-dependent, the TransformedStmt
cannot be generated and therefore is nullptr (until the template is
instantiated by TreeTransform). The OMPForDirective would still expect
an associated loop even in the dependent context and trigger an error.
Fix by introducing a NumGeneratedLoops field to OMPLoopTransformation.
This is used to distinguish between the case where no TransformedStmt will
be generated at all (e.g. #pragma omp unroll full) and the case where
template instantiation is needed. In the latter case, delay resolving the
iteration space until template instantiation, as is done when the for-loop
itself is template-dependent.
A more radical solution would be to always delay the iteration space
analysis until template instantiation, but that would also break many
test cases.
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D111124
Insert OMPLoopTransformationDirective between OMPLoopBasedDirective and the loop transformations OMPTileDirective and OMPUnrollDirective. This simplifies handling of loop transformations, since it no longer requires distinguishing between OMPTileDirective and OMPUnrollDirective.
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D111119
As described on D111049, we're trying to remove the <string> dependency from error handling and replace uses of report_fatal_error(const std::string&) with the Twine() variant which can be forward declared.
Modify the IfStmt node to support constant-evaluated expressions.
Add a new ExpressionEvaluationContext::ImmediateFunctionContext to
keep track of immediate function contexts.
This proved easier/better/probably more efficient than walking the AST
backward as it allows diagnosing nested if consteval statements.
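For reference, a minimal example of the construct this is about (assuming a Clang with `-std=c++2b` support for `if consteval`):

```
#include <cstdio>

constexpr int f(int x) {
  if consteval {     // taken only during constant evaluation
    return x * 2;
  } else {
    return x;
  }
}

int main() {
  constexpr int a = f(3);          // constant evaluation: a == 6
  std::printf("%d %d\n", a, f(3)); // the runtime call prints "6 3"
}
```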
This patch supports the OpenMP 5.0 metadirective features.
It is implemented keeping OpenMP 5.1 features like the dynamic user condition in mind.
A new function, getBestWhenMatchForContext, is defined in llvm/Frontend/OpenMP/OMPContext.h.
Currently this function returns the index of the when clause with the highest score among the ones applicable in the Context.
But this function is declared with an array parameter which the OpenMP 5.1 implementation can use to select all the valid when clauses that must be resolved at runtime. Currently this array is set to null by default and its handling is left for the future.
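A small illustration of the feature (the selector and directive variants are chosen for illustration only):

```
// The variant whose when() context selector matches the compilation context with
// the highest score is substituted for the metadirective; otherwise default() is used.
void scale(int n, double *x) {
  #pragma omp metadirective                                          \
      when(device={arch(nvptx64)}: teams distribute parallel for)    \
      default(parallel for)
  for (int i = 0; i < n; ++i)
    x[i] *= 2.0;
}
```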
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D91944
This refactor changes the GlobalMethodPool to a class that contains
the DenseMap of methods. This is to allow for the addition of a
separate DenseSet in a follow-up diff that will handle method
de-duplication when inserting methods into the global method pool.
Changes:
- the `GlobalMethods` pair becomes `GlobalMethodPool::Lists`
- the `GlobalMethodPool` becomes a class containing the `DenseMap` of methods
- pass-through methods are added to maintain most of the existing code without changing `MethodPool` -> `MethodPool.Methods` everywhere (see the sketch below)
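Roughly, the new shape looks like this (a stand-alone sketch with stand-in types, not the real Sema declaration):

```
#include <map>
#include <string>
#include <utility>
#include <vector>

struct MethodList { std::vector<std::string> Methods; };

// The map is wrapped in a class so that pass-through methods keep existing call
// sites working, while new members (e.g. a dedup set) can be added later.
class GlobalMethodPool {
public:
  using Lists = std::pair<MethodList, MethodList>; // instance / factory methods
  using iterator = std::map<std::string, Lists>::iterator;

  iterator find(const std::string &Sel) { return Methods.find(Sel); }
  iterator begin() { return Methods.begin(); }
  iterator end() { return Methods.end(); }
  Lists &operator[](const std::string &Sel) { return Methods[Sel]; }

private:
  std::map<std::string, Lists> Methods;
};
```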
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D109898
Currently, we have no front-end type for the ppc_fp128 type in IR. The PowerPC
target generates the ppc_fp128 type from long double now, but there's an option
(-mabi=(ieee|ibm)longdouble) to control it, and we're going to
transition from IBM extended double-double ppc_fp128 to IEEE fp128 in
the future.
This patch adds the type __ibm128, which always represents ppc_fp128 in IR,
matching what GCC does for that type. Without this type in Clang, compilation
will fail when compiling against a future version of libstdc++ (which uses
__ibm128 in headers).
Although all backend operations for __ibm128 are done in software, only
PowerPC enables support for it.
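A minimal use of the type looks like this (illustrative; assumes a PowerPC target, e.g. `clang -target powerpc64le-linux-gnu`):

```
// __ibm128 always lowers to ppc_fp128 in IR; the operations are implemented in software.
__ibm128 twice(__ibm128 X) {
  return X + X;
}
```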
There's something not implemented in this commit, which can be done in
future ones:
- Literal suffix for the __ibm128 type. w/W would be suitable, as GCC documents.
- __attribute__((mode(IF))) should map to __ibm128.
- Complex __ibm128 type.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D93377
Per the comments, `hash_code` values "are not stable to save or
persist", so are unsuitable for the module hash, which must persist
across compilations for the implicit module hashes to match. Note that
in practice, today, `hash_code` values are stable. But this is an
implementation detail, with a clear `FIXME` indicating we should switch
to a per-execution seed.
The stability of `MD5` also enables cross-compilation use cases for modules.
The `size_t` underlying storage of `hash_code` varies across platforms,
which could cause mismatched hashes when cross-compiling from a 64-bit
target to a 32-bit target.
Note that native endianness is still used for the hash computation. So hashes
will differ between platforms of different endianness.
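For illustration, hashing with `llvm::MD5` and truncating to 64 bits is stable across executions and host word sizes, unlike `hash_code` (a sketch only, not the exact module-hash code):

```
#include "llvm/Support/MD5.h"
#include <cstdint>
#include <string>

uint64_t stableHash(const std::string &S) {
  llvm::MD5 Hash;
  Hash.update(S);
  llvm::MD5::MD5Result Result;
  Hash.final(Result);
  return Result.low(); // lower 64 bits of the digest
}
```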
Reviewed By: jansvoboda11
Differential Revision: https://reviews.llvm.org/D102943
Per the contract of ReadLateParsedTemplates, we should not be returning
the same results multiple times. No functionality change intended, other
than to runtime.
Thanks to Luboš Luňák for identifying the cause of the regression!
When deserializing a RecordDecl we don't enforce that the redeclaration
chain contains only a single definition. So if the canonical decl is not
a definition itself, `RecordType::getDecl` can return different objects
before and after an include. It means we can build a CGRecordLayout for
one RecordDecl with its set of FieldDecls but try to use it with
FieldDecls belonging to a different RecordDecl. With assertions enabled
this results in
> Assertion failed: (FieldInfo.count(FD) && "Invalid field for record!"),
> function getLLVMFieldNo, file llvm-project/clang/lib/CodeGen/CGRecordLayout.h, line 199.
and with assertions disabled a bunch of fields are treated as if their
memory were located at offset 0.
Fix by keeping the first encountered RecordDecl definition and marking
the subsequent ones as non-definitions. We also need to merge FieldDecls
properly, so that `getPrimaryMergedDecl` works correctly and during name
lookup we don't treat fields from same-name RecordDecls as ambiguous.
rdar://80184238
Differential Revision: https://reviews.llvm.org/D106994
Reading the AST block can never fail with a recoverable error, as modules
cannot be removed during this phase. Change the return type of these
functions to llvm::Error instead, i.e. either success or
failure.
NFC other than the wording of some of the errors.
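The resulting shape is roughly the following (illustrative; the function name here is hypothetical):

```
#include "llvm/Support/Error.h"

// Instead of a tri-state Failure/OutOfDate/Success result, only hard failures remain.
llvm::Error readASTBlockLike(bool Malformed) {
  if (Malformed)
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "malformed block record in AST file");
  return llvm::Error::success();
}
```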
Differential Revision: https://reviews.llvm.org/D108268
Reading modules first reads each control block in the chain and then all
AST blocks.
The first phase is intended to find recoverable errors, e.g. an out-of-date
or missing module. If any error occurs during this phase, it is
safe to remove all modules in the chain, as no references to them will
exist.
While reading the AST blocks, however, various fields in the ASTReader are
updated with references to the module. Removing modules at this point
can cause dangling pointers which can be accessed later. These accesses would
otherwise be harmless, e.g. a binary search over `GlobalSLocEntryMap` may
access a failed module, which should error but shouldn't crash. Do not
remove modules in this phase, regardless of failures.
Since this is the case, it also doesn't make sense to return OutOfDate
during this phase, so remove the two cases where this happens.
When they were originally added, these checks would return a failure when
the serialized and current paths didn't match up. That was updated to
OutOfDate when it was found to be hit when using a VFS and overriding the
umbrella. Later on, the path was changed to instead be the name as
written in the module file, resolved using the serialized base
directory. At this point the check really only compares the name of
the umbrella, and it only works for frameworks, since those don't include
`Headers/` in the name (which means the resolved path will never exist).
Given all that, it seems safe to ignore this case entirely for now.
This makes the handling of an umbrella header/directory the same as for
regular headers, which also don't check for differences in the path
caused by a VFS.
Resolves rdar://79329355
Differential Revision: https://reviews.llvm.org/D107690
This is part of a patch series working towards the ability to make
SourceLocation into a 64-bit type to handle larger translation units.
NFC: this patch introduces typedefs for the integer type used by
SourceLocation and makes all the boring changes to use the typedefs
everywhere, but for the moment, they are unconditionally defined to
uint32_t.
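Roughly, the change has this shape (a simplified illustration, not the exact Clang code):

```
#include <cstdint>

class SourceLocationLike {
public:
  using UIntTy = uint32_t; // unconditionally 32-bit for now; a single place to widen later
  using IntTy = int32_t;

  UIntTy getRawEncoding() const { return ID; }

private:
  UIntTy ID = 0;
};
```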
Patch originally by Mikhail Maltsev.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D105492
It was possible to re-add a module to a shared in-memory module cache
when search paths are changed. This can eventually cause a crash if the
original module is referenced after this occurs.
1. Module A depends on B
2. B exists in two paths C and D
3. First run only has C on the search path, finds A and B and loads
them
4. Second run adds D to the front of the search path. A is loaded and
contains a reference to the already compiled module from C. But
searching finds the module from D instead, causing a mismatch
5. B and the modules that depend on it are considered out of date and
thus rebuilt
6. The recompiled module A is added to the in-memory cache, freeing
the previously inserted one
This can never occur from a regular clang process, but it is very easy to
do through the API, whether through the use of a shared cache or just
running multiple compilations from a single `CompilerInstance`. Update
the compilation to return early if a module is already finalized, so that
the precondition in the in-memory module cache holds.
Resolves rdar://78180255
Differential Revision: https://reviews.llvm.org/D105328
Original commit message:
[clang-repl] Implement partial translation units and error recovery.
https://reviews.llvm.org/D96033 contained a discussion regarding efficient
modeling of error recovery. @rjmccall has outlined the key ideas:
Conceptually, we can split the translation unit into a sequence of partial
translation units (PTUs). Every declaration will be associated with a unique PTU
that owns it.
The first key insight here is that the owning PTU isn't always the "active"
(most recent) PTU, and it isn't always the PTU that the declaration
"comes from". A new declaration (that isn't a redeclaration or specialization of
anything) does belong to the active PTU. A template specialization, however,
belongs to the most recent PTU of all the declarations in its signature - mostly
that means that it can be pulled into a more recent PTU by its template
arguments.
The second key insight is that processing a PTU might extend an earlier PTU.
Rolling back the later PTU shouldn't throw that extension away. For example, if
the second PTU defines a template, and the third PTU requires that template to
be instantiated at float, that template specialization is still part of the
second PTU. Similarly, if the fifth PTU uses an inline function belonging to the
fourth, that definition still belongs to the fourth. When we go to emit code in
a new PTU, we map each declaration we have to emit back to its owning PTU and
emit it in a new module for just the extensions to that PTU. We keep track of
all the modules we've emitted for a PTU so that we can unload them all if we
decide to roll it back.
Most declarations/definitions will only refer to entities from the same or
earlier PTUs. However, it is possible (primarily by defining a
previously-declared entity, but also through templates or ADL) for an entity
that belongs to one PTU to refer to something from a later PTU. We will have to
keep track of this and prevent unwinding to later PTU when we recognize it.
Fortunately, this should be very rare; and crucially, we don't have to do the
bookkeeping for this if we've only got one PTU, e.g. in normal compilation.
Otherwise, PTUs after the first just need to record enough metadata to be able
to revert any changes they've made to declarations belonging to earlier PTUs,
e.g. to redeclaration chains or template specialization lists.
It should even eventually be possible for PTUs to provide their own slab
allocators which can be thrown away as part of rolling back the PTU. We can
maintain a notion of the active allocator and allocate things like Stmt/Expr
nodes in it, temporarily changing it to the appropriate PTU whenever we go to do
something like instantiate a function template. More care will be required when
allocating declarations and types, though.
We would want the PTU to be efficiently recoverable from a Decl; I'm not sure
how best to do that. An easy option that would cover most declarations would be
to make multiple TranslationUnitDecls and parent the declarations appropriately,
but I don't think that's good enough for things like member function templates,
since an instantiation of that would still be parented by its original class.
Maybe we can work this into the DC chain somehow, like how lexical DCs are.
We add a different kind of translation unit `TU_Incremental` which is a
complete translation unit that we might nonetheless incrementally extend later.
Because it is complete (and we might want to generate code for it), we do
perform template instantiation, but because it might be extended later, we don't
warn if it declares or uses undefined internal-linkage symbols.
This patch teaches clang-repl how to recover from errors by disconnecting the
most recent PTU and updating the primary PTU lookup tables. For instance:
```
./clang-repl
clang-repl> int i = 12; error;
In file included from <<< inputs >>>:1:
input_line_0:1:13: error: C++ requires a type specifier for all declarations
int i = 12; error;
^
error: Parsing failed.
clang-repl> int i = 13; extern "C" int printf(const char*,...);
clang-repl> auto r1 = printf("i=%d\n", i);
i=13
clang-repl> quit
```
Differential revision: https://reviews.llvm.org/D104918
This reverts commit 6775fc6ffa.
It also reverts "[lldb] Fix compilation by adjusting to the new ASTContext signature."
This reverts commit 03a3f86071.
We see some failures on the lldb infrastructure, these changes might play a role
in it. Let's revert it now and see if the bots will become green.
Ref: https://reviews.llvm.org/D104918
https://reviews.llvm.org/D96033 contained a discussion regarding efficient
modeling of error recovery. @rjmccall has outlined the key ideas:
Conceptually, we can split the translation unit into a sequence of partial
translation units (PTUs). Every declaration will be associated with a unique PTU
that owns it.
The first key insight here is that the owning PTU isn't always the "active"
(most recent) PTU, and it isn't always the PTU that the declaration
"comes from". A new declaration (that isn't a redeclaration or specialization of
anything) does belong to the active PTU. A template specialization, however,
belongs to the most recent PTU of all the declarations in its signature - mostly
that means that it can be pulled into a more recent PTU by its template
arguments.
The second key insight is that processing a PTU might extend an earlier PTU.
Rolling back the later PTU shouldn't throw that extension away. For example, if
the second PTU defines a template, and the third PTU requires that template to
be instantiated at float, that template specialization is still part of the
second PTU. Similarly, if the fifth PTU uses an inline function belonging to the
fourth, that definition still belongs to the fourth. When we go to emit code in
a new PTU, we map each declaration we have to emit back to its owning PTU and
emit it in a new module for just the extensions to that PTU. We keep track of
all the modules we've emitted for a PTU so that we can unload them all if we
decide to roll it back.
Most declarations/definitions will only refer to entities from the same or
earlier PTUs. However, it is possible (primarily by defining a
previously-declared entity, but also through templates or ADL) for an entity
that belongs to one PTU to refer to something from a later PTU. We will have to
keep track of this and prevent unwinding to later PTU when we recognize it.
Fortunately, this should be very rare; and crucially, we don't have to do the
bookkeeping for this if we've only got one PTU, e.g. in normal compilation.
Otherwise, PTUs after the first just need to record enough metadata to be able
to revert any changes they've made to declarations belonging to earlier PTUs,
e.g. to redeclaration chains or template specialization lists.
It should even eventually be possible for PTUs to provide their own slab
allocators which can be thrown away as part of rolling back the PTU. We can
maintain a notion of the active allocator and allocate things like Stmt/Expr
nodes in it, temporarily changing it to the appropriate PTU whenever we go to do
something like instantiate a function template. More care will be required when
allocating declarations and types, though.
We would want the PTU to be efficiently recoverable from a Decl; I'm not sure
how best to do that. An easy option that would cover most declarations would be
to make multiple TranslationUnitDecls and parent the declarations appropriately,
but I don't think that's good enough for things like member function templates,
since an instantiation of that would still be parented by its original class.
Maybe we can work this into the DC chain somehow, like how lexical DCs are.
We add a different kind of translation unit `TU_Incremental` which is a
complete translation unit that we might nonetheless incrementally extend later.
Because it is complete (and we might want to generate code for it), we do
perform template instantiation, but because it might be extended later, we don't
warn if it declares or uses undefined internal-linkage symbols.
This patch teaches clang-repl how to recover from errors by disconnecting the
most recent PTU and updating the primary PTU lookup tables. For instance:
```
./clang-repl
clang-repl> int i = 12; error;
In file included from <<< inputs >>>:1:
input_line_0:1:13: error: C++ requires a type specifier for all declarations
int i = 12; error;
^
error: Parsing failed.
clang-repl> int i = 13; extern "C" int printf(const char*,...);
clang-repl> auto r1 = printf("i=%d\n", i);
i=13
clang-repl> quit
```
Differential revision: https://reviews.llvm.org/D104918
It's useful to be able to load explicitly-built PCH files into an implicit build (e.g. during dependency scanning). That's currently impossible, since the explicitly-built PCH has an empty modules cache path, while the current compilation has (and needs to have) a valid path, triggering an error in the `PCHValidator`.
This patch adds a preprocessor option and command-line flag that can be used to omit this check.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D103802
Implementation of the unroll directive introduced in OpenMP 5.1. Follows the approach from D76342 for the tile directive (i.e. AST-based, not using the OpenMPIRBuilder). Tries to use `llvm.loop.unroll.*` metadata where possible, but has to fall back to an AST representation of the outer loop if the partially unrolled generated loop is associated with another directive (because it needs to compute the number of iterations).
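A small example of the directive (illustrative):

```
void saxpy(int n, float a, const float *x, float *y) {
  // Partial unrolling by 4; where possible this becomes llvm.loop.unroll.* metadata.
  #pragma omp unroll partial(4)
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}
```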
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D99459
This implements the 'using enum maybe-qualified-enum-tag ;' part of
P1099. It introduces a new 'UsingEnumDecl', subclassed from
'BaseUsingDecl'. Much of the diff is the boilerplate needed to get the
new class set up.
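A minimal example of the feature (illustrative, C++20):

```
enum class Color { Red, Green };

struct Palette {
  using enum Color; // Red and Green become usable unqualified within Palette
};

int main() {
  auto C = Palette::Red; // also reachable through the class scope
  using enum Color;      // and at block scope
  return C == Red ? 0 : 1;
}
```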
There is one case where we accept ill-formed code, but I believe this is
merely an extended case of an existing bug, so consider it
orthogonal. AFAICT, at class scope the C++20 rule is that no two using
declarations can bring in the same target decl ([namespace.udecl]/8). But we
already accept:
struct A { enum { a }; };
struct B : A { using A::a; };
struct C : B { using A::a;
               using B::a; }; // same enumerator
This patch permits mixtures of 'using enum Bob;' and 'using Bob::member;' in the same way.
Differential Revision: https://reviews.llvm.org/D102241
This attribute applies to a using declaration, and permits importing a
declaration without knowing if that declaration exists. This is useful
for libc++ C wrapper headers that re-export declarations in std::, in
cases where the base C library doesn't provide all declarations.
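A sketch of the intended usage in such a wrapper header (illustrative; `truncf` stands in for any declaration the C library may or may not provide):

```
namespace std {
  // If ::truncf is declared by the C library, import it into std; otherwise this
  // using-declaration is silently dropped instead of producing an error.
  using ::truncf __attribute__((using_if_exists));
}
```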
This attribute was proposed in http://lists.llvm.org/pipermail/cfe-dev/2020-June/066038.html.
rdar://69313357
Differential Revision: https://reviews.llvm.org/D90188
The original version of this was reverted, and @rjmccall provided some
advice on how to architect a new solution. This is that solution.
This implements a builtin to provide a unique name that is stable across
compilations of this TU for the purposes of implementing the library
component of the unnamed kernel feature of SYCL. It does this by
running the Itanium mangler with a few modifications.
Because it is somewhat common to wrap non-kernel-related lambdas in
macros that aren't present on the device (such as for logging), this
uniquely generates an ID for all lambdas involved in the naming of a
kernel. It uses the lambda-mangling number to do this, except it replaces
that number with its own (starting at 10000 for readability reasons)
for lambdas used to name a kernel.
Additionally, this implements itself as constexpr with a slight catch:
if a name would be invalidated by the use of this lambda in a later
kernel invocation, it is diagnosed as an error (see the Sema tests).
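A rough sketch of how the builtin might be used (assumes SYCL compilation mode; the wrapper function here is hypothetical):

```
// The builtin takes a type and yields a string constant that is stable across
// compilations of this TU, so a runtime library can look kernels up by name.
template <typename KernelNameType>
constexpr const char *kernel_name() {
  return __builtin_sycl_unique_stable_name(KernelNameType);
}
```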
Differential Revision: https://reviews.llvm.org/D103112
Reduce memory footprint of AST Reader/Writer:
1. Adjust internal data containers' element type.
2. Switch to set for deduplication of deferred diags.
Differential Revision: https://reviews.llvm.org/D101793
This patch is the Part-1 (FE Clang) implementation of HW Exception handling.
This new feature adds the support of Hardware Exception for Microsoft Windows
SEH (Structured Exception Handling).
This is the first step of this project; only the X86_64 target is enabled in this patch.
Compiler options:
For clang-cl.exe, the option is -EHa, the same as MSVC.
For clang.exe, the extra option is -fasync-exceptions,
plus -triple x86_64-windows -fexceptions and -fcxx-exceptions as usual.
NOTE: Without -EHa or -fasync-exceptions, this patch is a NO-DIFF change.
The rules for C code:
For C code, one way (the MSVC approach) to achieve the SEH -EHa semantics is to
follow three rules (a small sketch follows the list):
* First, no exception can move in or out of a _try region, i.e., no potentially
faulting instruction can be moved across a _try boundary.
* Second, the order of exceptions for instructions 'directly' under a _try
must be preserved (not applied to those in callees).
* Finally, global states (local/global/heap variables) that can be read
outside of a _try region must be updated in memory (not just in a register)
before the subsequent exception occurs.
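Here is the promised sketch of the behavior these rules are meant to guarantee (illustrative; compile with clang-cl /EHa):

```
#include <windows.h>
#include <cstdio>

int main() {
  volatile int state = 0;
  int *p = nullptr;
  __try {
    state = 1;  // rule 3: must be committed to memory before the next instruction faults
    *p = 42;    // hardware exception (access violation); rule 1: may not move out of __try
  } __except (EXCEPTION_EXECUTE_HANDLER) {
    std::printf("state = %d\n", state); // observes 1 under -EHa semantics
  }
  return 0;
}
```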
The impact to C++ code:
Although SEH is a feature for C code, -EHa does have a profound effect on the C++
side. When a C++ function (in the same compilation unit, built with option -EHa) is
called by an SEH C function, a hardware exception that occurs in the C++ code can also
be handled properly by an upstream SEH _try-handler or a C++ catch(...).
As such, when that happens in the middle of an object's life scope, the dtor
must be invoked the same way as during C++ synchronous exception unwinding.
Design:
A natural way to achieve the rules above in LLVM today is to allow an EH edge to
be added on memory/computation instructions (the previous iload/istore idea) so that
the exception path is modeled precisely in the flow graph. However, tracking every
single memory instruction and potentially faulting instruction can create many
Invokes, complicate the flow graph, and possibly have a negative performance
impact on downstream optimization and code generation. Making all
optimizations aware of the new semantics is also a substantial effort.
This design does not intend to model exception path at instruction level.
Instead, the proposed design tracks and reports EH state at BLOCK-level to
reduce the complexity of flow graph and minimize the performance-impact on CPP
code under -EHa option.
One key element of this design is the ability to compute State number at
block-level. Our algorithm is based on the following rationales:
A _try scope is always a SEME (Single Entry Multiple Exits) region, as jumping
into a _try is not allowed. The single entry must start with a seh_try_begin()
invoke with a correct state number that is the initial state of the SEME.
Through control flow, the state number is propagated into all blocks. Side exits
marked by seh_try_end() will unwind to the parent state based on the existing
SEHUnwindMap[].
Note that side exits can ONLY jump into parent scopes (lower state number).
Thus, when a block receives various states from its predecessors, the lowest
state wins. If some exits flow to unreachable, propagation on those
paths terminates, not affecting the remaining blocks.
For C++ code, an object lifetime region is usually a SEME, like an SEH _try.
However, there is one rare exception: jumping into a lifetime that has a dtor but
no ctor is warned about, but allowed:
Warning: jump bypasses variable with a non-trivial destructor
In that case, the region is actually a MEME (multiple entry multiple exits).
Our solution is to inject an eha_scope_begin() invoke in the side entry block to
ensure a correct state.
Implementation:
Part-1: Clang implementation described below.
Two intrinsics are created to track C++ object scopes: eha_scope_begin() and eha_scope_end().
_scope_begin() is added immediately after ctor() is called and the EHStack is pushed.
So it must be an invoke, not a call. With that, it's also guaranteed that an
EH cleanup pad is created regardless of whether there is a call in this scope.
_scope_end is added before dtor(). These two intrinsics make the computation of
block state possible in the downstream code-gen pass, even in the presence of
ctor/dtor inlining.
Two intrinsics, seh_try_begin() and seh_try_end(), are added for C code to mark
the _try boundary and to prevent exceptions from being moved across it.
All memory instructions inside a _try are considered 'volatile' to assure the
2nd and 3rd rules for C code above. This is a little suboptimal, but it's
acceptable as the amount of code directly under a _try is very small.
Part-2 (will be in Part-2 patch): LLVM implementation described below.
For both C++ and C code, the state of each block is computed at the same place in
the backend (the WinEHPrepare pass) where all other EH tables/maps are calculated.
In addition to _scope_begin & _scope_end, the computation of block state also
relies on the existing state tracking code (UnwindMap and InvokeStateMap).
For both C++ and C code, the state of each block with a potential trap instruction
is marked and reported in the DAG instruction selection pass, the same place where
the state for -EHsc (synchronous exceptions) is handled.
If the first instruction in a reported block scope can trap, a Nop is injected
before this instruction. This Nop is needed to accommodate the LLVM Windows EH
implementation, in which the address in the IPToState table is offset by +1.
(Note: the purpose of that offset is to ensure the return address of a call is in
the same scope as the call address.)
The handler for catch(...) under -EHa must handle HW exceptions, so its
'adjective' flag is reset (it cannot be IsStdDotDot (0x40), which only catches
C++ exceptions).
Suppress the push/popTerminate() scope (from noexcept/nothrow) so that HW
exceptions can be passed through.
Original llvm-dev [RFC] discussions can be found in these two threads:
https://lists.llvm.org/pipermail/llvm-dev/2020-March/140541.html
https://lists.llvm.org/pipermail/llvm-dev/2020-April/141338.html
Differential Revision: https://reviews.llvm.org/D80344