Thread safety analysis: Support copy-elided production of scoped capabilities through arbitrary calls

When support for copy elision was initially added in e97654b2f2, the
analysis took the attributes from a copy or move constructor call, even
though that constructor is not actually invoked when the copy is elided.
It seems more natural to use the attributes on the function returning the
scoped capability, which is where the object actually comes from. This
also supports a number of interesting use cases, like producing different
scope kinds without the need for tag types, or producing scopes from a
private mutex.
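
For illustration, a minimal sketch of what this enables. It reuses the
Mutex/MutexLocker classes and annotation macros from the
ThreadSafetyAnalysis documentation (reduced to the attributes used here);
the Widget class and its lockExclusive/lockShared members are made up for
the example:

    // Minimal versions of the usual annotation macros and classes, as in
    // clang/docs/ThreadSafetyAnalysis.rst:
    #define CAPABILITY(x)       __attribute__((capability(x)))
    #define SCOPED_CAPABILITY   __attribute__((scoped_lockable))
    #define ACQUIRE(...)        __attribute__((acquire_capability(__VA_ARGS__)))
    #define ACQUIRE_SHARED(...) __attribute__((acquire_shared_capability(__VA_ARGS__)))
    #define RELEASE(...)        __attribute__((release_capability(__VA_ARGS__)))
    #define GUARDED_BY(x)       __attribute__((guarded_by(x)))

    class CAPABILITY("mutex") Mutex {
    public:
      void Lock() ACQUIRE();
      void Unlock() RELEASE();
    };

    class SCOPED_CAPABILITY MutexLocker {
    public:
      MutexLocker(Mutex *mu) ACQUIRE(mu);
      ~MutexLocker() RELEASE();
    };

    class Widget {
    public:
      // The attributes sit on the functions returning the scope by value
      // (C++17 guaranteed copy elision): different scope kinds without tag
      // types, and the mutex can stay private.
      MutexLocker lockExclusive() ACQUIRE(mu);
      MutexLocker lockShared() ACQUIRE_SHARED(mu);
      int data GUARDED_BY(mu);

    private:
      Mutex mu;
    };

    void use(Widget &w) {
      MutexLocker scope = w.lockExclusive();  // acquires w.mu
      w.data = 1;                             // OK: w.mu is held until 'scope' dies
    }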

Changing the behavior was surprisingly difficult: we were not handling
CXXConstructExpr calls like regular calls, but instead handled them
through the DeclStmt containing them. This was based on the assumptions
that constructors are basically only called in variable declarations (not
true, because of temporaries) and that variable declarations always
involve a constructor call (no longer true with C++17 copy elision).
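
For instance, both assumptions already fail for code like the following
sketch, which reuses the definitions from the snippet above; makeLocker is
a made-up factory function:

    MutexLocker makeLocker(Mutex *mu) ACQUIRE(mu);  // hypothetical factory

    void breaksOldAssumptions(Mutex *mu) {
      // A constructor call with no surrounding DeclStmt: a temporary.
      MutexLocker{mu};

      // A DeclStmt without a CXXConstructExpr: with C++17 guaranteed copy
      // elision the initializer is just the CallExpr of makeLocker.
      MutexLocker scope = makeLocker(mu);
    }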

Untangling this required separating construction from assigning a
variable name. When a call produces an object, we use a placeholder
til::LiteralPtr for `this`, and we collect the call expression and
placeholder in a map. Later, when visiting the enclosing DeclStmt, we
look up the initializing call expression and point the placeholder at the
new VarDecl.
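
In essence, the flow looks roughly like this (a much simplified sketch,
not the actual analysis code; the real version allocates in an arena and
uses the Clang AST types):

    #include <unordered_map>

    struct LiteralPtr {                      // stand-in for til::LiteralPtr
      const void *ClangDecl = nullptr;       // null while the object is unnamed
      void setClangDecl(const void *D) { ClangDecl = D; }
    };

    struct Expr;                             // stand-ins for the AST nodes
    struct VarDecl;

    std::unordered_map<const Expr *, LiteralPtr *> ConstructedObjects;

    // Visiting a CXXConstructExpr or a call returning an object by value.
    LiteralPtr *handleObjectProducingCall(const Expr *E) {
      auto *Placeholder = new LiteralPtr();  // 'this' has no name yet
      ConstructedObjects[E] = Placeholder;
      return Placeholder;
    }

    // Visiting the DeclStmt that gives the object a name.
    void handleDeclStmt(const VarDecl *VD, const Expr *Init) {
      if (auto It = ConstructedObjects.find(Init); It != ConstructedObjects.end()) {
        It->second->setClangDecl(VD);        // placeholder now names the variable
        ConstructedObjects.erase(It);
      }
    }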

The change has a couple of nice side effects:
* We don't miss constructor calls not contained in DeclStmts anymore,
  allowing patterns like
    MutexLock{&mu}, requiresMutex();
  The scoped lock temporary is destroyed at the end of the full-expression,
  so it protects the following call without the need for an extra block
  scope, but with the ability to unlock in case of an exception.
* We support lifetime extension of temporaries. While unusual, one can
  now write
    const MutexLock &scope = MutexLock(&mu);
  and have it behave as expected.
* Destructors used to be handled in a weird way: since there is no
  expression in the AST for implicit destructor calls, we instead
  provided a made-up DeclRefExpr to the variable being destructed, and
  passed that instead of a CallExpr. Then later in translateAttrExpr
  there was special code that knew that destructor expressions work a bit
  differently.
* We were producing dummy DeclRefExprs in a number of places; this has
  been eliminated. We now use til::SExprs instead.

Technically this could break existing code, but the previous handling
seems unexpected enough to justify the change.
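
Concretely, code in roughly this shape relied on the old behavior and may
now warn; the names are made up and the pattern mirrors the old test case,
again reusing the definitions from the first snippet:

    Mutex mu;

    class SCOPED_CAPABILITY ReadScope {
    public:
      ReadScope(Mutex *m) ACQUIRE_SHARED(m);
      // Previously consulted for copy-elided by-value returns, now ignored:
      ReadScope(ReadScope &&) ACQUIRE_SHARED(mu);
      ~ReadScope() RELEASE();
    };

    // Old behavior: the move constructor's attribute made this acquire mu.
    // New behavior: the attribute has to be on the returning function itself.
    ReadScope getReadScope() ACQUIRE_SHARED(mu);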

Reviewed By: aaron.ballman

Differential Revision: https://reviews.llvm.org/D129755
Author: Aaron Puchert
Date:   2022-07-14 01:47:52 +02:00
Parent: d8fa40dfa7
Commit: 0041a69495
8 changed files with 220 additions and 151 deletions


@ -235,6 +235,11 @@ Improvements to Clang's diagnostics
function definition without a prototype which is preceded by a static
declaration of the function with a prototype. Fixes
`Issue 58181 <https://github.com/llvm/llvm-project/issues/58181>`_.
- Copy-elided initialization of lock scopes is now handled differently in
``-Wthread-safety-analysis``: annotations on the move constructor are no
longer taken into account, in favor of annotations on the function returning
the lock scope by value. This could result in new warnings if code depended
on the previous undocumented behavior.
Non-comprehensive list of changes in this release
-------------------------------------------------


@ -408,7 +408,8 @@ and destructor refer to the capability via different names; see the
Scoped capabilities are treated as capabilities that are implicitly acquired
on construction and released on destruction. They are associated with
the set of (regular) capabilities named in thread safety attributes on the
constructor. Acquire-type attributes on other member functions are treated as
constructor or function returning them by value (using C++17 guaranteed copy
elision). Acquire-type attributes on other member functions are treated as
applying to that set of associated capabilities, while ``RELEASE`` implies that
a function releases all associated capabilities in whatever mode they're held.
@ -930,6 +931,13 @@ implementation.
// Assume mu is not held, implicitly acquire *this and associate it with mu.
MutexLocker(Mutex *mu, defer_lock_t) EXCLUDES(mu) : mut(mu), locked(false) {}
// Same as constructors, but without tag types. (Requires C++17 copy elision.)
static MutexLocker Lock(Mutex *mu) ACQUIRE(mu);
static MutexLocker Adopt(Mutex *mu) REQUIRES(mu);
static MutexLocker ReaderLock(Mutex *mu) ACQUIRE_SHARED(mu);
static MutexLocker AdoptReaderLock(Mutex *mu) REQUIRES_SHARED(mu);
static MutexLocker DeferLock(Mutex *mu) EXCLUDES(mu);
// Release *this and all associated mutexes, if they are still held.
// There is no warning if the scope was already unlocked before.
~MutexLocker() RELEASE() {


@ -31,6 +31,7 @@
#include "clang/Basic/LLVM.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/ADT/PointerUnion.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/Casting.h"
#include <sstream>
@ -354,7 +355,7 @@ public:
const NamedDecl *AttrDecl;
// Implicit object argument -- e.g. 'this'
const Expr *SelfArg = nullptr;
llvm::PointerUnion<const Expr *, til::SExpr *> SelfArg = nullptr;
// Number of funArgs
unsigned NumArgs = 0;
@ -378,10 +379,18 @@ public:
// Translate a clang expression in an attribute to a til::SExpr.
// Constructs the context from D, DeclExp, and SelfDecl.
CapabilityExpr translateAttrExpr(const Expr *AttrExp, const NamedDecl *D,
const Expr *DeclExp, VarDecl *SelfD=nullptr);
const Expr *DeclExp,
til::SExpr *Self = nullptr);
CapabilityExpr translateAttrExpr(const Expr *AttrExp, CallingContext *Ctx);
// Translate a variable reference.
til::LiteralPtr *createVariable(const VarDecl *VD);
// Create placeholder for this: we don't know the VarDecl on construction yet.
std::pair<til::LiteralPtr *, StringRef>
createThisPlaceholder(const Expr *Exp);
// Translate a clang statement or expression to a TIL expression.
// Also performs substitution of variables; Ctx provides the context.
// Dispatches on the type of S.


@ -634,15 +634,14 @@ typename V::R_SExpr Literal::traverse(V &Vs, typename V::R_Ctx Ctx) {
/// At compile time, pointer literals are represented by symbolic names.
class LiteralPtr : public SExpr {
public:
LiteralPtr(const ValueDecl *D) : SExpr(COP_LiteralPtr), Cvdecl(D) {
assert(D && "ValueDecl must not be null");
}
LiteralPtr(const ValueDecl *D) : SExpr(COP_LiteralPtr), Cvdecl(D) {}
LiteralPtr(const LiteralPtr &) = default;
static bool classof(const SExpr *E) { return E->opcode() == COP_LiteralPtr; }
// The clang declaration for the value that this pointer points to.
const ValueDecl *clangDecl() const { return Cvdecl; }
void setClangDecl(const ValueDecl *VD) { Cvdecl = VD; }
template <class V>
typename V::R_SExpr traverse(V &Vs, typename V::R_Ctx Ctx) {
@ -651,6 +650,8 @@ public:
template <class C>
typename C::CType compare(const LiteralPtr* E, C& Cmp) const {
if (!Cvdecl || !E->Cvdecl)
return Cmp.comparePointers(this, E);
return Cmp.comparePointers(Cvdecl, E->Cvdecl);
}


@ -623,7 +623,10 @@ protected:
}
void printLiteralPtr(const LiteralPtr *E, StreamType &SS) {
SS << E->clangDecl()->getNameAsString();
if (const NamedDecl *D = E->clangDecl())
SS << D->getNameAsString();
else
SS << "<temporary>";
}
void printVariable(const Variable *V, StreamType &SS, bool IsVarDecl=false) {


@ -1029,7 +1029,7 @@ public:
template <typename AttrType>
void getMutexIDs(CapExprSet &Mtxs, AttrType *Attr, const Expr *Exp,
const NamedDecl *D, VarDecl *SelfDecl = nullptr);
const NamedDecl *D, til::SExpr *Self = nullptr);
template <class AttrType>
void getMutexIDs(CapExprSet &Mtxs, AttrType *Attr, const Expr *Exp,
@ -1220,7 +1220,7 @@ bool ThreadSafetyAnalyzer::inCurrentScope(const CapabilityExpr &CapE) {
if (const auto *LP = dyn_cast<til::LiteralPtr>(SExp)) {
const ValueDecl *VD = LP->clangDecl();
// Variables defined in a function are always inaccessible.
if (!VD->isDefinedOutsideFunctionOrMethod())
if (!VD || !VD->isDefinedOutsideFunctionOrMethod())
return false;
// For now we consider static class members to be inaccessible.
if (isa<CXXRecordDecl>(VD->getDeclContext()))
@ -1311,10 +1311,10 @@ void ThreadSafetyAnalyzer::removeLock(FactSet &FSet, const CapabilityExpr &Cp,
template <typename AttrType>
void ThreadSafetyAnalyzer::getMutexIDs(CapExprSet &Mtxs, AttrType *Attr,
const Expr *Exp, const NamedDecl *D,
VarDecl *SelfDecl) {
til::SExpr *Self) {
if (Attr->args_size() == 0) {
// The mutex held is the "this" object.
CapabilityExpr Cp = SxBuilder.translateAttrExpr(nullptr, D, Exp, SelfDecl);
CapabilityExpr Cp = SxBuilder.translateAttrExpr(nullptr, D, Exp, Self);
if (Cp.isInvalid()) {
warnInvalidLock(Handler, nullptr, D, Exp, Cp.getKind());
return;
@ -1326,7 +1326,7 @@ void ThreadSafetyAnalyzer::getMutexIDs(CapExprSet &Mtxs, AttrType *Attr,
}
for (const auto *Arg : Attr->args()) {
CapabilityExpr Cp = SxBuilder.translateAttrExpr(Arg, D, Exp, SelfDecl);
CapabilityExpr Cp = SxBuilder.translateAttrExpr(Arg, D, Exp, Self);
if (Cp.isInvalid()) {
warnInvalidLock(Handler, nullptr, D, Exp, Cp.getKind());
continue;
@ -1529,6 +1529,8 @@ class BuildLockset : public ConstStmtVisitor<BuildLockset> {
ThreadSafetyAnalyzer *Analyzer;
FactSet FSet;
/// Maps constructed objects to `this` placeholder prior to initialization.
llvm::SmallDenseMap<const Expr *, til::LiteralPtr *> ConstructedObjects;
LocalVariableMap::Context LVarCtx;
unsigned CtxIndex;
@ -1536,14 +1538,17 @@ class BuildLockset : public ConstStmtVisitor<BuildLockset> {
void warnIfMutexNotHeld(const NamedDecl *D, const Expr *Exp, AccessKind AK,
Expr *MutexExp, ProtectedOperationKind POK,
SourceLocation Loc);
void warnIfMutexHeld(const NamedDecl *D, const Expr *Exp, Expr *MutexExp);
void warnIfMutexHeld(const NamedDecl *D, const Expr *Exp, Expr *MutexExp,
SourceLocation Loc);
void checkAccess(const Expr *Exp, AccessKind AK,
ProtectedOperationKind POK = POK_VarAccess);
void checkPtAccess(const Expr *Exp, AccessKind AK,
ProtectedOperationKind POK = POK_VarAccess);
void handleCall(const Expr *Exp, const NamedDecl *D, VarDecl *VD = nullptr);
void handleCall(const Expr *Exp, const NamedDecl *D,
til::LiteralPtr *Self = nullptr,
SourceLocation Loc = SourceLocation());
void examineArguments(const FunctionDecl *FD,
CallExpr::const_arg_iterator ArgBegin,
CallExpr::const_arg_iterator ArgEnd,
@ -1560,6 +1565,7 @@ public:
void VisitCallExpr(const CallExpr *Exp);
void VisitCXXConstructExpr(const CXXConstructExpr *Exp);
void VisitDeclStmt(const DeclStmt *S);
void VisitMaterializeTemporaryExpr(const MaterializeTemporaryExpr *Exp);
};
} // namespace
@ -1629,7 +1635,7 @@ void BuildLockset::warnIfMutexNotHeld(const NamedDecl *D, const Expr *Exp,
/// Warn if the LSet contains the given lock.
void BuildLockset::warnIfMutexHeld(const NamedDecl *D, const Expr *Exp,
Expr *MutexExp) {
Expr *MutexExp, SourceLocation Loc) {
CapabilityExpr Cp = Analyzer->SxBuilder.translateAttrExpr(MutexExp, D, Exp);
if (Cp.isInvalid()) {
warnInvalidLock(Analyzer->Handler, MutexExp, D, Exp, Cp.getKind());
@ -1641,7 +1647,7 @@ void BuildLockset::warnIfMutexHeld(const NamedDecl *D, const Expr *Exp,
const FactEntry *LDat = FSet.findLock(Analyzer->FactMan, Cp);
if (LDat) {
Analyzer->Handler.handleFunExcludesLock(Cp.getKind(), D->getNameAsString(),
Cp.toString(), Exp->getExprLoc());
Cp.toString(), Loc);
}
}
@ -1761,21 +1767,34 @@ void BuildLockset::checkPtAccess(const Expr *Exp, AccessKind AK,
/// and check that the appropriate locks are held. Non-const method calls with
/// the same signature as const method calls can be also treated as reads.
///
/// \param Exp The call expression.
/// \param D The callee declaration.
/// \param Self If \p Exp = nullptr, the implicit this argument.
/// \param Loc If \p Exp = nullptr, the location.
void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
VarDecl *VD) {
SourceLocation Loc = Exp->getExprLoc();
til::LiteralPtr *Self, SourceLocation Loc) {
CapExprSet ExclusiveLocksToAdd, SharedLocksToAdd;
CapExprSet ExclusiveLocksToRemove, SharedLocksToRemove, GenericLocksToRemove;
CapExprSet ScopedReqsAndExcludes;
// Figure out if we're constructing an object of scoped lockable class
bool isScopedVar = false;
if (VD) {
if (const auto *CD = dyn_cast<const CXXConstructorDecl>(D)) {
const CXXRecordDecl* PD = CD->getParent();
if (PD && PD->hasAttr<ScopedLockableAttr>())
isScopedVar = true;
CapabilityExpr Scp;
if (Exp) {
assert(!Self);
const auto *TagT = Exp->getType()->getAs<TagType>();
if (TagT && Exp->isPRValue()) {
std::pair<til::LiteralPtr *, StringRef> Placeholder =
Analyzer->SxBuilder.createThisPlaceholder(Exp);
auto inserted = ConstructedObjects.insert({Exp, Placeholder.first});
assert(inserted.second && "Are we visiting the same expression again?");
if (isa<CXXConstructExpr>(Exp))
Self = Placeholder.first;
if (TagT->getDecl()->hasAttr<ScopedLockableAttr>())
Scp = CapabilityExpr(Placeholder.first, Placeholder.second, false);
}
assert(Loc.isInvalid());
Loc = Exp->getExprLoc();
}
for(const Attr *At : D->attrs()) {
@ -1786,7 +1805,7 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
const auto *A = cast<AcquireCapabilityAttr>(At);
Analyzer->getMutexIDs(A->isShared() ? SharedLocksToAdd
: ExclusiveLocksToAdd,
A, Exp, D, VD);
A, Exp, D, Self);
break;
}
@ -1797,7 +1816,7 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
const auto *A = cast<AssertExclusiveLockAttr>(At);
CapExprSet AssertLocks;
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, VD);
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, Self);
for (const auto &AssertLock : AssertLocks)
Analyzer->addLock(
FSet, std::make_unique<LockableFactEntry>(
@ -1808,7 +1827,7 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
const auto *A = cast<AssertSharedLockAttr>(At);
CapExprSet AssertLocks;
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, VD);
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, Self);
for (const auto &AssertLock : AssertLocks)
Analyzer->addLock(
FSet, std::make_unique<LockableFactEntry>(
@ -1819,7 +1838,7 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
case attr::AssertCapability: {
const auto *A = cast<AssertCapabilityAttr>(At);
CapExprSet AssertLocks;
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, VD);
Analyzer->getMutexIDs(AssertLocks, A, Exp, D, Self);
for (const auto &AssertLock : AssertLocks)
Analyzer->addLock(FSet, std::make_unique<LockableFactEntry>(
AssertLock,
@ -1833,11 +1852,11 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
case attr::ReleaseCapability: {
const auto *A = cast<ReleaseCapabilityAttr>(At);
if (A->isGeneric())
Analyzer->getMutexIDs(GenericLocksToRemove, A, Exp, D, VD);
Analyzer->getMutexIDs(GenericLocksToRemove, A, Exp, D, Self);
else if (A->isShared())
Analyzer->getMutexIDs(SharedLocksToRemove, A, Exp, D, VD);
Analyzer->getMutexIDs(SharedLocksToRemove, A, Exp, D, Self);
else
Analyzer->getMutexIDs(ExclusiveLocksToRemove, A, Exp, D, VD);
Analyzer->getMutexIDs(ExclusiveLocksToRemove, A, Exp, D, Self);
break;
}
@ -1845,10 +1864,10 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
const auto *A = cast<RequiresCapabilityAttr>(At);
for (auto *Arg : A->args()) {
warnIfMutexNotHeld(D, Exp, A->isShared() ? AK_Read : AK_Written, Arg,
POK_FunctionCall, Exp->getExprLoc());
POK_FunctionCall, Loc);
// use for adopting a lock
if (isScopedVar)
Analyzer->getMutexIDs(ScopedReqsAndExcludes, A, Exp, D, VD);
if (!Scp.shouldIgnore())
Analyzer->getMutexIDs(ScopedReqsAndExcludes, A, Exp, D, Self);
}
break;
}
@ -1856,10 +1875,10 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
case attr::LocksExcluded: {
const auto *A = cast<LocksExcludedAttr>(At);
for (auto *Arg : A->args()) {
warnIfMutexHeld(D, Exp, Arg);
warnIfMutexHeld(D, Exp, Arg, Loc);
// use for deferring a lock
if (isScopedVar)
Analyzer->getMutexIDs(ScopedReqsAndExcludes, A, Exp, D, VD);
if (!Scp.shouldIgnore())
Analyzer->getMutexIDs(ScopedReqsAndExcludes, A, Exp, D, Self);
}
break;
}
@ -1882,7 +1901,7 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
// Add locks.
FactEntry::SourceKind Source =
isScopedVar ? FactEntry::Managed : FactEntry::Acquired;
!Scp.shouldIgnore() ? FactEntry::Managed : FactEntry::Acquired;
for (const auto &M : ExclusiveLocksToAdd)
Analyzer->addLock(FSet, std::make_unique<LockableFactEntry>(M, LK_Exclusive,
Loc, Source));
@ -1890,15 +1909,9 @@ void BuildLockset::handleCall(const Expr *Exp, const NamedDecl *D,
Analyzer->addLock(
FSet, std::make_unique<LockableFactEntry>(M, LK_Shared, Loc, Source));
if (isScopedVar) {
if (!Scp.shouldIgnore()) {
// Add the managing object as a dummy mutex, mapped to the underlying mutex.
SourceLocation MLoc = VD->getLocation();
DeclRefExpr DRE(VD->getASTContext(), VD, false, VD->getType(), VK_LValue,
VD->getLocation());
// FIXME: does this store a pointer to DRE?
CapabilityExpr Scp = Analyzer->SxBuilder.translateAttrExpr(&DRE, nullptr);
auto ScopedEntry = std::make_unique<ScopedLockableFactEntry>(Scp, MLoc);
auto ScopedEntry = std::make_unique<ScopedLockableFactEntry>(Scp, Loc);
for (const auto &M : ExclusiveLocksToAdd)
ScopedEntry->addLock(M);
for (const auto &M : SharedLocksToAdd)
@ -2058,36 +2071,11 @@ void BuildLockset::VisitCXXConstructExpr(const CXXConstructExpr *Exp) {
} else {
examineArguments(D, Exp->arg_begin(), Exp->arg_end());
}
if (D && D->hasAttrs())
handleCall(Exp, D);
}
static CXXConstructorDecl *
findConstructorForByValueReturn(const CXXRecordDecl *RD) {
// Prefer a move constructor over a copy constructor. If there's more than
// one copy constructor or more than one move constructor, we arbitrarily
// pick the first declared such constructor rather than trying to guess which
// one is more appropriate.
CXXConstructorDecl *CopyCtor = nullptr;
for (auto *Ctor : RD->ctors()) {
if (Ctor->isDeleted())
continue;
if (Ctor->isMoveConstructor())
return Ctor;
if (!CopyCtor && Ctor->isCopyConstructor())
CopyCtor = Ctor;
}
return CopyCtor;
}
static Expr *buildFakeCtorCall(CXXConstructorDecl *CD, ArrayRef<Expr *> Args,
SourceLocation Loc) {
ASTContext &Ctx = CD->getASTContext();
return CXXConstructExpr::Create(Ctx, Ctx.getRecordType(CD->getParent()), Loc,
CD, true, Args, false, false, false, false,
CXXConstructExpr::CK_Complete,
SourceRange(Loc, Loc));
}
static Expr *UnpackConstruction(Expr *E) {
static const Expr *UnpackConstruction(const Expr *E) {
if (auto *CE = dyn_cast<CastExpr>(E))
if (CE->getCastKind() == CK_NoOp)
E = CE->getSubExpr()->IgnoreParens();
@ -2106,7 +2094,7 @@ void BuildLockset::VisitDeclStmt(const DeclStmt *S) {
for (auto *D : S->getDeclGroup()) {
if (auto *VD = dyn_cast_or_null<VarDecl>(D)) {
Expr *E = VD->getInit();
const Expr *E = VD->getInit();
if (!E)
continue;
E = E->IgnoreParens();
@ -2116,29 +2104,27 @@ void BuildLockset::VisitDeclStmt(const DeclStmt *S) {
E = EWC->getSubExpr()->IgnoreParens();
E = UnpackConstruction(E);
if (const auto *CE = dyn_cast<CXXConstructExpr>(E)) {
const auto *CtorD = dyn_cast_or_null<NamedDecl>(CE->getConstructor());
if (!CtorD || !CtorD->hasAttrs())
continue;
handleCall(E, CtorD, VD);
} else if (isa<CallExpr>(E) && E->isPRValue()) {
// If the object is initialized by a function call that returns a
// scoped lockable by value, use the attributes on the copy or move
// constructor to figure out what effect that should have on the
// lockset.
// FIXME: Is this really the best way to handle this situation?
auto *RD = E->getType()->getAsCXXRecordDecl();
if (!RD || !RD->hasAttr<ScopedLockableAttr>())
continue;
CXXConstructorDecl *CtorD = findConstructorForByValueReturn(RD);
if (!CtorD || !CtorD->hasAttrs())
continue;
handleCall(buildFakeCtorCall(CtorD, {E}, E->getBeginLoc()), CtorD, VD);
if (auto Object = ConstructedObjects.find(E);
Object != ConstructedObjects.end()) {
Object->second->setClangDecl(VD);
ConstructedObjects.erase(Object);
}
}
}
}
void BuildLockset::VisitMaterializeTemporaryExpr(
const MaterializeTemporaryExpr *Exp) {
if (const ValueDecl *ExtD = Exp->getExtendingDecl()) {
if (auto Object =
ConstructedObjects.find(UnpackConstruction(Exp->getSubExpr()));
Object != ConstructedObjects.end()) {
Object->second->setClangDecl(ExtD);
ConstructedObjects.erase(Object);
}
}
}
/// Given two facts merging on a join point, possibly warn and decide whether to
/// keep or replace.
///
@ -2411,19 +2397,33 @@ void ThreadSafetyAnalyzer::runAnalysis(AnalysisDeclContext &AC) {
LocksetBuilder.Visit(CS.getStmt());
break;
}
// Ignore BaseDtor, MemberDtor, and TemporaryDtor for now.
// Ignore BaseDtor and MemberDtor for now.
case CFGElement::AutomaticObjectDtor: {
CFGAutomaticObjDtor AD = BI.castAs<CFGAutomaticObjDtor>();
const auto *DD = AD.getDestructorDecl(AC.getASTContext());
if (!DD->hasAttrs())
break;
// Create a dummy expression,
auto *VD = const_cast<VarDecl *>(AD.getVarDecl());
DeclRefExpr DRE(VD->getASTContext(), VD, false,
VD->getType().getNonReferenceType(), VK_LValue,
AD.getTriggerStmt()->getEndLoc());
LocksetBuilder.handleCall(&DRE, DD);
LocksetBuilder.handleCall(nullptr, DD,
SxBuilder.createVariable(AD.getVarDecl()),
AD.getTriggerStmt()->getEndLoc());
break;
}
case CFGElement::TemporaryDtor: {
auto TD = BI.castAs<CFGTemporaryDtor>();
// Clean up constructed object even if there are no attributes to
// keep the number of objects in limbo as small as possible.
if (auto Object = LocksetBuilder.ConstructedObjects.find(
TD.getBindTemporaryExpr()->getSubExpr());
Object != LocksetBuilder.ConstructedObjects.end()) {
const auto *DD = TD.getDestructorDecl(AC.getASTContext());
if (DD->hasAttrs())
// TODO: the location here isn't quite correct.
LocksetBuilder.handleCall(nullptr, DD, Object->second,
TD.getBindTemporaryExpr()->getEndLoc());
LocksetBuilder.ConstructedObjects.erase(Object);
}
break;
}
default:


@ -115,19 +115,22 @@ static StringRef ClassifyDiagnostic(QualType VDT) {
/// \param D The declaration to which the attribute is attached.
/// \param DeclExp An expression involving the Decl to which the attribute
/// is attached. E.g. the call to a function.
/// \param Self S-expression to substitute for a \ref CXXThisExpr.
CapabilityExpr SExprBuilder::translateAttrExpr(const Expr *AttrExp,
const NamedDecl *D,
const Expr *DeclExp,
VarDecl *SelfDecl) {
til::SExpr *Self) {
// If we are processing a raw attribute expression, with no substitutions.
if (!DeclExp)
if (!DeclExp && !Self)
return translateAttrExpr(AttrExp, nullptr);
CallingContext Ctx(nullptr, D);
// Examine DeclExp to find SelfArg and FunArgs, which are used to substitute
// for formal parameters when we call buildMutexID later.
if (const auto *ME = dyn_cast<MemberExpr>(DeclExp)) {
if (!DeclExp)
/* We'll use Self. */;
else if (const auto *ME = dyn_cast<MemberExpr>(DeclExp)) {
Ctx.SelfArg = ME->getBase();
Ctx.SelfArrow = ME->isArrow();
} else if (const auto *CE = dyn_cast<CXXMemberCallExpr>(DeclExp)) {
@ -142,29 +145,24 @@ CapabilityExpr SExprBuilder::translateAttrExpr(const Expr *AttrExp,
Ctx.SelfArg = nullptr; // Will be set below
Ctx.NumArgs = CE->getNumArgs();
Ctx.FunArgs = CE->getArgs();
} else if (D && isa<CXXDestructorDecl>(D)) {
// There's no such thing as a "destructor call" in the AST.
Ctx.SelfArg = DeclExp;
}
// Hack to handle constructors, where self cannot be recovered from
// the expression.
if (SelfDecl && !Ctx.SelfArg) {
DeclRefExpr SelfDRE(SelfDecl->getASTContext(), SelfDecl, false,
SelfDecl->getType(), VK_LValue,
SelfDecl->getLocation());
Ctx.SelfArg = &SelfDRE;
if (Self) {
assert(!Ctx.SelfArg && "Ambiguous self argument");
Ctx.SelfArg = Self;
// If the attribute has no arguments, then assume the argument is "this".
if (!AttrExp)
return translateAttrExpr(Ctx.SelfArg, nullptr);
return CapabilityExpr(
Self, ClassifyDiagnostic(cast<CXXMethodDecl>(D)->getThisObjectType()),
false);
else // For most attributes.
return translateAttrExpr(AttrExp, &Ctx);
}
// If the attribute has no arguments, then assume the argument is "this".
if (!AttrExp)
return translateAttrExpr(Ctx.SelfArg, nullptr);
return translateAttrExpr(cast<const Expr *>(Ctx.SelfArg), nullptr);
else // For most attributes.
return translateAttrExpr(AttrExp, &Ctx);
}
@ -218,6 +216,16 @@ CapabilityExpr SExprBuilder::translateAttrExpr(const Expr *AttrExp,
return CapabilityExpr(E, Kind, Neg);
}
til::LiteralPtr *SExprBuilder::createVariable(const VarDecl *VD) {
return new (Arena) til::LiteralPtr(VD);
}
std::pair<til::LiteralPtr *, StringRef>
SExprBuilder::createThisPlaceholder(const Expr *Exp) {
return {new (Arena) til::LiteralPtr(nullptr),
ClassifyDiagnostic(Exp->getType())};
}
// Translate a clang statement or expression to a TIL expression.
// Also performs substitution of variables; Ctx provides the context.
// Dispatches on the type of S.
@ -327,8 +335,12 @@ til::SExpr *SExprBuilder::translateDeclRefExpr(const DeclRefExpr *DRE,
til::SExpr *SExprBuilder::translateCXXThisExpr(const CXXThisExpr *TE,
CallingContext *Ctx) {
// Substitute for 'this'
if (Ctx && Ctx->SelfArg)
return translate(Ctx->SelfArg, Ctx->Prev);
if (Ctx && Ctx->SelfArg) {
if (const auto *SelfArg = dyn_cast<const Expr *>(Ctx->SelfArg))
return translate(SelfArg, Ctx->Prev);
else
return cast<til::SExpr *>(Ctx->SelfArg);
}
assert(SelfVar && "We have no variable for 'this'!");
return SelfVar;
}


@ -1690,6 +1690,15 @@ struct TestScopedLockable {
}
#endif
void temporary() {
MutexLock{&mu1}, a = 5;
}
void lifetime_extension() {
const MutexLock &mulock = MutexLock(&mu1);
a = 5;
}
void foo2() {
ReaderMutexLock mulock1(&mu1);
if (getBool()) {
@ -1708,6 +1717,12 @@ struct TestScopedLockable {
// expected-warning {{acquiring mutex 'mu1' that is already held}}
}
void temporary_double_lock() {
MutexLock mulock_a(&mu1); // expected-note{{mutex acquired here}}
MutexLock{&mu1}; // \
// expected-warning {{acquiring mutex 'mu1' that is already held}}
}
void foo4() {
MutexLock mulock1(&mu1), mulock2(&mu2);
a = b+1;
@ -4187,6 +4202,20 @@ public:
void foo() EXCLUSIVE_LOCKS_REQUIRED(this);
};
class SelfLockDeferred {
public:
SelfLockDeferred() LOCKS_EXCLUDED(mu_);
~SelfLockDeferred() UNLOCK_FUNCTION(mu_);
Mutex mu_;
};
class LOCKABLE SelfLockDeferred2 {
public:
SelfLockDeferred2() LOCKS_EXCLUDED(this);
~SelfLockDeferred2() UNLOCK_FUNCTION();
};
void test() {
SelfLock s;
@ -4198,6 +4227,14 @@ void test2() {
s2.foo();
}
void testDeferredTemporary() {
SelfLockDeferred(); // expected-warning {{releasing mutex '<temporary>.mu_' that was not held}}
}
void testDeferredTemporary2() {
SelfLockDeferred2(); // expected-warning {{releasing mutex '<temporary>' that was not held}}
}
} // end namespace SelfConstructorTest
@ -5892,47 +5929,41 @@ C c;
void f() { c[A()]->g(); }
} // namespace PR34800
#ifdef __cpp_guaranteed_copy_elision
namespace ReturnScopedLockable {
template<typename Object> class SCOPED_LOCKABLE ReadLockedPtr {
public:
ReadLockedPtr(Object *ptr) SHARED_LOCK_FUNCTION((*this)->mutex);
ReadLockedPtr(ReadLockedPtr &&) SHARED_LOCK_FUNCTION((*this)->mutex);
~ReadLockedPtr() UNLOCK_FUNCTION();
Object *operator->() const { return object; }
class Object {
public:
MutexLock lock() EXCLUSIVE_LOCK_FUNCTION(mutex) {
// TODO: False positive because scoped lock isn't destructed.
return MutexLock(&mutex); // expected-note {{mutex acquired here}}
} // expected-warning {{mutex 'mutex' is still held at the end of function}}
private:
Object *object;
};
int x GUARDED_BY(mutex);
void needsLock() EXCLUSIVE_LOCKS_REQUIRED(mutex);
struct Object {
int f() SHARED_LOCKS_REQUIRED(mutex);
Mutex mutex;
};
ReadLockedPtr<Object> get();
int use() {
auto ptr = get();
return ptr->f();
}
void use_constructor() {
auto ptr = ReadLockedPtr<Object>(nullptr);
ptr->f();
auto ptr2 = ReadLockedPtr<Object>{nullptr};
ptr2->f();
auto ptr3 = (ReadLockedPtr<Object>{nullptr});
ptr3->f();
}
struct Convertible {
Convertible();
operator ReadLockedPtr<Object>();
};
void use_conversion() {
ReadLockedPtr<Object> ptr = Convertible();
ptr->f();
void testInside() {
MutexLock scope = lock();
x = 1;
needsLock();
}
private:
Mutex mutex;
};
void testOutside() {
Object obj;
MutexLock scope = obj.lock();
obj.x = 1;
obj.needsLock();
}
} // namespace ReturnScopedLockable
#endif
namespace PR38640 {
void f() {
// Self-referencing assignment previously caused an infinite loop when thread