Commit Graph

5 Commits

Akira Yokosawa 5c587f9b9c tools/memory-model: Remove redundant initialization in litmus tests
This is a revert of commit 1947bfcf81 ("tools/memory-model: Add types
to litmus tests") with conflict resolutions.

klitmus7 [1] is aware of default types of "int" and "int*".
It accepts litmus tests for herd7 without extra type info unless
non-"int" variables are referenced by an "exists", "locations",
or "filter" directive.

[1]: Tested with klitmus7 version 7.49 or later.
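
For illustration (not quoted from any particular test), the typed form
of an all-"int" test's initialization section looks something like:

	{
		int x;
		int y;
	}

After this revert, a bare "{}" suffices for such tests, since klitmus7
assumes "int" by default; explicit declarations remain necessary only
for non-"int" variables named in an "exists", "locations", or "filter"
directive.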

Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-01-04 14:40:49 -08:00
Paul E. McKenney 1947bfcf81 tools/memory-model: Add types to litmus tests
This commit adds type information for global variables in the litmus
tests in order to allow easier use with klitmus7.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-11-06 17:25:16 -08:00
Alan Stern 6e89e831a9 tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
More than one kernel developer has expressed the opinion that the LKMM
should enforce ordering of writes by locking.  In other words, given
the following code:

	WRITE_ONCE(x, 1);
	spin_unlock(&s);
	spin_lock(&s);
	WRITE_ONCE(y, 1);

the stores to x and y should be propagated in order to all other CPUs,
even though those other CPUs might not access the lock s.  In terms of
the memory model, this means expanding the cumul-fence relation.
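
A litmus-test sketch of this guarantee (the name and variables are
illustrative, in the general style of the tests under
tools/memory-model/litmus-tests/; this is not one of the shipped tests):

	C lock-ww-propagation-sketch

	{}

	P0(int *x, spinlock_t *mylock)
	{
		spin_lock(mylock);
		WRITE_ONCE(*x, 1);
		spin_unlock(mylock);
	}

	P1(int *x, int *y, spinlock_t *mylock)
	{
		int r0;

		spin_lock(mylock);
		r0 = READ_ONCE(*x);
		WRITE_ONCE(*y, 1);
		spin_unlock(mylock);
	}

	P2(int *x, int *y)
	{
		int r1;
		int r2;

		r1 = READ_ONCE(*y);
		smp_mb();
		r2 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 2:r1=1 /\ 2:r2=0)

With the expanded cumul-fence relation, this "exists" clause should
never be satisfied: once r0=1 places P1's critical section after P0's,
P0's store to x must propagate to P2 before P1's store to y does, so P2
cannot observe y=1 and then, after the full barrier, still observe x=0.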

Locks should also provide read-read (and read-write) ordering in a
similar way.  Given:

	READ_ONCE(x);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);		// or WRITE_ONCE(y, 1);

the load of x should be executed before the load of (or store to) y.
The LKMM already provides this ordering, but it provides it even in
the case where the two accesses are separated by a release/acquire
pair of fences rather than unlock/lock.  This would prevent
architectures from using weakly ordered implementations of release and
acquire, which seems like an unnecessary restriction.  The patch
therefore removes the ordering requirement from the LKMM for that
case.
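
One way to make the distinction concrete (again with illustrative
names, not a shipped test):

	C lock-vs-rel-acq-sketch

	{}

	P0(int *x, int *y, spinlock_t *mylock)
	{
		int r0;

		spin_lock(mylock);
		r0 = READ_ONCE(*x);
		spin_unlock(mylock);
		spin_lock(mylock);
		WRITE_ONCE(*y, 1);
		spin_unlock(mylock);
	}

	P1(int *x, int *y)
	{
		int r1;

		r1 = READ_ONCE(*y);
		smp_mb();
		WRITE_ONCE(*x, 1);
	}

	exists (0:r0=1 /\ 1:r1=1)

As described above, the unlock/lock version keeps this "exists" clause
forbidden.  Replacing P0's unlock/lock pair with

	smp_store_release(s, 1);
	r2 = smp_load_acquire(s);

(where s is an ordinary shared int passed to P0 and r2 an additional
register) yields exactly the release/acquire case whose ordering
requirement this patch removes, so the model no longer guarantees that
the same outcome is forbidden there.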

There are several arguments both for and against this change.  Let us
refer to these enhanced ordering properties by saying that the LKMM
would require locks to be RCtso (a bit of a misnomer, but analogous to
RCpc and RCsc) and it would require ordinary acquire/release only to
be RCpc.  (Note: In the following, the phrase "all supported
architectures" is meant not to include RISC-V.  Although RISC-V is
indeed supported by the kernel, the implementation is still somewhat
in a state of flux and therefore statements about it would be
premature.)
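
(For illustration, the one reordering that RCtso locks still permit is
write-to-read across the unlock/lock pair:

	WRITE_ONCE(x, 1);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);

here the store to x is not guaranteed to be ordered before the load of
y as observed by other CPUs; forbidding that as well is the RCsc
strengthening discussed below.)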

Pros:

	The kernel already provides RCtso ordering for locks on all
	supported architectures, even though this is not stated
	explicitly anywhere.  Therefore the LKMM should formalize it.

	In theory, guaranteeing RCtso ordering would reduce the need
	for additional barrier-like constructs meant to increase the
	ordering strength of locks.

	Will Deacon and Peter Zijlstra are strongly in favor of
	formalizing the RCtso requirement.  Linus Torvalds and Will
	would like to go even further, requiring locks to have RCsc
	behavior (ordering preceding writes against later reads), but
	they recognize that this would incur a noticeable performance
	degradation on the POWER architecture.  Linus also points out
	that people have made the mistake, in the past, of assuming
	that locking has stronger ordering properties than is
	currently guaranteed, and this change would reduce the
	likelihood of such mistakes.

	Not requiring ordinary acquire/release to be any stronger than
	RCpc may prove advantageous for future architectures, allowing
	them to implement smp_load_acquire() and smp_store_release()
	with more efficient machine instructions than would be
	possible if the operations had to be RCtso.  Will and Linus
	approve this rationale, hypothetical though it is at the
	moment (it may end up affecting the RISC-V implementation).
	The same argument may or may not apply to RMW-acquire/release;
	see also the second Con entry below.

	Linus feels that locks should be easy for people to use
	without worrying about memory consistency issues, since they
	are so pervasive in the kernel, whereas acquire/release is
	much more of an "experts only" tool.  Requiring locks to be
	RCtso is a step in this direction.

Cons:

	Andrea Parri and Luc Maranget think that locks should have the
	same ordering properties as ordinary acquire/release (indeed,
	Luc points out that the names "acquire" and "release" derive
	from the usage of locks).  Andrea points out that having
	different ordering properties for different forms of acquires
	and releases is not only unnecessary but would also be
	confusing and unmaintainable.

	Locks are constructed from lower-level primitives, typically
	RMW-acquire (for locking) and ordinary release (for unlock).
	It is illogical to require stronger ordering properties from
	the high-level operations than from the low-level operations
	they comprise.  Thus, this change would make

		while (cmpxchg_acquire(&s, 0, 1) != 0)
			cpu_relax();

	an incorrect implementation of spin_lock(&s) as far as the
	LKMM is concerned.  In theory this weakness can be ameliorated
	by changing the LKMM even further, requiring
	RMW-acquire/release also to be RCtso (which it already is on
	all supported architectures).
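
	(For completeness, the matching unlock in such a minimal
	implementation would be the ordinary release store mentioned
	above, along the lines of

		smp_store_release(&s, 0);

	with s being the same lock word used by the cmpxchg_acquire()
	loop.)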

	As far as I know, nobody has singled out any examples of code
	in the kernel that actually relies on locks being RCtso.
	(People mumble about RCU and the scheduler, but nobody has
	pointed to any actual code.  If there are any real cases,
	their number is likely quite small.)  If RCtso ordering is not
	needed, why require it?

	A handful of locking constructs (qspinlocks, qrwlocks, and
	mcs_spinlocks) are built on top of smp_cond_load_acquire()
	instead of an RMW-acquire instruction.  It currently provides
	only the ordinary acquire semantics, not the stronger ordering
	this patch would require of locks.  In theory this could be
	ameliorated by requiring smp_cond_load_acquire() in
	combination with ordinary release also to be RCtso (which is
	currently true on all supported architectures).
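
	(For concreteness, the waiting step in those constructs is of
	the general form

		smp_cond_load_acquire(&lock_word, VAL == 0);

	with lock_word standing in for the construct's actual lock
	field; the final load carries only ordinary acquire semantics,
	which is the gap described above.)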

	On future weakly ordered architectures, people may be able to
	implement locks in a non-RCtso fashion with significant
	performance improvement.  Meeting the RCtso requirement would
	necessarily add run-time overhead.

Overall, the technical aspects of these arguments seem relatively
minor, and it appears mostly to boil down to a matter of opinion.
Since the opinions of senior kernel maintainers such as Linus,
Peter, and Will carry more weight than those of Luc and Andrea, this
patch changes the model in accordance with the maintainers' wishes.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:01 +02:00
Paul E. McKenney acb6c96c52 tools/memory-model: Fix ISA2+pooncelock+pooncelock+pombonce name
The names on the first line of the litmus tests are arbitrary,
but the convention is that they be the filename without the trailing
".litmus".  This commit therefore removes the stray trailing ".litmus"
from ISA2+pooncelock+pooncelock+pombonce.litmus's name.
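
In the C-litmus format used here, that means the test's first line now
reads

	C ISA2+pooncelock+pooncelock+pombonce

rather than

	C ISA2+pooncelock+pooncelock+pombonce.litmus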

Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/20180716180605.16115-2-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-07-17 09:29:31 +02:00
Alan Stern 556bb7d252 tools/memory-model: Add a S lock-based external-view litmus test
This commit adds a litmus test in which P0() and P1() form a lock-based S
litmus test, with the addition of P2(), which observes P0()'s and P1()'s
accesses with a full memory barrier but without the lock.  This litmus
test asks whether writes carried out by two different processes under the
same lock will be seen in order by a third process not holding that lock.
The answer to this question is "yes" for all architectures supporting
the Linux kernel, but is "no" according to the current version of LKMM.

A patch to LKMM is under development.
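
A stripped-down sketch of the pattern in question (illustrative only;
not necessarily identical to the test added by this commit):

	C lock-external-view-sketch

	{}

	P0(int *x, int *y, spinlock_t *mylock)
	{
		spin_lock(mylock);
		WRITE_ONCE(*x, 1);
		WRITE_ONCE(*y, 1);
		spin_unlock(mylock);
	}

	P1(int *y, int *z, spinlock_t *mylock)
	{
		spin_lock(mylock);
		WRITE_ONCE(*y, 2);
		WRITE_ONCE(*z, 1);
		spin_unlock(mylock);
	}

	P2(int *x, int *z)
	{
		int r0;

		WRITE_ONCE(*z, 2);
		smp_mb();
		r0 = READ_ONCE(*x);
	}

	exists (y=2 /\ z=2 /\ 2:r0=0)

Here y=2 pins P1's critical section after P0's in coherence order, and
z=2 pins P2's write after P1's; 2:r0=0 then asks whether P0's store to
x can still be invisible to P2.  Per the above, all architectures
supporting the Linux kernel forbid this outcome, while the current
version of LKMM allows it.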

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: nborisov@suse.com
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1519169112-20593-10-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-02-21 09:58:15 +01:00