# SPDX-License-Identifier: GPL-2.0
generated-y += syscall_table_32.h
generated-y += syscall_table_64.h
generated-y += syscall_table_spu.h
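For context, the two Kbuild directives used in this file do different things: `generated-y` names headers that are produced during the build (the syscall tables above are generated from the architecture's syscall table source), while `generic-y` names headers for which the architecture has no custom version, so Kbuild generates a thin wrapper that includes the `asm-generic` copy. A hedged sketch of the effect (the comments and paths are illustrative, not verified against the powerpc tree):

```makefile
# Header is generated at build time, e.g. from a syscall .tbl source:
generated-y += syscall_table_32.h

# No arch-specific version exists; Kbuild emits a wrapper header that
# simply includes include/asm-generic/mcs_spinlock.h:
generic-y += mcs_spinlock.h
```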
generic-y += export.h
generic-y += kvm_types.h
generic-y += mcs_spinlock.h
powerpc/64s: Implement queued spinlocks and rwlocks

These have shown significantly improved performance and fairness when
spinlock contention is moderate to high on very large systems.

With this series including subsequent patches, on a 16-socket, 1536-thread
POWER9, a stress test such as same-file open/close from all CPUs gets big
speedups: 11620 op/s aggregate with simple spinlocks vs 384158 op/s with
queued spinlocks (33x faster), and the difference in throughput between
the fastest and slowest thread drops from 7x to 1.4x.

Thanks to the fast path being identical in terms of atomics and barriers
(after a subsequent optimisation patch), single-threaded performance is
unchanged (no measurable difference).

On smaller systems, performance and fairness are generally improved as
well. Using dbench on tmpfs as a test (one that starts to run into kernel
spinlock contention), a 2-socket OpenPOWER POWER9 system was tested in
bare-metal and KVM guest configurations. Results can be found here:

https://github.com/linuxppc/issues/issues/305#issuecomment-663487453

Observations are:

- Queued spinlocks are equal when contention is insignificant, as
  expected and as measured with microbenchmarks.

- When there is contention, on bare metal queued spinlocks have better
  throughput and max latency at all points.

- When virtualised, queued spinlocks are slightly worse approaching peak
  throughput, but have significantly better throughput and max latency at
  all points beyond peak, until queued spinlock maximum latency rises
  when clients reach 2x vCPUs.

The regressions haven't been analysed very well yet; there are a lot of
things that can be tuned, particularly the paravirtualised locking, but
the numbers already look like a good net win even on relatively small
systems.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-4-npiggin@gmail.com
generic-y += qrwlock.h
generic-y += vtime.h
generic-y += early_ioremap.h
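The queued-spinlock commit message above relies on the core property of MCS-style locks: each waiter spins on its own cache line and the lock is handed off in FIFO order, which is where the fairness and contention-scaling improvements come from. As a minimal userspace illustration (not the kernel's qspinlock implementation; all names such as `mcs_node`, `mcs_acquire`, and `run_demo` are invented for this sketch), an MCS lock can be written with C11 atomics:

```c
// Minimal MCS (queued) spinlock sketch using C11 atomics and pthreads.
// Illustrative only: the kernel's qspinlock is considerably more
// sophisticated (word-sized lock, pending bit, paravirt hooks).
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;               // true while this waiter must spin
};

typedef _Atomic(struct mcs_node *) mcs_lock;  // tail pointer, NULL = free

static void mcs_acquire(mcs_lock *lock, struct mcs_node *self)
{
    atomic_store_explicit(&self->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&self->locked, true, memory_order_relaxed);

    // Swap ourselves in as the new queue tail; the previous tail, if
    // any, becomes our predecessor.
    struct mcs_node *prev =
        atomic_exchange_explicit(lock, self, memory_order_acq_rel);
    if (prev == NULL)
        return;                       // lock was free; we own it

    // Link behind the predecessor, then spin on our OWN flag only.
    // This per-waiter spinning is what keeps cache-line traffic low
    // and the handoff order FIFO (fair) under contention.
    atomic_store_explicit(&prev->next, self, memory_order_release);
    while (atomic_load_explicit(&self->locked, memory_order_acquire))
        ;                             // spin
}

static void mcs_release(mcs_lock *lock, struct mcs_node *self)
{
    struct mcs_node *next =
        atomic_load_explicit(&self->next, memory_order_acquire);
    if (next == NULL) {
        // No known successor: try to swing the tail back to NULL.
        struct mcs_node *expected = self;
        if (atomic_compare_exchange_strong_explicit(
                lock, &expected, NULL,
                memory_order_acq_rel, memory_order_acquire))
            return;                   // queue empty, lock released
        // A successor is mid-enqueue; wait for it to link itself.
        while ((next = atomic_load_explicit(&self->next,
                                            memory_order_acquire)) == NULL)
            ;
    }
    atomic_store_explicit(&next->locked, false, memory_order_release);
}

// Demo: n threads each do 100000 locked increments of a shared counter.
static mcs_lock demo_lock;
static long demo_counter;

static void *worker(void *arg)
{
    (void)arg;
    struct mcs_node node;             // per-acquisition queue node
    for (int i = 0; i < 100000; i++) {
        mcs_acquire(&demo_lock, &node);
        demo_counter++;
        mcs_release(&demo_lock, &node);
    }
    return NULL;
}

long run_demo(int nthreads)
{
    pthread_t tids[16];
    demo_counter = 0;
    atomic_store(&demo_lock, (struct mcs_node *)NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);
    return demo_counter;
}
```

One design point worth noting: because each waiter spins on its own node rather than on the shared lock word, releasing the lock touches only the successor's cache line, which is what the commit's "better throughput and max latency at all points" under contention reflects.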