Merge tag 'xfs-perag-conv-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs into xfs-5.20-mergeA
xfs: per-ag conversions for 5.20
This series drives the perag down into the AGI, AGF and AGFL access
routines and unifies the perag structure initialisation with the
high level AG header read functions. This largely replaces the
xfs_mount/agno pair that is passed to all these functions with a
perag, and in most places we already have a perag ready to pass in.
There are a few places where perags need to be grabbed before
reading the AG header buffers - some of these will need to be driven
to higher layers to ensure we can run operations on AGs without
getting stuck part way through waiting on a perag reference.
The latter section of this patchset moves some of the AG geometry
information from the xfs_mount to the xfs_perag, and starts
converting code that requires geometry validation to use a perag
instead of a mount and having to extract the AGNO from the object
location. This also allows us to store the AG size in the perag, so
we no longer need to compare the agno against sb_agcount to
determine if the AG is the last AG and so has a runt size. This
greatly simplifies some of the type validity checking we do and
substantially reduces its CPU overhead. It
also cuts over 1.2kB out of the binary size.
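To make the shape of the change concrete, here is a hedged sketch (not
the literal committed diff) of how call sites move from mount/agno
pairs to a perag, and how the runt-AG size check collapses into a
per-AG field:

	/* before: mount/agno everywhere, last-AG size computed by hand */
	error = xfs_read_agi(mp, tp, agno, &agibp);
	if (agno == mp->m_sb.sb_agcount - 1)
		agsize = mp->m_sb.sb_dblocks -
			 (xfs_rfsblock_t)agno * mp->m_sb.sb_agblocks;

	/* after: the perag carries its own geometry */
	error = xfs_read_agi(pag, tp, &agibp);
	agsize = pag->block_count;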
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
* tag 'xfs-perag-conv-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
xfs: make is_log_ag() a first class helper
xfs: replace xfs_ag_block_count() with perag accesses
xfs: Pre-calculate per-AG agino geometry
xfs: Pre-calculate per-AG agbno geometry
xfs: pass perag to xfs_alloc_read_agfl
xfs: pass perag to xfs_alloc_put_freelist
xfs: pass perag to xfs_alloc_get_freelist
xfs: pass perag to xfs_read_agf
xfs: pass perag to xfs_read_agi
xfs: pass perag to xfs_alloc_read_agf()
xfs: kill xfs_alloc_pagf_init()
xfs: pass perag to xfs_ialloc_read_agi()
xfs: kill xfs_ialloc_pagi_init()
xfs: make last AG grow/shrink perag centric
Merge tag 'xfs-cil-scale-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs into xfs-5.20-mergeA
xfs: improve CIL scalability
This series aims to improve the scalability of XFS transaction
commits on large CPU count machines. My 32p machine hits contention
limits in xlog_cil_commit() at about 700,000 transaction commits a
section. It hits this at 16 thread workloads, and 32 thread
workloads go no faster and just burn CPU on the CIL spinlocks.
This patchset gets rid of spinlocks and global serialisation points
in the xlog_cil_commit() path. It does this by moving to a
combination of per-cpu counters, unordered per-cpu lists and
post-ordered per-cpu lists.
This results in transaction commit rates exceeding 1.4 million
commits/s under certain unlink workloads, and while the log lock
contention is largely gone there is still significant lock
contention in the VFS (dentry cache, inode cache and security layers)
at >600,000 transactions/s that still limits scalability.
The changes to the CIL accounting and behaviour, combined with the
structural changes to xlog_write() in prior patchsets make the
per-cpu restructuring possible and sane. This allows us to move to
precalculated reservation requirements that allow for reservation
stealing to be accounted across multiple CPUs accurately.
That is, instead of trying to account for continuation log opheaders
on a "growth" basis, we pre-calculate how many iclogs we'll need to
write out a maximally sized CIL checkpoint and steal that reserved
space one commit at a time until the CIL has a full
reservation. If we ever run a commit when we are already at the hard
limit (because of post-throttling) we simply take an extra reservation
from each commit that is run when over the limit. Hence we don't
need to do space usage math in the fast path and so never need to
sum the per-cpu counters there.
Similarly, per-cpu lists have the problem of ordering - we can't
remove an item from a per-cpu list if we want to move it forward in
the CIL. We solve this problem by using an atomic counter to give
every commit a sequence number that is copied into the log items in
that transaction. Hence relogging items just overwrites the sequence
number in the log item, and does not move it in the per-cpu lists.
Once we reaggregate the per-cpu lists back into a single list in the
CIL push work, we can run it through list_sort() and reorder it back
into a globally ordered list. This costs a bit of CPU time, but now
that the CIL can run multiple works and pipelines properly, this is
not a limiting factor for performance. It does increase fsync
latency when the CIL is full, but workloads issuing large numbers of
fsync()s or sync transactions end up with very small CILs and so the
latency impact of sorting is not measurable for such workloads.
Overall, this pushes the transaction commit bottleneck out to the
lockless reservation grant head updates. These atomic updates don't
start to be a limiting factor until > 1.5 million transactions/s are
being run, at which point the accounting functions start to show up
in profiles as the highest CPU users. Still, this series doubles
transaction throughput without increasing CPU usage before we get
to that cacheline contention breakdown point...
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
* tag 'xfs-cil-scale-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
xfs: expanding delayed logging design with background material
xfs: xlog_sync() manually adjusts grant head space
xfs: avoid cil push lock if possible
xfs: move CIL ordering to the logvec chain
xfs: convert log vector chain to use list heads
xfs: convert CIL to unordered per cpu lists
xfs: Add order IDs to log items in CIL
xfs: convert CIL busy extents to per-cpu
xfs: track CIL ticket reservation in percpu structure
xfs: implement percpu cil space used calculation
xfs: introduce per-cpu CIL tracking structure
xfs: rework per-iclog header CIL reservation
xfs: lift init CIL reservation out of xc_cil_lock
xfs: use the CIL space used counter for emptiness checks
We check if an AG contains the log in many places, so make this
a first class XFS helper by lifting it to fs/xfs/libxfs/xfs_ag.h and
renaming it xfs_ag_contains_log(). Then convert all the places that
check if the AG contains the log to use this helper.
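The resulting helper is a trivial predicate; roughly (a sketch based
on the description above):

	static inline bool
	xfs_ag_contains_log(struct xfs_mount *mp, xfs_agnumber_t agno)
	{
		return mp->m_sb.sb_logstart > 0 &&
		       agno == XFS_FSB_TO_AGNO(mp, mp->m_sb.sb_logstart);
	}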
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Many of the places that call xfs_ag_block_count() have a perag
available. These places can just read pag->block_count directly
instead of calculating the AG block count from first principles.
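That is, a call site that previously computed the count:

	agblocks = xfs_ag_block_count(mp, pag->pag_agno);

becomes a plain field read:

	agblocks = pag->block_count;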
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
There is a lot of overhead in functions like xfs_verify_agino() that
repeatedly calculate the geometry limits of an AG. These can be
pre-calculated as they are static and the verification context has
a per-ag context it can quickly reference.
In the case of xfs_verify_agino(), we now always have a perag
context handy, so we can store the minimum and maximum agino values
in the AG in the perag. This means we don't have to calculate
them on every call, and the check can be inlined in callers if we
move it to xfs_ag.h.
xfs_verify_agino_or_null() gets the same perag treatment.
xfs_agino_range() is moved to xfs_ag.c as it's not really a type
function, and its use is largely restricted as the first and last
aginos can be grabbed straight from the perag in most cases.
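With the precalculated values the common-case check reduces to two
comparisons that can live in xfs_ag.h; a sketch, assuming
pag->agino_min and pag->agino_max are the new perag fields:

	static inline bool
	xfs_verify_agino(struct xfs_perag *pag, xfs_agino_t agino)
	{
		return agino >= pag->agino_min && agino <= pag->agino_max;
	}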
Note that we leave the original xfs_verify_agino in place in
xfs_types.c as a static function as other callers in that file do
not have per-ag contexts so still need to go the long way. It's been
renamed to xfs_verify_agno_agino() to indicate it takes both an agno
and an agino to differentiate it from the new function.
$ size --totals fs/xfs/built-in.a
text data bss dec hex filename
before 1482185 329588 572 1812345 1ba779 (TOTALS)
after 1481937 329588 572 1812097 1ba681 (TOTALS)
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
There is a lot of overhead in functions like xfs_verify_agbno() that
repeatedly calculate the geometry limits of an AG. These can be
pre-calculated as they are static and the verification context has
a per-ag context it can quickly reference.
In the case of xfs_verify_agbno(), we now always have a perag
context handy, so we can store the AG length and the minimum valid
block in the AG in the perag. This means we don't have to calculate
them on every call, and the check can be inlined in callers if we
move it to xfs_ag.h.
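The resulting check is just a pair of comparisons against perag
fields; a sketch, assuming pag->block_count and pag->min_block are
the precalculated values:

	static inline bool
	xfs_verify_agbno(struct xfs_perag *pag, xfs_agblock_t agbno)
	{
		if (agbno >= pag->block_count)
			return false;
		if (agbno <= pag->min_block)
			return false;
		return true;
	}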
Move xfs_ag_block_count() to xfs_ag.c because it's really a
per-ag function and not an XFS type function. We need a little
bit of rework that is specific to xfs_initialise_perag() to allow
growfs to calculate the new perag sizes before we've updated the
primary superblock during the grow (chicken/egg situation).
Note that we leave the original xfs_verify_agbno in place in
xfs_types.c as a static function as other callers in that file do
not have per-ag contexts so still need to go the long way. It's been
renamed to xfs_verify_agno_agbno() to indicate it takes both an agno
and an agbno to differentiate it from the new function.
Future commits will make similar changes for other per-ag geometry
validation functions.
Further:
$ size --totals fs/xfs/built-in.a
text data bss dec hex filename
before 1483006 329588 572 1813166 1baaae (TOTALS)
after 1482185 329588 572 1812345 1ba779 (TOTALS)
This rework reduces the binary size by ~820 bytes, indicating
that much less work is being done to bounds check the agbno values
against the per-AG geometry information.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
We have the perag in most places we call xfs_alloc_read_agfl, so
pass the perag instead of a mount/agno pair.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
It's available in all callers, so pass it in so that the perag can
be passed further down the stack.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
It's available in all callers, so pass it in so that the perag can
be passed further down the stack.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
We have the perag in most places we call xfs_read_agf, so pass the
perag instead of a mount/agno pair.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
We have the perag in most places we call xfs_read_agi, so pass the
perag instead of a mount/agno pair.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
xfs_alloc_read_agf() initialises the perag if it hasn't been done
yet, so it makes sense to pass it the perag rather than pull a
reference from the buffer. This allows callers to be per-ag centric
rather than passing mount/agno pairs everywhere.
Whilst modifying the xfs_reflink_find_shared() function definition,
declare it static and remove the extern declaration as it is an
internal function only these days.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Trivial wrapper around xfs_alloc_read_agf(), can be easily replaced
by passing a NULL agfbp to xfs_alloc_read_agf().
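In other words, callers change roughly like this (sketch; flags are
whatever the call site already used):

	/* before */
	error = xfs_alloc_pagf_init(mp, tp, agno, flags);

	/* after: a NULL agfbp means "initialise the perag, skip the buffer" */
	error = xfs_alloc_read_agf(pag, tp, flags, NULL);

The same pattern kills xfs_ialloc_pagi_init() later in the series.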
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
xfs_ialloc_read_agi() initialises the perag if it hasn't been done
yet, so it makes sense to pass it the perag rather than pull a
reference from the buffer. This allows callers to be per-ag centric
rather than passing mount/agno pairs everywhere.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
This is just a basic wrapper around xfs_ialloc_read_agi(), which can
be entirely handled by xfs_ialloc_read_agi() by passing a NULL
agibpp....
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Because the perag must exist for these operations, look it up as
part of the common shrink operations and pass it instead of the
mount/agno pair.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
I wrote up a description of how transactions, space reservations and
relogging work together in response to a question for background
material on the delayed logging design. Add this to the existing
document for ease of future reference.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
When xlog_sync() rounds off the tail of the iclog that is being
flushed, it manually subtracts that space from the grant heads. This
space is actually reserved by the transaction ticket that covers
the xlog_sync() call from xlog_write(), but we don't plumb the
ticket down far enough for it to account for the space consumed in
the current log ticket.
The grant heads are hot, so we really should be accounting this to
the ticket if we can, rather than adding thousands of extra grant
head updates every CIL commit.
Interestingly, this actually indicates a potential log space overrun
can occur when we force the log. By the time that xfs_log_force()
pushes out an active iclog and consumes the roundoff space, the
reservation for that roundoff space has been returned to the grant
heads and is no longer covered by a reservation. In theory the
roundoff added by a log force on an already full log could push the
write head past the tail. In practice, the CIL commit that writes to
the log and needs the iclog pushed will have reserved space for
roundoff, so when it releases the ticket there will still be
physical space for the roundoff to be committed to the log, even
though it is no longer reserved. This roundoff won't be enough space
to allow a transaction to be woken if the log is full, so overruns
should not actually occur in practice.
That said, it indicates that we should not release the CIL context
log ticket until after we've released the commit iclog. It also
means that xlog_sync() still needs the direct grant head
manipulation if we don't provide it with a ticket. Log forces are
rare when we are in fast paths running 1.5 million transactions/s
that make the grant heads hot, so let's optimise the hot case and
pass CIL log tickets down to the xlog_sync() code.
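A simplified sketch of the resulting xlog_sync() logic (names as
described above, not the literal diff):

	/* account the roundoff to the CIL ticket if we were given one */
	if (ticket) {
		ticket->t_curr_res -= roundoff;
	} else {
		/* log forces pass no ticket: adjust the grant heads directly */
		xlog_grant_add_space(log, &log->l_reserve_head.grant, roundoff);
		xlog_grant_add_space(log, &log->l_write_head.grant, roundoff);
	}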
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Because now it hurts when the CIL fills up.
- 37.20% __xfs_trans_commit
- 35.84% xfs_log_commit_cil
- 19.34% _raw_spin_lock
- do_raw_spin_lock
19.01% __pv_queued_spin_lock_slowpath
- 4.20% xfs_log_ticket_ungrant
0.90% xfs_log_space_wake
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Adding a list_sort() call to the CIL push work while the xc_ctx_lock
is held exclusively has resulted in fairly long lock hold times and
that stops all front end transaction commits from making progress.
We can move the sorting out of the xc_ctx_lock if we can transfer
the ordering information to the log vectors as they are detached
from the log items and then we can sort the log vectors. With these
changes, we can move the list_sort() call to just before we call
xlog_write() when we aren't holding any locks at all.
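A sketch of what the lockless sort looks like once the order ID lives
in the log vector (lv_list and lv_order_id are assumed field names for
illustration):

	static int
	xlog_cil_order_cmp(void *priv, const struct list_head *a,
			   const struct list_head *b)
	{
		struct xfs_log_vec *l1 = container_of(a, struct xfs_log_vec, lv_list);
		struct xfs_log_vec *l2 = container_of(b, struct xfs_log_vec, lv_list);

		return l1->lv_order_id > l2->lv_order_id;
	}

	/* no locks held - the lv chain is private to the push work here */
	list_sort(NULL, &ctx->lv_chain, xlog_cil_order_cmp);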
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Because the next change is going to require sorting log vectors, and
that requires arbitrary rearrangement of the list which cannot be
done easily with a single linked list.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
So that we can remove the cil_lock which is a global serialisation
point. We've already got ordering sorted, so all we need to do is
treat the CIL list like the busy extent list and reconstruct it
before the push starts.
This is what we're trying to avoid:
- 75.35% 1.83% [kernel] [k] xfs_log_commit_cil
- 46.35% xfs_log_commit_cil
- 41.54% _raw_spin_lock
- 67.30% do_raw_spin_lock
66.96% __pv_queued_spin_lock_slowpath
Which happens on a 32p system when running a 32-way 'rm -rf'
workload. After this patch:
- 20.90% 3.23% [kernel] [k] xfs_log_commit_cil
- 17.67% xfs_log_commit_cil
- 6.51% xfs_log_ticket_ungrant
1.40% xfs_log_space_wake
2.32% memcpy_erms
- 2.18% xfs_buf_item_committing
- 2.12% xfs_buf_item_release
- 1.03% xfs_buf_unlock
0.96% up
0.72% xfs_buf_rele
1.33% xfs_inode_item_format
1.19% down_read
0.91% up_read
0.76% xfs_buf_item_format
- 0.68% kmem_alloc_large
- 0.67% kmem_alloc
0.64% __kmalloc
0.50% xfs_buf_item_size
It kinda looks like the workload is running out of log space all
the time. But all the spinlock contention is gone and the
transaction commit rate has gone from 800k/s to 1.3M/s so the amount
of real work being done has gone up a *lot*.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Before we split the ordered CIL up into per cpu lists, we need a
mechanism to track the order of the items in the CIL. We need to do
this because there are rules around the order in which related items
must physically appear in the log even inside a single checkpoint
transaction.
An example of this is intents - an intent must appear in the log
before its intent done record so that log recovery can cancel the
intent correctly. If we have these two records misordered in the
CIL, then they will not be recovered correctly by journal replay.
We also will not be able to move items to the tail of
the CIL list when they are relogged, hence the log items will need
some mechanism to allow the correct log item order to be recreated
before we write log items to the journal.
Hence we need to have a mechanism for recording global order of
transactions in the log items so that we can recover that order
from un-ordered per-cpu lists.
Do this with a simple monotonic increasing commit counter in the CIL
context. Each log item in the transaction gets stamped with the
current commit order ID before it is added to the CIL. If the item
is already in the CIL, leave it where it is instead of moving it to
the tail of the list and instead sort the list before we start the
push work.
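A sketch of the stamping (simplified; li_order_id is the new log item
field, the rest of the names are assumed):

	order = atomic_inc_return(&ctx->order_id);
	list_for_each_entry(lip, &tp->t_items, li_trans) {
		/* relogging just refreshes the order ID in place */
		lip->li_order_id = order;
		if (!list_empty(&lip->li_cil))
			continue;	/* already in the CIL, don't move it */
		list_add_tail(&lip->li_cil, &cil_list);
	}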
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
To get them out from under the CIL lock.
This is an unordered list, so we can simply punt it to per-cpu lists
during transaction commits and reaggregate it back into a single
list during the CIL push work.
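The commit side then becomes a simple splice onto the local CPU's
list; a sketch of the idea:

	cilpcp = get_cpu_ptr(cil->xc_pcp);
	list_splice_init(&tp->t_busy, &cilpcp->busy_extents);
	put_cpu_ptr(cilpcp);

with the push work splicing each CPU's list back into the checkpoint
context before it processes the busy extents.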
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Now that we have the CIL percpu structures in place, implement the
space used counter as a per-cpu counter.
We have to be really careful now about ensuring that the checks and
updates run without arbitrary delays, which means they need to run
with pre-emption disabled. We do this by careful placement of
the get_cpu_ptr/put_cpu_ptr calls to access the per-cpu structures
for that CPU.
We need to be able to reliably detect that the CIL has reached
the hard limit threshold so we can take extra reservations for the
iclog headers when the space used overruns the original reservation.
Hence we factor out xlog_cil_over_hard_limit() from
xlog_cil_push_background().
The global CIL space used is an atomic variable that is backed by
per-cpu aggregation to minimise the number of atomic updates we do
to the global state in the fast path. While we are under the soft
limit, we aggregate only when the per-cpu aggregation is over the
proportion of the soft limit assigned to that CPU. This means that
all CPUs can use all but one byte of their aggregation threshold
and we will not go over the soft limit.
Hence once we detect that we've gone over both a per-cpu aggregation
threshold and the soft limit, we know that we have only
exceeded the soft limit by one per-cpu aggregation threshold. Even
if all CPUs hit this at the same time, we can't be over the hard
limit, so we can run an aggregation back into the atomic counter
at this point and still be under the hard limit.
At this point, we will be over the soft limit and hence we'll
aggregate into the global atomic used space directly rather than the
per-cpu counters, hence providing accurate detection of hard limit
excursion for accounting and reservation purposes.
Hence we get the best of both worlds - lockless, scalable per-cpu
fast path plus accurate, atomic detection of hard limit excursion.
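A much-simplified sketch of that fast path (the real threshold math
is more careful; per_cpu_threshold stands in for the CPU's share of
the soft limit):

	cilpcp = get_cpu_ptr(cil->xc_pcp);
	cilpcp->space_used += len;
	if (atomic_read(&ctx->space_used) >= XLOG_CIL_SPACE_LIMIT(log) ||
	    cilpcp->space_used >= per_cpu_threshold) {
		/* fold this CPU's usage back into the global counter */
		atomic_add(cilpcp->space_used, &ctx->space_used);
		cilpcp->space_used = 0;
	}
	put_cpu_ptr(cilpcp);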
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Looking at the conditional lock acquire functions in the kernel due to
the new sparse support (see commit 4a557a5d1a "sparse: introduce
conditional lock acquire function attribute"), it became obvious that
the lockref code has a couple of them, but they don't match the usual
naming convention for the other ones, and their return value logic is
also reversed.
In the other very similar places, the naming pattern is '*_and_lock()'
(eg 'atomic_dec_and_lock()' and 'refcount_dec_and_lock()'), and the
function returns true when the lock is taken.
The lockref code is superficially very similar to the refcount code,
only with the special "atomic wrt the embedded lock" semantics. But
instead of the '*_and_lock()' naming it uses '*_or_lock()'.
And instead of returning true in case it took the lock, it returns true
if it *didn't* take the lock.
Now, arguably the lockref code is quite logical: it really is an "either
decrement _or_ lock" kind of situation - and the return value is about
whether the operation succeeded without any special care needed.
So despite the similarities, the differences do make some sense, and
maybe it's not worth trying to unify the different conditional locking
primitives in this area.
But while looking at this all, it did become obvious that the
'lockref_get_or_lock()' function hasn't actually had any users for
almost a decade.
The only user it ever had was the shortlived 'd_rcu_to_refcount()'
function, and it got removed and replaced with 'lockref_get_not_dead()'
back in 2013 in commits 0d98439ea3 ("vfs: use lockref 'dead' flag to
mark unrecoverably dead dentries") and e5c832d555 ("vfs: fix dentry
RCU to refcounting possibly sleeping dput()")
In fact, that single use was removed less than a week after the whole
function was introduced in commit b3abd80250 ("lockref: add
'lockref_get_or_lock()' helper") so this function has been around for a
decade, but only had a user for six days.
Let's just put this mis-designed and unused function out of its misery.
We can think about the naming and semantic oddities of the remaining
'lockref_put_or_lock()' later, but at least that function has users.
And while the naming is different and the return value doesn't match,
that function matches the whole '{atomic,refcount}_dec_and_test()'
pattern much better (ie the magic happens when the count goes down to
zero, not when it is incremented from zero).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The kernel tends to try to avoid conditional locking semantics because
it makes it harder to think about and statically check locking rules,
but we do have a few fundamental locking primitives that take locks
conditionally - most obviously the 'trylock' functions.
That has always been a problem for 'sparse' checking for locking
imbalance, and we've had a special '__cond_lock()' macro that we've used
to let sparse know how the locking works:
# define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0)
so that you can then use this to tell sparse that (for example) the
spinlock trylock macro ends up acquiring the lock when it succeeds, but
not when it fails:
#define raw_spin_trylock(lock) __cond_lock(lock, _raw_spin_trylock(lock))
and then sparse can follow along the locking rules when you have code like
	if (!spin_trylock(&dentry->d_lock))
		return LRU_SKIP;
	.. sparse sees that the lock is held here..
	spin_unlock(&dentry->d_lock);
and sparse ends up happy about the lock contexts.
However, this '__cond_lock()' use does result in very ugly header files,
and requires you to basically wrap the real function with that macro
that uses '__cond_lock'. Which has made PeterZ NAK things that try to
fix sparse warnings over the years [1].
To solve this, there is now a very experimental patch to sparse that
basically does the exact same thing as '__cond_lock()' did, but using a
function attribute instead. That seems to make PeterZ happy [2].
Note that this does not replace existing use of '__cond_lock()', but
only exposes the new proposed attribute and uses it for the previously
unannotated 'refcount_dec_and_lock()' family of functions.
For existing sparse installations, this will make no difference (a
negative output context was ignored), but if you have the experimental
sparse patch it will make sparse now understand code that uses those
functions, the same way '__cond_lock()' makes sparse understand the very
similar 'atomic_dec_and_lock()' uses that have the old '__cond_lock()'
annotations.
Note that in some cases this will silence existing context imbalance
warnings. But in other cases it may end up exposing new sparse warnings
for code that sparse just didn't see the locking for at all before.
This is a trial, in other words. I'd expect that if it ends up being
successful, and new sparse releases end up having this new attribute,
we'll migrate the old-style '__cond_lock()' users to use the new-style
'__cond_acquires' function attribute.
The actual experimental sparse patch was posted in [3].
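For reference, the attribute and its first use look roughly like this
(as described above; the exact definition lives under __CHECKER__ in
compiler_types.h):

	/* sparse: returns true <=> the lock was acquired */
	# define __cond_acquires(x)	__attribute__((context(x,0,-1)))

	/* e.g. in <linux/refcount.h> */
	extern __must_check bool
	refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
		__cond_acquires(lock);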
Link: https://lore.kernel.org/all/20130930134434.GC12926@twins.programming.kicks-ass.net/ [1]
Link: https://lore.kernel.org/all/Yr60tWxN4P568x3W@worktop.programming.kicks-ass.net/ [2]
Link: https://lore.kernel.org/all/CAHk-=wjZfO9hGqJ2_hGQG3U_XzSh9_XaXze=HgPdvJbgrvASfA@mail.gmail.com/ [3]
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Aring <aahringo@redhat.com>
Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'xfs-5.19-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs fixes from Darrick Wong:
"This fixes some stalling problems and corrects the last of the
problems (I hope) observed during testing of the new atomic xattr
update feature.
- Fix statfs blocking on background inode gc workers
- Fix some broken inode lock assertion code
- Fix xattr leaf buffer leaks when cancelling a deferred xattr update
operation
- Clean up xattr recovery to make it easier to understand.
- Fix xattr leaf block verifiers tripping over empty blocks.
- Remove complicated and error prone xattr leaf block bholding mess.
- Fix a bug where an rt extent crossing EOF was treated as "posteof"
blocks and cleaned unnecessarily.
- Fix a UAF when log shutdown races with unmount"
* tag 'xfs-5.19-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
xfs: prevent a UAF when log IO errors race with unmount
xfs: dont treat rt extents beyond EOF as eofblocks to be cleared
xfs: don't hold xattr leaf buffers across transaction rolls
xfs: empty xattr leaf header blocks are not corruption
xfs: clean up the end of xfs_attri_item_recover
xfs: always free xattri_leaf_bp when cancelling a deferred op
xfs: use invalidate_lock to check the state of mmap_lock
xfs: factor out the common lock flags assert
xfs: introduce xfs_inodegc_push()
xfs: bound maximum wait time for inodegc work
Merge tag 'for-5.19/parisc-4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc architecture fixes from Helge Deller:
"Two important fixes for bugs in code which was added in 5.18:
- Fix userspace signal failures on 32-bit kernel due to a bug in vDSO
- Fix 32-bit load-word unalignment exception handler which returned
wrong values"
* tag 'for-5.19/parisc-4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: Fix vDSO signal breakage on 32-bit kernel
parisc/unaligned: Fix emulate_ldw() breakage
Addition of vDSO support for parisc in kernel v5.18 suddenly broke glibc
signal testcases on a 32-bit kernel.
The trampoline code (sigtramp.S) which is mapped into userspace includes
an offset to the context data on the stack, which is used by gdb and
glibc to get access to registers.
In a 32-bit kernel we mistakenly used the offset into the compat
context (which is only valid on a 64-bit kernel) instead of the
offset into the "native" 32-bit context.
Reported-by: John David Anglin <dave.anglin@bell.net>
Tested-by: John David Anglin <dave.anglin@bell.net>
Fixes: df24e1783e ("parisc: Add vDSO support")
CC: stable@vger.kernel.org # 5.18
Signed-off-by: Helge Deller <deller@gmx.de>
Merge tag 'perf-tools-fixes-for-v5.19-2022-07-02' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tools fixes from Arnaldo Carvalho de Melo:
- BPF program info linear (BPIL) data is accessed assuming 64-bit
alignment resulting in undefined behavior as the data is just byte
aligned. Fix it; found using -fsanitize=undefined.
- Fix 'perf offcpu' build on old kernels wrt task_struct's
state/__state field.
- Fix perf_event_attr.sample_type setting on the 'offcpu-time' event
synthesized by the 'perf offcpu' tool.
- Don't bail out when synthesizing PERF_RECORD_ events for pre-existing
threads when one goes away while parsing its procfs entries.
- Don't sort the task scan result from /proc, it's not needed and
introduces bugs when the main thread isn't the first one to be
processed.
- Fix uninitialized 'offset' variable on aarch64 in the unwind code.
- Sync KVM headers with the kernel sources.
* tag 'perf-tools-fixes-for-v5.19-2022-07-02' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
perf synthetic-events: Ignore dead threads during event synthesis
perf synthetic-events: Don't sort the task scan result from /proc
perf unwind: Fix unitialized 'offset' variable on aarch64
tools headers UAPI: Sync linux/kvm.h with the kernel sources
perf bpf: 8 byte align bpil data
tools kvm headers arm64: Update KVM headers from the kernel sources
perf offcpu: Accept allowed sample types only
perf offcpu: Fix build failure on old kernels
Merge tag 'powerpc-5.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- Fix BPF uapi confusion about the correct type of bpf_user_pt_regs_t.
- Fix virt_addr_valid() when memory is hotplugged above the boot-time
high_memory value.
- Fix a bug in 64-bit Book3E map_kernel_page() which would incorrectly
allocate a PMD page at PUD level.
- Fix a couple of minor issues found since we enabled KASAN for 64-bit
Book3S.
Thanks to Aneesh Kumar K.V, Cédric Le Goater, Christophe Leroy, Kefeng
Wang, Liam Howlett, Nathan Lynch, and Naveen N. Rao.
* tag 'powerpc-5.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/memhotplug: Add add_pages override for PPC
powerpc/bpf: Fix use of user_pt_regs in uapi
powerpc/prom_init: Fix kernel config grep
powerpc/book3e: Fix PUD allocation size in map_kernel_page()
powerpc/xive/spapr: correct bitmap allocation size
When synthesizing various task events, it scans the list of tasks
first and then accesses them later. There's a window in which threads
can die between the two steps and their proc entries may no longer be
available. Instead of bailing out, we can ignore that thread and move on.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20220701205458.985106-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
It should not sort the result as procfs already returns a proper
ordering of tasks. Actually, sorting caused problems because it
doesn't guarantee that the main thread is processed first.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20220701205458.985106-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Commit dc2cf4ca86 ("perf unwind: Fix segbase for ld.lld linked
objects") uncovered the following issue on aarch64:
util/unwind-libunwind-local.c: In function 'find_proc_info':
util/unwind-libunwind-local.c:386:28: error: 'offset' may be used uninitialized in this function [-Werror=maybe-uninitialized]
386 | if (ofs > 0) {
| ^
util/unwind-libunwind-local.c:199:22: note: 'offset' was declared here
199 | u64 address, offset;
| ^~~~~~
util/unwind-libunwind-local.c:371:20: error: 'offset' may be used uninitialized in this function [-Werror=maybe-uninitialized]
371 | if (ofs <= 0) {
| ^
util/unwind-libunwind-local.c:199:22: note: 'offset' was declared here
199 | u64 address, offset;
| ^~~~~~
util/unwind-libunwind-local.c:363:20: error: 'offset' may be used uninitialized in this function [-Werror=maybe-uninitialized]
363 | if (ofs <= 0) {
| ^
util/unwind-libunwind-local.c:199:22: note: 'offset' was declared here
199 | u64 address, offset;
| ^~~~~~
In file included from util/libunwind/arm64.c:37:
Fixes: dc2cf4ca86 ("perf unwind: Fix segbase for ld.lld linked objects")
Signed-off-by: Ivan Babrou <ivan@cloudflare.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Fangrui Song <maskray@google.com>
Cc: Ian Rogers <irogers@google.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: kernel-team@cloudflare.com
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20220701182046.12589-1-ivan@cloudflare.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'libnvdimm-fixes-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm fix from Vishal Verma:
- Fix a bug in the libnvdimm 'BTT' (Block Translation Table) driver
where accounting for poison blocks to be cleared was off by one,
causing a failure to clear the last badblock in an nvdimm region.
* tag 'libnvdimm-fixes-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
nvdimm: Fix badblocks clear off-by-one error
Merge tag 'thermal-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull thermal control fix from Rafael Wysocki:
"Add a new CPU ID to the list of supported processors in the
intel_tcc_cooling driver (Sumeet Pawnikar)"
* tag 'thermal-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
thermal: intel_tcc_cooling: Add TCC cooling support for RaptorLake
Merge tag 'pm-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management fixes from Rafael Wysocki:
"These fix some issues in cpufreq drivers and some issues in devfreq:
- Fix error code path issues related PROBE_DEFER handling in devfreq
(Christian Marangi)
- Revert an editing accident in SPDX-License line in the devfreq
passive governor (Lukas Bulwahn)
- Fix refcount leak in of_get_devfreq_events() in the exynos-ppmu
devfreq driver (Miaoqian Lin)
- Use HZ_PER_KHZ macro in the passive devfreq governor (Yicong Yang)
- Fix missing of_node_put for qoriq and pmac32 driver (Liang He)
- Fix issues around throttle interrupt for qcom driver (Stephen Boyd)
- Add MT8186 to cpufreq-dt-platdev blocklist (AngeloGioacchino Del
Regno)
- Make amd-pstate enable CPPC on resume from S3 (Jinzhou Su)"
* tag 'pm-5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
PM / devfreq: passive: revert an editing accident in SPDX-License line
PM / devfreq: Fix kernel warning with cpufreq passive register fail
PM / devfreq: Rework freq_table to be local to devfreq struct
PM / devfreq: exynos-ppmu: Fix refcount leak in of_get_devfreq_events
PM / devfreq: passive: Use HZ_PER_KHZ macro in units.h
PM / devfreq: Fix cpufreq passive unregister erroring on PROBE_DEFER
PM / devfreq: Mute warning on governor PROBE_DEFER
PM / devfreq: Fix kernel panic with cpu based scaling to passive gov
cpufreq: Add MT8186 to cpufreq-dt-platdev blocklist
cpufreq: pmac32-cpufreq: Fix refcount leak bug
cpufreq: qcom-hw: Don't do lmh things without a throttle interrupt
drivers: cpufreq: Add missing of_node_put() in qoriq-cpufreq.c
cpufreq: amd-pstate: Add resume and suspend callbacks
Merge cpufreq fixes for 5.19-rc5, including ARM cpufreq fixes and the
following one:
- Make amd-pstate enable CPPC on resume from S3 (Jinzhou Su).
* pm-cpufreq:
cpufreq: Add MT8186 to cpufreq-dt-platdev blocklist
cpufreq: pmac32-cpufreq: Fix refcount leak bug
cpufreq: qcom-hw: Don't do lmh things without a throttle interrupt
drivers: cpufreq: Add missing of_node_put() in qoriq-cpufreq.c
cpufreq: amd-pstate: Add resume and suspend callbacks
Merge tag 'hwmon-for-v5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fixes from Guenter Roeck:
- Fix error handling in ibmaem driver initialization
- Fix bad data reported by occ driver after setting power cap
- Fix typos in pmbus/ucd9200 driver comments
* tag 'hwmon-for-v5.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (ibmaem) don't call platform_device_del() if platform_device_add() fails
hwmon: (pmbus/ucd9200) fix typos in comments
hwmon: (occ) Prevent power cap command overwriting poll response
If platform_device_add() fails, there is no need to call
platform_device_del(). Split platform_device_unregister() into
platform_device_del()/platform_device_put() so that
platform_device_put() can be called separately.
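A sketch of the correct pattern (driver-core API semantics, not the
literal ibmaem diff; the device name is illustrative):

	pdev = platform_device_alloc("aem", id);
	if (!pdev)
		return -ENOMEM;

	ret = platform_device_add(pdev);
	if (ret) {
		/* add failed: drop the reference only, never _del() */
		platform_device_put(pdev);
		return ret;
	}

	/* teardown after a successful add */
	platform_device_del(pdev);
	platform_device_put(pdev);	/* == platform_device_unregister() */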
Fixes: 8808a793f0 ("ibmaem: new driver for power/energy/temp meters in IBM System X hardware")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20220701074153.4021556-1-yangyingliang@huawei.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fix from Catalin Marinas:
"Restore TLB invalidation for the 'break-before-make' rule on
contiguous ptes (missed in a recent clean-up)"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: hugetlb: Restore TLB invalidation for BBM on contiguous ptes
Merge tag 's390-5.19-5' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Alexander Gordeev:
- Fix purgatory build process so bin2c tool does not get built
unnecessarily and the Makefile is more consistent with other
architectures.
- Return earlier simple design of arch_get_random_seed_long|int() and
arch_get_random_long|int() callbacks as result of changes in generic
RNG code.
- Fix minor comment typos and spelling mistakes.
* tag 's390-5.19-5' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/qdio: Fix spelling mistake
s390/sclp: Fix typo in comments
s390/archrandom: simplify back to earlier design and initialize earlier
s390/purgatory: remove duplicated build rule of kexec-purgatory.o
s390/purgatory: hard-code obj-y in Makefile
s390: remove unneeded 'select BUILD_BIN2C'
Merge tag 'nfs-for-5.19-3' of git://git.linux-nfs.org/projects/anna/linux-nfs
Pull NFS client fixes from Anna Schumaker:
- Allocate a fattr for _nfs4_discover_trunking()
- Fix module reference count leak in nfs4_run_state_manager()
* tag 'nfs-for-5.19-3' of git://git.linux-nfs.org/projects/anna/linux-nfs:
NFSv4: Add an fattr allocation to _nfs4_discover_trunking()
NFS: restore module put when manager exits.
Merge tag 'ceph-for-5.19-rc5' of https://github.com/ceph/ceph-client
Pull ceph fix from Ilya Dryomov:
"A ceph filesystem fix, marked for stable.
There appears to be a deeper issue on the MDS side, but for now we are
going with this one-liner to avoid busy looping and potential soft
lockups"
* tag 'ceph-for-5.19-rc5' of https://github.com/ceph/ceph-client:
ceph: wait on async create before checking caps for syncfs
Merge tag 'for-5.19/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
"Three fixes for invalid memory accesses discovered by using KASAN
while running the lvm2 testsuite's dm-raid tests. Includes changes to
MD's raid5.c given the dependency dm-raid has on the MD code"
* tag 'for-5.19/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm raid: fix KASAN warning in raid5_add_disks
dm raid: fix KASAN warning in raid5_remove_disk
dm raid: fix accesses beyond end of raid member array
Merge tag 'io_uring-5.19-2022-07-01' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:
"Two minor tweaks:
- While we still can, adjust the send/recv based flags to be in
->ioprio rather than in ->addr2. This is consistent with eg accept,
and also doesn't waste a full 64-bit field for flags (Pavel)
- 5.18-stable fix for re-importing provided buffers. Not much real
world relevance here as it'll only impact non-pollable files gone
async, which is more of a practical test case rather than something
that is used in the wild (Dylan)"
* tag 'io_uring-5.19-2022-07-01' of git://git.kernel.dk/linux-block:
io_uring: fix provided buffer import
io_uring: keep sendrecv flags in ioprio