License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them (even
  if <5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file or if it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation by lawyers
working with the Linux Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
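As an illustration of the two comment styles the script had to emit (the
file names are hypothetical; header files keep a C-style block comment so
the line stays valid wherever the header may be included, while .c files
use a C++-style line comment):

    /* include/linux/example.h -- a header file: */
    /* SPDX-License-Identifier: GPL-2.0 */

    /* kernel/example.c -- a C source file: */
    // SPDX-License-Identifier: GPL-2.0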
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_DCACHE_H
#define __LINUX_DCACHE_H

#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rculist_bl.h>
#include <linux/spinlock.h>
fs: rcu-walk for path lookup
Perform common cases of path lookups without any stores or locking in the
ancestor dentry elements. This is called rcu-walk, as opposed to the current
algorithm which is a refcount based walk, or ref-walk.
This results in far fewer atomic operations on every path element,
significantly improving path lookup performance. It also avoids cacheline
bouncing on common dentries, significantly improving scalability.
The overall design is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
refcounts are not required for persistence. Also we are free to perform mount
lookups, and to assume dentry mount points and mount roots are stable up and
down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
so we can load this tuple atomically, and also check whether any of its
members have changed.
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
sequence after the child is found in case anything changed in the parent
during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
When we reach the destination dentry, we lock it, recheck lookup sequence,
and increment its refcount and mountpoint refcount. RCU and vfsmount locks
are dropped. This is termed "dropping rcu-walk". If the dentry refcount does
not match, we can not drop rcu-walk gracefully at the current point in the
lookup, so instead return -ECHILD (for want of a better errno). This signals the
path walking code to re-do the entire lookup with a ref-walk.
Aside from the final dentry, there are other situations that may be encountered
where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
a reference on the last good dentry) and continue with a ref-walk. Again, if
we can drop rcu-walk gracefully, we return -ECHILD and do the whole lookup
using ref-walk. But it is very important that we can continue with ref-walk
for most cases, particularly to avoid the overhead of double lookups, and to
gain the scalability advantages on common path elements (like cwd and root).
The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate
* Following links
In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.
Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
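The retry convention this sets up for callers (try the lockless walk first,
redo the whole lookup in ref-walk mode when -ECHILD comes back) can be
sketched as follows; walk_rcu() and walk_ref() are hypothetical stand-ins
for the real path-walk entry points in fs/namei.c:

static int example_walk(const char *name, unsigned int flags, struct path *path)
{
        int err;

        /* First pass: lockless rcu-walk, no refcounts taken along the way. */
        err = walk_rcu(name, flags | LOOKUP_RCU, path);

        /* -ECHILD means rcu-walk could not continue or be dropped gracefully
         * (e.g. a seqcount recheck failed), so redo the lookup as ref-walk. */
        if (err == -ECHILD)
                err = walk_ref(name, flags, path);

        return err;
}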
#include <linux/seqlock.h>
#include <linux/cache.h>
#include <linux/rcupdate.h>
#include <linux/lockref.h>
#include <linux/stringhash.h>
#include <linux/wait.h>
#include <linux/kabi.h>

struct path;
struct vfsmount;

/*
 * linux/include/linux/dcache.h
 *
 * Dirent cache data structures
 *
 * (C) Copyright 1997 Thomas Schoebel-Theuer,
 * with heavy changes by Linus Torvalds
 */

#define IS_ROOT(x) ((x) == (x)->d_parent)

/* The hash is always the low bits of hash_len */
#ifdef __LITTLE_ENDIAN
 #define HASH_LEN_DECLARE u32 hash; u32 len
 #define bytemask_from_count(cnt)  (~(~0ul << (cnt)*8))
#else
 #define HASH_LEN_DECLARE u32 len; u32 hash
 #define bytemask_from_count(cnt)  (~(~0ul >> (cnt)*8))
#endif

/*
 * "quick string" -- eases parameter passing, but more importantly
 * saves "metadata" about the string (ie length and the hash).
 *
 * hash comes first so it snuggles against d_parent in the
 * dentry.
 */
struct qstr {
        union {
                struct {
                        HASH_LEN_DECLARE;
                };
                u64 hash_len;
        };
        const unsigned char *name;
};

#define QSTR_INIT(n,l) { { { .len = l } }, .name = n }

extern const struct qstr empty_name;
extern const struct qstr slash_name;
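As a usage illustration, this is roughly the pattern by which a plain name
string becomes a qstr in practice (a from-memory sketch of what
d_alloc_name() in fs/dcache.c does, not a verbatim copy):

static struct dentry *example_alloc_name(struct dentry *parent, const char *name)
{
        struct qstr q;

        q.name = name;
        /* hashlen_string() (from <linux/stringhash.h>, included above) packs
         * the hash and the length into the hash_len member in one step. */
        q.hash_len = hashlen_string(parent, name);
        return d_alloc(parent, &q);
}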
fs: bump inode and dentry counters to long
This series reworks our current object cache shrinking infrastructure in
two main ways:
* Noticing that a lot of users copy and paste their own version of LRU
lists for objects, we put some effort in providing a generic version.
It is modeled after the filesystem users: dentries, inodes, and xfs
(for various tasks), but we expect that other users could benefit in
the near future with little or no modification. Let us know if you
have any issues.
* The underlying list_lru being proposed automatically and
transparently keeps the elements in per-node lists, and is able to
manipulate the node lists individually. Given this infrastructure, we
are able to modify the up-to-now hammer called shrink_slab to proceed
with node-reclaim instead of always searching memory from all over like
it has been doing.
Per-node lru lists are also expected to lead to less contention in the lru
locks on multi-node scans, since we are now no longer fighting for a
global lock. The locks usually disappear from the profilers with this
change.
Although we have no official benchmarks for this version - be our guest to
independently evaluate this - earlier versions of this series were
performance tested (details at
http://permalink.gmane.org/gmane.linux.kernel.mm/100537) yielding no
visible performance regressions while yielding a better qualitative
behavior in NUMA machines.
With this infrastructure in place, we can use the list_lru entry point to
provide memcg isolation and per-memcg targeted reclaim. Historically,
those two pieces of work have been posted together. This version presents
only the infrastructure work, deferring the memcg work for a later time,
so we can focus on getting this part tested. You can see more about the
history of such work at http://lwn.net/Articles/552769/
Dave Chinner (18):
dcache: convert dentry_stat.nr_unused to per-cpu counters
dentry: move to per-sb LRU locks
dcache: remove dentries from LRU before putting on dispose list
mm: new shrinker API
shrinker: convert superblock shrinkers to new API
list: add a new LRU list type
inode: convert inode lru list to generic lru list code.
dcache: convert to use new lru list infrastructure
list_lru: per-node list infrastructure
shrinker: add node awareness
fs: convert inode and dentry shrinking to be node aware
xfs: convert buftarg LRU to generic code
xfs: rework buffer dispose list tracking
xfs: convert dquot cache lru to list_lru
fs: convert fs shrinkers to new scan/count API
drivers: convert shrinkers to new count/scan API
shrinker: convert remaining shrinkers to count/scan API
shrinker: Kill old ->shrink API.
Glauber Costa (7):
fs: bump inode and dentry counters to long
super: fix calculation of shrinkable objects for small numbers
list_lru: per-node API
vmscan: per-node deferred work
i915: bail out earlier when shrinker cannot acquire mutex
hugepage: convert huge zero page shrinker to new shrinker API
list_lru: dynamically adjust node arrays
This patch:
There are situations in very large machines in which we can have a large
quantity of dirty inodes, unused dentries, etc. This is particularly true
when umounting a filesystem, since every live object will eventually be
discarded.
Dave Chinner reported a problem with this while experimenting with the
shrinker revamp patchset. So we believe it is time for a change. This
patch just moves these counters from int to long. Machines where it
matters should have a big long anyway.
Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
struct dentry_stat_t {
        long nr_dentry;
        long nr_unused;
        long age_limit;         /* age in seconds */
        long want_pages;        /* pages requested by system */
        long nr_negative;       /* # of unused negative dentries */
        long dummy;             /* Reserved for future use */
};
extern struct dentry_stat_t dentry_stat;
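These counters are what the kernel reports through /proc/sys/fs/dentry-state,
six values in the same order as the struct members above; a minimal
user-space reader might look like this (illustrative sketch, not kernel
code):

#include <stdio.h>

int main(void)
{
        long nr_dentry, nr_unused, age_limit, want_pages, nr_negative, dummy;
        FILE *f = fopen("/proc/sys/fs/dentry-state", "r");

        if (!f)
                return 1;
        if (fscanf(f, "%ld %ld %ld %ld %ld %ld", &nr_dentry, &nr_unused,
                   &age_limit, &want_pages, &nr_negative, &dummy) == 6)
                printf("dentries: %ld total, %ld unused, %ld negative\n",
                       nr_dentry, nr_unused, nr_negative);
        fclose(f);
        return 0;
}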
/*
 * Try to keep struct dentry aligned on 64 byte cachelines (this will
 * give reasonable cacheline footprint with larger lines without the
 * large memory footprint increase).
 */
#ifdef CONFIG_64BIT
# define DNAME_INLINE_LEN 32 /* 192 bytes */
#else
# ifdef CONFIG_SMP
#  define DNAME_INLINE_LEN 36 /* 128 bytes */
# else
#  define DNAME_INLINE_LEN 40 /* 128 bytes */
# endif
#endif

#define d_lock  d_lockref.lock

struct dentry {
        /* RCU lookup touched fields */
        unsigned int d_flags;           /* protected by d_lock */
        seqcount_t d_seq;               /* per dentry seqlock */
        struct hlist_bl_node d_hash;    /* lookup hash list */
        struct dentry *d_parent;        /* parent directory */
        struct qstr d_name;
        struct inode *d_inode;          /* Where the name belongs to - NULL is
                                         * negative */
        unsigned char d_iname[DNAME_INLINE_LEN];        /* small names */

        /* Ref lookup also touches following */
        struct lockref d_lockref;       /* per-dentry lock and refcount */
        const struct dentry_operations *d_op;
        struct super_block *d_sb;       /* The root of the dentry tree */
        unsigned long d_time;           /* used by d_revalidate */
        void *d_fsdata;                 /* fs-specific data */

        union {
                struct list_head d_lru;         /* LRU list */
                wait_queue_head_t *d_wait;      /* in-lookup ones only */
        };
        struct list_head d_child;       /* child of parent list */
        struct list_head d_subdirs;     /* our children */
[PATCH] shrink dentry struct
Some long time ago, dentry struct was carefully tuned so that on 32 bits
UP, sizeof(struct dentry) was exactly 128, ie a power of 2, and a multiple
of memory cache lines.
Then RCU was added and dentry struct enlarged by two pointers, with nice
results for SMP, but not so good on UP, because it broke the above tuning
(128 + 8 = 136 bytes).
This patch reverts this unwanted side effect, by using a union (d_u),
where d_rcu and d_child are placed so that these two fields can share their
memory needs.
At the time d_free() is called (and d_rcu is really used), d_child is known
to be empty and not touched by the dentry freeing.
Lockless lookups only access d_name, d_parent, d_lock, d_op, d_flags (so
the previous content of d_child is not needed if said dentry was unhashed
but still accessed by a CPU because of RCU constraints)
As dentry cache easily contains millions of entries, a size reduction is
worth the extra complexity of the ugly C union.
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Cc: Dipankar Sarma <dipankar@in.ibm.com>
Cc: Maneesh Soni <maneesh@in.ibm.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Paul Jackson <pj@sgi.com>
Cc: Al Viro <viro@ftp.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Cc: James Morris <jmorris@namei.org>
Cc: Stephen Smalley <sds@epoch.ncsc.mil>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
        /*
         * d_alias and d_rcu can share memory
         */
        union {
                struct hlist_node d_alias;      /* inode alias list */
                struct hlist_bl_node d_in_lookup_hash;  /* only for in-lookup ones */
                struct rcu_head d_rcu;
        } d_u;

        KABI_RESERVE(1);
        KABI_RESERVE(2);
        KABI_RESERVE(3);
        KABI_RESERVE(4);
} __randomize_layout;

/*
 * dentry->d_lock spinlock nesting subclasses:
 *
 * 0: normal
 * 1: nested
 */
enum dentry_d_lock_class
{
        DENTRY_D_LOCK_NORMAL, /* implicitly used by plain spin_lock() APIs. */
        DENTRY_D_LOCK_NESTED
};

struct dentry_operations {
        int (*d_revalidate)(struct dentry *, unsigned int);
        int (*d_weak_revalidate)(struct dentry *, unsigned int);
        int (*d_hash)(const struct dentry *, struct qstr *);
        int (*d_compare)(const struct dentry *,
                        unsigned int, const char *, const struct qstr *);
        int (*d_delete)(const struct dentry *);
        int (*d_init)(struct dentry *);
        void (*d_release)(struct dentry *);
        void (*d_prune)(struct dentry *);
        void (*d_iput)(struct dentry *, struct inode *);
        char *(*d_dname)(struct dentry *, char *, int);
Add a dentry op to handle automounting rather than abusing follow_link()
Add a dentry op (d_automount) to handle automounting directories rather than
abusing the follow_link() inode operation. The operation is keyed off a new
dentry flag (DCACHE_NEED_AUTOMOUNT).
This also makes it easier to add an AT_ flag to suppress terminal segment
automount during pathwalk and removes the need for the kludge code in the
pathwalk algorithm to handle directories with follow_link() semantics.
The ->d_automount() dentry operation:
struct vfsmount *(*d_automount)(struct path *mountpoint);
takes a pointer to the directory to be mounted upon, which is expected to
provide sufficient data to determine what should be mounted. If successful, it
should return the vfsmount struct it creates (which it should also have added
to the namespace using do_add_mount() or similar). If there's a collision with
another automount attempt, NULL should be returned. If the directory specified
by the parameter should be used directly rather than being mounted upon,
-EISDIR should be returned. In any other case, an error code should be
returned.
The ->d_automount() operation is called with no locks held and may sleep. At
this point the pathwalk algorithm will be in ref-walk mode.
Within fs/namei.c itself, a new pathwalk subroutine (follow_automount()) is
added to handle mountpoints. It will return -EREMOTE if the automount flag was
set, but no d_automount() op was supplied, -ELOOP if we've encountered too many
symlinks or mountpoints, -EISDIR if the walk point should be used without
mounting and 0 if successful. The path will be updated to point to the mounted
filesystem if a successful automount took place.
__follow_mount() is replaced by follow_managed() which is more generic
(especially with the patch that adds ->d_manage()). This handles transits from
directories during pathwalk, including automounting and skipping over
mountpoints (and holding processes with the next patch).
__follow_mount_rcu() will jump out of RCU-walk mode if it encounters an
automount point with nothing mounted on it.
follow_dotdot*() does not handle automounts as you don't want to trigger them
whilst following "..".
I've also extracted the mount/don't-mount logic from autofs4 and included it
here. It makes the mount go ahead anyway if someone calls open() or creat(),
tries to traverse the directory, tries to chdir/chroot/etc. into the directory,
or sticks a '/' on the end of the pathname. If they do a stat(), however,
they'll only trigger the automount if they didn't also say O_NOFOLLOW.
I've also added an inode flag (S_AUTOMOUNT) so that filesystems can mark their
inodes as automount points. This flag is automatically propagated to the
dentry as DCACHE_NEED_AUTOMOUNT by __d_instantiate(). This saves NFS and could
save AFS a private flag bit apiece, but is not strictly necessary. It would be
preferable to do the propagation in d_set_d_op(), but that doesn't normally
have access to the inode.
[AV: fixed breakage in case if __follow_mount_rcu() fails and nameidata_drop_rcu()
succeeds in RCU case of do_lookup(); we need to fall through to non-RCU case after
that, rather than just returning with ungrabbed *path]
Signed-off-by: David Howells <dhowells@redhat.com>
Was-Acked-by: Ian Kent <raven@themaw.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
        struct vfsmount *(*d_automount)(struct path *);
        int (*d_manage)(const struct path *, bool);
        struct dentry *(*d_real)(struct dentry *, const struct inode *);

        KABI_RESERVE(1);
        KABI_RESERVE(2);
        KABI_RESERVE(3);
        KABI_RESERVE(4);
} ____cacheline_aligned;
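The d_automount commit message above spells out the contract for that hook;
wired into an operations table it might look like the sketch below (the
example_ names are hypothetical, and the semantics follow that commit
message rather than any particular filesystem):

static struct vfsmount *example_d_automount(struct path *path)
{
        /* example_build_mount() is a hypothetical helper that creates and
         * returns the filesystem instance to be mounted on path->dentry,
         * NULL if it lost a race with another automount attempt, or an
         * ERR_PTR() such as ERR_PTR(-EISDIR) to make pathwalk use the
         * directory as-is. */
        return example_build_mount(path);
}

static const struct dentry_operations example_dentry_ops = {
        .d_automount = example_d_automount,
};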
/*
 * Locking rules for dentry_operations callbacks are to be found in
 * Documentation/filesystems/locking.rst. Keep it updated!
 *
 * Further descriptions are found in Documentation/filesystems/vfs.rst.
 * Keep it updated too!
 */

/* d_flags entries */
#define DCACHE_OP_HASH                  0x00000001
#define DCACHE_OP_COMPARE               0x00000002
#define DCACHE_OP_REVALIDATE            0x00000004
#define DCACHE_OP_DELETE                0x00000008
#define DCACHE_OP_PRUNE                 0x00000010

#define DCACHE_DISCONNECTED             0x00000020
     /* This dentry is possibly not currently connected to the dcache tree, in
      * which case its parent will either be itself, or will have this flag as
      * well. nfsd will not use a dentry with this bit set, but will first
      * endeavour to clear the bit either by discovering that it is connected,
      * or by performing lookup operations. Any filesystem which supports
      * nfsd_operations MUST have a lookup function which, if it finds a
      * directory inode with a DCACHE_DISCONNECTED dentry, will d_move that
      * dentry into place and return that dentry rather than the passed one,
      * typically using d_splice_alias. */

#define DCACHE_REFERENCED               0x00000040 /* Recently used, don't discard. */

#define DCACHE_CANT_MOUNT               0x00000100
#define DCACHE_GENOCIDE                 0x00000200
#define DCACHE_SHRINK_LIST              0x00000400

#define DCACHE_OP_WEAK_REVALIDATE       0x00000800

#define DCACHE_NFSFS_RENAMED            0x00001000
     /* this dentry has been "silly renamed" and has to be deleted on the last
      * dput() */
#define DCACHE_COOKIE                   0x00002000 /* For use by dcookie subsystem */
#define DCACHE_FSNOTIFY_PARENT_WATCHED  0x00004000
     /* Parent inode is watched by some fsnotify listener */

#define DCACHE_DENTRY_KILLED            0x00008000

#define DCACHE_MOUNTED                  0x00010000 /* is a mountpoint */
#define DCACHE_NEED_AUTOMOUNT           0x00020000 /* handle automount on this dir */
#define DCACHE_MANAGE_TRANSIT           0x00040000 /* manage transit from this dirent */
Add a dentry op to allow processes to be held during pathwalk transit
Add a dentry op (d_manage) to permit a filesystem to hold a process and make it
sleep when it tries to transit away from one of that filesystem's directories
during a pathwalk. The operation is keyed off a new dentry flag
(DCACHE_MANAGE_TRANSIT).
The filesystem is allowed to be selective about which processes it holds and
which it permits to continue on or prohibits from transiting from each flagged
directory. This will allow autofs to hold up client processes whilst letting
its userspace daemon through to maintain the directory or the stuff behind it
or mounted upon it.
The ->d_manage() dentry operation:
int (*d_manage)(struct path *path, bool mounting_here);
takes a pointer to the directory about to be transited away from and a flag
indicating whether the transit is undertaken by do_add_mount() or
do_move_mount() skipping through a pile of filesystems mounted on a mountpoint.
It should return 0 if successful and to let the process continue on its way;
-EISDIR to prohibit the caller from skipping to overmounted filesystems or
automounting, and to use this directory; or some other error code to return to
the user.
->d_manage() is called with namespace_sem writelocked if mounting_here is true
and no other locks held, so it may sleep. However, if mounting_here is true,
it may not initiate or wait for a mount or unmount upon the parameter
directory, even if the act is actually performed by userspace.
Within fs/namei.c, follow_managed() is extended to check with d_manage() first
on each managed directory, before transiting away from it or attempting to
automount upon it.
follow_down() is renamed follow_down_one() and should only be used where the
filesystem deliberately intends to avoid management steps (e.g. autofs).
A new follow_down() is added that incorporates the loop done by all other
callers of follow_down() (do_add/move_mount(), autofs and NFSD; whilst AFS, NFS
and CIFS do use it, their use is removed by converting them to use
d_automount()). The new follow_down() calls d_manage() as appropriate. It
also takes an extra parameter to indicate if it is being called from mount code
(with namespace_sem writelocked) which it passes to d_manage(). follow_down()
ignores automount points so that it can be used to mount on them.
__follow_mount_rcu() is made to abort rcu-walk mode if it hits a directory with
DCACHE_MANAGE_TRANSIT set on the basis that we're probably going to have to
sleep. It would be possible to enter d_manage() in rcu-walk mode too, and have
that determine whether to abort or not itself. That would allow the autofs
daemon to continue on in rcu-walk mode.
Note that DCACHE_MANAGE_TRANSIT on a directory should be cleared when it isn't
required as every transit from that directory will cause d_manage() to be
invoked. It can always be set again when necessary.
==========================
WHAT THIS MEANS FOR AUTOFS
==========================
Autofs currently uses the lookup() inode op and the d_revalidate() dentry op to
trigger the automounting of indirect mounts, and both of these can be called
with i_mutex held.
autofs knows that the i_mutex will be held by the caller in lookup(), and so
can drop it before invoking the daemon - but this isn't so for d_revalidate(),
since the lock is only held on _some_ of the code paths that call it. This
means that autofs can't risk dropping i_mutex from its d_revalidate() function
before it calls the daemon.
The bug could manifest itself as, for example, a process that's trying to
validate an automount dentry that gets made to wait because that dentry is
expired and needs cleaning up:
mkdir S ffffffff8014e05a 0 32580 24956
Call Trace:
[<ffffffff885371fd>] :autofs4:autofs4_wait+0x674/0x897
[<ffffffff80127f7d>] avc_has_perm+0x46/0x58
[<ffffffff8009fdcf>] autoremove_wake_function+0x0/0x2e
[<ffffffff88537be6>] :autofs4:autofs4_expire_wait+0x41/0x6b
[<ffffffff88535cfc>] :autofs4:autofs4_revalidate+0x91/0x149
[<ffffffff80036d96>] __lookup_hash+0xa0/0x12f
[<ffffffff80057a2f>] lookup_create+0x46/0x80
[<ffffffff800e6e31>] sys_mkdirat+0x56/0xe4
versus the automount daemon which wants to remove that dentry, but can't
because the normal process is holding the i_mutex lock:
automount D ffffffff8014e05a 0 32581 1 32561
Call Trace:
[<ffffffff80063c3f>] __mutex_lock_slowpath+0x60/0x9b
[<ffffffff8000ccf1>] do_path_lookup+0x2ca/0x2f1
[<ffffffff80063c89>] .text.lock.mutex+0xf/0x14
[<ffffffff800e6d55>] do_rmdir+0x77/0xde
[<ffffffff8005d229>] tracesys+0x71/0xe0
[<ffffffff8005d28d>] tracesys+0xd5/0xe0
which means that the system is deadlocked.
This patch allows autofs to hold up normal processes whilst the daemon goes
ahead and does things to the dentry tree behind the automouter point without
risking a deadlock as almost no locks are held in d_manage() and none in
d_automount().
Signed-off-by: David Howells <dhowells@redhat.com>
Was-Acked-by: Ian Kent <raven@themaw.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
#define DCACHE_MANAGED_DENTRY \
        (DCACHE_MOUNTED|DCACHE_NEED_AUTOMOUNT|DCACHE_MANAGE_TRANSIT)

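A minimal sketch of the d_manage() hook described in the commit message
above (the example_ helpers are hypothetical policy stubs, and real users
such as autofs are considerably more involved):

static int example_d_manage(const struct path *path, bool mounting_here)
{
        /* Let our own control daemon straight through. */
        if (example_caller_is_daemon())         /* hypothetical policy check */
                return 0;

        /* Mount code is skipping over us with namespace_sem held: do not
         * initiate or wait for a mount here, just let it proceed. */
        if (mounting_here)
                return 0;

        /* Ordinary pathwalk transit: we may sleep, e.g. until an in-progress
         * expiry of this directory has finished. */
        example_wait_for_expiry(path->dentry);  /* hypothetical */
        return 0;
}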
#define DCACHE_LRU_LIST                 0x00080000

#define DCACHE_ENTRY_TYPE               0x00700000
#define DCACHE_MISS_TYPE                0x00000000 /* Negative dentry (maybe fallthru to nowhere) */
#define DCACHE_WHITEOUT_TYPE            0x00100000 /* Whiteout dentry (stop pathwalk) */
#define DCACHE_DIRECTORY_TYPE           0x00200000 /* Normal directory */
#define DCACHE_AUTODIR_TYPE             0x00300000 /* Lookupless directory (presumed automount) */
#define DCACHE_REGULAR_TYPE             0x00400000 /* Regular file type (or fallthru to such) */
#define DCACHE_SPECIAL_TYPE             0x00500000 /* Other file type (or fallthru to such) */
#define DCACHE_SYMLINK_TYPE             0x00600000 /* Symlink (or fallthru to such) */
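Consumers do not usually test these bits directly; helpers further down in
this header (not part of this excerpt, e.g. d_is_dir()) read the field by
masking d_flags, roughly in this spirit (simplified sketch):

static inline unsigned int example_entry_type(const struct dentry *dentry)
{
        return dentry->d_flags & DCACHE_ENTRY_TYPE;
}

static inline bool example_is_dir(const struct dentry *dentry)
{
        unsigned int type = example_entry_type(dentry);

        return type == DCACHE_DIRECTORY_TYPE || type == DCACHE_AUTODIR_TYPE;
}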
dentry_kill(): don't try to remove from shrink list
If the victim is on the shrink list, don't remove it from there.
If shrink_dentry_list() manages to remove it from the list before
we are done - fine, we'll just free it as usual. If not - mark
it with new flag (DCACHE_MAY_FREE) and leave it there.
Eventually, shrink_dentry_list() will get to it, remove the sucker
from shrink list and call dentry_kill(dentry, 0). Which is where
we'll deal with freeing.
Since now dentry_kill(dentry, 0) may happen after or during
dentry_kill(dentry, 1), we need to recognize that (by seeing
DCACHE_DENTRY_KILLED already set), unlock everything
and either free the sucker (in case DCACHE_MAY_FREE has been
set) or leave it for ongoing dentry_kill(dentry, 1) to deal with.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
#define DCACHE_MAY_FREE                 0x00800000
#define DCACHE_FALLTHRU                 0x01000000 /* Fall through to lower layer */
#define DCACHE_ENCRYPTED_NAME           0x02000000 /* Encrypted name (dir key was unavailable) */
#define DCACHE_OP_REAL                  0x04000000

#define DCACHE_PAR_LOOKUP               0x10000000 /* being looked up (with parent locked shared) */
#define DCACHE_DENTRY_CURSOR            0x20000000
dcache: sort the freeing-without-RCU-delay mess for good.
For lockless accesses to dentries we don't have pinned we rely
(among other things) upon having an RCU delay between dropping
the last reference and actually freeing the memory.
On the other hand, for things like pipes and sockets we neither
do that kind of lockless access, nor want to deal with the
overhead of an RCU delay every time a socket gets closed.
So delay was made optional - setting DCACHE_RCUACCESS in ->d_flags
made sure it would happen. We tried to avoid setting it unless
we knew we need it. Unfortunately, that had led to recurring
class of bugs, in which we missed the need to set it.
We only really need it for dentries that are created by
d_alloc_pseudo(), so let's not bother with trying to be smart -
just make having an RCU delay the default. The ones that do
*not* get it set the replacement flag (DCACHE_NORCU) and we'd
better use that sparingly. d_alloc_pseudo() is the only
such user right now.
FWIW, the race that finally prompted that switch had been
between __lock_parent() of immediate subdirectory of what's
currently the root of a disconnected tree (e.g. from
open-by-handle in progress) racing with d_splice_alias()
elsewhere picking another alias for the same inode, either
on outright corrupted fs image, or (in case of open-by-handle
on NFS) that subdirectory having been just moved on server.
It's not easy to hit, so the sky is not falling, but that's
not the first race on similar missed cases, and the logic
for setting DCACHE_RCUACCESS has gotten ridiculously
convoluted.
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
#define DCACHE_NORCU                    0x40000000 /* No RCU delay for freeing */

[PATCH] audit: watching subtrees
New kind of audit rule predicates: "object is visible in given subtree".
The part that can be sanely implemented, that is. Limitations:
* if you have hardlink from outside of tree, you'd better watch
it too (or just watch the object itself, obviously)
* if you mount something under a watched tree, tell audit
that new chunk should be added to watched subtrees
* if you umount something in a watched tree and it's still mounted
elsewhere, you will get matches on events happening there. New command
tells audit to recalculate the trees, trimming such sources of false
positives.
Note that it's _not_ about path - if something mounted in several places
(multiple mount, bindings, different namespaces, etc.), the match does
_not_ depend on which one we are using for access.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
extern seqlock_t rename_lock;

/*
 * These are the low-level FS interfaces to the dcache..
 */
extern void d_instantiate(struct dentry *, struct inode *);
extern void d_instantiate_new(struct dentry *, struct inode *);
extern struct dentry * d_instantiate_unique(struct dentry *, struct inode *);
extern struct dentry * d_instantiate_anon(struct dentry *, struct inode *);
extern void __d_drop(struct dentry *dentry);
extern void d_drop(struct dentry *dentry);
extern void d_delete(struct dentry *);
extern void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op);

/* allocate/de-allocate */
extern struct dentry * d_alloc(struct dentry *, const struct qstr *);
extern struct dentry * d_alloc_anon(struct super_block *);
extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
                                        wait_queue_head_t *);
extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
extern struct dentry *d_find_any_alias(struct inode *inode);
extern struct dentry * d_obtain_alias(struct inode *);
extern struct dentry * d_obtain_root(struct inode *);
extern void shrink_dcache_sb(struct super_block *);
extern void shrink_dcache_parent(struct dentry *);
extern void shrink_dcache_for_umount(struct super_block *);
extern void d_invalidate(struct dentry *);

/* only used at mount-time */
extern struct dentry * d_make_root(struct inode *);

/* <clickety>-<click> the ramfs-type tree */
extern void d_genocide(struct dentry *);

extern void d_tmpfile(struct dentry *, struct inode *);

extern struct dentry *d_find_alias(struct inode *);
extern void d_prune_aliases(struct inode *);

/* test whether we have any submounts in a subdir tree */
extern int path_has_submounts(const struct path *);

/*
 * This adds the entry to the hash queues.
 */
extern void d_rehash(struct dentry *);

extern void d_add(struct dentry *, struct inode *);

/* used for rename() and baskets */
extern void d_move(struct dentry *, struct dentry *);
extern void d_exchange(struct dentry *, struct dentry *);
extern struct dentry *d_ancestor(struct dentry *, struct dentry *);

/* appendix may either be NULL or be used for transname suffixes */
extern struct dentry *d_lookup(const struct dentry *, const struct qstr *);
fs: rcu-walk for path lookup
Perform common cases of path lookups without any stores or locking in the
ancestor dentry elements. This is called rcu-walk, as opposed to the current
algorithm which is a refcount based walk, or ref-walk.
This results in far fewer atomic operations on every path element,
significantly improving path lookup performance. It also avoids cacheline
bouncing on common dentries, significantly improving scalability.
The overall design is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
refcounts are not required for persistence. Also we are free to perform mount
lookups, and to assume dentry mount points and mount roots are stable up and
down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
so we can load this tuple atomically, and also check whether any of its
members have changed.
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
sequence after the child is found in case anything changed in the parent
during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
When we reach the destination dentry, we lock it, recheck lookup sequence,
and increment its refcount and mountpoint refcount. RCU and vfsmount locks
are dropped. This is termed "dropping rcu-walk". If the dentry refcount does
not match, we cannot drop rcu-walk gracefully at the current point in the
lookup, so instead return -ECHILD (for want of a better errno). This signals the
path walking code to re-do the entire lookup with a ref-walk.
Aside from the final dentry, there are other situations that may be encountered
where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
a reference on the last good dentry) and continue with a ref-walk. Again, if
we cannot drop rcu-walk gracefully, we return -ECHILD and do the whole lookup
using ref-walk. But it is very important that we can continue with ref-walk
for most cases, particularly to avoid the overhead of double lookups, and to
gain the scalability advantages on common path elements (like cwd and root).
The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate (see the sketch after this message)
* Following links
In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.
Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
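
/*
 * Illustrative sketch (hypothetical filesystem, not part of this header):
 * a ->d_revalidate() that cannot run under rcu-walk bails out with -ECHILD,
 * which makes the VFS retry the lookup in ref-walk mode as described above.
 */
static int example_d_revalidate(struct dentry *dentry, unsigned int flags)
{
	if (flags & LOOKUP_RCU)
		return -ECHILD;		/* cannot sleep or take refs here */

	/* ... sleeping revalidation against the backing store ... */
	return 1;			/* dentry is still valid */
}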

extern struct dentry *d_hash_and_lookup(struct dentry *, struct qstr *);
extern struct dentry *__d_lookup(const struct dentry *, const struct qstr *);
extern struct dentry *__d_lookup_rcu(const struct dentry *parent,
				const struct qstr *name, unsigned *seq);

static inline unsigned d_count(const struct dentry *dentry)
{
	return dentry->d_lockref.count;
}

/*
 * helper function for dentry_operations.d_dname() members
 */
extern __printf(4, 5)
char *dynamic_dname(struct dentry *, char *, int, const char *, ...);
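
/*
 * Illustrative sketch (hypothetical pseudo-filesystem, not part of this
 * header): a ->d_dname() built on dynamic_dname(), following the pattern
 * pipefs uses for its "pipe:[ino]" names.
 */
static char *example_dname(struct dentry *dentry, char *buffer, int buflen)
{
	return dynamic_dname(dentry, buffer, buflen, "example:[%lu]",
				dentry->d_inode->i_ino);
}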

fix apparmor dereferencing potentially freed dentry, sanitize __d_path() API
__d_path() API is asking for trouble and in case of apparmor d_namespace_path()
getting just that. The root cause is that when __d_path() misses the root
it had been told to look for, it stores the location of the most remote ancestor
in *root. Without grabbing references. Sure, at the moment of call it had
been pinned down by what we have in *path. And if we raced with umount -l, we
could have very well stopped at vfsmount/dentry that got freed as soon as
prepend_path() dropped vfsmount_lock.
It is safe to compare these pointers with pre-existing (and known to be still
alive) vfsmount and dentry, as long as all we are asking is "is it the same
address?". Dereferencing is not safe and apparmor ended up stepping into
that. d_namespace_path() really wants to examine the place where we stopped,
even if it's not connected to our namespace. As the result, it looked
at ->d_sb->s_magic of a dentry that might've been already freed by that point.
All other callers had been careful enough to avoid that, but it's really
a bad interface - it invites that kind of trouble.
The fix is fairly straightforward, even though it's bigger than I'd like:
* prepend_path() root argument becomes const.
* __d_path() is never called with NULL/NULL root. It was a kludge
to start with. Instead, we have an explicit function - d_absolute_path().
Same as __d_path(), except that it doesn't get root passed and stops where
it stops. apparmor and tomoyo are using it.
* __d_path() returns NULL on path outside of root. The main
caller is show_mountinfo() and that's precisely what we pass root for - to
skip those outside chroot jail. Those who don't want that can (and do)
use d_path().
* __d_path() root argument becomes const. Everyone agrees, I hope.
* apparmor does *NOT* try to use __d_path() or any of its variants
when it sees that path->mnt is an internal vfsmount. In that case it's
definitely not mounted anywhere and dentry_path() is exactly what we want
there. Handling of sysctl()-triggered weirdness is moved to that place.
* if apparmor is asked to do pathname relative to chroot jail
and __d_path() tells it it's not in that jail, the sucker just calls
d_absolute_path() instead. That's the other remaining caller of __d_path(),
BTW.
* seq_path_root() does _NOT_ return -ENAMETOOLONG (it's stupid anyway -
the normal seq_file logics will take care of growing the buffer and redoing
the call of ->show() just fine). However, if it gets path not reachable
from root, it returns SEQ_SKIP. The only caller adjusted (i.e. stopped
ignoring the return value as it used to do).
Reviewed-by: John Johansen <john.johansen@canonical.com>
Acked-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: stable@vger.kernel.org

extern char *__d_path(const struct path *, const struct path *, char *, int);
extern char *d_absolute_path(const struct path *, char *, int);
extern char *d_path(const struct path *, char *, int);
extern char *dentry_path_raw(struct dentry *, char *, int);
extern char *dentry_path(struct dentry *, char *, int);
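
/*
 * Illustrative sketch (not part of this header): d_path() builds the name
 * from the end of the supplied buffer and returns a pointer into it (or an
 * ERR_PTR), so the returned pointer, not the buffer start, must be used.
 */
static int example_print_path(const struct path *path)
{
	char *buf, *name;

	buf = kmalloc(PATH_MAX, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	name = d_path(path, buf, PATH_MAX);
	if (!IS_ERR(name))
		pr_info("resolved to %s\n", name);

	kfree(buf);
	return IS_ERR(name) ? PTR_ERR(name) : 0;
}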

/* Allocation counts.. */

/**
 * dget, dget_dlock - get a reference to a dentry
 * @dentry: dentry to get a reference to
 *
 * Given a dentry or %NULL pointer increment the reference count
 * if appropriate and return the dentry. A dentry will not be
 * destroyed when it has references.
 */
static inline struct dentry *dget_dlock(struct dentry *dentry)
{
	if (dentry)
		dentry->d_lockref.count++;
	return dentry;
}

static inline struct dentry *dget(struct dentry *dentry)
{
	if (dentry)
		lockref_get(&dentry->d_lockref);
	return dentry;
}

extern struct dentry *dget_parent(struct dentry *dentry);

/**
 * d_unhashed - is dentry hashed
 * @dentry: entry to check
 *
 * Returns true if the dentry passed is not currently hashed.
 */

vfs: get rid of insane dentry hashing rules
The dentry hashing rules have been really quite complicated for a long
while, in odd ways. That made functions like __d_drop() very fragile
and non-obvious.
In particular, whether a dentry was hashed or not was indicated with an
explicit DCACHE_UNHASHED bit. That's despite the fact that the hash
abstraction that the dentries use actually have a 'is this entry hashed
or not' model (which is a simple test of the 'pprev' pointer).
The reason that was done is because we used the normal 'is this entry
unhashed' model to mark whether the dentry had _ever_ been hashed in the
dentry hash tables, and that logic goes back many years (commit
b3423415fbc2: "dcache: avoid RCU for never-hashed dentries").
That, in turn, meant that __d_drop had totally different unhashing logic
for the dentry hash table case and for the anonymous dcache case,
because in order to use the "is this dentry hashed" logic as a flag for
whether it had ever been on the RCU hash table, we had to unhash such a
dentry differently so that we'd never think that it wasn't 'unhashed'
and wouldn't be free'd correctly.
That's just insane. It made the logic really hard to follow, when there
were two different kinds of "unhashed" states, and one of them (the one
that used "list_bl_unhashed()") really had nothing at all to do with
being unhashed per se, but with a very subtle lifetime rule instead.
So turn all of it around, and make it logical.
Instead of having a DENTRY_UNHASHED bit in d_flags to indicate whether
the dentry is on the hash chains or not, use the hash chain unhashed
logic for that. Suddenly "d_unhashed()" just uses "list_bl_unhashed()",
and everything makes sense.
And for the lifetime rule, just use an explicit DENTRY_RCUACCESS bit.
If we ever insert the dentry into the dentry hash table so that it is
visible to RCU lookup, we mark it DENTRY_RCUACCESS to show that it now
needs the RCU lifetime rules. Now suddenly that test at dentry free
time makes sense too.
And because unhashing now is sane and doesn't depend on where the dentry
got unhashed from (because the dentry hash chain details doesn't have
some subtle side effects), we can re-unify the __d_drop() logic and use
common code for the unhashing.
Also fix one more open-coded hash chain bit_spin_lock() that I missed in
the previous chain locking cleanup commit.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

static inline int d_unhashed(const struct dentry *dentry)
{
	return hlist_bl_unhashed(&dentry->d_hash);
}

static inline int d_unlinked(const struct dentry *dentry)
{
	return d_unhashed(dentry) && !IS_ROOT(dentry);
}

static inline int cant_mount(const struct dentry *dentry)
{
	return (dentry->d_flags & DCACHE_CANT_MOUNT);
}

static inline void dont_mount(struct dentry *dentry)
{
	spin_lock(&dentry->d_lock);
	dentry->d_flags |= DCACHE_CANT_MOUNT;
	spin_unlock(&dentry->d_lock);
}

extern void __d_lookup_done(struct dentry *);

static inline int d_in_lookup(const struct dentry *dentry)
{
	return dentry->d_flags & DCACHE_PAR_LOOKUP;
}

static inline void d_lookup_done(struct dentry *dentry)
{
	if (unlikely(d_in_lookup(dentry))) {
		spin_lock(&dentry->d_lock);
		__d_lookup_done(dentry);
		spin_unlock(&dentry->d_lock);
	}
}
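
/*
 * Illustrative sketch (hypothetical filesystem, not part of this header):
 * with parallel lookups a dentry handed to ->lookup() is "in lookup"
 * (DCACHE_PAR_LOOKUP); instantiating it via d_splice_alias() (or d_add())
 * also ends that state. example_iget() is a made-up helper.
 */
static struct dentry *example_par_lookup(struct inode *dir,
					 struct dentry *dentry,
					 unsigned int flags)
{
	struct inode *inode = example_iget(dir->i_sb, &dentry->d_name); /* hypothetical */

	return d_splice_alias(inode, dentry);
}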

extern void dput(struct dentry *);
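
/*
 * Illustrative sketch (not part of this header): dget_parent() takes a
 * reference on a stable snapshot of ->d_parent; every successful dget()
 * or dget_parent() must be balanced by dput().
 */
static void example_with_parent(struct dentry *dentry)
{
	struct dentry *parent = dget_parent(dentry);

	/* ... parent can be used safely here, even across a rename ... */

	dput(parent);
}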

Add a dentry op to allow processes to be held during pathwalk transit
Add a dentry op (d_manage) to permit a filesystem to hold a process and make it
sleep when it tries to transit away from one of that filesystem's directories
during a pathwalk. The operation is keyed off a new dentry flag
(DCACHE_MANAGE_TRANSIT).
The filesystem is allowed to be selective about which processes it holds and
which it permits to continue on or prohibits from transiting from each flagged
directory. This will allow autofs to hold up client processes whilst letting
its userspace daemon through to maintain the directory or the stuff behind it
or mounted upon it.
The ->d_manage() dentry operation:
int (*d_manage)(struct path *path, bool mounting_here);
takes a pointer to the directory about to be transited away from and a flag
indicating whether the transit is undertaken by do_add_mount() or
do_move_mount() skipping through a pile of filesystems mounted on a mountpoint.
It should return 0 if successful and to let the process continue on its way;
-EISDIR to prohibit the caller from skipping to overmounted filesystems or
automounting, and to use this directory; or some other error code to return to
the user.
->d_manage() is called with namespace_sem writelocked if mounting_here is true
and no other locks held, so it may sleep. However, if mounting_here is true,
it may not initiate or wait for a mount or unmount upon the parameter
directory, even if the act is actually performed by userspace.
Within fs/namei.c, follow_managed() is extended to check with d_manage() first
on each managed directory, before transiting away from it or attempting to
automount upon it.
follow_down() is renamed follow_down_one() and should only be used where the
filesystem deliberately intends to avoid management steps (e.g. autofs).
A new follow_down() is added that incorporates the loop done by all other
callers of follow_down() (do_add/move_mount(), autofs and NFSD; whilst AFS, NFS
and CIFS do use it, their use is removed by converting them to use
d_automount()). The new follow_down() calls d_manage() as appropriate. It
also takes an extra parameter to indicate if it is being called from mount code
(with namespace_sem writelocked) which it passes to d_manage(). follow_down()
ignores automount points so that it can be used to mount on them.
__follow_mount_rcu() is made to abort rcu-walk mode if it hits a directory with
DCACHE_MANAGE_TRANSIT set on the basis that we're probably going to have to
sleep. It would be possible to enter d_manage() in rcu-walk mode too, and have
that determine whether to abort or not itself. That would allow the autofs
daemon to continue on in rcu-walk mode.
Note that DCACHE_MANAGE_TRANSIT on a directory should be cleared when it isn't
required as every transit from that directory will cause d_manage() to be
invoked. It can always be set again when necessary.
==========================
WHAT THIS MEANS FOR AUTOFS
==========================
Autofs currently uses the lookup() inode op and the d_revalidate() dentry op to
trigger the automounting of indirect mounts, and both of these can be called
with i_mutex held.
autofs knows that the i_mutex will be held by the caller in lookup(), and so
can drop it before invoking the daemon - but this isn't so for d_revalidate(),
since the lock is only held on _some_ of the code paths that call it. This
means that autofs can't risk dropping i_mutex from its d_revalidate() function
before it calls the daemon.
The bug could manifest itself as, for example, a process that's trying to
validate an automount dentry that gets made to wait because that dentry is
expired and needs cleaning up:
mkdir S ffffffff8014e05a 0 32580 24956
Call Trace:
[<ffffffff885371fd>] :autofs4:autofs4_wait+0x674/0x897
[<ffffffff80127f7d>] avc_has_perm+0x46/0x58
[<ffffffff8009fdcf>] autoremove_wake_function+0x0/0x2e
[<ffffffff88537be6>] :autofs4:autofs4_expire_wait+0x41/0x6b
[<ffffffff88535cfc>] :autofs4:autofs4_revalidate+0x91/0x149
[<ffffffff80036d96>] __lookup_hash+0xa0/0x12f
[<ffffffff80057a2f>] lookup_create+0x46/0x80
[<ffffffff800e6e31>] sys_mkdirat+0x56/0xe4
versus the automount daemon which wants to remove that dentry, but can't
because the normal process is holding the i_mutex lock:
automount D ffffffff8014e05a 0 32581 1 32561
Call Trace:
[<ffffffff80063c3f>] __mutex_lock_slowpath+0x60/0x9b
[<ffffffff8000ccf1>] do_path_lookup+0x2ca/0x2f1
[<ffffffff80063c89>] .text.lock.mutex+0xf/0x14
[<ffffffff800e6d55>] do_rmdir+0x77/0xde
[<ffffffff8005d229>] tracesys+0x71/0xe0
[<ffffffff8005d28d>] tracesys+0xd5/0xe0
which means that the system is deadlocked.
This patch allows autofs to hold up normal processes whilst the daemon goes
ahead and does things to the dentry tree behind the automounter point without
risking a deadlock as almost no locks are held in d_manage() and none in
d_automount().
Signed-off-by: David Howells <dhowells@redhat.com>
Was-Acked-by: Ian Kent <raven@themaw.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

static inline bool d_managed(const struct dentry *dentry)
{
	return dentry->d_flags & DCACHE_MANAGED_DENTRY;
}
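
/*
 * Illustrative sketch (hypothetical filesystem, not part of this header):
 * a ->d_manage() using the prototype quoted in the commit message above
 * (later kernels changed it to take a const struct path * and an rcu-walk
 * flag), showing only the return-value convention. example_daemon_owns()
 * is a made-up predicate.
 */
static int example_d_manage(struct path *path, bool mounting_here)
{
	if (mounting_here)
		return 0;			/* never block mount code */
	if (example_daemon_owns(path->dentry))	/* hypothetical */
		return -EISDIR;			/* stay in this directory */
	return 0;				/* let the walk transit */
}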

static inline bool d_mountpoint(const struct dentry *dentry)
{
	return dentry->d_flags & DCACHE_MOUNTED;
}

/*
 * Directory cache entry type accessor functions.
 */
static inline unsigned __d_entry_type(const struct dentry *dentry)
{
	return dentry->d_flags & DCACHE_ENTRY_TYPE;
}

static inline bool d_is_miss(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_MISS_TYPE;
}

static inline bool d_is_whiteout(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_WHITEOUT_TYPE;
}

static inline bool d_can_lookup(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_DIRECTORY_TYPE;
}

static inline bool d_is_autodir(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_AUTODIR_TYPE;
}

static inline bool d_is_dir(const struct dentry *dentry)
{
	return d_can_lookup(dentry) || d_is_autodir(dentry);
}

static inline bool d_is_symlink(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_SYMLINK_TYPE;
}

static inline bool d_is_reg(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_REGULAR_TYPE;
}

static inline bool d_is_special(const struct dentry *dentry)
{
	return __d_entry_type(dentry) == DCACHE_SPECIAL_TYPE;
}

static inline bool d_is_file(const struct dentry *dentry)
{
	return d_is_reg(dentry) || d_is_special(dentry);
}
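
/*
 * Illustrative sketch (not part of this header): the type accessors let a
 * caller classify a dentry from d_flags alone, without dereferencing
 * ->d_inode, which is what makes them usable under rcu-walk.
 */
static inline int example_expect_directory(const struct dentry *dentry)
{
	if (d_is_miss(dentry))
		return -ENOENT;		/* cached negative entry */
	if (d_is_symlink(dentry))
		return -ELOOP;		/* refuse to follow a link here */
	if (!d_is_dir(dentry))
		return -ENOTDIR;	/* something else entirely */
	return 0;
}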

static inline bool d_is_negative(const struct dentry *dentry)
{
	// TODO: check d_is_whiteout(dentry) also.
	return d_is_miss(dentry);
}

static inline bool d_flags_negative(unsigned flags)
{
	return (flags & DCACHE_ENTRY_TYPE) == DCACHE_MISS_TYPE;
}

static inline bool d_is_positive(const struct dentry *dentry)
{
	return !d_is_negative(dentry);
}

/**
 * d_really_is_negative - Determine if a dentry is really negative (ignoring fallthroughs)
 * @dentry: The dentry in question
 *
 * Returns true if the dentry represents either an absent name or a name that
 * doesn't map to an inode (ie. ->d_inode is NULL). The dentry could represent
 * a true miss, a whiteout that isn't represented by a 0,0 chardev or a
 * fallthrough marker in an opaque directory.
 *
 * Note! (1) This should be used *only* by a filesystem to examine its own
 * dentries. It should not be used to look at some other filesystem's
 * dentries. (2) It should also be used in combination with d_inode() to get
 * the inode. (3) The dentry may have something attached to ->d_lower and the
 * type field of the flags may be set to something other than miss or whiteout.
 */
static inline bool d_really_is_negative(const struct dentry *dentry)
{
	return dentry->d_inode == NULL;
}

/**
 * d_really_is_positive - Determine if a dentry is really positive (ignoring fallthroughs)
 * @dentry: The dentry in question
 *
 * Returns true if the dentry represents a name that maps to an inode
 * (ie. ->d_inode is not NULL). The dentry might still represent a whiteout if
 * that is represented on medium as a 0,0 chardev.
 *
 * Note! (1) This should be used *only* by a filesystem to examine its own
 * dentries. It should not be used to look at some other filesystem's
 * dentries. (2) It should also be used in combination with d_inode() to get
 * the inode.
 */
static inline bool d_really_is_positive(const struct dentry *dentry)
{
	return dentry->d_inode != NULL;
}
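
/*
 * Illustrative sketch (hypothetical filesystem, not part of this header):
 * per the notes above, only a filesystem looking at its own dentry should
 * use d_really_is_negative()/d_really_is_positive().
 */
static int example_unlink(struct inode *dir, struct dentry *dentry)
{
	if (d_really_is_negative(dentry))
		return -ENOENT;		/* no backing inode to drop */

	/* ... detach the object behind dentry->d_inode from @dir ... */
	return 0;
}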
|
|
|
|
|
2017-10-20 08:41:17 +08:00
|
|
|
static inline int simple_positive(const struct dentry *dentry)
|
2015-05-18 22:10:34 +08:00
|
|
|
{
|
|
|
|
return d_really_is_positive(dentry) && !d_unhashed(dentry);
|
|
|
|
}

extern void d_set_fallthru(struct dentry *dentry);

static inline bool d_is_fallthru(const struct dentry *dentry)
{
	return dentry->d_flags & DCACHE_FALLTHRU;
}


extern int sysctl_vfs_cache_pressure;

static inline unsigned long vfs_pressure_ratio(unsigned long val)
{
	return mult_frac(val, sysctl_vfs_cache_pressure, 100);
}
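
/*
 * Illustrative sketch (hypothetical shrinker, not part of this header):
 * scale a private object count by vfs_cache_pressure the same way the
 * dcache/icache shrinkers do. example_nr_objects() is a made-up helper.
 */
static unsigned long example_cache_count(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	return vfs_pressure_ratio(example_nr_objects(sc->nid));	/* hypothetical */
}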

/**
 * d_inode - Get the actual inode of this dentry
 * @dentry: The dentry to query
 *
 * This is the helper normal filesystems should use to get at their own inodes
 * in their own dentries and ignore the layering superimposed upon them.
 */
static inline struct inode *d_inode(const struct dentry *dentry)
{
	return dentry->d_inode;
}

locking/atomics, fs/dcache: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE()
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't currently harmful.
However, for some features it is necessary to instrument reads and
writes separately, which is not possible with ACCESS_ONCE(). This
distinction is critical to correct operation.
It's possible to transform the bulk of kernel code using the Coccinelle
script below. However, this doesn't handle comments, leaving references
to ACCESS_ONCE() instances which have been removed. As a preparatory
step, this patch converts the dcache code and comments to use
{READ,WRITE}_ONCE() consistently.
----
virtual patch
@ depends on patch @
expression E1, E2;
@@
- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)
@ depends on patch @
expression E;
@@
- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-4-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

/**
 * d_inode_rcu - Get the actual inode of this dentry with READ_ONCE()
 * @dentry: The dentry to query
 *
 * This is the helper normal filesystems should use to get at their own inodes
 * in their own dentries and ignore the layering superimposed upon them.
 */
static inline struct inode *d_inode_rcu(const struct dentry *dentry)
{
	return READ_ONCE(dentry->d_inode);
}

/**
 * d_backing_inode - Get upper or lower inode we should be using
 * @upper: The upper layer
 *
 * This is the helper that should be used to get at the inode that will be used
 * if this dentry were to be opened as a file. The inode may be on the upper
 * dentry or it may be on a lower dentry pinned by the upper.
 *
 * Normal filesystems should not use this to access their own inodes.
 */
static inline struct inode *d_backing_inode(const struct dentry *upper)
{
	struct inode *inode = upper->d_inode;

	return inode;
}
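
/*
 * Illustrative sketch (not part of this header): code that received a path
 * from a caller (e.g. an LSM or audit hook) uses d_backing_inode() rather
 * than reaching into ->d_inode directly.
 */
static bool example_path_is_regular(const struct path *path)
{
	struct inode *inode = d_backing_inode(path->dentry);

	return inode && S_ISREG(inode->i_mode);
}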

/**
 * d_backing_dentry - Get upper or lower dentry we should be using
 * @upper: The upper layer
 *
 * This is the helper that should be used to get the dentry of the inode that
 * will be used if this dentry were opened as a file. It may be the upper
 * dentry or it may be a lower dentry pinned by the upper.
 *
 * Normal filesystems should not use this to access their own dentries.
 */
static inline struct dentry *d_backing_dentry(struct dentry *upper)
{
	return upper;
}

/**
 * d_real - Return the real dentry
 * @dentry: the dentry to query
 * @inode: inode to select the dentry from multiple layers (can be NULL)
 *
 * If dentry is on a union/overlay, then return the underlying, real dentry.
 * Otherwise return the dentry itself.
 *
 * See also: Documentation/filesystems/vfs.rst
 */
static inline struct dentry *d_real(struct dentry *dentry,
				    const struct inode *inode)
{
	if (unlikely(dentry->d_flags & DCACHE_OP_REAL))
		return dentry->d_op->d_real(dentry, inode);
	else
		return dentry;
}

/**
 * d_real_inode - Return the real inode
 * @dentry: The dentry to query
 *
 * If dentry is on a union/overlay, then return the underlying, real inode.
 * Otherwise return d_inode().
 */
static inline struct inode *d_real_inode(const struct dentry *dentry)
{
	/* This usage of d_real() results in const dentry */
	return d_backing_inode(d_real((struct dentry *) dentry, NULL));
}
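
/*
 * Illustrative sketch (not part of this header): subsystems that key data
 * structures off an inode (uprobes, unix sockets bound to a path, ...) use
 * d_real_inode() so that two names reaching the same underlying file on an
 * overlay compare equal.
 */
static bool example_same_backing_file(const struct dentry *a,
				      const struct dentry *b)
{
	return d_real_inode(a) == d_real_inode(b);
}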

include/linux/dcache.h: use unsigned chars in struct name_snapshot
"kernel.h: handle pointers to arrays better in container_of()" triggers:
In file included from include/uapi/linux/stddef.h:1:0,
from include/linux/stddef.h:4,
from include/uapi/linux/posix_types.h:4,
from include/uapi/linux/types.h:13,
from include/linux/types.h:5,
from include/linux/syscalls.h:71,
from fs/dcache.c:17:
fs/dcache.c: In function 'release_dentry_name_snapshot':
include/linux/compiler.h:542:38: error: call to '__compiletime_assert_305' declared with attribute error: pointer type mismatch in container_of()
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^
include/linux/compiler.h:525:4: note: in definition of macro '__compiletime_assert'
prefix ## suffix(); \
^
include/linux/compiler.h:542:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^
include/linux/build_bug.h:46:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^
include/linux/kernel.h:860:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^
fs/dcache.c:305:7: note: in expansion of macro 'container_of'
p = container_of(name->name, struct external_name, name[0]);
Switch name_snapshot to use unsigned chars, matching struct qstr and
struct external_name.
Link: http://lkml.kernel.org/r/20170710152134.0f78c1e6@canb.auug.org.au
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

struct name_snapshot {
	struct qstr name;
	unsigned char inline_name[DNAME_INLINE_LEN];
};
void take_dentry_name_snapshot(struct name_snapshot *, struct dentry *);
void release_dentry_name_snapshot(struct name_snapshot *);
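
/*
 * Illustrative sketch (not part of this header): capture a stable copy of
 * a dentry's name before an operation that may rename or move it, the same
 * pattern fsnotify uses, then release the snapshot when done.
 */
static void example_log_old_name(struct dentry *dentry)
{
	struct name_snapshot old;

	take_dentry_name_snapshot(&old, dentry);
	/* ... the dentry may be renamed concurrently from here on ... */
	pr_debug("was called %s\n", old.name.name);
	release_dentry_name_snapshot(&old);
}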

#endif /* __LINUX_DCACHE_H */