OpenCloudOS-Kernel/include/linux/kernfs.h

/* SPDX-License-Identifier: GPL-2.0-only */
/*
* kernfs.h - pseudo filesystem decoupled from vfs locking
*/
#ifndef __LINUX_KERNFS_H
#define __LINUX_KERNFS_H
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/idr.h>
#include <linux/lockdep.h>
#include <linux/rbtree.h>
#include <linux/atomic.h>
#include <linux/uidgid.h>
#include <linux/wait.h>
#include <linux/kabi.h>
struct file;
struct dentry;
struct iattr;
struct seq_file;
struct vm_area_struct;
struct super_block;
struct file_system_type;
struct poll_table_struct;
struct fs_context;
struct kernfs_fs_context;
struct kernfs_open_node;
struct kernfs_iattrs;
enum kernfs_node_type {
KERNFS_DIR = 0x0001,
KERNFS_FILE = 0x0002,
KERNFS_LINK = 0x0004,
};
#define KERNFS_TYPE_MASK 0x000f
#define KERNFS_FLAG_MASK ~KERNFS_TYPE_MASK
enum kernfs_node_flag {
KERNFS_ACTIVATED = 0x0010,
KERNFS_NS = 0x0020,
KERNFS_HAS_SEQ_SHOW = 0x0040,
KERNFS_HAS_MMAP = 0x0080,
KERNFS_LOCKDEP = 0x0100,
KERNFS_SUICIDAL = 0x0400,
KERNFS_SUICIDED = 0x0800,
KERNFS_EMPTY_DIR = 0x1000,
KERNFS_HAS_RELEASE = 0x2000,
};
/* @flags for kernfs_create_root() */
enum kernfs_root_flag {
/*
* kernfs_nodes are created in the deactivated state and invisible.
* They require explicit kernfs_activate() to become visible. This
* can be used to make related nodes become visible atomically
* after all nodes are created successfully.
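* A brief usage sketch follows this enum.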
*/
KERNFS_ROOT_CREATE_DEACTIVATED = 0x0001,
/*
* For regular files, if the opener has CAP_DAC_OVERRIDE, open(2)
* succeeds regardless of the RW permissions. sysfs had an extra
* layer of enforcement where open(2) fails with -EACCES regardless
* of CAP_DAC_OVERRIDE if the permission doesn't have the
* respective read or write access at all (none of S_IRUGO or
* S_IWUGO) or the respective operation isn't implemented. The
* following flag enables that behavior.
*/
KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK = 0x0002,
/*
* The filesystem supports exportfs operation, so userspace can use
* fhandle to access nodes of the fs.
*/
KERNFS_ROOT_SUPPORT_EXPORTOP = 0x0004,
};
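/*
* Illustrative sketch, not part of this header: with
* KERNFS_ROOT_CREATE_DEACTIVATED an entire subtree can be built while
* invisible and then published atomically. The names my_scops and
* my_priv are hypothetical.
*
*	struct kernfs_root *root;
*	struct kernfs_node *dir;
*
*	root = kernfs_create_root(&my_scops, KERNFS_ROOT_CREATE_DEACTIVATED,
*				  my_priv);
*	if (IS_ERR(root))
*		return PTR_ERR(root);
*	dir = kernfs_create_dir(root->kn, "state", 0755, my_priv);
*	if (IS_ERR(dir)) {
*		kernfs_destroy_root(root);
*		return PTR_ERR(dir);
*	}
*	// ... create further nodes under dir ...
*	kernfs_activate(root->kn);	// make the whole subtree visible
*/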
/* type-specific structures for kernfs_node union members */
struct kernfs_elem_dir {
unsigned long subdirs;
/* children rbtree starts here and goes through kn->rb */
struct rb_root children;
/*
* The kernfs hierarchy this directory belongs to. This fits
* better directly in kernfs_node but is here to save space.
*/
struct kernfs_root *root;
};
struct kernfs_elem_symlink {
struct kernfs_node *target_kn;
};
struct kernfs_elem_attr {
const struct kernfs_ops *ops;
struct kernfs_open_node *open;
loff_t size;
struct kernfs_node *notify_next; /* for kernfs_notify() */
};
/* represent a kernfs node */
union kernfs_node_id {
struct {
/*
* blktrace will export this struct as a simplified 'struct
* fid' (which is a big data structure), so userspace can use
* it to find the kernfs node. The layout must match the first two
* fields of 'struct fid' exactly.
*/
u32 ino;
u32 generation;
};
u64 id;
};
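/*
* For illustration only (use_u64_handle and the values are hypothetical):
* both views of the union alias the same 64 bits, so the id can be passed
* around as a single u64 while ino/generation stay individually readable.
*
*	union kernfs_node_id id;
*
*	id.ino = ino;
*	id.generation = gen;
*	use_u64_handle(id.id);	// both fields packed into one u64
*/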
/*
* kernfs_node - the building block of kernfs hierarchy. Each and every
* kernfs node is represented by a single kernfs_node. Most fields are
* private to kernfs and shouldn't be accessed directly by kernfs users.
*
* As long as a count reference is held, the kernfs_node itself is
* accessible. Dereferencing elem or any other outer entity requires
* an active reference.
*/
struct kernfs_node {
atomic_t count;
atomic_t active;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
#endif
/*
* Use kernfs_get_parent() and kernfs_name/path() instead of
* accessing the following two fields directly. If the node is
* never moved to a different parent, it is safe to access the
* parent directly.
*/
struct kernfs_node *parent;
const char *name;
struct rb_node rb;
const void *ns; /* namespace tag */
unsigned int hash; /* ns + name hash */
union {
struct kernfs_elem_dir dir;
struct kernfs_elem_symlink symlink;
struct kernfs_elem_attr attr;
};
void *priv;
union kernfs_node_id id;
unsigned short flags;
umode_t mode;
struct kernfs_iattrs *iattr;
};
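/*
* Sketch of the rules above, assuming the caller already holds a count
* reference on @kn (my_show_parent_name and the buffer size are
* hypothetical):
*
*	static void my_show_parent_name(struct kernfs_node *kn)
*	{
*		struct kernfs_node *parent;
*		char name[64];
*
*		parent = kernfs_get_parent(kn);	// takes a count reference
*		if (!parent)
*			return;
*		kernfs_name(parent, name, sizeof(name));
*		pr_info("parent: %s\n", name);
*		kernfs_put(parent);		// drop the reference again
*	}
*/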
/*
* kernfs_syscall_ops may be specified on kernfs_create_root() to support
* syscalls. These optional callbacks are invoked on the matching syscalls
* and can perform any kernfs operations which don't necessarily have to be
* the exact operation requested. An active reference is held for each
* kernfs_node parameter.
*/
struct kernfs_syscall_ops {
int (*show_options)(struct seq_file *sf, struct kernfs_root *root);
int (*mkdir)(struct kernfs_node *parent, const char *name,
umode_t mode);
int (*rmdir)(struct kernfs_node *kn);
int (*rename)(struct kernfs_node *kn, struct kernfs_node *new_parent,
const char *new_name);
int (*show_path)(struct seq_file *sf, struct kernfs_node *kn,
struct kernfs_root *root);
KABI_RESERVE(1);
KABI_RESERVE(2);
KABI_RESERVE(3);
};
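/*
* Minimal sketch of a syscall_ops table; my_mkdir and my_rmdir are
* hypothetical callbacks (cgroup is an in-tree user of this interface).
* The table is then passed to kernfs_create_root().
*
*	static int my_mkdir(struct kernfs_node *parent, const char *name,
*			    umode_t mode)
*	{
*		// create the backing object and its kernfs nodes here
*		return 0;
*	}
*
*	static int my_rmdir(struct kernfs_node *kn)
*	{
*		// tear down the object represented by kn
*		return 0;
*	}
*
*	static struct kernfs_syscall_ops my_syscall_ops = {
*		.mkdir	= my_mkdir,
*		.rmdir	= my_rmdir,
*	};
*/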
struct kernfs_root {
/* published fields */
struct kernfs_node *kn;
unsigned int flags; /* KERNFS_ROOT_* flags */
/* private fields, do not use outside kernfs proper */
struct idr ino_idr;
u32 last_ino;
u32 next_generation;
struct kernfs_syscall_ops *syscall_ops;
/* list of kernfs_super_info of this root, protected by kernfs_mutex */
struct list_head supers;
wait_queue_head_t deactivate_waitq;
};
struct kernfs_open_file {
/* published fields */
struct kernfs_node *kn;
struct file *file;
struct seq_file *seq_file;
void *priv;
/* private fields, do not use outside kernfs proper */
struct mutex mutex;
struct mutex prealloc_mutex;
int event;
struct list_head list;
char *prealloc_buf;
size_t atomic_write_len;
bool mmapped:1;
bool released:1;
const struct vm_operations_struct *vm_ops;
};
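/*
* Sketch of how the published fields are typically used from a kernfs_ops
* callback; my_write and struct my_obj are hypothetical, and of->kn->priv
* is whatever the creator of the node stored there. kernfs passes a
* NUL-terminated kernel copy of the user data in @buf.
*
*	static ssize_t my_write(struct kernfs_open_file *of, char *buf,
*				size_t bytes, loff_t off)
*	{
*		struct my_obj *obj = of->kn->priv;
*
*		// parse buf and update obj accordingly
*		return bytes;
*	}
*/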
struct kernfs_ops {
/*
* Optional open/release methods. Both are called with
* @of->seq_file populated.
*/
int (*open)(struct kernfs_open_file *of);
void (*release)(struct kernfs_open_file *of);
/*
* Read is handled by either seq_file or raw_read().
*
* If seq_show() is present, the seq_file path is active. Other seq
* operations are optional and if not implemented, the behavior is
* equivalent to single_open(). @sf->private points to the
* associated kernfs_open_file.
*
* read() is bounced through a kernel buffer and a read larger than
* PAGE_SIZE results in a partial read of PAGE_SIZE.
*/
int (*seq_show)(struct seq_file *sf, void *v);
void *(*seq_start)(struct seq_file *sf, loff_t *ppos);
void *(*seq_next)(struct seq_file *sf, void *v, loff_t *ppos);
void (*seq_stop)(struct seq_file *sf, void *v);
ssize_t (*read)(struct kernfs_open_file *of, char *buf, size_t bytes,
loff_t off);
/*
* write() is bounced through a kernel buffer. If atomic_write_len
* is not set, a write larger than PAGE_SIZE results in partial
* operations of PAGE_SIZE chunks. If atomic_write_len is set,
* writes up to the specified size are executed atomically but
* larger ones are rejected with -E2BIG.
*/
size_t atomic_write_len;
/*
* "prealloc" causes a buffer to be allocated at open for
* all read/write requests. As ->seq_show uses seq_read()
* which does its own allocation, it is incompatible with
* ->prealloc. Provide ->read and ->write with ->prealloc.
*/
bool prealloc;
ssize_t (*write)(struct kernfs_open_file *of, char *buf, size_t bytes,
loff_t off);
__poll_t (*poll)(struct kernfs_open_file *of,
struct poll_table_struct *pt);
int (*mmap)(struct kernfs_open_file *of, struct vm_area_struct *vma);
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lock_class_key lockdep_key;
#endif
KABI_RESERVE(1);
KABI_RESERVE(2);
};
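/*
 * Usage sketch (illustrative only, not part of this header): a minimal
 * kernfs_ops that sets ->prealloc and provides direct ->read and ->write
 * handlers, since ->seq_show cannot be combined with ->prealloc.  With
 * ->prealloc set, @buf passed to the handlers is the buffer allocated at
 * open time.  The example_* names and the constant value are hypothetical.
 *
 *	static ssize_t example_read(struct kernfs_open_file *of, char *buf,
 *				    size_t bytes, loff_t off)
 *	{
 *		return scnprintf(buf, bytes, "%d\n", 42);
 *	}
 *
 *	static ssize_t example_write(struct kernfs_open_file *of, char *buf,
 *				     size_t bytes, loff_t off)
 *	{
 *		return bytes;
 *	}
 *
 *	static const struct kernfs_ops example_ops = {
 *		.prealloc	= true,
 *		.read		= example_read,
 *		.write		= example_write,
 *	};
 */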
/*
* The kernfs superblock creation/mount parameter context.
*/
struct kernfs_fs_context {
struct kernfs_root *root; /* Root of the hierarchy being mounted */
void *ns_tag; /* Namespace tag of the mount (or NULL) */
unsigned long magic; /* File system specific magic number */
/* The following are set/used by kernfs_mount() */
bool new_sb_created; /* Set to T if we allocated a new sb */
};
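/*
 * Usage sketch (illustrative only; the example_* identifiers and the
 * fs_context_operations wiring are assumptions): a kernfs-based
 * filesystem typically allocates a kernfs_fs_context in its
 * ->init_fs_context hook, points ->root at its kernfs_root, and lets
 * kernfs_get_tree() build the superblock from its ->get_tree operation,
 * with kernfs_free_fs_context() called from ->free.
 *
 *	static int example_init_fs_context(struct fs_context *fc)
 *	{
 *		struct kernfs_fs_context *kfc;
 *
 *		kfc = kzalloc(sizeof(*kfc), GFP_KERNEL);
 *		if (!kfc)
 *			return -ENOMEM;
 *		kfc->root = example_kernfs_root;
 *		kfc->magic = EXAMPLE_SUPER_MAGIC;
 *		fc->fs_private = kfc;
 *		fc->ops = &example_fs_context_ops;
 *		return 0;
 *	}
 */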
#ifdef CONFIG_KERNFS
static inline enum kernfs_node_type kernfs_type(struct kernfs_node *kn)
{
return kn->flags & KERNFS_TYPE_MASK;
}
/**
* kernfs_enable_ns - enable namespace under a directory
* @kn: directory of interest, should be empty
*
* This is to be called right after @kn is created to enable namespace
* under it. All children of @kn must have non-NULL namespace tags and
* only the ones which match the super_block's tag will be visible.
*/
static inline void kernfs_enable_ns(struct kernfs_node *kn)
{
WARN_ON_ONCE(kernfs_type(kn) != KERNFS_DIR);
WARN_ON_ONCE(!RB_EMPTY_ROOT(&kn->dir.children));
kn->flags |= KERNFS_NS;
}
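/*
 * Usage sketch (illustrative only; the example_* names and the tag
 * pointer are hypothetical): enable namespace filtering right after
 * creating a directory, then give every child a non-NULL tag so that
 * only children whose tag matches the mounting super_block's tag are
 * visible.
 *
 *	struct kernfs_node *dir;
 *
 *	dir = kernfs_create_dir_ns(parent, "example", 0755,
 *				   GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
 *				   NULL, NULL);
 *	if (!IS_ERR(dir)) {
 *		kernfs_enable_ns(dir);
 *		kernfs_create_dir_ns(dir, "child", 0755,
 *				     GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
 *				     NULL, example_ns_tag);
 *	}
 */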
/**
* kernfs_ns_enabled - test whether namespace is enabled
* @kn: the node to test
*
* Test whether namespace filtering is enabled for the children of @kn.
*/
static inline bool kernfs_ns_enabled(struct kernfs_node *kn)
{
return kn->flags & KERNFS_NS;
}
int kernfs_name(struct kernfs_node *kn, char *buf, size_t buflen);
int kernfs_path_from_node(struct kernfs_node *root_kn, struct kernfs_node *kn,
char *buf, size_t buflen);
void pr_cont_kernfs_name(struct kernfs_node *kn);
void pr_cont_kernfs_path(struct kernfs_node *kn);
struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn);
struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent,
const char *name, const void *ns);
struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent,
const char *path, const void *ns);
void kernfs_get(struct kernfs_node *kn);
void kernfs_put(struct kernfs_node *kn);
struct kernfs_node *kernfs_node_from_dentry(struct dentry *dentry);
struct kernfs_root *kernfs_root_from_sb(struct super_block *sb);
struct inode *kernfs_get_inode(struct super_block *sb, struct kernfs_node *kn);
struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
struct super_block *sb);
struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops,
unsigned int flags, void *priv);
void kernfs_destroy_root(struct kernfs_root *root);
struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
const char *name, umode_t mode,
kuid_t uid, kgid_t gid,
void *priv, const void *ns);
struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
const char *name);
struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
const char *name, umode_t mode,
kuid_t uid, kgid_t gid,
loff_t size,
const struct kernfs_ops *ops,
void *priv, const void *ns,
struct lock_class_key *key);
struct kernfs_node *kernfs_create_link(struct kernfs_node *parent,
const char *name,
struct kernfs_node *target);
void kernfs_activate(struct kernfs_node *kn);
void kernfs_remove(struct kernfs_node *kn);
void kernfs_break_active_protection(struct kernfs_node *kn);
void kernfs_unbreak_active_protection(struct kernfs_node *kn);
bool kernfs_remove_self(struct kernfs_node *kn);
int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
const void *ns);
int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
const char *new_name, const void *new_ns);
int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr);
__poll_t kernfs_generic_poll(struct kernfs_open_file *of,
struct poll_table_struct *pt);
void kernfs_notify(struct kernfs_node *kn);
int kernfs_xattr_get(struct kernfs_node *kn, const char *name,
void *value, size_t size);
int kernfs_xattr_set(struct kernfs_node *kn, const char *name,
const void *value, size_t size, int flags);
const void *kernfs_super_ns(struct super_block *sb);
int kernfs_get_tree(struct fs_context *fc);
void kernfs_free_fs_context(struct fs_context *fc);
void kernfs_kill_sb(struct super_block *sb);
void kernfs_init(void);
struct kernfs_node *kernfs_get_node_by_id(struct kernfs_root *root,
const union kernfs_node_id *id);
#else /* CONFIG_KERNFS */
static inline enum kernfs_node_type kernfs_type(struct kernfs_node *kn)
{ return 0; } /* whatever */
static inline void kernfs_enable_ns(struct kernfs_node *kn) { }
static inline bool kernfs_ns_enabled(struct kernfs_node *kn)
{ return false; }
static inline int kernfs_name(struct kernfs_node *kn, char *buf, size_t buflen)
{ return -ENOSYS; }
static inline int kernfs_path_from_node(struct kernfs_node *root_kn,
struct kernfs_node *kn,
char *buf, size_t buflen)
{ return -ENOSYS; }
static inline void pr_cont_kernfs_name(struct kernfs_node *kn) { }
static inline void pr_cont_kernfs_path(struct kernfs_node *kn) { }
static inline struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn)
{ return NULL; }
static inline struct kernfs_node *
kernfs_find_and_get_ns(struct kernfs_node *parent, const char *name,
const void *ns)
{ return NULL; }
static inline struct kernfs_node *
kernfs_walk_and_get_ns(struct kernfs_node *parent, const char *path,
const void *ns)
{ return NULL; }
static inline void kernfs_get(struct kernfs_node *kn) { }
static inline void kernfs_put(struct kernfs_node *kn) { }
static inline struct kernfs_node *kernfs_node_from_dentry(struct dentry *dentry)
{ return NULL; }
static inline struct kernfs_root *kernfs_root_from_sb(struct super_block *sb)
{ return NULL; }
static inline struct inode *
kernfs_get_inode(struct super_block *sb, struct kernfs_node *kn)
{ return NULL; }
static inline struct kernfs_root *
kernfs_create_root(struct kernfs_syscall_ops *scops, unsigned int flags,
void *priv)
{ return ERR_PTR(-ENOSYS); }
static inline void kernfs_destroy_root(struct kernfs_root *root) { }
static inline struct kernfs_node *
kernfs_create_dir_ns(struct kernfs_node *parent, const char *name,
umode_t mode, kuid_t uid, kgid_t gid,
void *priv, const void *ns)
{ return ERR_PTR(-ENOSYS); }
static inline struct kernfs_node *
__kernfs_create_file(struct kernfs_node *parent, const char *name,
umode_t mode, kuid_t uid, kgid_t gid,
loff_t size, const struct kernfs_ops *ops,
void *priv, const void *ns, struct lock_class_key *key)
{ return ERR_PTR(-ENOSYS); }
static inline struct kernfs_node *
kernfs_create_link(struct kernfs_node *parent, const char *name,
struct kernfs_node *target)
{ return ERR_PTR(-ENOSYS); }
static inline void kernfs_activate(struct kernfs_node *kn) { }
static inline void kernfs_remove(struct kernfs_node *kn) { }
static inline bool kernfs_remove_self(struct kernfs_node *kn)
{ return false; }
static inline int kernfs_remove_by_name_ns(struct kernfs_node *parent,
const char *name, const void *ns)
{ return -ENOSYS; }
static inline int kernfs_rename_ns(struct kernfs_node *kn,
struct kernfs_node *new_parent,
const char *new_name, const void *new_ns)
{ return -ENOSYS; }
static inline int kernfs_setattr(struct kernfs_node *kn,
const struct iattr *iattr)
{ return -ENOSYS; }
static inline void kernfs_notify(struct kernfs_node *kn) { }
static inline int kernfs_xattr_get(struct kernfs_node *kn, const char *name,
void *value, size_t size)
{ return -ENOSYS; }
static inline int kernfs_xattr_set(struct kernfs_node *kn, const char *name,
const void *value, size_t size, int flags)
{ return -ENOSYS; }
static inline const void *kernfs_super_ns(struct super_block *sb)
{ return NULL; }
static inline int kernfs_get_tree(struct fs_context *fc)
{ return -ENOSYS; }
static inline void kernfs_free_fs_context(struct fs_context *fc) { }
static inline void kernfs_kill_sb(struct super_block *sb) { }
static inline void kernfs_init(void) { }
#endif /* CONFIG_KERNFS */
/**
* kernfs_path - build full path of a given node
* @kn: kernfs_node of interest
* @buf: buffer to copy @kn's path into
* @buflen: size of @buf
*
* If @kn is NULL result will be "(null)".
*
* Returns the length of the full path. If the full length is equal to or
* greater than @buflen, @buf contains the truncated path with the trailing
* '\0'. On error, -errno is returned.
*/
static inline int kernfs_path(struct kernfs_node *kn, char *buf, size_t buflen)
{
return kernfs_path_from_node(kn, NULL, buf, buflen);
}
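/*
 * Usage sketch (illustrative only; the buffer size and pr_info() are
 * arbitrary): build the full path of a node and check for truncation or
 * error before using it.
 *
 *	char buf[PATH_MAX];
 *	int len;
 *
 *	len = kernfs_path(kn, buf, sizeof(buf));
 *	if (len >= 0 && len < sizeof(buf))
 *		pr_info("node path: %s\n", buf);
 */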
static inline struct kernfs_node *
kernfs_find_and_get(struct kernfs_node *kn, const char *name)
{
return kernfs_find_and_get_ns(kn, name, NULL);
}
static inline struct kernfs_node *
kernfs_walk_and_get(struct kernfs_node *kn, const char *path)
{
return kernfs_walk_and_get_ns(kn, path, NULL);
}
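/*
 * Usage sketch (illustrative only; "some/child" is an arbitrary relative
 * path and do_something_with() is a placeholder): both
 * kernfs_find_and_get() and kernfs_walk_and_get() return the node with a
 * reference held, which must be dropped with kernfs_put().
 *
 *	struct kernfs_node *kn;
 *
 *	kn = kernfs_walk_and_get(parent, "some/child");
 *	if (kn) {
 *		do_something_with(kn);
 *		kernfs_put(kn);
 *	}
 */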
static inline struct kernfs_node *
kernfs_create_dir(struct kernfs_node *parent, const char *name, umode_t mode,
void *priv)
{
return kernfs_create_dir_ns(parent, name, mode,
GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
priv, NULL);
}
static inline struct kernfs_node *
kernfs_create_file_ns(struct kernfs_node *parent, const char *name,
umode_t mode, kuid_t uid, kgid_t gid,
loff_t size, const struct kernfs_ops *ops,
void *priv, const void *ns)
{
struct lock_class_key *key = NULL;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
key = (struct lock_class_key *)&ops->lockdep_key;
#endif
return __kernfs_create_file(parent, name, mode, uid, gid,
size, ops, priv, ns, key);
}
static inline struct kernfs_node *
kernfs_create_file(struct kernfs_node *parent, const char *name, umode_t mode,
loff_t size, const struct kernfs_ops *ops, void *priv)
{
return kernfs_create_file_ns(parent, name, mode,
GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
size, ops, priv, NULL);
}
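/*
 * Usage sketch (illustrative only; "example_ops" refers to the
 * hypothetical ops sketched after struct kernfs_ops above): create a
 * regular attribute file backed by those ops under @parent.
 *
 *	struct kernfs_node *kn;
 *
 *	kn = kernfs_create_file(parent, "value", 0644, PAGE_SIZE,
 *				&example_ops, NULL);
 *	if (IS_ERR(kn))
 *		return PTR_ERR(kn);
 */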
static inline int kernfs_remove_by_name(struct kernfs_node *parent,
const char *name)
{
return kernfs_remove_by_name_ns(parent, name, NULL);
}
static inline int kernfs_rename(struct kernfs_node *kn,
struct kernfs_node *new_parent,
const char *new_name)
{
return kernfs_rename_ns(kn, new_parent, new_name, NULL);
}
#endif /* __LINUX_KERNFS_H */