License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information fall under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where license
references had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is partly based on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot
checks in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 22:07:57 +08:00
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Filesystem access notification for Linux
 *
 * Copyright (C) 2008 Red Hat, Inc., Eric Paris <eparis@redhat.com>
 */

#ifndef __LINUX_FSNOTIFY_BACKEND_H
#define __LINUX_FSNOTIFY_BACKEND_H

#ifdef __KERNEL__

#include <linux/idr.h>		/* inotify uses this */
#include <linux/fs.h>		/* struct inode */
#include <linux/list.h>
#include <linux/path.h>		/* struct path */
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/atomic.h>
#include <linux/user_namespace.h>
#include <linux/refcount.h>
#include <linux/mempool.h>
#include <linux/sched/mm.h>

/*
 * IN_* from inotify.h lines up EXACTLY with FS_*, this is so we can easily
 * convert between them.  dnotify only needs conversion at watch creation
 * so no perf loss there.  fanotify isn't defined yet, so it can use the
 * holes if it needs more events.
 */
#define FS_ACCESS		0x00000001	/* File was accessed */
#define FS_MODIFY		0x00000002	/* File was modified */
#define FS_ATTRIB		0x00000004	/* Metadata changed */
#define FS_CLOSE_WRITE		0x00000008	/* Writable file was closed */
#define FS_CLOSE_NOWRITE	0x00000010	/* Unwritable file closed */
#define FS_OPEN			0x00000020	/* File was opened */
#define FS_MOVED_FROM		0x00000040	/* File was moved from X */
#define FS_MOVED_TO		0x00000080	/* File was moved to Y */
#define FS_CREATE		0x00000100	/* Subfile was created */
#define FS_DELETE		0x00000200	/* Subfile was deleted */
#define FS_DELETE_SELF		0x00000400	/* Self was deleted */
#define FS_MOVE_SELF		0x00000800	/* Self was moved */
#define FS_OPEN_EXEC		0x00001000	/* File was opened for exec */

#define FS_UNMOUNT		0x00002000	/* inode on umount fs */
#define FS_Q_OVERFLOW		0x00004000	/* Event queue overflowed */
#define FS_ERROR		0x00008000	/* Filesystem Error (fanotify) */

/*
 * FS_IN_IGNORED overloads FS_ERROR.  It is only used internally by inotify
 * which does not support FS_ERROR.
 */
#define FS_IN_IGNORED		0x00008000	/* last inotify event here */

#define FS_OPEN_PERM		0x00010000	/* open event in a permission hook */
#define FS_ACCESS_PERM		0x00020000	/* access event in a permission hook */
#define FS_OPEN_EXEC_PERM	0x00040000	/* open/exec event in a permission hook */

/*
 * Set on an inode mark that cares about things that happen to its children.
 * Always set for dnotify and inotify.
 * Set on inode/sb/mount marks that care about parent/name info.
 */
#define FS_EVENT_ON_CHILD	0x08000000

#define FS_RENAME		0x10000000	/* File was renamed */
#define FS_DN_MULTISHOT		0x20000000	/* dnotify multishot */
#define FS_ISDIR		0x40000000	/* event occurred against dir */

#define FS_MOVE			(FS_MOVED_FROM | FS_MOVED_TO)
/*
 * Directory entry modification events - reported only to the directory
 * where the entry is modified and not to a watching parent.
 * The watching parent may get an FS_ATTRIB|FS_EVENT_ON_CHILD event
 * when a directory entry inside a child subdir changes.
 */
#define ALL_FSNOTIFY_DIRENT_EVENTS (FS_CREATE | FS_DELETE | FS_MOVE | FS_RENAME)

#define ALL_FSNOTIFY_PERM_EVENTS (FS_OPEN_PERM | FS_ACCESS_PERM | \
				  FS_OPEN_EXEC_PERM)

/*
 * This is a list of all events that may get sent to a parent that is watching
 * with flag FS_EVENT_ON_CHILD based on an fs event on a child of that directory.
 */
#define FS_EVENTS_POSS_ON_CHILD (ALL_FSNOTIFY_PERM_EVENTS | \
				 FS_ACCESS | FS_MODIFY | FS_ATTRIB | \
				 FS_CLOSE_WRITE | FS_CLOSE_NOWRITE | \
				 FS_OPEN | FS_OPEN_EXEC)

/*
 * This is a list of all events that may get sent with the parent inode as the
 * @to_tell argument of fsnotify().
 * It may include events that can be sent to an inode/sb/mount mark, but cannot
 * be sent to a parent watching children.
 */
#define FS_EVENTS_POSS_TO_PARENT (FS_EVENTS_POSS_ON_CHILD)

/* Events that can be reported to backends */
#define ALL_FSNOTIFY_EVENTS (ALL_FSNOTIFY_DIRENT_EVENTS | \
			     FS_EVENTS_POSS_ON_CHILD | \
			     FS_DELETE_SELF | FS_MOVE_SELF | \
			     FS_UNMOUNT | FS_Q_OVERFLOW | FS_IN_IGNORED | \
			     FS_ERROR)

/* Extra flags that may be reported with event or control handling of events */
#define ALL_FSNOTIFY_FLAGS (FS_ISDIR | FS_EVENT_ON_CHILD | FS_DN_MULTISHOT)

#define ALL_FSNOTIFY_BITS (ALL_FSNOTIFY_EVENTS | ALL_FSNOTIFY_FLAGS)
struct fsnotify_group;
struct fsnotify_event;
struct fsnotify_mark;
struct fsnotify_event_private_data;
struct fsnotify_fname;
struct fsnotify_iter_info;
fs: fsnotify: account fsnotify metadata to kmemcg
Patch series "Directed kmem charging", v8.
The Linux kernel's memory cgroup allows limiting the memory usage of the
jobs running on the system to provide isolation between them. All kernel
memory allocated in the context of a job and marked with __GFP_ACCOUNT
is included in the job's memory usage and limited by the job's limit.
Kernel memory can only be charged to the memcg of the process in whose
context it was allocated. However, there are cases where the allocated
kernel memory should be charged to a memcg different from the current
process's memcg. This patch series contains two such concrete use
cases: fsnotify and buffer_head.
The fsnotify event objects can consume a lot of system memory for large
or unlimited queues if there is either no listener or a slow one. The
events are allocated in the context of the event producer, but they
should be charged to the event consumer. Similarly, buffer_head objects
can be allocated in a memcg different from the memcg of the page for
which they are being allocated.
To solve this, this patch series introduces a mechanism to charge
kernel memory to a given memcg. For fsnotify events, the memcg of the
consumer can be used for charging; for buffer_head, the memcg of the
page can be charged. For directed charging, the caller can use the
scope API memalloc_[un]use_memcg() to specify the memcg to charge for
all __GFP_ACCOUNT allocations within the scope.
This patch (of 2):
A lot of memory can be consumed by the events generated for huge or
unlimited queues if there is either no listener or a slow one. This can
cause system-level memory pressure or OOMs, so it is better to account
the fsnotify kmem caches to the memcg of the listener.
However, the listener can be in a different memcg than the producer,
and these allocations happen in the context of the event producer. This
patch introduces a remote memcg charging API which the producer can use
to charge the allocations to the memcg of the listener.
There are seven fsnotify kmem caches. Among them, allocations from
dnotify_struct_cache, dnotify_mark_cache, fanotify_mark_cache and
inotify_inode_mark_cachep happen in the context of a syscall from the
listener, so SLAB_ACCOUNT is enough for these caches.
The objects from fsnotify_mark_connector_cachep are not accounted, as
they are small compared to the notification marks or events, and it is
unclear to whom to account the connector since it is shared by all
events attached to the inode.
The allocations from the event caches happen in the context of the
event producer. For such caches we need to remote-charge the
allocations to the listener's memcg; thus we save the memcg reference
in the fsnotify_group structure of the listener.
This patch has also moved the members of fsnotify_group, filling the
holes, to keep the structure the same size (at least for 64-bit builds)
even with the additional member.
[shakeelb@google.com: use GFP_KERNEL_ACCOUNT rather than open-coding it]
Link: http://lkml.kernel.org/r/20180702215439.211597-1-shakeelb@google.com
Link: http://lkml.kernel.org/r/20180627191250.209150-2-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Amir Goldstein <amir73il@gmail.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-18 06:46:39 +08:00
struct mem_cgroup;

/*
 * Each group must define these ops.  The fsnotify infrastructure will call
 * these operations for each relevant group.
 *
 * handle_event - main call for a group to handle an fs event
 * @group:	group to notify
 * @mask:	event type and flags
 * @data:	object that event happened on
 * @data_type:	type of object for fanotify_data_XXX() accessors
 * @dir:	optional directory associated with event -
 *		if @file_name is not NULL, this is the directory that
 *		@file_name is relative to
 * @file_name:	optional file name associated with event
 * @cookie:	inotify rename cookie
 * @iter_info:	array of marks from this group that are interested in the event
 *
 * handle_inode_event - simple variant of handle_event() for groups that only
 *		have inode marks and don't have ignore mask
 * @mark:	mark to notify
 * @mask:	event type and flags
 * @inode:	inode that event happened on
 * @dir:	optional directory associated with event -
 *		if @file_name is not NULL, this is the directory that
 *		@file_name is relative to.
 *		Either @inode or @dir must be non-NULL.
 * @file_name:	optional file name associated with event
 * @cookie:	inotify rename cookie
 *
 * free_group_priv - called when a group refcnt hits 0 to clean up the private union
fsnotify: change locking order
On Mon, Aug 01, 2011 at 04:38:22PM -0400, Eric Paris wrote:
>
> I finally built and tested a v3.0 kernel with these patches (I know I'm
> SOOOOOO far behind). Not what I hoped for:
>
> > [ 150.937798] VFS: Busy inodes after unmount of tmpfs. Self-destruct in 5 seconds. Have a nice day...
> > [ 150.945290] BUG: unable to handle kernel NULL pointer dereference at 0000000000000070
> > [ 150.946012] IP: [<ffffffff810ffd58>] shmem_free_inode+0x18/0x50
> > [ 150.946012] PGD 2bf9e067 PUD 2bf9f067 PMD 0
> > [ 150.946012] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
> > [ 150.946012] CPU 0
> > [ 150.946012] Modules linked in: nfs lockd fscache auth_rpcgss nfs_acl sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ext4 jbd2 crc16 joydev ata_piix i2c_piix4 pcspkr uinput ipv6 autofs4 usbhid [last unloaded: scsi_wait_scan]
> > [ 150.946012]
> > [ 150.946012] Pid: 2764, comm: syscall_thrash Not tainted 3.0.0+ #1 Red Hat KVM
> > [ 150.946012] RIP: 0010:[<ffffffff810ffd58>] [<ffffffff810ffd58>] shmem_free_inode+0x18/0x50
> > [ 150.946012] RSP: 0018:ffff88002c2e5df8 EFLAGS: 00010282
> > [ 150.946012] RAX: 000000004e370d9f RBX: 0000000000000000 RCX: ffff88003a029438
> > [ 150.946012] RDX: 0000000033630a5f RSI: 0000000000000000 RDI: ffff88003491c240
> > [ 150.946012] RBP: ffff88002c2e5e08 R08: 0000000000000000 R09: 0000000000000000
> > [ 150.946012] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88003a029428
> > [ 150.946012] R13: ffff88003a029428 R14: ffff88003a029428 R15: ffff88003499a610
> > [ 150.946012] FS: 00007f5a05420700(0000) GS:ffff88003f600000(0000) knlGS:0000000000000000
> > [ 150.946012] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> > [ 150.946012] CR2: 0000000000000070 CR3: 000000002a662000 CR4: 00000000000006f0
> > [ 150.946012] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [ 150.946012] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > [ 150.946012] Process syscall_thrash (pid: 2764, threadinfo ffff88002c2e4000, task ffff88002bfbc760)
> > [ 150.946012] Stack:
> > [ 150.946012] ffff88003a029438 ffff88003a029428 ffff88002c2e5e38 ffffffff81102f76
> > [ 150.946012] ffff88003a029438 ffff88003a029598 ffffffff8160f9c0 ffff88002c221250
> > [ 150.946012] ffff88002c2e5e68 ffffffff8115e9be ffff88002c2e5e68 ffff88003a029438
> > [ 150.946012] Call Trace:
> > [ 150.946012] [<ffffffff81102f76>] shmem_evict_inode+0x76/0x130
> > [ 150.946012] [<ffffffff8115e9be>] evict+0x7e/0x170
> > [ 150.946012] [<ffffffff8115ee40>] iput_final+0xd0/0x190
> > [ 150.946012] [<ffffffff8115ef33>] iput+0x33/0x40
> > [ 150.946012] [<ffffffff81180205>] fsnotify_destroy_mark_locked+0x145/0x160
> > [ 150.946012] [<ffffffff81180316>] fsnotify_destroy_mark+0x36/0x50
> > [ 150.946012] [<ffffffff81181937>] sys_inotify_rm_watch+0x77/0xd0
> > [ 150.946012] [<ffffffff815aca52>] system_call_fastpath+0x16/0x1b
> > [ 150.946012] Code: 67 4a 00 b8 e4 ff ff ff eb aa 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 83 ec 10 48 89 1c 24 4c 89 64 24 08 48 8b 9f 40 05 00 00
> > [ 150.946012] 83 7b 70 00 74 1c 4c 8d a3 80 00 00 00 4c 89 e7 e8 d2 5d 4a
> > [ 150.946012] RIP [<ffffffff810ffd58>] shmem_free_inode+0x18/0x50
> > [ 150.946012] RSP <ffff88002c2e5df8>
> > [ 150.946012] CR2: 0000000000000070
>
> Looks at aweful lot like the problem from:
> http://www.spinics.net/lists/linux-fsdevel/msg46101.html
>
I tried to reproduce this bug with your test program, but without success.
However, if I understand correctly, this occurs because we don't hold any locks
when we call iput() in mark_destroy(), right?
With the patches you tested, iput() is also not called within any lock, since the
group's mark_mutex is released temporarily before iput() is called; this mirrors
the original code's behaviour.
However since we now have a mutex as the biggest lock, we can do what you
suggested (http://www.spinics.net/lists/linux-fsdevel/msg46107.html) and
call iput() with the mutex held to avoid the race.
The patch below implements this. It uses nested locking to avoid deadlock in case
we do the final iput() on an inode which still holds marks and thus would take
the mutex again when calling fsnotify_inode_delete() in destroy_inode().
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Eric Paris <eparis@redhat.com>
2011-08-12 07:13:31 +08:00
 * freeing_mark - called when a mark is being destroyed for some reason.  The group
 *		MUST be holding a reference on each mark and that reference must be
 *		dropped in this function.  inotify uses this function to send
 *		userspace messages that marks have been removed.
 */
struct fsnotify_ops {
	int (*handle_event)(struct fsnotify_group *group, u32 mask,
			    const void *data, int data_type, struct inode *dir,
			    const struct qstr *file_name, u32 cookie,
			    struct fsnotify_iter_info *iter_info);
	int (*handle_inode_event)(struct fsnotify_mark *mark, u32 mask,
				  struct inode *inode, struct inode *dir,
				  const struct qstr *file_name, u32 cookie);
	void (*free_group_priv)(struct fsnotify_group *group);
	void (*freeing_mark)(struct fsnotify_mark *mark, struct fsnotify_group *group);
	void (*free_event)(struct fsnotify_group *group, struct fsnotify_event *event);
	/* called on final put+free to free memory */
	void (*free_mark)(struct fsnotify_mark *mark);
};
/*
 * All of the information about the original object we want to now send to
 * a group.  If you want to carry more info from the accessing task to the
 * listener this structure is where you need to be adding fields.
 */
struct fsnotify_event {
	struct list_head list;
};

/*
 * A group is a "thing" that wants to receive notification about filesystem
 * events.  The mask holds the subset of event types this group cares about.
 * refcnt on a group is up to the implementor and at any moment if it goes 0
 * everything will be cleaned up.
 */
struct fsnotify_group {
	const struct fsnotify_ops *ops;	/* how this group handles things */

	/*
	 * How the refcnt is used is up to each group.  When the refcnt hits 0
	 * fsnotify will clean up all of the resources associated with this group.
	 * As an example, the dnotify group will always have a refcnt=1 and that
	 * will never change.  Inotify, on the other hand, has a group per
	 * inotify_init() and the refcnt will hit 0 only when that fd has been
	 * closed.
	 */
	refcount_t refcnt;		/* things with interest in this group */

	/* needed to send notification to userspace */
	spinlock_t notification_lock;		/* protect the notification_list */
	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
	wait_queue_head_t notification_waitq;	/* read() on the notification file blocks on this waitq */
	unsigned int q_len;			/* events on the queue */
	unsigned int max_events;		/* maximum events allowed on the list */
	/*
	 * Valid fsnotify group priorities.  Events are sent in order from highest
	 * priority to lowest priority.  We default to the lowest priority.
	 */
#define FS_PRIO_0	0 /* normal notifiers, no permissions */
#define FS_PRIO_1	1 /* fanotify content based access control */
#define FS_PRIO_2	2 /* fanotify pre-content access */
	unsigned int priority;
	bool shutdown;		/* group is being shut down, don't queue more events */

#define FSNOTIFY_GROUP_USER	0x01 /* user allocated group */
#define FSNOTIFY_GROUP_DUPS	0x02 /* allow multiple marks per object */
#define FSNOTIFY_GROUP_NOFS	0x04 /* group lock is not direct reclaim safe */
	int flags;
	unsigned int owner_flags;	/* stored flags of mark_mutex owner */

	/* stores all fastpath marks assoc with this group so they can be cleaned on unregister */
	struct mutex mark_mutex;	/* protect marks_list */
producer. For such caches we will need to remote charge the allocations
to the listener's memcg. Thus we save the memcg reference in the
fsnotify_group structure of the listener.
This patch has also moved the members of fsnotify_group to keep the size
same, at least for 64 bit build, even with additional member by filling
the holes.
[shakeelb@google.com: use GFP_KERNEL_ACCOUNT rather than open-coding it]
Link: http://lkml.kernel.org/r/20180702215439.211597-1-shakeelb@google.com
Link: http://lkml.kernel.org/r/20180627191250.209150-2-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Amir Goldstein <amir73il@gmail.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-18 06:46:39 +08:00
	atomic_t user_waits;		/* Number of tasks waiting for user
					 * response */
	struct list_head marks_list;	/* all inode marks for this group */

	struct fasync_struct *fsn_fa;    /* async notification */

	struct fsnotify_event *overflow_event;	/* Event we queue when the
						 * notification list is too
						 * full */

	struct mem_cgroup *memcg;	/* memcg to charge allocations */

	/* groups can define private fields here or use the void *private */
	union {
		void *private;
#ifdef CONFIG_INOTIFY_USER
		struct inotify_group_private_data {
			spinlock_t	idr_lock;
			struct idr      idr;
			struct ucounts *ucounts;
		} inotify_data;
#endif
#ifdef CONFIG_FANOTIFY
		struct fanotify_group_private_data {
			/* Hash table of events for merge */
			struct hlist_head *merge_hash;
			/* allows a group to block waiting for a userspace response */
			struct list_head access_list;
			wait_queue_head_t access_waitq;
			int flags;		/* flags from fanotify_init() */
			int f_flags;		/* event_f_flags from fanotify_init() */
			struct ucounts *ucounts;
			mempool_t error_events_pool;
		} fanotify_data;
#endif /* CONFIG_FANOTIFY */
	};
};

/*
 * These helpers are used to prevent deadlock when reclaiming inodes with
 * evictable marks of the same group that is allocating a new mark.
 */
static inline void fsnotify_group_lock(struct fsnotify_group *group)
{
	mutex_lock(&group->mark_mutex);
	if (group->flags & FSNOTIFY_GROUP_NOFS)
		group->owner_flags = memalloc_nofs_save();
}

static inline void fsnotify_group_unlock(struct fsnotify_group *group)
{
	if (group->flags & FSNOTIFY_GROUP_NOFS)
		memalloc_nofs_restore(group->owner_flags);
	mutex_unlock(&group->mark_mutex);
}

static inline void fsnotify_group_assert_locked(struct fsnotify_group *group)
{
	WARN_ON_ONCE(!mutex_is_locked(&group->mark_mutex));
	if (group->flags & FSNOTIFY_GROUP_NOFS)
		WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
}

/* When calling fsnotify tell it if the data is a path or inode */
enum fsnotify_data_type {
	FSNOTIFY_EVENT_NONE,
	FSNOTIFY_EVENT_PATH,
	FSNOTIFY_EVENT_INODE,
	FSNOTIFY_EVENT_DENTRY,
	FSNOTIFY_EVENT_ERROR,
};

struct fs_error_report {
	int error;
	struct inode *inode;
	struct super_block *sb;
};

static inline struct inode *fsnotify_data_inode(const void *data, int data_type)
{
	switch (data_type) {
	case FSNOTIFY_EVENT_INODE:
		return (struct inode *)data;
	case FSNOTIFY_EVENT_DENTRY:
		return d_inode(data);
	case FSNOTIFY_EVENT_PATH:
		return d_inode(((const struct path *)data)->dentry);
	case FSNOTIFY_EVENT_ERROR:
		return ((struct fs_error_report *)data)->inode;
	default:
		return NULL;
	}
}

static inline struct dentry *fsnotify_data_dentry(const void *data, int data_type)
{
	switch (data_type) {
	case FSNOTIFY_EVENT_DENTRY:
		/* Non const is needed for dget() */
		return (struct dentry *)data;
	case FSNOTIFY_EVENT_PATH:
		return ((const struct path *)data)->dentry;
	default:
		return NULL;
	}
}

static inline const struct path *fsnotify_data_path(const void *data,
						    int data_type)
{
	switch (data_type) {
	case FSNOTIFY_EVENT_PATH:
		return data;
	default:
		return NULL;
	}
}

static inline struct super_block *fsnotify_data_sb(const void *data,
						   int data_type)
{
	switch (data_type) {
	case FSNOTIFY_EVENT_INODE:
		return ((struct inode *)data)->i_sb;
	case FSNOTIFY_EVENT_DENTRY:
		return ((struct dentry *)data)->d_sb;
	case FSNOTIFY_EVENT_PATH:
		return ((const struct path *)data)->dentry->d_sb;
	case FSNOTIFY_EVENT_ERROR:
		return ((struct fs_error_report *)data)->sb;
	default:
		return NULL;
	}
}

static inline struct fs_error_report *fsnotify_data_error_report(
							const void *data,
							int data_type)
{
	switch (data_type) {
	case FSNOTIFY_EVENT_ERROR:
		return (struct fs_error_report *)data;
	default:
		return NULL;
	}
}

/*
 * Index to merged marks iterator array that correlates to a type of watch.
 * The type of watched object can be deduced from the iterator type, but not
 * the other way around, because an event can match different watched objects
 * of the same object type.
 * For example, both parent and child are watching an object of type inode.
 */
enum fsnotify_iter_type {
	FSNOTIFY_ITER_TYPE_INODE,
	FSNOTIFY_ITER_TYPE_VFSMOUNT,
	FSNOTIFY_ITER_TYPE_SB,
	FSNOTIFY_ITER_TYPE_PARENT,
	FSNOTIFY_ITER_TYPE_INODE2,
	FSNOTIFY_ITER_TYPE_COUNT
};

/* The type of object that a mark is attached to */
enum fsnotify_obj_type {
	FSNOTIFY_OBJ_TYPE_ANY = -1,
	FSNOTIFY_OBJ_TYPE_INODE,
	FSNOTIFY_OBJ_TYPE_VFSMOUNT,
	FSNOTIFY_OBJ_TYPE_SB,
	FSNOTIFY_OBJ_TYPE_COUNT,
	FSNOTIFY_OBJ_TYPE_DETACHED = FSNOTIFY_OBJ_TYPE_COUNT
};

static inline bool fsnotify_valid_obj_type(unsigned int obj_type)
{
	return (obj_type < FSNOTIFY_OBJ_TYPE_COUNT);
}

struct fsnotify_iter_info {
	struct fsnotify_mark *marks[FSNOTIFY_ITER_TYPE_COUNT];
	struct fsnotify_group *current_group;
	unsigned int report_mask;
	int srcu_idx;
};

static inline bool fsnotify_iter_should_report_type(
		struct fsnotify_iter_info *iter_info, int iter_type)
{
	return (iter_info->report_mask & (1U << iter_type));
}

static inline void fsnotify_iter_set_report_type(
		struct fsnotify_iter_info *iter_info, int iter_type)
{
	iter_info->report_mask |= (1U << iter_type);
}

static inline struct fsnotify_mark *fsnotify_iter_mark(
		struct fsnotify_iter_info *iter_info, int iter_type)
{
	if (fsnotify_iter_should_report_type(iter_info, iter_type))
		return iter_info->marks[iter_type];
	return NULL;
}

static inline int fsnotify_iter_step(struct fsnotify_iter_info *iter, int type,
				     struct fsnotify_mark **markp)
{
	while (type < FSNOTIFY_ITER_TYPE_COUNT) {
		*markp = fsnotify_iter_mark(iter, type);
		if (*markp)
			break;
		type++;
	}
	return type;
}

#define FSNOTIFY_ITER_FUNCS(name, NAME) \
static inline struct fsnotify_mark *fsnotify_iter_##name##_mark( \
		struct fsnotify_iter_info *iter_info) \
{ \
	return fsnotify_iter_mark(iter_info, FSNOTIFY_ITER_TYPE_##NAME); \
}

FSNOTIFY_ITER_FUNCS(inode, INODE)
FSNOTIFY_ITER_FUNCS(parent, PARENT)
FSNOTIFY_ITER_FUNCS(vfsmount, VFSMOUNT)
FSNOTIFY_ITER_FUNCS(sb, SB)

#define fsnotify_foreach_iter_type(type) \
	for (type = 0; type < FSNOTIFY_ITER_TYPE_COUNT; type++)
#define fsnotify_foreach_iter_mark_type(iter, mark, type) \
	for (type = 0; \
	     type = fsnotify_iter_step(iter, type, &mark), \
	     type < FSNOTIFY_ITER_TYPE_COUNT; \
	     type++)

/*
 * fsnotify_connp_t is what we embed in objects which connector can be attached
 * to. fsnotify_connp_t * is how we refer from connector back to object.
 */
struct fsnotify_mark_connector;
typedef struct fsnotify_mark_connector __rcu *fsnotify_connp_t;

/*
 * Inode/vfsmount/sb point to this structure which tracks all marks attached to
 * the inode/vfsmount/sb. The reference to inode/vfsmount/sb is held by this
 * structure. We destroy this structure when there are no more marks attached
 * to it. The structure is protected by fsnotify_mark_srcu.
 */
struct fsnotify_mark_connector {
	spinlock_t lock;
	unsigned short type;	/* Type of object [lock] */
#define FSNOTIFY_CONN_FLAG_HAS_FSID	0x01
#define FSNOTIFY_CONN_FLAG_HAS_IREF	0x02
	unsigned short flags;	/* flags [lock] */
	__kernel_fsid_t fsid;	/* fsid of filesystem containing object */
	union {
		/* Object pointer [lock] */
		fsnotify_connp_t *obj;
		/* Used listing heads to free after srcu period expires */
		struct fsnotify_mark_connector *destroy_next;
	};
	struct hlist_head list;
};

/*
 * A mark is simply an object attached to an in core inode which allows an
 * fsnotify listener to indicate they are either no longer interested in events
 * of a type matching mask or only interested in those events.
 *
 * These are flushed when an inode is evicted from core and may be flushed
 * when the inode is modified (as seen by fsnotify_access). Some fsnotify
 * users (such as dnotify) will flush these when the open fd is closed and not
 * at inode eviction or modification.
 *
 * Text in brackets shows the lock(s) protecting modifications of a
 * particular entry. obj_lock means either inode->i_lock or
 * mnt->mnt_root->d_lock depending on the mark type.
 */
struct fsnotify_mark {
	/* Mask this mark is for [mark->lock, group->mark_mutex] */
	__u32 mask;
	/* We hold one for presence in g_list. Also one ref for each 'thing'
	 * in kernel that found and may be using this mark. */
	refcount_t refcnt;
	/* Group this mark is for. Set on mark creation, stable until last ref
	 * is dropped */
	struct fsnotify_group *group;
	/* List of marks by group->marks_list. Also reused for queueing
	 * mark into destroy_list when it's waiting for the end of SRCU period
	 * before it can be freed. [group->mark_mutex] */
	struct list_head g_list;
	/* Protects inode / mnt pointers, flags, masks */
	spinlock_t lock;
	/* List of marks for inode / vfsmount [connector->lock, mark ref] */
	struct hlist_node obj_list;
	/* Head of list of marks for an object [mark ref] */
	struct fsnotify_mark_connector *connector;
	/* Event types and flags to ignore [mark->lock, group->mark_mutex] */
	__u32 ignore_mask;
	/* General fsnotify mark flags */
#define FSNOTIFY_MARK_FLAG_ALIVE		0x0001
#define FSNOTIFY_MARK_FLAG_ATTACHED		0x0002
	/* inotify mark flags */
#define FSNOTIFY_MARK_FLAG_EXCL_UNLINK		0x0010
#define FSNOTIFY_MARK_FLAG_IN_ONESHOT		0x0020
	/* fanotify mark flags */
#define FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY	0x0100
#define FSNOTIFY_MARK_FLAG_NO_IREF		0x0200
#define FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS	0x0400
	unsigned int flags;		/* flags [mark->lock] */
};

#ifdef CONFIG_FSNOTIFY

/* called from the vfs helpers */

/* main fsnotify call to send events */
extern int fsnotify(__u32 mask, const void *data, int data_type,
		    struct inode *dir, const struct qstr *name,
		    struct inode *inode, u32 cookie);
extern int __fsnotify_parent(struct dentry *dentry, __u32 mask, const void *data,
			   int data_type);
extern void __fsnotify_inode_delete(struct inode *inode);
extern void __fsnotify_vfsmount_delete(struct vfsmount *mnt);
extern void fsnotify_sb_delete(struct super_block *sb);
extern u32 fsnotify_get_cookie(void);

static inline __u32 fsnotify_parent_needed_mask(__u32 mask)
{
	/* FS_EVENT_ON_CHILD is set on marks that want parent/name info */
	if (!(mask & FS_EVENT_ON_CHILD))
		return 0;
	/*
	 * This object might be watched by a mark that cares about parent/name
	 * info, does it care about the specific set of events that can be
	 * reported with parent/name info?
	 */
	return mask & FS_EVENTS_POSS_TO_PARENT;
}

static inline int fsnotify_inode_watches_children(struct inode *inode)
{
	/* FS_EVENT_ON_CHILD is set if the inode may care */
	if (!(inode->i_fsnotify_mask & FS_EVENT_ON_CHILD))
		return 0;
	/* this inode might care about child events, does it care about the
	 * specific set of events that can happen on a child? */
	return inode->i_fsnotify_mask & FS_EVENTS_POSS_ON_CHILD;
}

/*
 * Update the dentry with a flag indicating the interest of its parent to
 * receive filesystem events when those events happen to this dentry->d_inode.
 */
static inline void fsnotify_update_flags(struct dentry *dentry)
{
	assert_spin_locked(&dentry->d_lock);

	/*
	 * Serialisation of setting PARENT_WATCHED on the dentries is provided
	 * by d_lock. If inotify_inode_watched changes after we have taken
	 * d_lock, the following __fsnotify_update_child_dentry_flags call will
	 * find our entry, so it will spin until we complete here, and update
	 * us with the new state.
	 */
	if (fsnotify_inode_watches_children(dentry->d_parent->d_inode))
		dentry->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
	else
		dentry->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
}

/* called from fsnotify listeners, such as fanotify or dnotify */

/* create a new group */
extern struct fsnotify_group *fsnotify_alloc_group(
				const struct fsnotify_ops *ops,
				int flags);
/* get reference to a group */
extern void fsnotify_get_group(struct fsnotify_group *group);
/* drop reference on a group from fsnotify_alloc_group */
extern void fsnotify_put_group(struct fsnotify_group *group);
/* group destruction begins, stop queuing new events */
extern void fsnotify_group_stop_queueing(struct fsnotify_group *group);
/* destroy group */
extern void fsnotify_destroy_group(struct fsnotify_group *group);
/* fasync handler function */
extern int fsnotify_fasync(int fd, struct file *file, int on);
/* Free event from memory */
extern void fsnotify_destroy_event(struct fsnotify_group *group,
				   struct fsnotify_event *event);
/* attach the event to the group notification queue */
extern int fsnotify_insert_event(struct fsnotify_group *group,
				 struct fsnotify_event *event,
				 int (*merge)(struct fsnotify_group *,
					      struct fsnotify_event *),
				 void (*insert)(struct fsnotify_group *,
						struct fsnotify_event *));

static inline int fsnotify_add_event(struct fsnotify_group *group,
				     struct fsnotify_event *event,
				     int (*merge)(struct fsnotify_group *,
						  struct fsnotify_event *))
{
	return fsnotify_insert_event(group, event, merge, NULL);
}

/* Queue overflow event to a notification group */
static inline void fsnotify_queue_overflow(struct fsnotify_group *group)
{
	fsnotify_add_event(group, group->overflow_event, NULL);
}

static inline bool fsnotify_is_overflow_event(u32 mask)
{
	return mask & FS_Q_OVERFLOW;
}

static inline bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
{
	assert_spin_locked(&group->notification_lock);

	return list_empty(&group->notification_list);
}

/* return, but do not dequeue the first event on the notification queue */
extern struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group);
/* return AND dequeue the first event on the notification queue */
extern struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group);
/* Remove event queued in the notification list */
extern void fsnotify_remove_queued_event(struct fsnotify_group *group,
					 struct fsnotify_event *event);

/* functions used to manipulate the marks attached to inodes */

/*
 * Canonical "ignore mask" including event flags.
 *
 * Note the subtle semantic difference from the legacy ->ignored_mask.
 * ->ignored_mask traditionally only meant which events should be ignored,
 * while ->ignore_mask also includes flags regarding the type of objects on
 * which events should be ignored.
 */
static inline __u32 fsnotify_ignore_mask(struct fsnotify_mark *mark)
{
	__u32 ignore_mask = mark->ignore_mask;

	/* The event flags in ignore mask take effect */
	if (mark->flags & FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS)
		return ignore_mask;

	/*
	 * Legacy behavior:
	 * - Always ignore events on dir
	 * - Ignore events on child if parent is watching children
	 */
	ignore_mask |= FS_ISDIR;
	ignore_mask &= ~FS_EVENT_ON_CHILD;
	ignore_mask |= mark->mask & FS_EVENT_ON_CHILD;

	return ignore_mask;
}

/* Legacy ignored_mask - only event types to ignore */
static inline __u32 fsnotify_ignored_events(struct fsnotify_mark *mark)
{
	return mark->ignore_mask & ALL_FSNOTIFY_EVENTS;
}

/*
 * Check if mask (or ignore mask) should be applied depending if victim is a
 * directory and whether it is reported to a watching parent.
 */
static inline bool fsnotify_mask_applicable(__u32 mask, bool is_dir,
					    int iter_type)
{
	/* Should mask be applied to a directory? */
	if (is_dir && !(mask & FS_ISDIR))
		return false;

	/* Should mask be applied to a child? */
	if (iter_type == FSNOTIFY_ITER_TYPE_PARENT &&
	    !(mask & FS_EVENT_ON_CHILD))
		return false;

	return true;
}

/*
 * Effective ignore mask taking into account if event victim is a
 * directory and whether it is reported to a watching parent.
 */
static inline __u32 fsnotify_effective_ignore_mask(struct fsnotify_mark *mark,
						   bool is_dir, int iter_type)
{
	__u32 ignore_mask = fsnotify_ignored_events(mark);

	if (!ignore_mask)
		return 0;

	/* For non-dir and non-child, no need to consult the event flags */
	if (!is_dir && iter_type != FSNOTIFY_ITER_TYPE_PARENT)
		return ignore_mask;

	ignore_mask = fsnotify_ignore_mask(mark);
	if (!fsnotify_mask_applicable(ignore_mask, is_dir, iter_type))
		return 0;

	return ignore_mask & ALL_FSNOTIFY_EVENTS;
}

/* Get mask for calculating object interest taking ignore mask into account */
static inline __u32 fsnotify_calc_mask(struct fsnotify_mark *mark)
{
	__u32 mask = mark->mask;

	if (!fsnotify_ignored_events(mark))
		return mask;

	/* Interest in FS_MODIFY may be needed for clearing ignore mask */
	if (!(mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY))
		mask |= FS_MODIFY;

	/*
	 * If mark is interested in ignoring events on children, the object must
	 * show interest in those events for fsnotify_parent() to notice it.
	 */
	return mask | mark->ignore_mask;
}

/* Get mask of events for a list of marks */
extern __u32 fsnotify_conn_mask(struct fsnotify_mark_connector *conn);
/* Calculate mask of events for a list of marks */
extern void fsnotify_recalc_mask(struct fsnotify_mark_connector *conn);
extern void fsnotify_init_mark(struct fsnotify_mark *mark,
			       struct fsnotify_group *group);
/* Find mark belonging to given group in the list of marks */
extern struct fsnotify_mark *fsnotify_find_mark(fsnotify_connp_t *connp,
						struct fsnotify_group *group);
/* Get cached fsid of filesystem containing object */
extern int fsnotify_get_conn_fsid(const struct fsnotify_mark_connector *conn,
				  __kernel_fsid_t *fsid);
/* attach the mark to the object */
extern int fsnotify_add_mark(struct fsnotify_mark *mark,
			     fsnotify_connp_t *connp, unsigned int obj_type,
			     int add_flags, __kernel_fsid_t *fsid);
extern int fsnotify_add_mark_locked(struct fsnotify_mark *mark,
				    fsnotify_connp_t *connp,
				    unsigned int obj_type, int add_flags,
				    __kernel_fsid_t *fsid);
|
|
|
|
|
2018-04-21 07:10:55 +08:00
|
|
|
/* attach the mark to the inode */
|
|
|
|
static inline int fsnotify_add_inode_mark(struct fsnotify_mark *mark,
|
|
|
|
struct inode *inode,
|
2022-04-22 20:03:16 +08:00
|
|
|
int add_flags)
|
2018-04-21 07:10:55 +08:00
|
|
|
{
|
2018-06-23 22:54:48 +08:00
|
|
|
return fsnotify_add_mark(mark, &inode->i_fsnotify_marks,
|
2022-04-22 20:03:16 +08:00
|
|
|
FSNOTIFY_OBJ_TYPE_INODE, add_flags, NULL);
|
2018-04-21 07:10:55 +08:00
|
|
|
}
|
|
|
|
static inline int fsnotify_add_inode_mark_locked(struct fsnotify_mark *mark,
|
|
|
|
struct inode *inode,
|
2022-04-22 20:03:16 +08:00
|
|
|
int add_flags)
|
2018-04-21 07:10:55 +08:00
|
|
|
{
|
2018-06-23 22:54:48 +08:00
|
|
|
return fsnotify_add_mark_locked(mark, &inode->i_fsnotify_marks,
|
2022-04-22 20:03:16 +08:00
|
|
|
FSNOTIFY_OBJ_TYPE_INODE, add_flags,
|
2019-01-11 01:04:37 +08:00
|
|
|
NULL);
|
2018-04-21 07:10:55 +08:00
|
|
|
}
|
2019-01-11 01:04:37 +08:00
|
|
|
|
2011-06-14 23:29:51 +08:00
|
|
|
/* given a group and a mark, flag mark to be freed when all references are dropped */
extern void fsnotify_destroy_mark(struct fsnotify_mark *mark,
				  struct fsnotify_group *group);
/* detach mark from inode / mount list, group list, drop inode reference */
extern void fsnotify_detach_mark(struct fsnotify_mark *mark);
/* free mark */
extern void fsnotify_free_mark(struct fsnotify_mark *mark);
/* Wait until all marks queued for destruction are destroyed */
extern void fsnotify_wait_marks_destroyed(void);
/* Clear all of the marks of a group attached to a given object type */
extern void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
					  unsigned int obj_type);
/* run all the marks in a group, and clear all of the vfsmount marks */
static inline void fsnotify_clear_vfsmount_marks_by_group(struct fsnotify_group *group)
{
	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_VFSMOUNT);
}
/* run all the marks in a group, and clear all of the inode marks */
static inline void fsnotify_clear_inode_marks_by_group(struct fsnotify_group *group)
{
	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_INODE);
}
/* run all the marks in a group, and clear all of the sb marks */
static inline void fsnotify_clear_sb_marks_by_group(struct fsnotify_group *group)
{
	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_SB);
}

extern void fsnotify_get_mark(struct fsnotify_mark *mark);
extern void fsnotify_put_mark(struct fsnotify_mark *mark);
extern void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info);
extern bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info);

static inline void fsnotify_init_event(struct fsnotify_event *event)
{
	INIT_LIST_HEAD(&event->list);
}

#else

static inline int fsnotify(__u32 mask, const void *data, int data_type,
			   struct inode *dir, const struct qstr *name,
			   struct inode *inode, u32 cookie)
{
	return 0;
}

static inline int __fsnotify_parent(struct dentry *dentry, __u32 mask,
				    const void *data, int data_type)
{
	return 0;
}

static inline void __fsnotify_inode_delete(struct inode *inode)
{}

static inline void __fsnotify_vfsmount_delete(struct vfsmount *mnt)
{}

static inline void fsnotify_sb_delete(struct super_block *sb)
{}

static inline void fsnotify_update_flags(struct dentry *dentry)
{}

static inline u32 fsnotify_get_cookie(void)
{
	return 0;
}

static inline void fsnotify_unmount_inodes(struct super_block *sb)
{}

#endif	/* CONFIG_FSNOTIFY */

#endif	/* __KERNEL__ */

#endif	/* __LINUX_FSNOTIFY_BACKEND_H */