Commit Graph

Al Viro e4bd1c1a95 namei: move putname() call into filename_lookup()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:40 -04:00
Al Viro 625b6d1054 namei: pass the struct path to store the result down into path_lookupat()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:39 -04:00
Al Viro 18d8c86011 namei: uninline set_root{,_rcu}()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:39 -04:00
Al Viro aed434ada6 namei: be careful with mountpoint crossings in follow_dotdot_rcu()
Otherwise we risk a hard error where a non-lazy restart would be the right
thing to do; it's a very narrow race with mount --move and most of the time
it ends up being completely harmless, but it's possible to construct a case
where we get a bogus hard error instead of falling back to a non-lazy walk...

For one thing, when crossing _into_ an overmount of the parent we need to
check for mount_lock bumps even when we get NULL from __lookup_mnt().

For another, and less exotically, we need to make sure that the data fetched
in follow_up_rcu() is consistent.  ->mnt_mountpoint is pinned for as long as
it is a mountpoint, but we need to check mount_lock after fetching it to
verify that.
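
In sketch form, the two added checks look roughly like this (a minimal
approximation of this era's fs/namei.c, not the verbatim patch):

    /* crossing up through ->mnt_mountpoint: fetch, then validate */
    mountpoint = mnt->mnt_mountpoint;
    inode = mountpoint->d_inode;
    seq = read_seqcount_begin(&mountpoint->d_seq);
    if (unlikely(read_seqretry(&mount_lock, nd->m_seq)))
            return -ECHILD;         /* restart as a non-lazy walk */

    /* crossing into an overmount of the parent: even a NULL result
     * from __lookup_mnt() is only trustworthy if mount_lock is
     * still unchanged */
    mounted = __lookup_mnt(nd->path.mnt, nd->path.dentry);
    if (unlikely(read_seqretry(&mount_lock, nd->m_seq)))
            return -ECHILD;
    if (!mounted)
            break;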

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:38 -04:00
Al Viro 89076bc319 get rid of assorted nameidata-related debris
pointless forward declarations, stale comments

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:37 -04:00
Al Viro 5a8d87e8ed namei: unlazy_walk() doesn't need to mess with current->fs anymore
now that we have ->root_seq, legitimize_path(&nd->root, nd->root_seq)
will do just fine...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:36 -04:00
Al Viro 8f47a0167c namei: handle absolute symlinks without dropping out of RCU mode
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:10:22 -04:00
Al Viro 8c1b456689 enable passing fast relative symlinks without dropping out of RCU mode
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:06:28 -04:00
NeilBrown 8fa9dd2466 VFS/namei: make the use of touch_atime() in get_link() RCU-safe.
touch_atime is not RCU-safe, and so cannot be called on an RCU walk.
However, in situations where RCU-walk makes a difference, the symlink
will likely be accessed much more often than it is useful to update
its atime.

So split the test "does the atime actually need to be updated?" out
into atime_needs_update(), and have get_link() unlazy if it finds that
it will need to do that update.
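
In sketch form (helper names from this patch; the exact call site in
get_link() is approximated):

    /* stay in RCU mode unless the atime actually needs updating */
    if ((nd->flags & LOOKUP_RCU) &&
        atime_needs_update(&last->link, inode)) {
            if (unlikely(unlazy_walk(nd, NULL)))
                    return ERR_PTR(-ECHILD);
    }
    touch_atime(&last->link);   /* safe: we unlazied if it mattered */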

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:06:27 -04:00
Al Viro bc40aee053 namei: don't unlazy until get_link()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:06:27 -04:00
Al Viro 7973387a2f namei: make unlazy_walk and terminate_walk handle nd->stack, add unlazy_link
We are almost done - the primitives for leaving RCU mode are aware of
nd->stack now, and a new primitive for going to non-RCU mode when we have
a symlink on our hands has been added.

The thing we are heavily relying upon is that *any* unlazy failure will be
shortly followed by terminate_walk(), with no access to nameidata in between.
So it's enough to leave the things in a state terminate_walk() would cope with.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-15 01:06:01 -04:00
Theodore Ts'o b9576fc362 ext4: fix an ext3 collapse range regression in xfstests
The xfstests test suite assumes that an attempt to collapse range on
the range (0, 1) will return EOPNOTSUPP if the file system does not
support collapse range.  Commit 280227a75b56: "ext4: move check under
lock scope to close a race" broke this, and this caused xfstests to
fail when testing file systems that did not have the extents feature
enabled.

Reported-by: Eric Whitney <enwlinux@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-15 00:24:10 -04:00
Vladimir Davydov 499611ed45 kernfs: do not account ino_ida allocations to memcg
root->ino_ida is used for kernfs inode number allocations. Since IDA has
a layered structure, different IDs can reside on the same layer, which
is currently accounted to some memory cgroup. The problem is that each
kmem cache of a memory cgroup has its own directory on sysfs (under
/sys/fs/kernel/<cache-name>/cgroup). If the inode number of such a
directory or any file in it gets allocated from a layer accounted to the
cgroup which the cache is created for, the cgroup will get pinned for
good, because one has to free all kmem allocations accounted to a cgroup
in order to release it and destroy all its kmem caches.  Hence we must
not account layers of ino_ida to any memory cgroup.

Since per-net init operations may create new sysfs entries directly
(e.g. lo device) or indirectly (nf_conntrack creates a new kmem cache
per each namespace, which, in turn, creates new sysfs entries), an easy
way to reproduce this issue is by creating network namespace(s) from
inside a kmem-active memory cgroup.
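
The shape of the fix, sketched (kernels of this era gained a
__GFP_NOACCOUNT flag for exactly this kind of opt-out):

    /* keep the inode-number IDA out of kmemcg accounting so that a
     * cgroup's own sysfs directories can never pin that cgroup */
    ret = ida_simple_get(&root->ino_ida, 1, 0,
                         GFP_KERNEL | __GFP_NOACCOUNT);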

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: <stable@vger.kernel.org>	[4.0.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14 17:55:51 -07:00
Darrick J. Wong e531d0bceb jbd2: fix r_count overflows leading to buffer overflow in journal recovery
The journal revoke block recovery code does not check r_count for
sanity, which means that an evil value of r_count could result in
the kernel reading off the end of the revoke table and into whatever
garbage lies beyond.  This could crash the kernel, so fix that.

However, in testing this fix, I discovered that the code to write
out the revoke tables also was not correctly checking whether the
block was full -- the current offset check is fine so long as the
revoke table space size is a multiple of the record size, but that
is not true when either of journal_csum_v[23] is set.
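
The recovery-side check, sketched (field and limit names approximate):

    rcount = be32_to_cpu(header->r_count);
    if (rcount > journal->j_blocksize - csum_size)
            return -EINVAL;         /* revoke block is corrupted */
    max = rcount;                   /* bounded scan from here on */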

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: stable@vger.kernel.org
2015-05-14 19:11:50 -04:00
Eryu Guan 2f974865ff ext4: check for zero length extent explicitly
The following commit introduced a bug in the check for zero-length
extents:

5946d08 ext4: check for overlapping extents in ext4_valid_extent_entries()

A zero-length extent could pass the check if lblock is zero.

Add the explicit check for zero length back.
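
A sketch of the fixed check in ext4_valid_extent() (the wraparound is
why lblock == 0 slipped through: with len == 0, last = lblock + len - 1
underflows, so the existing lblock > last test never triggers):

    ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
    int len = ext4_ext_get_actual_len(ext);
    ext4_lblk_t last = lblock + len - 1;

    /* reject zero-length extents explicitly */
    if (len == 0 || lblock > last)
            return 0;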

Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
2015-05-14 19:00:45 -04:00
Lukas Czerner 9d50659406 ext4: fix NULL pointer dereference when journal restart fails
Currently when journal restart fails, we'll have the h_transaction of
the handle set to NULL to indicate that the handle has been effectively
aborted.  We handle this situation quietly in jbd2_journal_stop() and just
free the handle and exit, because everything else had already been done
before we attempted (and failed) to restart the journal.

Unfortunately there are a number of problems with that approach
introduced with commit

41a5b91319 "jbd2: invalidate handle if jbd2_journal_restart()
fails"

First of all in ext4 jbd2_journal_stop() will be called through
__ext4_journal_stop() where we would try to get a hold of the superblock
by dereferencing h_transaction which in this case would lead to NULL
pointer dereference and crash.

In addition we're going to free the handle regardless of the refcount,
which is bad as well because others up the call chain may still
reference the handle, so we might end up referencing already freed
memory.

Moreover it's expected that we'll see aborted handles as well as
detached handles in some of the journalling functions as the error
propagates up the stack, so it's unnecessary to call WARN_ON every
time we get a detached handle.

And finally we might leak some memory by forgetting to free a reserved
handle in jbd2_journal_stop() in the case where the handle was detached
from the transaction (h_transaction is NULL).

Fix the NULL pointer dereference in __ext4_journal_stop() by just
calling jbd2_journal_stop() quietly as suggested by Jan Kara. Also fix
the potential memory leak in jbd2_journal_stop() and use proper
handle refcounting before we attempt to free it to avoid use-after-free
issues.

And finally, remove all WARN_ON(!transaction) checks from the code so
that we do not get random traces when something goes wrong, because when
journal restart fails we will end up in some of those functions.
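
The detached-handle path in jbd2_journal_stop(), sketched (shape
approximate):

    if (!transaction) {
            /* handle already detached: honour the refcount... */
            if (--handle->h_ref > 0)
                    return err;
            /* ...and don't leak a reserved handle either */
            if (handle->h_rsv_handle)
                    jbd2_free_handle(handle->h_rsv_handle);
            goto free_and_exit;
    }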

Cc: stable@vger.kernel.org
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
2015-05-14 18:55:18 -04:00
Theodore Ts'o 92c8263910 ext4: remove unused function prototype from ext4.h
The ext4_extent_tree_init() function hasn't been in the ext4 code for
a long time, surviving only as an unused function prototype in ext4.h.

Google-Bug-Id: 4530137
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14 18:43:36 -04:00
Theodore Ts'o 1b46617b8d ext4: don't save the error information if the block device is read-only
Google-Bug-Id: 20939131
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14 18:37:30 -04:00
Theodore Ts'o 8f4d855839 ext4: fix lazytime optimization
We had a fencepost error in the lazytime optimization which meant that
a timestamp could get written to the wrong inode.

Cc: stable@vger.kernel.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14 18:19:01 -04:00
Miklos Szeredi d377c5eb54 ovl: don't remove non-empty opaque directory
When removing an opaque directory we can't just call rmdir() to check for
emptiness, because the directory will need to be replaced with a whiteout.
The replacement is done with RENAME_EXCHANGE, which doesn't check
emptiness.

The solution is just to check emptiness by reading the directory.  In the
future we could add a new rename flag to check for emptiness even for
RENAME_EXCHANGE to optimize this case.
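
In sketch form (helper name as used by overlayfs's readdir code; the
exact call site is approximated):

    /* prove the merged directory is empty by reading it before the
     * RENAME_EXCHANGE with the whiteout, which won't check for us */
    err = ovl_check_empty_dir(dentry, &list);
    if (err)
            goto out;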

Reported-by: Vincent Batts <vbatts@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Jordi Pujol Palomer <jordipujolp@gmail.com>
Fixes: 263b4a0fee ("ovl: dont replace opaque dir")
Cc: <stable@vger.kernel.org> # v4.0+
2015-05-14 10:04:44 +02:00
Jeff Layton feaff8e5b2 nfs: take extra reference to fl->fl_file when running a setlk
We had a report of a crash while stress testing the NFS client:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000150
    IP: [<ffffffff8127b698>] locks_get_lock_context+0x8/0x90
    PGD 0
    Oops: 0000 [] SMP
    Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ebtable_nat ebtable_filter ebtable_broute bridge stp llc ebtables ip6table_security ip6table_mangle ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_raw ip6table_filter ip6_tables iptable_security iptable_mangle iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_raw coretemp crct10dif_pclmul ppdev crc32_pclmul crc32c_intel ghash_clmulni_intel vmw_balloon serio_raw vmw_vmci i2c_piix4 shpchp parport_pc acpi_cpufreq parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc vmwgfx drm_kms_helper ttm drm mptspi scsi_transport_spi mptscsih mptbase e1000 ata_generic pata_acpi
    CPU: 1 PID: 399 Comm: kworker/1:1H Not tainted 4.1.0-0.rc1.git0.1.fc23.x86_64 
    Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/30/2013
    Workqueue: rpciod rpc_async_schedule [sunrpc]
    task: ffff880036aea7c0 ti: ffff8800791f4000 task.ti: ffff8800791f4000
    RIP: 0010:[<ffffffff8127b698>]  [<ffffffff8127b698>] locks_get_lock_context+0x8/0x90
    RSP: 0018:ffff8800791f7c00  EFLAGS: 00010293
    RAX: ffff8800791f7c40 RBX: ffff88001f2ad8c0 RCX: ffffe8ffffc80305
    RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
    RBP: ffff8800791f7c88 R08: ffff88007fc971d8 R09: 279656d600000000
    R10: 0000034a01000000 R11: 279656d600000000 R12: ffff88001f2ad918
    R13: ffff88001f2ad8c0 R14: 0000000000000000 R15: 0000000100e73040
    FS:  0000000000000000(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000150 CR3: 0000000001c0b000 CR4: 00000000000407e0
    Stack:
     ffffffff8127c5b0 ffff8800791f7c18 ffffffffa0171e29 ffff8800791f7c58
     ffffffffa0171ef8 ffff8800791f7c78 0000000000000246 ffff88001ea0ba00
     ffff8800791f7c40 ffff8800791f7c40 00000000ff5d86a3 ffff8800791f7ca8
    Call Trace:
     [<ffffffff8127c5b0>] ? __posix_lock_file+0x40/0x760
     [<ffffffffa0171e29>] ? rpc_make_runnable+0x99/0xa0 [sunrpc]
     [<ffffffffa0171ef8>] ? rpc_wake_up_task_queue_locked.part.35+0xc8/0x250 [sunrpc]
     [<ffffffff8127cd3a>] posix_lock_file_wait+0x4a/0x120
     [<ffffffffa03e4f12>] ? nfs41_wake_and_assign_slot+0x32/0x40 [nfsv4]
     [<ffffffffa03bf108>] ? nfs41_sequence_done+0xd8/0x2d0 [nfsv4]
     [<ffffffffa03c116d>] do_vfs_lock+0x2d/0x30 [nfsv4]
     [<ffffffffa03c251d>] nfs4_lock_done+0x1ad/0x210 [nfsv4]
     [<ffffffffa0171a30>] ? __rpc_sleep_on_priority+0x390/0x390 [sunrpc]
     [<ffffffffa0171a30>] ? __rpc_sleep_on_priority+0x390/0x390 [sunrpc]
     [<ffffffffa0171a5c>] rpc_exit_task+0x2c/0xa0 [sunrpc]
     [<ffffffffa0167450>] ? call_refreshresult+0x150/0x150 [sunrpc]
     [<ffffffffa0172640>] __rpc_execute+0x90/0x460 [sunrpc]
     [<ffffffffa0172a25>] rpc_async_schedule+0x15/0x20 [sunrpc]
     [<ffffffff810baa1b>] process_one_work+0x1bb/0x410
     [<ffffffff810bacc3>] worker_thread+0x53/0x480
     [<ffffffff810bac70>] ? process_one_work+0x410/0x410
     [<ffffffff810bac70>] ? process_one_work+0x410/0x410
     [<ffffffff810c0b38>] kthread+0xd8/0xf0
     [<ffffffff810c0a60>] ? kthread_worker_fn+0x180/0x180
     [<ffffffff817a1aa2>] ret_from_fork+0x42/0x70
     [<ffffffff810c0a60>] ? kthread_worker_fn+0x180/0x180

Jean says:

"Running locktests with a large number of iterations resulted in a
 client crash.  The test run took a while and hasn't finished after close
 to 2 hours. The crash happened right after I gave up and killed the test
 (after 107m) with Ctrl+C."

The crash happened because a NULL inode pointer got passed into
locks_get_lock_context. The call chain indicates that file_inode(filp)
returned NULL, which means that f_inode was NULL. Since that's zeroed
out in __fput, that suggests that this filp pointer outlived the last
reference.

Looking at the code, that seems possible. We copy the struct file_lock
that's passed in, but if the task is signalled at an inopportune time we
can end up trying to use that file_lock in rpciod context after the process
that requested it has already returned (and possibly put its filp
reference).

Fix this by taking an extra reference to the filp when we allocate the
lock info, and put it in nfs4_lock_release.
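
In sketch form (call sites approximate):

    /* when allocating the lock data, pin the file for rpciod... */
    get_file(fl->fl_file);

    /* ...and in nfs4_lock_release(), drop it when the RPC is done */
    fput(data->fl.fl_file);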

Reported-by: Jean Spector <jean@primarydata.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-05-13 14:56:06 -04:00
Chuck Lever 6b19687563 nfs: stat(2) fails during cthon04 basic test5 on NFSv4.0
When running the Connectathon basic tests against a Solaris NFS
server over NFSv4.0, test5 reports that stat(2) returns a file size
of zero instead of 1MB.

On success, nfs_commit_inode() can return a positive result; see
other call sites such as nfs_file_fsync_commit() and
nfs_commit_unstable_pages().

The call site recently added in nfs_wb_all() does not prevent that
positive return value from leaking to its callers. If it leaks
through nfs_sync_inode() back to nfs_getattr(), that causes stat(2)
to return a positive return value to user space while also not
filling in the passed-in struct stat.

Additional clean up: the new logic in nfs_wb_all() is rewritten in
bfields-normal form.
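
Sketched, the rewritten tail of nfs_wb_all() (shape approximate):

    ret = filemap_write_and_wait(inode->i_mapping);
    if (ret)
            goto out;
    ret = nfs_commit_inode(inode, FLUSH_SYNC);
    if (ret < 0)
            goto out;
    pnfs_sync_inode(inode, true);
    ret = 0;                /* never leak a positive commit result */
out:
    return ret;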

Fixes: 5bb89b4702 ("NFSv4.1/pnfs: Separate out metadata . . .")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-05-13 14:56:03 -04:00
Helge Deller d045c77c1a parisc,metag: Fix crashes due to stack randomization on stack-grows-upwards architectures
On architectures where the stack grows upwards (CONFIG_STACK_GROWSUP=y,
currently parisc and metag only) stack randomization sometimes leads to crashes
when the stack ulimit is set to lower values than STACK_RND_MASK (which is 8 MB
by default if not defined in arch-specific headers).

The problem is that when the stack vm_area_struct is set up in fs/exec.c,
the additional space needed for the stack randomization (as defined by the
value of STACK_RND_MASK) was not yet taken into account; as such, when the
stack randomization code added a random offset to the stack start, the stack
effectively became smaller than what the user defined via
rlimit_max(RLIMIT_STACK), which then sometimes leads to out-of-stack
situations and crashes.

This patch fixes it by adding the maximum possible amount of memory (based on
STACK_RND_MASK) which theoretically could be added by the stack randomization
code to the initial stack size. That way, the user-defined stack size is always
guaranteed to be at minimum what is defined via rlimit_max(RLIMIT_STACK).

This bug is currently not visible on the metag architecture, because on metag
STACK_RND_MASK is defined to 0 which effectively disables stack randomization.

The changes to fs/exec.c are inside an "#ifdef CONFIG_STACK_GROWSUP"
section, so they do not affect platforms other than those where the
stack grows upwards (parisc and metag).
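
The core of the change, sketched (fs/exec.c, setup_arg_pages(); context
approximate):

    /* inside the CONFIG_STACK_GROWSUP branch of setup_arg_pages() */
    stack_base = rlimit_max(RLIMIT_STACK);
    if (stack_base > STACK_SIZE_MAX)
            stack_base = STACK_SIZE_MAX;

    /* add space for stack randomization so a randomized start can
     * never eat into the user-visible stack size */
    stack_base += (STACK_RND_MASK << PAGE_SHIFT);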

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org # v3.16+
2015-05-12 22:03:44 +02:00
Linus Torvalds 4cfceaf0c0 Merge branch 'for-4.1' of git://linux-nfs.org/~bfields/linux
Pull nfsd bugfixes from Bruce Fields:
 "Mainly pnfs fixes (and for problems with generic callback code made
  more obvious by pnfs)"

* 'for-4.1' of git://linux-nfs.org/~bfields/linux:
  nfsd: skip CB_NULL probes for 4.1 or later
  nfsd: fix callback restarts
  nfsd: split transport vs operation errors for callbacks
  svcrpc: fix potential GSSX_ACCEPT_SEC_CONTEXT decoding failures
  nfsd: fix pNFS return on close semantics
  nfsd: fix the check for confirmed openowner in nfs4_preprocess_stateid_op
  nfsd/blocklayout: pretend we can send deviceid notifications
2015-05-11 14:42:52 -07:00
Filipe Manana 062c19e9dd Btrfs: fix race when reusing stale extent buffers that leads to BUG_ON
There's a race between releasing extent buffers that are flagged as stale
and recycling them that makes us hit the following BUG_ON at
btrfs_release_extent_buffer_page:

    BUG_ON(extent_buffer_under_io(eb))

The BUG_ON is triggered because the extent buffer has the flag
EXTENT_BUFFER_DIRTY set as a consequence of having been reused and made
dirty by another concurrent task.

Here follows a sequence of steps that leads to the BUG_ON.

      CPU 0                                                    CPU 1                                                CPU 2

path->nodes[0] == eb X
X->refs == 2 (1 for the tree, 1 for the path)
btrfs_header_generation(X) == current trans id
flag EXTENT_BUFFER_DIRTY set on X

btrfs_release_path(path)
    unlocks X

                                                      reads eb X
                                                         X->refs incremented to 3
                                                      locks eb X
                                                      btrfs_del_items(X)
                                                         X becomes empty
                                                         clean_tree_block(X)
                                                             clear EXTENT_BUFFER_DIRTY from X
                                                         btrfs_del_leaf(X)
                                                             unlocks X
                                                             extent_buffer_get(X)
                                                                X->refs incremented to 4
                                                             btrfs_free_tree_block(X)
                                                                X's range is not pinned
                                                                X's range added to free
                                                                  space cache
                                                             free_extent_buffer_stale(X)
                                                                lock X->refs_lock
                                                                set EXTENT_BUFFER_STALE on X
                                                                release_extent_buffer(X)
                                                                    X->refs decremented to 3
                                                                    unlocks X->refs_lock
                                                      btrfs_release_path()
                                                         unlocks X
                                                         free_extent_buffer(X)
                                                             X->refs becomes 2

                                                                                                      __btrfs_cow_block(Y)
                                                                                                          btrfs_alloc_tree_block()
                                                                                                              btrfs_reserve_extent()
                                                                                                                  find_free_extent()
                                                                                                                      gets offset == X->start
                                                                                                              btrfs_init_new_buffer(X->start)
                                                                                                                  btrfs_find_create_tree_block(X->start)
                                                                                                                      alloc_extent_buffer(X->start)
                                                                                                                          find_extent_buffer(X->start)
                                                                                                                              finds eb X in radix tree

    free_extent_buffer(X)
        lock X->refs_lock
            test X->refs == 2
            test bit EXTENT_BUFFER_STALE is set
            test !extent_buffer_under_io(eb)

                                                                                                                              increments X->refs to 3
                                                                                                                              mark_extent_buffer_accessed(X)
                                                                                                                                  check_buffer_tree_ref(X)
                                                                                                                                    --> does nothing,
                                                                                                                                        X->refs >= 2 and
                                                                                                                                        EXTENT_BUFFER_TREE_REF
                                                                                                                                        is set in X
                                                                                                              clear EXTENT_BUFFER_STALE from X
                                                                                                              locks X
                                                                                                          btrfs_mark_buffer_dirty()
                                                                                                              set_extent_buffer_dirty(X)
                                                                                                                  check_buffer_tree_ref(X)
                                                                                                                     --> does nothing, X->refs >= 2 and
                                                                                                                         EXTENT_BUFFER_TREE_REF is set
                                                                                                                  sets EXTENT_BUFFER_DIRTY on X

            test and clear EXTENT_BUFFER_TREE_REF
            decrements X->refs to 2
        release_extent_buffer(X)
            decrements X->refs to 1
            unlock X->refs_lock

                                                                                                      unlock X
                                                                                                      free_extent_buffer(X)
                                                                                                          lock X->refs_lock
                                                                                                          release_extent_buffer(X)
                                                                                                              decrements X->refs to 0
                                                                                                              btrfs_release_extent_buffer_page(X)
                                                                                                                   BUG_ON(extent_buffer_under_io(X))
                                                                                                                       --> EXTENT_BUFFER_DIRTY set on X

Fix this by making find_extent_buffer() wait for any ongoing task currently
executing free_extent_buffer()/free_extent_buffer_stale() if the extent
buffer has the stale flag set.
A cleaner alternative would be to always increment the extent buffer's
reference count while holding its refs_lock spinlock, but find_extent_buffer()
is a performance-critical area and that would cause lock contention whenever
multiple tasks search for the same extent buffer concurrently.
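
In sketch form, the waiting in find_extent_buffer() (shape approximate):

    if (eb && atomic_inc_not_zero(&eb->refs)) {
            /* a concurrent free_extent_buffer_stale() may be between
             * its refcount checks and the actual release; taking and
             * dropping refs_lock waits that window out */
            if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) {
                    spin_lock(&eb->refs_lock);
                    spin_unlock(&eb->refs_lock);
            }
            mark_extent_buffer_accessed(eb, NULL);
            return eb;
    }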

A build server running a SLES 12 kernel (3.12 kernel + over 450 upstream
btrfs patches backported from newer kernels) was hitting this often:

[1212302.461948] kernel BUG at ../fs/btrfs/extent_io.c:4507!
(...)
[1212302.470219] CPU: 1 PID: 19259 Comm: bs_sched Not tainted 3.12.36-38-default 
[1212302.540792] Hardware name: Supermicro PDSM4/PDSM4, BIOS 6.00 04/17/2006
[1212302.540792] task: ffff8800e07e0100 ti: ffff8800d6412000 task.ti: ffff8800d6412000
[1212302.540792] RIP: 0010:[<ffffffffa0507081>]  [<ffffffffa0507081>] btrfs_release_extent_buffer_page.constprop.51+0x101/0x110 [btrfs]
(...)
[1212302.630008] Call Trace:
[1212302.630008]  [<ffffffffa05070cd>] release_extent_buffer+0x3d/0xa0 [btrfs]
[1212302.630008]  [<ffffffffa04c2d9d>] btrfs_release_path+0x1d/0xa0 [btrfs]
[1212302.630008]  [<ffffffffa04c5c7e>] read_block_for_search.isra.33+0x13e/0x3a0 [btrfs]
[1212302.630008]  [<ffffffffa04c8094>] btrfs_search_slot+0x3f4/0xa80 [btrfs]
[1212302.630008]  [<ffffffffa04cf5d8>] lookup_inline_extent_backref+0xf8/0x630 [btrfs]
[1212302.630008]  [<ffffffffa04d13dd>] __btrfs_free_extent+0x11d/0xc40 [btrfs]
[1212302.630008]  [<ffffffffa04d64a4>] __btrfs_run_delayed_refs+0x394/0x11d0 [btrfs]
[1212302.630008]  [<ffffffffa04db379>] btrfs_run_delayed_refs.part.66+0x69/0x280 [btrfs]
[1212302.630008]  [<ffffffffa04ed2ad>] __btrfs_end_transaction+0x2ad/0x3d0 [btrfs]
[1212302.630008]  [<ffffffffa04f7505>] btrfs_evict_inode+0x4a5/0x500 [btrfs]
[1212302.630008]  [<ffffffff811b9e28>] evict+0xa8/0x190
[1212302.630008]  [<ffffffff811b0330>] do_unlinkat+0x1a0/0x2b0

I was also able to reproduce this on a 3.19 kernel, corresponding to Chris'
integration branch from about a month ago, running the following stress
test on a qemu/kvm guest (with 4 virtual cpus and 16Gb of ram):

  while true; do
     mkfs.btrfs -l 4096 -f -b `expr 20 \* 1024 \* 1024 \* 1024` /dev/sdd
     mount /dev/sdd /mnt
     snapshot_cmd="btrfs subvolume snapshot -r /mnt"
     snapshot_cmd="$snapshot_cmd /mnt/snap_\`date +'%H_%M_%S_%N'\`"
     fsstress -d /mnt -n 25000 -p 8 -x "$snapshot_cmd" -X 100
     umount /mnt
  done

Which usually triggers the BUG_ON within less than 24 hours:

[49558.618097] ------------[ cut here ]------------
[49558.619732] kernel BUG at fs/btrfs/extent_io.c:4551!
(...)
[49558.620031] CPU: 3 PID: 23908 Comm: fsstress Tainted: G        W      3.19.0-btrfs-next-7+ 
[49558.620031] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[49558.620031] task: ffff8800319fc0d0 ti: ffff880220da8000 task.ti: ffff880220da8000
[49558.620031] RIP: 0010:[<ffffffffa0476b1a>]  [<ffffffffa0476b1a>] btrfs_release_extent_buffer_page+0x20/0xe9 [btrfs]
(...)
[49558.620031] Call Trace:
[49558.620031]  [<ffffffffa0476c73>] release_extent_buffer+0x90/0xd3 [btrfs]
[49558.620031]  [<ffffffff8142b10c>] ? _raw_spin_lock+0x3b/0x43
[49558.620031]  [<ffffffffa0477052>] ? free_extent_buffer+0x37/0x94 [btrfs]
[49558.620031]  [<ffffffffa04770ab>] free_extent_buffer+0x90/0x94 [btrfs]
[49558.620031]  [<ffffffffa04396d5>] btrfs_release_path+0x4a/0x69 [btrfs]
[49558.620031]  [<ffffffffa0444907>] __btrfs_free_extent+0x778/0x80c [btrfs]
[49558.620031]  [<ffffffffa044a485>] __btrfs_run_delayed_refs+0xad2/0xc62 [btrfs]
[49558.728054]  [<ffffffff811420d5>] ? kmemleak_alloc_recursive.constprop.52+0x16/0x18
[49558.728054]  [<ffffffffa044c1e8>] btrfs_run_delayed_refs+0x6d/0x1ba [btrfs]
[49558.728054]  [<ffffffffa045917f>] ? join_transaction.isra.9+0xb9/0x36b [btrfs]
[49558.728054]  [<ffffffffa045a75c>] btrfs_commit_transaction+0x4c/0x981 [btrfs]
[49558.728054]  [<ffffffffa0434f86>] btrfs_sync_fs+0xd5/0x10d [btrfs]
[49558.728054]  [<ffffffff81155923>] ? iterate_supers+0x60/0xc4
[49558.728054]  [<ffffffff8117966a>] ? do_sync_work+0x91/0x91
[49558.728054]  [<ffffffff8117968a>] sync_fs_one_sb+0x20/0x22
[49558.728054]  [<ffffffff81155939>] iterate_supers+0x76/0xc4
[49558.728054]  [<ffffffff811798e8>] sys_sync+0x55/0x83
[49558.728054]  [<ffffffff8142bbd2>] system_call_fastpath+0x12/0x17

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-11 07:59:11 -07:00
Filipe Manana ff1f8250a9 Btrfs: fix race between block group creation and their cache writeout
Creating a block group has two distinct phases:

Phase 1 - creates the btrfs_block_group_cache item and adds it to the
rbtree fs_info->block_group_cache_tree and to the corresponding list
space_info->block_groups[];

Phase 2 - adds the block group item to the extent tree and corresponding
items to the chunk tree.

The first phase adds the block_group_cache_item to a list of pending block
groups in the transaction handle, and phase 2 happens when
btrfs_end_transaction() is called against the transaction handle.

It happens that once phase 1 completes, other concurrent tasks that use
their own transaction handles, but point to the same running transaction
(struct btrfs_trans_handle->transaction), can use this block group for
space allocations and therefore mark it dirty.  Dirty block groups are
tracked in a list belonging to the currently running transaction (struct
btrfs_transaction) and not in the transaction handle (btrfs_trans_handle).

This is a problem because once a task calls btrfs_commit_transaction(),
it calls btrfs_start_dirty_block_groups(), which will see all dirty block
groups and attempt to start their writeout, including those that are
still attached to the transaction handle of some concurrent task that
hasn't called btrfs_end_transaction() yet - which means those block
groups haven't gone through phase 2 yet, and therefore when
write_one_cache_group() is called, it won't find the block group items
in the extent tree and will abort the current transaction with -ENOENT,
turning the fs read-only and requiring a remount.

Fix this by ignoring -ENOENT when looking for block group items in the
extent tree whenever we attempt to start the writeout of the block group
caches outside the critical section of the transaction commit.  We will
try again later, during the critical section, and if we still don't find
the block group item in the extent tree there, we then abort the current
transaction.
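
Sketched (shape approximate):

    ret = write_one_cache_group(trans, root, path, cache);
    if (ret == -ENOENT) {
            /* the block group item isn't in the extent tree yet
             * (phase 2 pending in some other task's handle); keep
             * it dirty and retry in the commit critical section */
            ret = 0;
            spin_lock(&cur_trans->dirty_bgs_lock);
            if (list_empty(&cache->dirty_list))
                    list_add_tail(&cache->dirty_list,
                                  &cur_trans->dirty_bgs);
            spin_unlock(&cur_trans->dirty_bgs_lock);
    } else if (ret) {
            btrfs_abort_transaction(trans, root, ret);
    }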

This issue happened twice, once while running fstests btrfs/067 and once
for btrfs/078, which produced the following trace:

[ 3278.703014] WARNING: CPU: 7 PID: 18499 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x114 [btrfs]()
[ 3278.707329] BTRFS: Transaction aborted (error -2)
(...)
[ 3278.731555] Call Trace:
[ 3278.732396]  [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
[ 3278.733860]  [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
[ 3278.735312]  [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
[ 3278.736874]  [<ffffffffa03ada6d>] ? __btrfs_abort_transaction+0x52/0x114 [btrfs]
[ 3278.738302]  [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
[ 3278.739520]  [<ffffffffa03ada6d>] __btrfs_abort_transaction+0x52/0x114 [btrfs]
[ 3278.741222]  [<ffffffffa03b9e56>] write_one_cache_group+0xae/0xbf [btrfs]
[ 3278.742797]  [<ffffffffa03c487b>] btrfs_start_dirty_block_groups+0x170/0x2b2 [btrfs]
[ 3278.744492]  [<ffffffffa03d309c>] btrfs_commit_transaction+0x130/0x9c9 [btrfs]
[ 3278.746084]  [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[ 3278.747249]  [<ffffffffa03e5660>] btrfs_sync_file+0x313/0x387 [btrfs]
[ 3278.748744]  [<ffffffff8117acad>] vfs_fsync_range+0x95/0xa4
[ 3278.749958]  [<ffffffff81435b54>] ? ret_from_sys_call+0x1d/0x58
[ 3278.751218]  [<ffffffff8117acd8>] vfs_fsync+0x1c/0x1e
[ 3278.754197]  [<ffffffff8117ae54>] do_fsync+0x34/0x4e
[ 3278.755192]  [<ffffffff8117b07c>] SyS_fsync+0x10/0x14
[ 3278.756236]  [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
[ 3278.757366] ---[ end trace 9a4d4df4969709aa ]---

Fixes: 1bbc621ef2 ("Btrfs: allow block group cache writeout outside critical section in commit")

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-11 07:59:10 -07:00
Filipe Manana 28aeeac1dd Btrfs: fix panic when starting bg cache writeout after IO error
When waiting for the writeback of a block group cache we returned
immediately if there was an error during writeback, without waiting
for the ordered extent to complete.  This left a short time window
where, if some other task attempted to start writeout for the same
block group cache, it could add a new ordered extent starting at the
same offset (0) before the previous one was removed from the ordered
tree, causing an ordered tree panic (a call to BUG()).

This normally doesn't happen in other write paths, such as buffered
writes or direct IO writes for regular files, since before marking
page ranges dirty we lock the ranges and wait for any ordered extents
within the range to complete first.

Fix this by making btrfs_wait_ordered_range() not return immediately
if it gets an error from the writeback, waiting for all ordered extents
to complete first.
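
In sketch form, inside btrfs_wait_ordered_range()'s loop (shape
approximate):

    /* note the failure, but keep draining ordered extents instead
     * of returning early and leaving one in the tree */
    if (test_bit(BTRFS_ORDERED_IOERR, &ordered->flags) && !ret)
            ret = -EIO;
    btrfs_put_ordered_extent(ordered);
    /* no break here - fall through and wait for the rest */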

This issue happened often when running the fstest btrfs/088 and it's
easy to trigger it by running in a loop until the panic happens:

  for ((i = 1; i <= 10000; i++)) do ./check btrfs/088 ; done

[17156.862573] BTRFS critical (device sdc): panic in ordered_data_tree_panic:70: Inconsistency in ordered tree at offset 0 (errno=-17 Object already exists)
[17156.864052] ------------[ cut here ]------------
[17156.864052] kernel BUG at fs/btrfs/ordered-data.c:70!
(...)
[17156.864052] Call Trace:
[17156.864052]  [<ffffffffa03876e3>] btrfs_add_ordered_extent+0x12/0x14 [btrfs]
[17156.864052]  [<ffffffffa03787e2>] run_delalloc_nocow+0x5bf/0x747 [btrfs]
[17156.864052]  [<ffffffffa03789ff>] run_delalloc_range+0x95/0x353 [btrfs]
[17156.864052]  [<ffffffffa038b7fe>] writepage_delalloc.isra.16+0xb9/0x13f [btrfs]
[17156.864052]  [<ffffffffa038d75b>] __extent_writepage+0x129/0x1f7 [btrfs]
[17156.864052]  [<ffffffffa038da5a>] extent_write_cache_pages.isra.15.constprop.28+0x231/0x2f4 [btrfs]
[17156.864052]  [<ffffffff810ad2af>] ? __module_text_address+0x12/0x59
[17156.864052]  [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[17156.864052]  [<ffffffffa038df76>] extent_writepages+0x4b/0x5c [btrfs]
[17156.864052]  [<ffffffff81144431>] ? kmem_cache_free+0x9b/0xce
[17156.864052]  [<ffffffffa0376a46>] ? btrfs_submit_direct+0x3fc/0x3fc [btrfs]
[17156.864052]  [<ffffffffa0389cd6>] ? free_extent_state+0x8c/0xc1 [btrfs]
[17156.864052]  [<ffffffffa0374871>] btrfs_writepages+0x28/0x2a [btrfs]
[17156.864052]  [<ffffffff8110c4c8>] do_writepages+0x23/0x2c
[17156.864052]  [<ffffffff81102f36>] __filemap_fdatawrite_range+0x5a/0x61
[17156.864052]  [<ffffffff81102f6e>] filemap_fdatawrite_range+0x13/0x15
[17156.864052]  [<ffffffffa0383ef7>] btrfs_fdatawrite_range+0x21/0x48 [btrfs]
[17156.864052]  [<ffffffffa03ab89e>] __btrfs_write_out_cache.isra.14+0x2d9/0x3a7 [btrfs]
[17156.864052]  [<ffffffffa03ac1ab>] ? btrfs_write_out_cache+0x41/0xdc [btrfs]
[17156.864052]  [<ffffffffa03ac1fd>] btrfs_write_out_cache+0x93/0xdc [btrfs]
[17156.864052]  [<ffffffffa0363847>] ? btrfs_start_dirty_block_groups+0x13a/0x2b2 [btrfs]
[17156.864052]  [<ffffffffa03638e6>] btrfs_start_dirty_block_groups+0x1d9/0x2b2 [btrfs]
[17156.864052]  [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[17156.864052]  [<ffffffffa037209e>] btrfs_commit_transaction+0x130/0x9c9 [btrfs]
[17156.864052]  [<ffffffffa034c748>] btrfs_sync_fs+0xe1/0x12d [btrfs]

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-11 07:59:10 -07:00
Filipe Manana e43699d4b4 Btrfs: fix crash after inode cache writeback failure
If the writeback of an inode cache failed, we were unnecessarily
attempting to release, for a second time, the delalloc metadata that we
previously reserved.  However attempting to do this a second time
triggers an assertion at drop_outstanding_extent() because we have no
more outstanding extents for our inode cache's inode.  If we were able
to start writeback of the cache, the reserved metadata space is
released at btrfs_finish_ordered_io(), even if an error happens
during writeback.

So make sure we don't repeat the metadata space release if writeback
started for our inode cache.
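
Sketched with a flag that is cleared once writeback has started (names
approximate):

    if (ret) {
            /* once writeback has started, btrfs_finish_ordered_io()
             * owns the release - don't repeat it here */
            if (release_metadata)
                    btrfs_delalloc_release_metadata(inode,
                                                    inode->i_size);
    }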

This issue was trivial to reproduce by running the fstest btrfs/088
with "-o inode_cache", which triggered the assertion leading to a
BUG() call and requiring a reboot in order to run the remaining
fstests. Trace produced by btrfs/088:

[255289.385904] BTRFS: assertion failed: BTRFS_I(inode)->outstanding_extents >= num_extents, file: fs/btrfs/extent-tree.c, line: 5276
[255289.388094] ------------[ cut here ]------------
[255289.389184] kernel BUG at fs/btrfs/ctree.h:4057!
[255289.390125] invalid opcode: 0000 [] PREEMPT SMP DEBUG_PAGEALLOC
(...)
[255289.392068] Call Trace:
[255289.392068]  [<ffffffffa035e774>] drop_outstanding_extent+0x3d/0x6d [btrfs]
[255289.392068]  [<ffffffffa0364988>] btrfs_delalloc_release_metadata+0x54/0xe3 [btrfs]
[255289.392068]  [<ffffffffa03b4174>] btrfs_write_out_ino_cache+0x95/0xad [btrfs]
[255289.392068]  [<ffffffffa036f5c4>] btrfs_save_ino_cache+0x275/0x2dc [btrfs]
[255289.392068]  [<ffffffffa03e2d83>] commit_fs_roots.isra.12+0xaa/0x137 [btrfs]
[255289.392068]  [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[255289.392068]  [<ffffffffa037841f>] ? btrfs_commit_transaction+0x4b1/0x9c9 [btrfs]
[255289.392068]  [<ffffffff814351a4>] ? _raw_spin_unlock+0x32/0x46
[255289.392068]  [<ffffffffa037842e>] btrfs_commit_transaction+0x4c0/0x9c9 [btrfs]
(...)

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-11 07:59:10 -07:00
Al Viro 0450b2d120 namei: store seq numbers in nd->stack[]
we'll need them for unlazy_walk()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:14 -04:00
Al Viro 294d71ff2f new helper: __legitimize_mnt()
same as legitimize_mnt(), except that it does *not* drop and regain
rcu_read_lock; return values are
0  =>  grabbed a reference, we are fine
1  =>  failed, just go away
-1 =>  failed, go away and mntput(bastard) when outside of rcu_read_lock
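
Usage sketch - legitimize_mnt() expressed via the new helper (called
under rcu_read_lock; shape approximate):

    static bool legitimize_mnt(struct vfsmount *bastard, unsigned seq)
    {
            int res = __legitimize_mnt(bastard, seq);
            if (likely(!res))
                    return true;            /* got our reference */
            if (res < 0) {                  /* mntput outside RCU */
                    rcu_read_unlock();
                    mntput(bastard);
                    rcu_read_lock();
            }
            return false;
    }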

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:14 -04:00
Al Viro 31956502dd namei: make may_follow_link() safe in RCU mode
We *can't* call that audit garbage in RCU mode - it's doing a weird
mix of allocations (GFP_NOFS, immediately followed by GFP_KERNEL)
and I'm not touching that... thing again.

So if this security sclero^Whardening feature gets triggered when
we are in RCU mode, tough - we'll fail with -ECHILD and have
everything restarted in non-RCU mode.  Only to hit the same test
and fail, this time with EACCES and with (oh, rapture) an audit spew
produced.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:13 -04:00
Al Viro 6548fae2ec namei: make put_link() RCU-safe
very simple - just make path_put() conditional on !RCU.
Note that right now it doesn't get called in RCU mode -
we leave it before getting anything into stack.
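
In sketch form (field names from this series; shape approximate):

    static void put_link(struct nameidata *nd)
    {
            struct saved *last = nd->stack + --nd->depth;
            struct inode *inode = last->inode;
            if (last->cookie && inode->i_op->put_link)
                    inode->i_op->put_link(inode, last->cookie);
            if (!(nd->flags & LOOKUP_RCU))  /* the new condition */
                    path_put(&last->link);
    }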

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:13 -04:00
Al Viro ecc087ff14 new helper: free_page_put_link()
similar to kfree_put_link()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:13 -04:00
Al Viro 5f2c4179e1 switch ->put_link() from dentry to inode
only one instance looks at that argument at all; that sole
exception wants inode rather than dentry.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:12 -04:00
NeilBrown bda0be7ad9 security: make inode_follow_link RCU-walk aware
inode_follow_link now takes an inode and rcu flag as well as the
dentry.

inode is used in preference to d_backing_inode(dentry), particularly
in RCU-walk mode.

selinux_inode_follow_link() gets dentry_has_perm() and
inode_has_perm() open-coded into it so that it can call
avc_has_perm_flags() in a way that is safe if LOOKUP_RCU is set.

Calling avc_has_perm_flags() with rcu_read_lock() held means
that when avc_has_perm_noaudit calls avc_compute_av(), the attempt
to rcu_read_unlock() before calling security_compute_av() will not
actually drop the RCU read-lock.

However as security_compute_av() is completely in a read_lock()ed
region, it should be safe with the RCU read-lock held.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:11 -04:00
Al Viro 181548c051 namei: pick_link() callers already have inode
no need to refetch (and once we move unlazy out of there, recheck ->d_seq).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:10 -04:00
David Howells 63afdfc781 VFS: Handle lower layer dentry/inode in pathwalk
Make use of d_backing_inode() in pathwalk to gain access to an
inode or dentry that's on a lower layer.

Signed-off-by: David Howells <dhowells@redhat.com>
2015-05-11 08:13:10 -04:00
Al Viro 237d8b327a namei: store inode in nd->stack[]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:09 -04:00
Al Viro 254cf58212 namei: don't mangle nd->seq in lookup_fast()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:09 -04:00
Al Viro 6e9918b7b3 namei: explicitly pass seq number to unlazy_walk() when dentry != NULL
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:09 -04:00
Al Viro 3595e2346c link_path_walk: use explicit returns for failure exits
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:08 -04:00
Al Viro deb106c632 namei: lift terminate_walk() all the way up
Lift it from link_path_walk(), trailing_symlink(), lookup_last(),
mountpoint_last(), complete_walk() and do_last().  A _lot_ of
those suckers merge.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:13:08 -04:00
Al Viro 3bdba28b72 namei: lift link_path_walk() call out of trailing_symlink()
Make trailing_symlink() return the pathname to traverse or ERR_PTR(-E...).
A subtle point is that for "magic" symlinks it now returns "" - that
leads to link_path_walk("", nd), which immediately returns 0, and we are
back to the treatment of the last component, wherever the damn thing has
left us.

Reduces the stack footprint - link_path_walk() is called on a shallower
stack now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:12:57 -04:00
Al Viro 368ee9ba56 namei: path_init() calling conventions change
* lift link_path_walk() into callers; moving it down into path_init()
had been a mistake.  Stack footprint, among other things...
* do _not_ call path_cleanup() after path_init() failure; on all failure
exits out of it we have nothing for path_cleanup() to do
* have path_init() return pathname or ERR_PTR(-E...)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:10:41 -04:00
Al Viro 34a26b99b7 namei: get rid of nameidata->base
we can do fdput() under rcu_read_lock() just fine; all we need to take
care of is fetching the nd->inode value first.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11 08:05:05 -04:00
Al Viro 8bcb77fabd namei: split off filename_lookupat() with LOOKUP_PARENT
new functions: filename_parentat() and path_parentat() resp.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:20 -04:00
Al Viro b5cd339762 namei: may_follow_link() - lift terminate_walk() on failures into caller
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:20 -04:00
Al Viro ab10492345 namei: take increment of nd->depth into pick_link()
Makes the situation much more regular - we avoid a strange state
where the element just after the top of the stack is used to store
the struct path of a symlink but isn't counted in nd->depth.  With
that gone, the normal failure exits, etc., work fine.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:19 -04:00
Al Viro 1cf2665b5b namei: kill nd->link
Just store it in nd->stack[nd->depth].link right in pick_link().
Now that we make sure of stack expansion in pick_link(), we can
do so...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:19 -04:00
Al Viro fec2fa24e8 may_follow_link(): trim arguments
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:18 -04:00
Al Viro cd179f4468 namei: move bumping the refcount of link->mnt into pick_link()
update the failure cleanup in may_follow_link() to match that.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:18 -04:00
Al Viro e8bb73dfb0 namei: fold put_link() into the failure case of complete_walk()
... and don't open-code unlazy_walk() in there - the only reason
for that was to avoid verification of the cached nd->root, which is
trivially avoided by discarding said cached nd->root first.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:17 -04:00
Al Viro fab51e8ab2 namei: take the treatment of absolute symlinks to get_link()
rather than letting the callers handle the jump-to-root part of
semantics, do it right in get_link() and return the rest of the
body for the caller to deal with - at that point it's treated
the same way as relative symlinks would be.  And return NULL
when there's no "rest of the body" - those are treated the same
as a pure jump symlink would be.
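
The tail of get_link(), sketched (shape approximate):

    if (*res == '/') {
            if (!nd->root.mnt)
                    set_root(nd);
            path_put(&nd->path);
            nd->path = nd->root;
            path_get(&nd->root);
            nd->flags |= LOOKUP_JUMPED;
            while (unlikely(*++res == '/'))
                    ;
    }
    if (!*res)
            res = NULL;     /* pure jump - nothing left to walk */
    return res;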

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:17 -04:00
Al Viro 4f697a5e17 namei: simpler treatment of symlinks with nothing other than / in the body
Instead of saving the name and branching to OK:, where we'll immediately
restore it and call walk_component() with WALK_PUT|WALK_GET and
nd->last_type set to LAST_BIND - which is equivalent to put_link(nd),
err = 0 - we can just treat it the same way we'd treat procfs-style "jump"
symlinks: do put_link(nd) and move on.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:16 -04:00
Al Viro 6920a4405e namei: simplify failure exits in get_link()
when cookie is NULL, put_link() is equivalent to path_put(), so
as soon as we'd set last->cookie to NULL we can bump nd->depth and
let the normal logic in terminate_walk() take care of the cleanups.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:16 -04:00
Al Viro 6e77137b36 don't pass nameidata to ->follow_link()
its only use is getting passed to nd_jump_link(), which can obtain
it from current->nameidata

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:15 -04:00
Al Viro 8402752ecf namei: simplify the callers of follow_managed()
now that it gets nameidata, no reason to have setting LOOKUP_JUMPED on
mountpoint crossing and calling path_put_conditional() on failures
done in every caller.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:15 -04:00
NeilBrown 756daf263e VFS: replace {, total_}link_count in task_struct with pointer to nameidata
task_struct currently contains two ad-hoc members for use by the VFS:
link_count and total_link_count.  These are only interesting to fs/namei.c,
so exposing them explicitly is poor layering.  Incidentally, link_count
isn't used anymore, so it can just die.

This patch replaces those with a single pointer to 'struct nameidata'.
This structure represents the current filename lookup of which
there can only be one per process, and is a natural place to
store total_link_count.

This will allow the current "nameidata" argument to all
follow_link operations to be removed as current->nameidata
can be used instead in the _very_ few instances that care about
it at all.

As there are occasional circumstances where pathname lookup can
recurse, such as through kern_path_locked, we always save the old
current->nameidata (if there is one) when setting a new value, and
make sure any active link_counts are preserved.

follow_mount and follow_automount now get a 'struct nameidata *'
rather than 'int flags' so that they can directly access
total_link_count, rather than going through 'current'.
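
In sketch form (a minimal approximation of the save/restore pair):

    static void set_nameidata(struct nameidata *p)
    {
            struct nameidata *old = current->nameidata;
            p->total_link_count = old ? old->total_link_count : 0;
            p->saved = old;                 /* lookups can nest */
            current->nameidata = p;
    }

    static void restore_nameidata(struct nameidata *now)
    {
            struct nameidata *old = now->saved;
            current->nameidata = old;
            if (old)        /* propagate the link count back out */
                    old->total_link_count = now->total_link_count;
    }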

Suggested-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:14 -04:00
Al Viro 626de99676 namei: move link count check and stack allocation into pick_link()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:13 -04:00
Al Viro d63ff28f0f namei: make should_follow_link() store the link in nd->link
... if it decides to follow, that is.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:13 -04:00
Al Viro 4693a547cd namei: new calling conventions for walk_component()
instead of a single flag (!= 0 => we want to follow symlinks) pass
two bits - WALK_GET (want to follow symlinks) and WALK_PUT (put_link()
once we are done looking at the name).  The latter matters only for
success exits - on failure the caller will discard everything anyway.

Suggestions for a better variant are welcome; what this thing aims for
is making sure that pending put_link() is done *before* walk_component()
decides to pick a symlink up, rather than between picking it up and
acting upon it.  See the next commit for payoff.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:12 -04:00
Al Viro 8620c238ed link_path_walk: move the OK: inside the loop
fewer labels that way; in particular, resuming after the end of
a nested symlink is straight-line.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:12 -04:00
Al Viro 1543972678 namei: have terminate_walk() do put_link() on everything left
All callers of terminate_walk() are followed by a more or less
open-coded equivalent of "do put_link() on everything left
in nd->stack".  Better done in terminate_walk() itself, and
when we go for RCU symlink traversal we'll have to do it
there anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:11 -04:00
Al Viro 191d7f73e2 namei: take put_link() into {lookup,mountpoint,do}_last()
rationale: we'll need to have terminate_walk() do put_link() on
everything, which will mean that in some cases ..._last() will do
put_link() anyway.  Easier to have them do it in all cases.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:11 -04:00
Al Viro 1bc4b813e8 namei: lift (open-coded) terminate_walk() into callers of get_link()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:10 -04:00
Al Viro f0a9ba7021 lift terminate_walk() into callers of walk_component()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:10 -04:00
Al Viro 70291aecc6 namei: lift (open-coded) terminate_walk() in follow_dotdot_rcu() into callers
follow_dotdot_rcu() does an equivalent of terminate_walk() on failure;
shifting it into callers makes for simpler rules and those callers
already have terminate_walk() on other failure exits.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:09 -04:00
Al Viro e269f2a73f namei: we never need more than MAXSYMLINKS entries in nd->stack
The only reason why we needed one more was that purely nested
MAXSYMLINKS symlinks could lead to path_init() using that many
entries in addition to nd->stack[0] which it left unused.

That can't happen now - path_init() starts with entry 0 (and
trailing_symlink() is called only when we'd already encountered
one symlink, so no more than MAXSYMLINKS-1 are left).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:08 -04:00
Al Viro 8eff733a45 link_path_walk: end of nd->depth massage
get rid of orig_depth - we only use it on error exit to tell whether
to stop doing put_link() when depth reaches 0 (call from path_init())
or when it reaches 1 (call from trailing_symlink()).  However, in
the latter case the caller would immediately follow with one more
put_link().  Just keep doing it until the depth reaches zero (and
simplify trailing_symlink() as the result).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:08 -04:00
Al Viro 939724df56 link_path_walk: nd->depth massage, part 10
Get rid of the orig_depth checks in the OK: logic.  If nd->depth is
zero, we had been called from path_init() and we are done.
If it is greater than 1, we are not done, whether we'd been
called from path_init() or trailing_symlink().  And in the
case when it's 1, we might have been called from path_init()
and reached the end of a nested symlink (in which case
nd->stack[0].name will point to the rest of the pathname and
we are not done) or from trailing_symlink(), in which case
we are done.

Just have trailing_symlink() leave NULL in nd->stack[0].name
and use that to discriminate between those cases.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:06 -04:00
Al Viro dc7af8dc05 link_path_walk: nd->depth massage, part 9
Make link_path_walk() work with any value of nd->depth on entry -
memorize it and use it in tests instead of comparing with 1.
Don't bother with increment/decrement in path_init().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:06 -04:00
Al Viro 21c3003d36 put_link: nd->depth massage, part 8
all calls are preceded by decrement of nd->depth; move it into
put_link() itself.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:05 -04:00
Al Viro 9ea57b72bf trailing_symlink: nd->depth massage, part 7
move decrement of nd->depth on successful returns into the callers.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:05 -04:00
Al Viro 0fd889d59e get_link: nd->depth massage, part 6
make get_link() increment nd->depth on successful exit

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:04 -04:00
Al Viro f7df08ee05 trailing_symlink: nd->depth massage, part 5
move increment of ->depth to the point where we'd discovered
that get_link() has not returned an error, adjust exits
accordingly.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:04 -04:00
Al Viro ef1a3e7b96 link_path_walk: nd->depth massage, part 4
lift increment/decrement into link_path_walk() callers.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:03 -04:00
Al Viro da4e0be04d link_path_walk: nd->depth massage, part 3
remove decrement/increment surrounding nd_alloc_stack(), adjust the
test in it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:03 -04:00
Al Viro fd4620bbdf link_path_walk: nd->depth massage, part 2
collapse adjacent increment/decrement pairs.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:02 -04:00
Al Viro 071bf50137 link_path_walk: nd->depth massage, part 1
nd->stack[0] is unused until the handling of trailing symlinks and
we want to get rid of that.  Having fucked that transformation up
several times, I went for a bloody pedantic series of provably equivalent
transformations.  Sorry.

Step 1: keep nd->depth higher by one in link_path_walk() - increment upon
entry, decrement on exits, adjust the arithmetics inside and surround the
calls of functions that care about nd->depth value (nd_alloc_stack(),
get_link(), put_link()) with decrement/increment pairs.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:02 -04:00
Al Viro 894bc8c466 namei: remove restrictions on nesting depth
The only restriction is on the total number of symlinks
crossed; how they are nested does not matter.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:01 -04:00
Al Viro 3b2e7f7539 namei: trim the arguments of get_link()
same story as the previous commit

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:01 -04:00
Al Viro b9ff44293c namei: trim redundant arguments of fs/namei.c:put_link()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:00 -04:00
Al Viro 1d8e03d359 namei: trim redundant arguments of trailing_symlink()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:20:00 -04:00
Al Viro 697fc6ca66 namei: move link/cookie pairs into nameidata
An array of MAX_NESTED_LINKS + 1 elements is put into nameidata;
what used to be a local array in link_path_walk() occupies
entries 1 .. MAX_NESTED_LINKS in it, and the link and cookie from
the trailing-symlink handling loops occupy entry 0.

This is _not_ the final arrangement; just an easily verified
incremental step.
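
Sketched (an approximation of the intermediate layout, not the final
one):

    struct saved {
            struct path link;
            void *cookie;
    };
    /* inside struct nameidata: entries 1 .. MAX_NESTED_LINKS hold
     * what used to be link_path_walk()'s local array; entry 0 holds
     * the pair from the trailing-symlink loops */
    struct saved stack[MAX_NESTED_LINKS + 1];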

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:59 -04:00
Al Viro 9e18f10a30 link_path_walk: cleanup - turn goto start; into continue;
Deal with skipping leading slashes before what used to be the
recursive call.  That way we can get rid of that goto completely.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:59 -04:00
Al Viro 07681481b8 link_path_walk: split "return from recursive call" path
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:58 -04:00
Al Viro 32cd74685c link_path_walk: kill the recursion
absolutely straightforward now - the only variables we need to preserve
across the recursive call are name, link and cookie, and the recursion
depth is limited (and equal to nd->depth).  So arrange an array of
triples to hold instances of those and be done with that.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:58 -04:00
Al Viro bdf6cbf179 link_path_walk: final preparations to killing recursion
reduce the number of returns in there - turn all places
where it returns zero into goto OK and places where it
returns non-zero into goto Err.  The only non-trivial
detail is that all breaks in the loop are guaranteed to
happen with a non-zero err.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:57 -04:00
Al Viro bb8603f8e1 link_path_walk: get rid of duplication
What we do after the second walk_component() + put_link() + depth
decrement in there is exactly equivalent to what's done right
after the first walk_component().  Easy to verify and not at all
surprising, seeing that there we have just walked the last
component of nested symlink.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:57 -04:00
Al Viro 48c8b0c571 link_path_walk: massage a bit more
Pull the block after the if-else at the end of what used to be the
do-while body into all branches there.  We are almost done with the massage...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:56 -04:00
Al Viro d40bcc09ab link_path_walk: turn inner loop into explicit goto
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:56 -04:00
Al Viro 12b0957800 link_path_walk: don't bother with walk_component() after jumping link
... it does nothing if nd->last_type is LAST_BIND.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:55 -04:00
Al Viro b0c24c3bdf link_path_walk: handle get_link() returning ERR_PTR() immediately
If we get ERR_PTR() from get_link(), we are guaranteed to get err != 0
when we break out of the do-while, so we are going to hit "if (err) return
err;" shortly after it.  Pull that into the if (IS_ERR(s)) body.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:55 -04:00
Al Viro 95fa25d9f2 namei: rename follow_link to trailing_symlink, move it down
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:54 -04:00
Al Viro 21fef2176e namei: move the calls of may_follow_link() into follow_link()
All remaining callers of the former are preceded by the latter

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:53 -04:00
Al Viro 172a39a059 namei: expand the call of follow_link() in link_path_walk()
... and strip __always_inline from follow_link() - remaining callers
don't need that.

Now link_path_walk() recursion is a direct one.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:53 -04:00
Al Viro 5a460275ef namei: expand nested_symlink() in its only caller
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:52 -04:00
Al Viro 896475d5bd do_last: move path there from caller's stack frame
We used to need it to feed to follow_link().  No more...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:52 -04:00
Al Viro caa8563443 namei: introduce nameidata->link
shares space with nameidata->next; walk_component() et al. store
the struct path of the symlink there instead of returning it into a
variable passed by the caller.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10 22:19:51 -04:00
Al Viro d4dee48bad namei: don't bother with ->follow_link() if ->i_link is set
with new calling conventions it's trivial

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

Conflicts:
	fs/namei.c
2015-05-10 22:19:51 -04:00