linux-sg2042/kernel/locking
Linus Torvalds 7c0f6ba682 Replace <asm/uaccess.h> with <linux/uaccess.h> globally
This was entirely automated, using the script by Al:

  PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
  sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-24 11:46:01 -08:00
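
For reference, a minimal sketch of what the script above does to one matching file (drivers/example.c is a hypothetical path, not taken from this log), assuming PATT is set exactly as in the commit message:

  $ grep -n '<asm/uaccess.h>' drivers/example.c
  12:#include <asm/uaccess.h>
  $ sed -i -e "s!$PATT!#include <linux/uaccess.h>!" drivers/example.c
  $ grep -n 'uaccess.h' drivers/example.c
  12:#include <linux/uaccess.h>

The outer $(git grep -l ...) in the original simply collects every file containing such an include line and excludes include/linux/uaccess.h itself, so the substitution runs tree-wide in one pass.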
Makefile locking/lglock: Remove lglock implementation 2016-09-22 15:25:56 +02:00
lockdep.c xfs: updates for 4.10-rc1 2016-12-14 21:35:31 -08:00
lockdep_internals.h lockdep: Limit static allocations if PROVE_LOCKING_SMALL is defined 2016-11-18 11:33:19 -08:00
lockdep_proc.c Replace <asm/uaccess.h> with <linux/uaccess.h> globally 2016-12-24 11:46:01 -08:00
lockdep_states.h locking: Move the lockdep code to kernel/locking/ 2013-11-06 07:55:08 +01:00
locktorture.c locking/locktorture: Simplify the torture_runnable computation 2016-04-28 10:57:51 +02:00
mcs_spinlock.h locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
mutex-debug.c locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex-debug.h locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex.c locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted 2016-11-22 12:48:10 +01:00
mutex.h locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
osq_lock.c locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock() 2016-11-22 12:48:10 +01:00
percpu-rwsem.c locking/percpu-rwsem: Optimize readers and reduce global impact 2016-08-10 14:34:01 +02:00
qrwlock.c locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
qspinlock.c locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() 2016-06-27 11:37:41 +02:00
qspinlock_paravirt.h locking/pv-qspinlock: Use cmpxchg_release() in __pv_queued_spin_unlock() 2016-09-22 15:25:51 +02:00
qspinlock_stat.h don't open-code file_inode() 2016-12-04 18:29:28 -05:00
rtmutex-debug.c rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rtmutex-debug.h rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rtmutex.c locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked() 2016-12-02 11:13:57 +01:00
rtmutex.h rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rtmutex_common.h locking/rtmutex: Get rid of RT_MUTEX_OWNER_MASKALL 2016-12-02 11:13:57 +01:00
rwsem-spinlock.c locking/rwsem: Introduce basis for down_write_killable() 2016-04-13 10:42:20 +02:00
rwsem-xadd.c locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted 2016-11-22 12:48:10 +01:00
rwsem.c locking/rwsem: Add reader-owned state to the owner field 2016-06-08 15:16:59 +02:00
rwsem.h locking/rwsem: Protect all writes to owner by WRITE_ONCE() 2016-06-08 15:16:59 +02:00
semaphore.c locking/semaphore: Resolve some shadow warnings 2014-09-04 07:17:24 +02:00
spinlock.c spinlock: Add spin_lock_bh_nested() 2015-01-03 14:32:57 -05:00
spinlock_debug.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00