The todo lists in the proc, thread, and node structures
are accessed by other procs/threads to place work
items on the queue.
The todo lists are protected by the new proc->inner_lock.
No locks should ever be nested under these locks. As the
name suggests, an outer lock will be introduced in
a later patch.
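A minimal sketch of the queueing pattern this enables (names as in
the driver, details approximate):

    /* another proc/thread placing work on a todo list */
    spin_lock(&proc->inner_lock);
    list_add_tail(&work->entry, &thread->todo);
    spin_unlock(&proc->inner_lock);   /* no other lock taken inside */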
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For correct behavior we need to hold the inner lock when
dequeuing and processing node work in binder_thread_read.
We now hold the inner lock when we enter the switch statement
and release it after processing anything that might be
affected by other threads.
We also need to hold the inner lock to protect the node
weak/strong ref tracking fields as long as node->proc
is non-NULL (if it is NULL then we are guaranteed that
we don't have any node work queued).
This means that other functions that manipulate these fields
must hold the inner lock. Refactored these functions to use
the inner lock.
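A rough sketch of the resulting pattern for those fields:

    /* only valid while node->proc is non-NULL */
    spin_lock(&node->proc->inner_lock);
    if (strong)
        node->internal_strong_refs++;
    else
        node->local_weak_refs++;
    spin_unlock(&node->proc->inner_lock);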
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are 3 main spinlocks which must be acquired in this
order:
1) proc->outer_lock : protects most fields of binder_proc,
binder_thread, and binder_ref structures. binder_proc_lock()
and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node.
binder_node_lock() and binder_node_unlock() are
used to acq/rel
3) proc->inner_lock : protects the thread and node lists
(proc->threads, proc->nodes) and all todo lists associated
with the binder_proc (proc->todo, thread->todo,
proc->delivered_death and node->async_todo).
binder_inner_proc_lock() and binder_inner_proc_unlock()
are used to acq/rel
Any lock under procA must never be nested under any lock at the same
level or below on procB.
Functions that require a lock to be held on entry indicate which lock
via a suffix on the function name:
foo_olocked() : requires proc->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_oilocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock
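For example, a path that needed all three would have to take them in
the documented order (a sketch; most call sites take only a subset):

    binder_proc_lock(proc);          /* 1) proc->outer_lock */
    binder_node_lock(node);          /* 2) node->lock */
    binder_inner_proc_lock(proc);    /* 3) proc->inner_lock */
    /* ... touch todo lists, node state ... */
    binder_inner_proc_unlock(proc);
    binder_node_unlock(node);
    binder_proc_unlock(proc);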
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When obtaining a node via binder_get_node(),
binder_get_node_from_ref() or binder_new_node(),
increment node->tmp_refs to take a
temporary reference on the node to ensure the node
persists while being used. binder_put_node() must
be called to remove the temporary reference.
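The usage pattern, roughly:

    struct binder_node *node;

    node = binder_get_node(proc, ptr);   /* node->tmp_refs++ */
    if (node) {
        /* node cannot be freed while the temporary reference is held */
        binder_put_node(node);           /* node->tmp_refs--, maybe free */
    }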
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Once locks are added, binder_ref objects will only be accessed
safely with the proc lock held. Refactor the inc/dec paths
to make them atomic with the binder_get_ref* paths and
node inc/dec. For example, instead of:
ref = binder_get_ref(proc, handle, strong);
...
binder_dec_ref(ref, strong);
we now have:
ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);
Since the actual ref is no longer exposed to callers, a
new struct binder_ref_data is introduced which can be used
to return a copy of ref state.
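A sketch of the new struct (the field set is approximate):

    struct binder_ref_data {
        int debug_id;
        uint32_t desc;     /* handle value */
        int strong;        /* strong ref count */
        int weak;          /* weak ref count */
    };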
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
binder_thread and binder_proc may be accessed by other
threads when processing a transaction. Therefore they
must be prevented from being freed while a transaction
that references them is in progress.
This is done by introducing a temporary reference
counter for threads and procs that indicates that the
object is in use and must not be freed. binder_thread_dec_tmpref()
and binder_proc_dec_tmpref() are used to decrement
the temporary reference.
It is safe to free a binder_thread if there
is no reference and it has been released
(indicated by thread->is_dead).
It is safe to free a binder_proc if it has no
remaining threads and no reference.
A spinlock is added to the binder_transaction
to safely access and set references for t->from
and for debug code to safely access t->to_thread
and t->to_proc.
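A hedged sketch of how the new spinlock is used to safely take a
reference on t->from (names approximate):

    static struct binder_thread *binder_get_txn_from(struct binder_transaction *t)
    {
        struct binder_thread *from;

        spin_lock(&t->lock);
        from = t->from;
        if (from)
            atomic_inc(&from->tmp_ref);   /* see temporary refs above */
        spin_unlock(&t->lock);
        return from;
    }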
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When initiating a transaction, the target_node must
have a strong ref on it. Then we take a second
strong ref to make sure the node survives until the
transaction is complete.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since errors are tracked in the return_error/return_error2
fields of the binder_thread object and BR_TRANSACTION_COMPLETEs
can be tracked either in those fields or via the thread todo
work list, it is possible for errors to be reported ahead
of the associated txn complete.
Use the thread todo work list for errors to guarantee
order. Also changed binder_send_failed_reply to pop
the transaction even if it failed to send a reply.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
binder_pop_transaction needs to be split into 2 pieces to
allow the proc lock to be held on entry to dequeue the
transaction stack, but no lock when kfree'ing the transaction.
Split into binder_pop_transaction_locked and binder_free_transaction
(the actual locks are still to be added).
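A sketch of the resulting split (the locking is still to come, and
details are approximate):

    /* caller holds the (future) proc lock */
    static void binder_pop_transaction_locked(struct binder_thread *target_thread,
                                              struct binder_transaction *t)
    {
        BUG_ON(target_thread->transaction_stack != t);
        target_thread->transaction_stack = t->from_parent;
        t->from_parent = NULL;
    }

    /* no locks held; safe to kfree */
    static void binder_free_transaction(struct binder_transaction *t)
    {
        if (t->buffer)
            t->buffer->transaction = NULL;
        kfree(t);
    }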
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The log->next index for the transaction log was
not protected when incremented. This led to a
case where log->next++ resulted in an index
larger than ARRAY_SIZE(log->entry) and eventually
a bad access to memory.
Fixed by making the log index an atomic64 and
converting it to an array index with "% ARRAY_SIZE(log->entry)".
Also added a "complete" field to the log entry which is
written last, to tell the print code whether the
entry is complete.
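A sketch of the fixed indexing (details approximate):

    struct binder_transaction_log_entry *e;
    unsigned int cur;

    cur = atomic64_inc_return(&log->cur) % ARRAY_SIZE(log->entry);
    e = &log->entry[cur];
    e->complete = false;
    /* ... fill in the entry ... */
    smp_wmb();            /* order the fill before the flag */
    e->complete = true;   /* written last; print code checks this */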
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adds protection against malicious user code freeing
the same buffer at the same time which could cause
a crash. Cannot happen under normal use.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
node is always non-NULL in binder_get_ref_for_node, so the
conditional and else clause are not needed.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The looper member of struct binder_thread is a bitmask
of control bits. All of the existing bits are modified
by the affected thread except for BINDER_LOOPER_STATE_NEED_RETURN
which can be modified in binder_deferred_flush() by
another thread.
To avoid adding a spinlock around every read-modify-write used
to change a bit, the BINDER_LOOPER_STATE_NEED_RETURN flag
is replaced by a separate field in struct binder_thread.
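The replacement, roughly:

    struct binder_thread {
        /* ... */
        int looper;               /* only modified by this thread */
        bool looper_need_return;  /* may be written by another thread */
        /* ... */
    };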
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently, the transaction complete work item is queued
after the transaction. This means that it is possible
for the transaction to be handled and a reply to be
enqueued in the current thread before the transaction
complete is enqueued, which violates the protocol
with userspace, which may not expect the transaction
complete. Fixed by always enqueuing the transaction
complete first.
Also, once the transaction is enqueued, it is unsafe
to access since it might be freed. Currently,
t->flags is accessed to determine whether a sync
wake is needed. Changed to access tr->flags
instead.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In binder_thread_read, the BINDER_WORK_NODE command is used
to communicate the references on the node to userspace. It
can take a couple of iterations in the loop to construct
the list of commands for user space. When locking is added,
the lock would need to be released on each iteration, which
means the state could change. The work item is not dequeued
during this process, which prevents the simpler queue management
of dequeuing up front and then handling the work item.
Fixed by changing the BINDER_WORK_NODE algorithm in
binder_thread_read to determine which commands to send
to userspace atomically in 1 pass so it stays consistent
with the kernel view.
The work item is now dequeued immediately since only
1 pass is needed.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add additional information to determine the cause of binder
failures. Adds the following to the failed transaction log and
kernel messages:
return_error : value returned for transaction
return_error_param : errno returned by binder allocator
return_error_line : line number where error detected
Also, return BR_DEAD_REPLY if an allocation error indicates
a dead proc (-ESRCH)
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use an atomic for binder_last_id to avoid locking it
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use atomics for stats to avoid needing to lock for
increments/decrements
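A sketch of the pattern (array bounds as in the driver, approximate):

    struct binder_stats {
        atomic_t br[_IOC_NR(BR_FAILED_REPLY) + 1];
        atomic_t bc[_IOC_NR(BC_REPLY_SG) + 1];
    };

    /* increments no longer need the global lock */
    atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);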
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add binder_dead_nodes_lock, binder_procs_lock, and
binder_context_mgr_node_lock to protect the associated global lists
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With the global lock, there was a mechanism to access
binder driver debugging information with the global
lock disabled to debug deadlocks or other issues.
This mechanism is rarely (if ever) used anymore
and wasn't needed during the development of
fine-grained locking in the binder driver.
Removing it.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the binder allocator functionality to its own file
Continuation of splitting the binder allocator from the binder
driver. Split binder_alloc functions from normal binder functions.
Add kernel doc comments to functions declared extern in
binder_alloc.h
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Continuation of splitting the binder allocator from the binder
driver. Separate binder_alloc functions from normal binder
functions. Protect the allocator with a separate mutex.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The buffer's transaction has already been freed before
binder_deferred_release. No need to do it again.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The binder allocator is logically separate from the rest
of the binder driver. Separate the data structures
to prepare for splitting it into a separate file with separate
locking.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use wake_up_interruptible_sync() to hint to the scheduler that binder
transactions are synchronous wakeups. Disable preemption while waking
to avoid ping-ponging on the binder lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Omprakash Dhyade <odhyade@codeaurora.org>
Cc: stable <stable@vger.kernel.org> # 4.4+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The binder allocator assumes that the thread that
called binder_open will never die for the lifetime of
that proc. That thread is normally the group_leader;
however, it may not be. Use the group_leader instead
of current.
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org> # 4.4+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit a906d6931f.
The patch introduced a race in the binder driver. An attempt to fix the
race was submitted in "[PATCH v2] android: binder: fix dangling pointer
comparison"; however, the conclusion in the discussion for that patch
was that the original patch should be reverted.
The reversion is being done as part of the fine-grained locking
patchset since the patch would need to be refactored when
proc->vma_vm_mm is removed from struct binder_proc and added
in the binder allocator.
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org> # 4.6+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/signal.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
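The placeholder itself is trivial:

    /* include/linux/sched/signal.h */
    #ifndef _LINUX_SCHED_SIGNAL_H
    #define _LINUX_SCHED_SIGNAL_H

    #include <linux/sched.h>

    #endif /* _LINUX_SCHED_SIGNAL_H */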
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/mm.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/mm.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
The APIs that are going to be moved first are:
mm_alloc()
__mmdrop()
mmdrop()
mmdrop_async_fn()
mmdrop_async()
mmget_not_zero()
mmput()
mmput_async()
get_task_mm()
mm_access()
mm_release()
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
->fault(), ->page_mkwrite(), and ->pfn_mkwrite() calls do not need to
take a vma and vmf parameter when the vma already resides in vmf.
Remove the vma parameter to simplify things.
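The change in signature, in short:

    /* before */
    int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);

    /* after: the vma is reachable as vmf->vma */
    int (*fault)(struct vm_fault *vmf);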
[arnd@arndb.de: fix ARM build]
Link: http://lkml.kernel.org/r/20170125223558.1451224-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/148521301778.19116.10840599906674778980.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There are no users of zap_page_range() who want a non-NULL 'details'.
Let's drop it.
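The resulting signature:

    /* before */
    void zap_page_range(struct vm_area_struct *vma, unsigned long start,
                        unsigned long size, struct zap_details *details);

    /* after */
    void zap_page_range(struct vm_area_struct *vma, unsigned long start,
                        unsigned long size);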
Link: http://lkml.kernel.org/r/20170118122429.43661-3-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch introduces a new binder_fd_array object
that allows us to support one or more file descriptors
embedded in a buffer that is scatter-gathered.
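A sketch of the new object (field layout approximate):

    struct binder_fd_array_object {
        struct binder_object_header hdr;   /* type = BINDER_TYPE_FDA */
        binder_size_t num_fds;             /* number of fds in the array */
        binder_size_t parent;              /* index of the parent buffer */
        binder_size_t parent_offset;       /* offset of the array within it */
    };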
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Previously all data passed over binder needed
to be serialized, with the exception of Binder
objects and file descriptors.
This patch adds support for scatter-gathering raw
memory buffers into a binder transaction, avoiding
the need to first serialize them into a Parcel.
To remain backwards compatible with existing
binder clients, it introduces two new command
ioctls for this purpose - BC_TRANSACTION_SG and
BC_REPLY_SG. These commands may only be used with
the new binder_transaction_data_sg structure,
which adds a field for the total size of the
buffers we are scatter-gathering.
Because memory buffers may contain pointers to
other buffers, we allow callers to specify
a parent buffer and an offset into it, to indicate
this is a location pointing to the buffer that
we are fixing up. The kernel will then take care
of fixing up the pointer to that buffer as well.
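The new structure wraps the existing one (layout as described above):

    struct binder_transaction_data_sg {
        struct binder_transaction_data transaction_data;
        binder_size_t buffers_size;   /* total size of scatter-gather buffers */
    };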
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
[jstultz: Fold in small fix from Amit Pundir <amit.pundir@linaro.org>]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The binder_buffer allocator currently only allocates
space for the data and offsets buffers of a Parcel.
This change allows for requesting an additional chunk
of data in the buffer, which can for example be used
to hold additional metadata about the transaction
(e.g. a security context).
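A hedged sketch of the extended allocator entry point (parameter
name approximate):

    struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
                                           size_t data_size,
                                           size_t offsets_size,
                                           size_t extra_buffers_size,
                                           int is_async);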
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add a new module parameter 'devices', that can be
used to specify the names of the binder device
nodes we want to populate in /dev.
Each device node has its own context manager, and
is therefore logically separated from all the other
device nodes.
The config option CONFIG_ANDROID_BINDER_DEVICES can
be used to set the default value of the parameter.
This approach was favored over using IPC namespaces,
mostly because we require a single process to be a
part of multiple binder contexts, which seemed harder
to achieve with namespaces.
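A sketch of the parameter plumbing (permissions approximate):

    static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
    module_param_named(devices, binder_devices_param, charp, S_IRUGO);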
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
[jstultz: minor checkpatch warning fix]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the context manager state into a separate
struct context, and allow for each process to have
its own context associated with it.
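A sketch of the per-device context (field set approximate):

    struct binder_context {
        struct binder_node *binder_context_mgr_node;
        kuid_t binder_context_mgr_uid;
        const char *name;
    };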
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
[jstultz: Minor checkpatch fix]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
flat_binder_object is used for both handling
binder objects and file descriptors, even though
the two are mostly independent. Since we'll
have more fixup objects in binder in the future,
instead of extending flat_binder_object again,
split out file descriptors to their own object
while retaining backwards compatibility with
existing user-space clients. All binder objects
just share a header.
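A sketch of the shared header and the split-out fd object (layout
approximate):

    struct binder_object_header {
        __u32 type;
    };

    struct binder_fd_object {
        struct binder_object_header hdr;   /* type = BINDER_TYPE_FD */
        __u32 pad_flags;
        union {
            binder_uintptr_t pad_binder;
            __u32 fd;
        };
        binder_uintptr_t cookie;
    };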
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Martijn Coenen <maco@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Prevent using a binder_ref with only weak references where a strong
reference is required.
Signed-off-by: Arve Hjønnevåg <arve@android.com>
Signed-off-by: Martijn Coenen <maco@android.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The workqueue is being used to run deferred work for the android binder.
The "binder_deferred_workqueue" queues only a single work item and hence
does not require ordering. Also, this workqueue is not being used on a
memory reclaim path. Hence, the singlethreaded workqueue has been
replaced with the use of system_wq.
System workqueues have been able to handle high levels of concurrency
for a long time now, so a singlethreaded workqueue is not required
just to gain concurrency. Unlike a dedicated per-cpu workqueue
created with create_singlethread_workqueue(), system_wq allows multiple
work items to overlap executions even on the same CPU; however, a
per-cpu workqueue doesn't have any CPU locality or global ordering
guarantee unless the target CPU is explicitly specified and thus the
increase of local concurrency shouldn't make any difference.
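The change itself is essentially:

    /* before */
    queue_work(binder_deferred_workqueue, &binder_deferred_work);

    /* after */
    schedule_work(&binder_deferred_work);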
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This resolves the merge issues and confusion people were having with
the goldfish drivers due to changes for them showing up in two different
trees.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
One point was missed in commit da49889deb ("staging:
binder: Support concurrent 32 bit and 64 bit processes."): when
BINDER_IPC_32BIT is configured, the size of binder_uintptr_t is 32 bits,
but the size of void * is 64 bits on a 64-bit system. Correct it here.
Signed-off-by: Lisa Du <cldu@marvell.com>
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Fixes: da49889deb ("staging: binder: Support concurrent 32 bit and 64 bit processes.")
Cc: <stable@vger.kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If a /d/binder/proc/[pid] entry is kept open after Linux has
torn down the associated process, binder_proc_show can dereference
an invalid binder_proc that has been stashed in the debugfs
inode. Validate that the binder_proc ptr passed into binder_proc_show
has not been freed by looking for it within the global process list
whilst the global lock is held. If the ptr is not valid, print nothing.
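A hedged sketch of the validation (names approximate):

    static int binder_proc_show(struct seq_file *m, void *unused)
    {
        struct binder_proc *itr, *proc = m->private;
        bool valid_proc = false;

        binder_lock(__func__);
        hlist_for_each_entry(itr, &binder_procs, proc_node) {
            if (itr == proc) {
                valid_proc = true;
                break;
            }
        }
        if (valid_proc)
            print_binder_proc(m, proc, 1);
        binder_unlock(__func__);
        return 0;
    }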
Cc: Colin Cross <ccross@android.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
[jstultz: Minor commit message tweaks]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add a sanity check to the binder ioctl function: only allow
callers whose mm_struct matches the one that opened the device
to perform binder operations on the same binder object.
Also set proc->vma_vm_mm = current->mm in the open function,
since libbinder issues the ioctl before mmap is called.
V2: Fix compile error from an erroneous commit
V3: Change the condition to use proc->vma_vm_mm
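A hedged sketch of the check (exact placement and message
approximate):

    /* in binder_ioctl(), before any binder work is done */
    if (unlikely(current->mm != proc->vma_vm_mm)) {
        pr_err("current mm mismatch proc mm\n");
        ret = -EINVAL;
        goto err_unlocked;
    }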
Signed-off-by: Chen Feng <puck.chen@hisilicon.com>
Signed-off-by: Wei Dong <weidong2@hisilicon.com>
Signed-off-by: Junmin Zhao <zhaojunmin@huawei.com>
Reviewed-by: Zhuangluan Su <suzhuangluan@hisilicon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With two exceptions (drm/qxl and drm/radeon) all vm_operations_struct
structs should be constant.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
binder_update_page_range() initializes only addr and size
fields in 'struct vm_struct tmp_area;' and passes it to
map_vm_area().
Before 71394fe501 ("mm: vmalloc: add flag preventing guard hole allocation")
this was fine, because map_vm_area() didn't use any fields
in vm_struct other than addr and size.
Now get_vm_area_size() (used in map_vm_area()) reads vm_struct's
flags to determine whether the vm area has a guard hole or not.
binder_update_page_range() doesn't initialize the flags field, so
this causes the following binder mmap failures:
-----------[ cut here ]------------
WARNING: CPU: 0 PID: 1971 at mm/vmalloc.c:130
vmap_page_range_noflush+0x119/0x144()
CPU: 0 PID: 1971 Comm: healthd Not tainted 4.0.0-rc1-00399-g7da3fdc-dirty #157
Hardware name: ARM-Versatile Express
[<c001246d>] (unwind_backtrace) from [<c000f7f9>] (show_stack+0x11/0x14)
[<c000f7f9>] (show_stack) from [<c049a221>] (dump_stack+0x59/0x7c)
[<c049a221>] (dump_stack) from [<c001cf21>] (warn_slowpath_common+0x55/0x84)
[<c001cf21>] (warn_slowpath_common) from [<c001cfe3>]
(warn_slowpath_null+0x17/0x1c)
[<c001cfe3>] (warn_slowpath_null) from [<c00c66c5>]
(vmap_page_range_noflush+0x119/0x144)
[<c00c66c5>] (vmap_page_range_noflush) from [<c00c716b>] (map_vm_area+0x27/0x48)
[<c00c716b>] (map_vm_area) from [<c038ddaf>]
(binder_update_page_range+0x12f/0x27c)
[<c038ddaf>] (binder_update_page_range) from [<c038e857>]
(binder_mmap+0xbf/0x1ac)
[<c038e857>] (binder_mmap) from [<c00c2dc7>] (mmap_region+0x2eb/0x4d4)
[<c00c2dc7>] (mmap_region) from [<c00c3197>] (do_mmap_pgoff+0x1e7/0x250)
[<c00c3197>] (do_mmap_pgoff) from [<c00b35b5>] (vm_mmap_pgoff+0x45/0x60)
[<c00b35b5>] (vm_mmap_pgoff) from [<c00c1f39>] (SyS_mmap_pgoff+0x5d/0x80)
[<c00c1f39>] (SyS_mmap_pgoff) from [<c000ce81>] (ret_fast_syscall+0x1/0x5c)
---[ end trace 48c2c4b9a1349e54 ]---
binder: 1982: binder_alloc_buf failed to map page at f0e00000 in kernel
binder: binder_mmap: 1982 b6bde000-b6cdc000 alloc small buf failed -12
Use map_kernel_range_noflush() instead of map_vm_area(), as it is a better
API for binder's purposes and allows getting rid of 'vm_struct tmp_area' entirely.
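The swap, roughly (where page is a struct page ** here):

    /* before */
    struct vm_struct tmp_area;
    tmp_area.addr = page_addr;
    tmp_area.size = PAGE_SIZE + PAGE_SIZE;  /* size + guard page */
    ret = map_vm_area(&tmp_area, PAGE_KERNEL, page);

    /* after: no vm_struct needed */
    ret = map_kernel_range_noflush((unsigned long)page_addr,
                                   PAGE_SIZE, PAGE_KERNEL, page);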
Fixes: 71394fe501 ("mm: vmalloc: add flag preventing guard hole allocation")
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Reported-by: Amit Pundir <amit.pundir@linaro.org>
Tested-by: Amit Pundir <amit.pundir@linaro.org>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>