Sync the code to match tk4 pub/lts/0017-kabi, except that rue and
wujing are deleted. Partners can submit pull requests to this branch,
and we can easily pick the commits over to tk4 pub/lts/0017-kabi.
Signed-off-by: Jianping Liu <frankjpliu@tencent.com>
Gitee limits a repo's size to 3GB. To reduce the size of the code,
sync the code to ock 5.4.119-20.0009.21 in a single commit.
Signed-off-by: Jianping Liu <frankjpliu@tencent.com>
Lockdep reports a circular locking dependency with pages_lock taken in
the shrinker callback. The deadlock can't actually happen with current
users, at least, because a BO will never be purgeable while pages_lock is held.
To be safe, let's use mutex_trylock() instead and bail if a BO is locked
already.
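As a sketch, the purge entry point then looks roughly like this (the bool
return used to signal a skipped BO is an assumption about the exact wiring):

  bool drm_gem_shmem_purge(struct drm_gem_object *obj)
  {
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          /* Don't block in the shrinker path: skip BOs whose pages_lock
           * is already held instead of closing the cycle reported below. */
          if (!mutex_trylock(&shmem->pages_lock))
                  return false;

          drm_gem_shmem_purge_locked(obj);
          mutex_unlock(&shmem->pages_lock);

          return true;
  }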
WARNING: possible circular locking dependency detected
5.3.0-rc1+ #100 Tainted: G L
------------------------------------------------------
kswapd0/171 is trying to acquire lock:
000000009b9823fd (&shmem->pages_lock){+.+.}, at: drm_gem_shmem_purge+0x20/0x40
but task is already holding lock:
00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}:
fs_reclaim_acquire.part.18+0x34/0x40
fs_reclaim_acquire+0x20/0x28
__kmalloc_node+0x6c/0x4c0
kvmalloc_node+0x38/0xa8
drm_gem_get_pages+0x80/0x1d0
drm_gem_shmem_get_pages+0x58/0xa0
drm_gem_shmem_get_pages_sgt+0x48/0xd0
panfrost_mmu_map+0x38/0xf8 [panfrost]
panfrost_gem_open+0xc0/0xe8 [panfrost]
drm_gem_handle_create_tail+0xe8/0x198
drm_gem_handle_create+0x3c/0x50
panfrost_gem_create_with_handle+0x70/0xa0 [panfrost]
panfrost_ioctl_create_bo+0x48/0x80 [panfrost]
drm_ioctl_kernel+0xb8/0x110
drm_ioctl+0x244/0x3f0
do_vfs_ioctl+0xbc/0x910
ksys_ioctl+0x78/0xa8
__arm64_sys_ioctl+0x1c/0x28
el0_svc_common.constprop.0+0x90/0x168
el0_svc_handler+0x28/0x78
el0_svc+0x8/0xc
-> #0 (&shmem->pages_lock){+.+.}:
__lock_acquire+0xa2c/0x1d70
lock_acquire+0xdc/0x228
__mutex_lock+0x8c/0x800
mutex_lock_nested+0x1c/0x28
drm_gem_shmem_purge+0x20/0x40
panfrost_gem_shrinker_scan+0xc0/0x180 [panfrost]
do_shrink_slab+0x208/0x500
shrink_slab+0x10c/0x2c0
shrink_node+0x28c/0x4d8
balance_pgdat+0x2c8/0x570
kswapd+0x22c/0x638
kthread+0x128/0x130
ret_from_fork+0x10/0x18
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(fs_reclaim);
lock(&shmem->pages_lock);
lock(fs_reclaim);
lock(&shmem->pages_lock);
*** DEADLOCK ***
3 locks held by kswapd0/171:
#0: 00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40
#1: 00000000ceb37808 (shrinker_rwsem){++++}, at: shrink_slab+0xbc/0x2c0
#2: 00000000f31efa81 (&pfdev->shrinker_lock){+.+.}, at: panfrost_gem_shrinker_scan+0x34/0x180 [panfrost]
Fixes: 17acb9f35e ("drm/shmem: Add madvise state and purge helpers")
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823021216.5862-6-robh@kernel.org
If a driver does its own management of pages, the shmem helper object's
pages array could be allocated even when an SG table is not. There's no
good reason to tie putting the pages to having an SG table when freeing
the object, so just put the pages whenever the pages array is populated.
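A minimal sketch of the resulting free path (abbreviated; the vmap and
import-specific teardown the real helper also performs is omitted here):

  static void drm_gem_shmem_free_object(struct drm_gem_object *obj)
  {
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          if (shmem->sgt) {
                  sg_free_table(shmem->sgt);
                  kfree(shmem->sgt);
          }

          /* The pages array can be populated without an SG table when the
           * driver manages its own pages, so key off the array itself. */
          if (shmem->pages)
                  drm_gem_shmem_put_pages(shmem);

          drm_gem_object_release(obj);
          kfree(shmem);
  }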
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190808222200.13176-3-robh@kernel.org
Add support to the shmem GEM helpers for tracking madvise state and
purging pages. This is based on the msm implementation.
The BO provides a list_head, but the list management is handled outside
of the shmem helpers as there are different locking requirements.
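For illustration, a driver-side shrinker scan built on these helpers might
look roughly as follows; the mydrv_* device, its shrinker_lock and
shrinker_list fields are assumptions, since that part is deliberately left
to the driver. The helpers only provide the madvise state, the madv_list
head and the purge/is_purgeable checks.

  static unsigned long
  mydrv_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
  {
          struct mydrv_device *mdev =
                  container_of(shrinker, struct mydrv_device, shrinker);
          struct drm_gem_shmem_object *shmem, *tmp;
          unsigned long freed = 0;

          /* The list and its lock are driver-owned; the shmem helpers
           * stay lock-agnostic. */
          mutex_lock(&mdev->shrinker_lock);
          list_for_each_entry_safe(shmem, tmp, &mdev->shrinker_list, madv_list) {
                  if (freed >= sc->nr_to_scan)
                          break;
                  if (drm_gem_shmem_is_purgeable(shmem)) {
                          drm_gem_shmem_purge(&shmem->base);
                          freed += shmem->base.size >> PAGE_SHIFT;
                          list_del_init(&shmem->madv_list);
                  }
          }
          mutex_unlock(&mdev->shrinker_lock);

          return freed;
  }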
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190805143358.21245-1-robh@kernel.org
Do not rely on including drm.h from drm_file.h,
as the include in drm_file.h will be dropped.
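Concretely, this just means including the uapi header directly where its
definitions are used instead of inheriting it (illustrative ordering):

  /* Pull in the uapi definitions explicitly; no longer implied by drm_file.h. */
  #include <drm/drm.h>

  #include <drm/drm_file.h>
  #include <drm/drm_gem_shmem_helper.h>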
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Reviewed-by: Sean Paul <sean@poorly.run>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190718161507.2047-8-sam@ravnborg.org
Right now, the BO is mapped as a cached region when ->vmap() is called
and the underlying object is not a dmabuf.
Doing that makes cache management a bit more complicated (you'd need
to call dma_map/unmap_sg() on the ->sgt field every time the BO is about
to be passed to the GPU/CPU), so let's map the BO with writecombine
attributes instead (as done in most drivers).
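A sketch of the affected vmap path with write-combine attributes
(abbreviated; error handling dropped):

  /* Non-imported BOs: map write-combined so no dma_map/unmap_sg() cache
   * maintenance is needed every time the BO changes hands. */
  if (obj->import_attach)
          shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
  else
          shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
                              VM_MAP, pgprot_writecombine(PAGE_KERNEL));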
Fixes: 2194a63a81 ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190529065121.13485-1-boris.brezillon@collabora.com
The shmem->pages[] array has "num_pages" elements, so the > should be >=
to prevent reading beyond the end of the array. The shmem->pages[]
array is allocated in drm_gem_shmem_prime_import_sg_table().
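The corrected check in the fault handler then reads roughly like this
(sketch; the surrounding handler code is abbreviated):

  loff_t num_pages = obj->size >> PAGE_SHIFT;

  /* pages[] holds num_pages entries, so pgoff == num_pages is already
   * past the end; reject it with >= rather than >. */
  if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
          return VM_FAULT_SIGBUS;

  return vmf_insert_page(vma, vmf->address, shmem->pages[vmf->pgoff]);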
Fixes: 2194a63a81 ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322064125.GA12551@kadam
This adds a library for shmem backed GEM objects.
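For context, a driver is expected to consume the library roughly as below;
the mydrv_* ioctl and its args struct are hypothetical, while
drm_gem_shmem_create_with_handle() is the exported helper. The version
changelog follows.

  static int mydrv_ioctl_create_bo(struct drm_device *dev, void *data,
                                   struct drm_file *file)
  {
          struct mydrv_create_bo *args = data;  /* hypothetical uapi struct */
          struct drm_gem_shmem_object *shmem;

          /* Allocates a shmem-backed BO and installs a handle for userspace. */
          shmem = drm_gem_shmem_create_with_handle(file, dev, args->size,
                                                   &args->handle);
          return PTR_ERR_OR_ZERO(shmem);
  }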
v8:
- export drm_gem_shmem_create_with_handle
- call mapping_set_gfp_mask to set default zone to GFP_HIGHUSER
- Add helper drm_gem_shmem_get_pages_sgt()
v7:
- Use write-combine for mmap instead. This is the more common
case. (robher)
v6:
- Fix uninitialized variable issue in an error path (anholt).
- Add a drm_gem_shmem_vm_open() to the fops to get proper refcounting
of the pages (anholt).
v5:
- Drop drm_gem_shmem_prime_mmap() (Daniel Vetter)
- drm_gem_shmem_mmap(): Subtract drm_vma_node_start() to get the real
vma->vm_pgoff
- drm_gem_shmem_fault(): Use vmf->pgoff now that vma->vm_pgoff is correct
v4:
- Drop cache modes (Thomas Hellstrom)
- Add a GEM attached vtable
v3:
- Grammar (Sam Ravnborg)
- s/drm_gem_shmem_put_pages_unlocked/drm_gem_shmem_put_pages_locked/
(Sam Ravnborg)
- Add debug output in error path (Sam Ravnborg)
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20190313004344.24169-1-robh@kernel.org