Commit Graph

41 Commits

Author SHA1 Message Date
Daniel Vetter 804b6e5ee6 drm/shmem-helpers: Allocate wc pages on x86
intel-gfx-ci realized that something is not quite coherent anymore on
some platforms for our i915+vgem tests, when I tried to switch vgem
over to shmem helpers.

After lots of head-scratching I realized that I had removed the calls to
drm_clflush, and we need those. To make this a bit cleaner, use the
same page allocation tooling as ttm, which internally does clflush
(and more, as needed on any platform, instead of just the Intel x86
CPUs i915 can be combined with).

Unfortunately this doesn't exist on ARM, or as a generic feature. For
that I think only the DMA API can get at WC memory reliably, so maybe
we'd need some kind of GFP_WC flag to do this properly.
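
Roughly, the x86 side has the following shape (a minimal sketch with a
hypothetical helper name, assuming the drm_gem_shmem_object layout of this
period; not the actual hunk):

#include <drm/drm_gem_shmem_helper.h>
#ifdef CONFIG_X86
#include <asm/set_memory.h>
#endif

/*
 * Sketch: after the pages array has been filled, flip the pages to
 * write-combined on x86 so CPU writes are coherent with the device
 * without manual clflush calls. On other architectures this compiles
 * to nothing, which is exactly the gap described above.
 */
static void shmem_set_pages_wc_sketch(struct drm_gem_shmem_object *shmem)
{
#ifdef CONFIG_X86
        if (shmem->map_wc)
                set_pages_array_wc(shmem->pages,
                                   shmem->base.size >> PAGE_SHIFT);
#endif
}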

v2: Add a TODO comment about what should be done to support this in
other places (Thomas)

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210812131412.2487363-3-daniel.vetter@ffwll.ch
2021-08-12 21:41:10 +02:00
Daniel Vetter 8b93d1d7db drm/shmem-helper: Switch to vmf_insert_pfn
We want to stop gup, which isn't the case if we use vmf_insert_page
and VM_MIXEDMAP, because that does not set pte_special.

The motivation here is to stop get_user_pages from working on buffer
object mmaps in general. Quoting some discussion with Thomas:

On Thu, Jul 22, 2021 at 08:22:43PM +0200, Thomas Zimmermann wrote:
> Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> > We want to stop gup, which isn't the case if we use vmf_insert_page
>
> What is gup?

get_user_pages. It pins memory wherever it is, which badly wrecks at least
ttm and could also cause trouble with cma allocations. In both cases
because we can't move/reuse these pages anymore.

Now get_user_pages fails when the memory isn't considered "normal", like
with VM_PFNMAP and using vm_insert_pfn. For consistency across all dma-buf
I'm trying (together with Christian König) to roll this out everywhere,
for fewer surprises.

E.g. for 5.14 iirc we merged a patch to do the same for ttm, where it
closes an actual bug (ttm gets really badly confused when there are suddenly
pinned pages where it thought it could move them).

cma allocations already use VM_PFNMAP (because that's what dma_mmap is
using underneath), as is anything that's using remap_pfn_range. Worst case
we have to revert this patch for shmem helpers if it breaks something, but
I hope that's not the case. On the ttm side we've also had some fallout
that we needed to paper over with clever tricks.
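
The core of the switch looks roughly like this (a minimal sketch with a
hypothetical function name, assuming the fault handler and mmap setup of the
shmem helpers at this time; not the actual diff):

#include <drm/drm_gem_shmem_helper.h>
#include <linux/mm.h>

/*
 * Sketch: insert a PFN (not a struct page) on fault. Combined with
 * VM_PFNMAP on the vma this keeps pte_special set, so
 * get_user_pages() refuses to pin these pages.
 */
static vm_fault_t shmem_fault_pfn_sketch(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        struct drm_gem_shmem_object *shmem =
                to_drm_gem_shmem_obj(vma->vm_private_data);
        pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

        /* the mmap hook sets VM_PFNMAP instead of VM_MIXEDMAP */
        return vmf_insert_pfn(vma, vmf->address,
                              page_to_pfn(shmem->pages[page_offset]));
}
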
v2: With this shmem gem helpers now definitely need CONFIG_MMU (0day)

v3: add more depends on MMU. For usb drivers this is a bit awkward,
but really it's correct: to be able to provide a contiguous mapping of
buffers to userspace on !MMU platforms we'd need to use the cma
helpers for these drivers on those platforms. As-is this won't work.

Also not exactly sure why vm_insert_page doesn't go boom, because that
definitely won't fly in practice since the pages are non-contiguous to
begin with.

v4: Explain the entire motivation a lot more (Thomas)

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210812131412.2487363-2-daniel.vetter@ffwll.ch
2021-08-12 21:41:10 +02:00
Cai Huoqing 0ae865ef92 drm: Fix typo in comments
Fix a typo in the drm comments.

v1->v2:
respin with the change "iff ==> implies that"

Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210730132729.376-1-caihuoqing@baidu.com
2021-08-02 10:19:43 +02:00
Daniel Vetter 35d283658a drm/shmem-helper: Align to page size in dumb_create
The shmem helpers seem a bit sloppy here, automatically rounding up when
actually creating the buffer, which results in under-reporting of what
we actually have. Caught by the igt/vgem_basic tests.
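
A minimal sketch of the sizing logic (hypothetical function name; the real
change lives in drm_gem_shmem_dumb_create()):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <uapi/drm/drm_mode.h>

/*
 * Sketch: round the requested size up to a whole page before the
 * buffer is created, so the size reported back to userspace matches
 * what actually gets allocated.
 */
static void dumb_create_size_sketch(struct drm_mode_create_dumb *args)
{
        u32 min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);

        if (!args->pitch || !args->size) {
                args->pitch = min_pitch;
                args->size = PAGE_ALIGN(args->pitch * args->height);
        } else {
                /* ensure sane minimum values */
                if (args->pitch < min_pitch)
                        args->pitch = min_pitch;
                if (args->size < args->pitch * args->height)
                        args->size = PAGE_ALIGN(args->pitch * args->height);
        }
        /* ... then create the object with args->size bytes ... */
}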

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210603164113.1433476-4-daniel.vetter@ffwll.ch
2021-07-13 15:44:15 +02:00
Noralf Trønnes 64e194e278 drm/shmem-helpers: vunmap: Don't put pages for dma-buf
dma-buf importing was reworked in commit 7d2cd72a9a
("drm/shmem-helpers: Simplify dma-buf importing"). Before that commit,
drm_gem_shmem_prime_import_sg_table() set ->pages_use_count=1 and
drm_gem_shmem_vunmap_locked() could call drm_gem_shmem_put_pages()
unconditionally. Now, without the use count set, put_pages is also called
on dma-bufs. Fix this by only putting pages if the buffer is not imported.
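
The fix boils down to something like this (a minimal sketch with a
hypothetical helper name, showing only the vunmap tail):

#include <drm/drm_gem_shmem_helper.h>

/*
 * Sketch: only native shmem objects took a pages_use_count reference,
 * so only they may put their pages when the vmap goes away. Imported
 * dma-bufs are unmapped through the exporter instead.
 */
static void shmem_vunmap_tail_sketch(struct drm_gem_shmem_object *shmem)
{
        shmem->vaddr = NULL;

        if (!shmem->base.import_attach)
                drm_gem_shmem_put_pages(shmem);
}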

Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Fixes: 7d2cd72a9a ("drm/shmem-helpers: Simplify dma-buf importing")
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Tested-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20210219122203.51130-1-noralf@tronnes.org
(cherry picked from commit cdea72518a)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
2021-03-11 11:11:33 +01:00
Neil Roberts 11d5a4745e drm/shmem-helper: Don't remove the offset in vm_area_struct pgoff
When mmapping the shmem, it would previously adjust the pgoff in the
vm_area_struct to remove the fake offset that is added to be able to
identify the buffer. This patch removes the adjustment and makes the
fault handler use the vm_fault address to calculate the page offset
instead. Although using this address is apparently discouraged, several
DRM drivers seem to be doing it anyway.

The problem with removing the pgoff is that it prevents
drm_vma_node_unmap from working because that searches the mapping tree
by address. That doesn't work because all of the mappings are at offset
0. drm_vma_node_unmap is being used by the shmem helpers when purging
the buffer.

This fixes a bug in Panfrost, which uses drm_gem_shmem_purge. Without
this, the mapping for a purged buffer can still be accessed, which means
it might access random pages from other buffers.

v2: Don't check whether the unsigned page_offset is less than 0.

Cc: stable@vger.kernel.org
Fixes: 17acb9f35e ("drm/shmem: Add madvise state and purge helpers")
Signed-off-by: Neil Roberts <nroberts@igalia.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210223155125.199577-3-nroberts@igalia.com
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
2021-03-11 11:11:33 +01:00
Neil Roberts d611b4a090 drm/shmem-helper: Check for purged buffers in fault handler
When a buffer is madvised as not needed and then purged, any attempts to
access the buffer from user-space should cause a bus fault. This patch
adds a check for that.
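
The check in the fault handler ends up looking something like this (a
minimal sketch with a hypothetical name, assuming the shmem helper fields of
this period):

#include <drm/drm_gem_shmem_helper.h>
#include <linux/mm.h>

/*
 * Sketch: refuse the fault with SIGBUS when the page index is out of
 * range, the pages array is gone, or the object was purged
 * (shmem->madv < 0), instead of handing userspace stale memory.
 */
static vm_fault_t shmem_fault_check_sketch(struct drm_gem_shmem_object *shmem,
                                           struct vm_fault *vmf,
                                           pgoff_t page_offset)
{
        loff_t num_pages = shmem->base.size >> PAGE_SHIFT;
        vm_fault_t ret;

        mutex_lock(&shmem->pages_lock);

        if (page_offset >= num_pages ||
            WARN_ON_ONCE(!shmem->pages) ||
            shmem->madv < 0)
                ret = VM_FAULT_SIGBUS;
        else
                ret = vmf_insert_page(vmf->vma, vmf->address,
                                      shmem->pages[page_offset]);

        mutex_unlock(&shmem->pages_lock);

        return ret;
}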

Cc: stable@vger.kernel.org
Fixes: 17acb9f35e ("drm/shmem: Add madvise state and purge helpers")
Signed-off-by: Neil Roberts <nroberts@igalia.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210223155125.199577-2-nroberts@igalia.com
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
2021-03-11 11:11:33 +01:00
Thomas Zimmermann 2f04636f49 drm/shmem-helper: Removed drm_gem_shmem_create_object_cached()
Cached page mappings are now the default for SHMEM GEM objects. Remove
the obsolete create function for cached mappings.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201117133156.26822-3-tzimmermann@suse.de
2020-11-24 09:10:33 +01:00
Thomas Zimmermann 0cf2ef46c6 drm/shmem-helper: Use cached mappings by default
SHMEM-buffer backing storage is allocated from system memory, which is
typically cacheable. The default mode for SHMEM objects is writecombine,
though.

Unify SHMEM semantics by defaulting to cached mappings. The exception
is pages imported via dma-buf. DMA memory is usually not cached.

DRM drivers that require write-combined mappings set the map_wc flag
in struct drm_gem_shmem_object to true. This currently affects lima,
panfrost and v3d.

The drivers mgag200, udl, virtio and vkms continue to use default
shmem mappings.

The drivers cirrus and gm12u320 change caching flags. Both used
writecombine and now switch over to shmem defaults. Both drivers use
SHMEM objects as shadow buffers for internal video memory, so cached
mappings will not affect them negatively.
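
For a driver that still needs write-combining, the opt-in looks roughly like
this (a sketch with a hypothetical driver prefix; lima, panfrost and v3d do
the equivalent in their gem_create_object hooks):

#include <linux/slab.h>
#include <drm/drm_gem_shmem_helper.h>

/*
 * Sketch: a driver's gem_create_object hook opting back into
 * write-combined CPU mappings via the new map_wc flag.
 */
static struct drm_gem_object *mydrv_gem_create_object(struct drm_device *dev,
                                                      size_t size)
{
        struct drm_gem_shmem_object *shmem;

        shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
        if (!shmem)
                return NULL;

        /* this device does not snoop CPU caches */
        shmem->map_wc = true;

        return &shmem->base;
}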

v3:
	* set value of shmem pointer before dereferencing it in
	  __drm_gem_shmem_create() (Dan, kernel test robot)
v2:
	* recreate patch on top of latest SHMEM helpers
	* update lima, panfrost, v3d to select writecombine (Daniel, Rob)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201117133156.26822-2-tzimmermann@suse.de
2020-11-24 09:10:21 +01:00
Thomas Zimmermann 49a3f51dfe drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
This patch replaces the use of raw pointers in the GEM objects' vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
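
After the conversion, a backend's vmap callback has roughly this shape (a
minimal sketch with hypothetical driver names; the mapping helper is a
stand-in, not a real API):

#include <linux/dma-buf-map.h>
#include <drm/drm_gem.h>

/* hypothetical driver helper that maps the buffer into kernel space */
void *mydrv_map_buffer(struct drm_gem_object *obj);

/*
 * Sketch: instead of returning a raw pointer, the callback fills in a
 * struct dma_buf_map, which can describe system as well as I/O memory,
 * and returns an errno code.
 */
static int mydrv_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
        void *vaddr = mydrv_map_buffer(obj);

        if (!vaddr)
                return -ENOMEM;

        dma_buf_map_set_vaddr(map, vaddr);      /* system memory mapping */

        return 0;
}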

v7:
	* init QXL cursor to mapped BO buffer (kernel test robot)
v5:
	* update vkms after switch to shmem
v4:
	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
	* fix a trailing { in drm_gem_vmap()
	* remove several empty functions instead of converting them (Daniel)
	* comment uses of raw pointers with a TODO (Daniel)
	* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201103093015.1063-7-tzimmermann@suse.de
2020-11-09 09:19:24 +01:00
Maxime Ripard c489573b5b Merge drm/drm-next into drm-misc-next
Daniel needs -rc2 in drm-misc-next to merge some patches

Signed-off-by: Maxime Ripard <maxime@cerno.tech>
2020-11-02 11:17:54 +01:00
Daniel Vetter f49a51bfdc drm/shme-helpers: Fix dma_buf_mmap forwarding bug
When we forward an mmap to the dma_buf exporter, they get to own
everything. Unfortunately drm_gem_mmap_obj() overwrote
vma->vm_private_data after the driver callback, wrecking the
exporter's setup completely. This was noticed because vb2_common_vm_close
blew up on a Mali GPU with panfrost after commit 26d3ac3cb0
("drm/shmem-helpers: Redirect mmap for imported dma-buf").

Unfortunately drm_gem_mmap_obj also acquires a surplus reference that
we need to drop in the shmem helpers, which is a bit of a layering
violation. Maybe the entire dma_buf_mmap forwarding should be pulled
into core gem code.

Note that the only two other drivers which forward mmap in their own
code (etnaviv and exynos) get this somewhat right by overriding the
gem mmap code. But they seem to still have the leak. This might be a
good excuse to move these drivers over to shmem helpers completely.

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Christian König <christian.koenig@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Fixes: 26d3ac3cb0 ("drm/shmem-helpers: Redirect mmap for imported dma-buf")
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: dri-devel@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: <stable@vger.kernel.org> # v5.9+
Reported-and-tested-by: piotr.oniszczuk@gmail.com
Cc: piotr.oniszczuk@gmail.com
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201027214922.3566743-1-daniel.vetter@ffwll.ch
2020-10-28 12:27:41 +01:00
Thomas Zimmermann 20e76f1a70 dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
This patch updates dma_buf_vunmap() and dma-buf's vunmap callback to
use struct dma_buf_map. The interfaces used to receive a buffer address.
This address is now given in an instance of the structure.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.

v2:
	* include dma-buf-heaps and i915 selftests (kernel test robot)
	* initialize cma_obj before using it in drm_gem_cma_free_object()
	  (kernel test robot)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-4-tzimmermann@suse.de
2020-09-29 12:41:21 +02:00
Thomas Zimmermann 6619ccf1bb dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.

The interfaces used to return a buffer address. This address now gets
stored in an instance of the structure that is given as an additional
argument. The functions return an errno code on errors.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.

v3:
	* update fastrpc driver (kernel test robot)
v2:
	* always clear map parameter in dma_buf_vmap() (Daniel)
	* include dma-buf-heaps and i915 selftests (kernel test robot)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-3-tzimmermann@suse.de
2020-09-29 12:40:58 +02:00
Dave Airlie 6ea6be7708 drm-misc-next for 5.10:
UAPI Changes:
 
 Cross-subsystem Changes:
 
 Core Changes:
   - dev: More devm_drm conversions and removal of drm_dev_init
 
 Driver Changes:
   - i915: selftests improvements
   - panfrost: support for Amlogic SoC
   - vc4: one fix

Merge tag 'drm-misc-next-2020-09-21' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.10:

UAPI Changes:

Cross-subsystem Changes:
  - virtio: Merged a PR for patches that will affect drm/virtio

Core Changes:
  - dev: More devm_drm conversions and removal of drm_dev_init
  - atomic: Split out drm_atomic_helper_calc_timestamping_constants of
    drm_atomic_helper_update_legacy_modeset_state
  - ttm: More rework

Driver Changes:
  - i915: selftests improvements
  - panfrost: support for Amlogic SoC
  - vc4: one fix
  - tree-wide: conversions to devm_drm_dev_alloc,
  - ast: simplifications of the atomic modesetting code
  - panfrost: multiple fixes
  - vc4: multiple fixes
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20200921152956.2gxnsdgxmwhvjyut@gilmour.lan
2020-09-23 09:52:24 +10:00
Marek Szyprowski 6c6fa39ca9 drm: core: fix common struct sg_table related issues
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
returns the number of entries created in the DMA address space.
However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and
dma_unmap_sg() must be made with the original number of entries
passed to dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).

It turned out that it was a common mistake to misuse nents and orig_nents
entries, calling DMA-mapping functions with a wrong number of entries or
ignoring the number of mapped entries returned by the dma_map_sg()
function.

To avoid such issues, let's use common dma-mapping wrappers operating
directly on the struct sg_table objects and use scatterlist page
iterators where possible. This, almost always, hides references to the
nents and orig_nents entries, making the code robust, easier to follow
and copy/paste safe.
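
The pattern being rolled out looks roughly like this (a minimal sketch with a
hypothetical function name; error handling trimmed):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Sketch: dma_map_sgtable() records the number of DMA-mapped entries
 * in sgt->nents itself, and the sgtable wrappers use the right entry
 * counts internally, so callers can no longer mix up nents and
 * orig_nents.
 */
static int mydrv_dma_map_sketch(struct device *dev, struct sg_table *sgt)
{
        int ret;

        ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
        if (ret)
                return ret;

        /* ... device uses the buffer ... */

        dma_sync_sgtable_for_cpu(dev, sgt, DMA_BIDIRECTIONAL);
        dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);

        return 0;
}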

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2020-09-10 08:17:48 +02:00
Gerd Hoffmann 707d561f77 drm: allow limiting the scatter list size.
Add a drm_device argument to drm_prime_pages_to_sg(), so we can
call dma_max_mapping_size() to figure out the segment size limit
and call into __sg_alloc_table_from_pages() with the correct
limit.

This fixes virtio-gpu with SEV.  Possibly it'll fix other bugs
too, given that drm seems to totally ignore segment size limits
so far ...

v2: place max_segment in drm driver not gem object.
v3: move max_segment next to the other gem fields.
v4: just use dma_max_mapping_size().

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20200907112425.15610-2-kraxel@redhat.com
2020-09-09 07:58:56 +02:00
Daniel Vetter cfe28f909d drm/shmem-helper: Only dma-buf imports are private obj
I broke that in my refactoring:

commit 7d2cd72a9a
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Fri May 29 16:05:42 2020 +0200

    drm/shmem-helpers: Simplify dma-buf importing

I'm not entirely sure of the history here, but I suspect that in one
of the rebases or when applying the patch I moved the hunk from
drm_gem_shmem_prime_import_sg_table(), where it should be, to
drm_gem_shmem_create_with_handle(), which is totally wrong.

Remedy this.

Thanks to Thomas for the crucial hint in debugging this.

Tested-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Reported-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 7d2cd72a9a ("drm/shmem-helpers: Simplify dma-buf importing")
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200616114723.2363268-1-daniel.vetter@ffwll.ch
2020-06-16 19:11:51 +02:00
Daniel Vetter 5b9f5f11a2 drm/shmem-helper: Fix obj->filp derefence
I broke that in my refactoring:

commit 7d2cd72a9a
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Fri May 29 16:05:42 2020 +0200

    drm/shmem-helpers: Simplify dma-buf importing

Tested-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Reported-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 7d2cd72a9a ("drm/shmem-helpers: Simplify dma-buf importing")
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200615151026.2339113-1-daniel.vetter@ffwll.ch
2020-06-16 19:07:31 +02:00
Thomas Zimmermann d18ee06b48 drm/shmem-helper: Add .gem_create_object helper that sets map_cached flag
The helper drm_gem_shmem_create_object_cached() allocates a GEM SHMEM
object and sets the map_cached flag. Useful for drivers that want cached
mappings.

v3:
	* style fixes

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200609090820.20256-2-tzimmermann@suse.de
2020-06-10 10:16:43 +02:00
Daniel Vetter 7d2cd72a9a drm/shmem-helpers: Simplify dma-buf importing
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means
  fireworks if anyone calls drm_gem_object_get_pages. But we've just
  made sure that's all covered.

v2: Rebase

v3: I forgot to remove the page_count mangling from the free path too.
Noticed by Boris while testing.

Cc: Boris Brezillon <boris.brezillon@collabora.com>
Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529140542.2103713-1-daniel.vetter@ffwll.ch
2020-06-08 17:03:41 +02:00
Daniel Vetter 5264083573 drm/shmem-helpers: Ensure get_pages is not called on imported dma-buf
Just a bit of light paranoia. Also sprinkle this check over
drm_gem_shmem_get_sg_table, which should only be called when
exporting; the same goes for the pin/unpin functions, on which it
relies to work correctly.

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200511093554.211493-9-daniel.vetter@ffwll.ch
2020-06-08 17:03:41 +02:00
Daniel Vetter 26d3ac3cb0 drm/shmem-helpers: Redirect mmap for imported dma-buf
Currently this seems to work by converting the sgt into a pages array,
and then treating it like a native object. Do the right thing and
redirect mmap to the exporter.

With this nothing is calling get_pages anymore on imported dma-buf,
and we can start to remove the use of the ->pages array for that case.
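
The redirect itself is roughly this (a minimal sketch with a hypothetical
name, assuming the import path keeps the attachment in obj->import_attach):

#include <linux/dma-buf.h>
#include <drm/drm_gem_shmem_helper.h>

/*
 * Sketch: hand the whole mmap over to the exporter for imported
 * dma-bufs instead of faulting in a locally built pages array.
 */
static int shmem_mmap_sketch(struct drm_gem_object *obj,
                             struct vm_area_struct *vma)
{
        if (obj->import_attach)
                return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);

        /* ... native shmem path: get pages and rely on the fault handler ... */

        return 0;
}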

v2: Rebase

Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200511093554.211493-8-daniel.vetter@ffwll.ch
2020-06-03 15:07:40 +02:00
Daniel Vetter 0cc5fb4e87 drm/shmem-helpers: Don't call get/put_pages on imported dma-buf in vmap
There's no direct harm, because for the shmem helpers these are noops
on imported buffers. The trouble is in the locks these take - I want
to change dma_buf_vmap locking, and so need to make sure that we only
ever take certain locks on one side of the dma-buf interface: Either
for exporters, or for importers.

v2: Change the control flow less compared to what's there (Thomas)

Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200514202256.490926-1-daniel.vetter@ffwll.ch
2020-06-03 15:06:38 +02:00
Daniel Vetter 0b638559aa drm/doc: Some polish for shmem helpers
- Move the shmem helper section to the drm-mm.rst file, next to the
  vram helpers. Makes a lot more sense there with the now wider scope.
  Also, that's where the all the other backing storage stuff resides.
  It's just the framebuffer helpers that should be in the kms helper
  section.

- Try to clarify which functions are for implementing
  drm_gem_object_funcs, and which are for drivers to call directly. At
  least one driver screwed that up a bit.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200511093554.211493-4-daniel.vetter@ffwll.ch
2020-06-03 14:48:27 +02:00
Emil Velikov be6ee10234 drm: remove _unlocked suffix in drm_gem_object_put_unlocked
Spelling out _unlocked for each and every driver is annoying.
Especially if we consider how many drivers do not know (or need to)
about the horror stories involving struct_mutex.

Just drop the suffix. It makes the API cleaner.

Done via the following script:

__from=drm_gem_object_put_unlocked
__to=drm_gem_object_put
for __file in $(git grep --name-only $__from); do
  sed -i  "s/$__from/$__to/g" $__file;
done

Pay special attention to the compat #define

v2: keep sed and #define removal separate

Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org> (v1)
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20200515095118.2743122-14-emil.l.velikov@gmail.com
2020-05-19 22:31:31 +01:00
Gerd Hoffmann 852d7655ea drm/shmem: drop pgprot_decrypted()
Was added by commit 95cf9264d5 ("x86, drm, fbdev: Do not specify
encrypted memory for video mappings"), then it was kept through various
changes.

While VRAM actually needs decrypted mappings, this is not correct for
shmem gem objects, which live in main memory rather than I/O memory, so
remove the call.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200228104723.18757-1-kraxel@redhat.com
2020-03-02 07:13:19 +01:00
Gerd Hoffmann 1cad629257 drm/shmem: add support for per object caching flags.
Add a map_cached bool to drm_gem_shmem_object, to request cached mappings
on a per-object basis.  Check the flag before adding writecombine to the
pgprot bits.
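
The check amounts to something like this (a minimal sketch with a
hypothetical helper name, assuming the map_cached field added by this
commit):

#include <linux/mm.h>
#include <drm/drm_gem_shmem_helper.h>

/*
 * Sketch: only add write-combining when the object did not ask for
 * cached CPU mappings.
 */
static pgprot_t shmem_page_prot_sketch(struct drm_gem_shmem_object *shmem,
                                       pgprot_t prot)
{
        if (!shmem->map_cached)
                prot = pgprot_writecombine(prot);

        return prot;
}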

Cc: stable@vger.kernel.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Guillaume Gardet <Guillaume.Gardet@arm.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200226154752.24328-2-kraxel@redhat.com
2020-02-27 13:54:38 +01:00
Gerd Hoffmann e551655399 drm: call drm_gem_object_funcs.mmap with fake offset
The fake offset is going to stay, so change the calling convention for
drm_gem_object_funcs.mmap to include the fake offset.  Update all users
accordingly.

Note that this reverts 83b8a6f242 ("drm/gem: Fix mmap fake offset
handling for drm_gem_object_funcs.mmap") and then, on top, adds the fake
offset to drm_gem_prime_mmap to make sure all paths leading to
obj->funcs->mmap are consistent.

v3: move fake-offset tweak in drm_gem_prime_mmap() so we have this code
    only once in the function (Rob Herring).

Fixes: 83b8a6f242 ("drm/gem: Fix mmap fake offset handling for drm_gem_object_funcs.mmap")
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Rob Herring <robh@kernel.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191127092523.5620-2-kraxel@redhat.com
2019-12-06 11:18:11 +01:00
Rob Herring 83b8a6f242 drm/gem: Fix mmap fake offset handling for drm_gem_object_funcs.mmap
Commit c40069cb7b ("drm: add mmap() to drm_gem_object_funcs")
introduced a GEM object mmap() hook which is expected to subtract the
fake offset from vm_pgoff. However, for mmap() on dmabufs, there is not
a fake offset.

To fix this, let's always call the mmap() object callback with an offset of 0,
and leave it up to drm_gem_mmap_obj() to remove the fake offset.

TTM still needs the fake offset, so we have to add it back until that's
fixed.

Fixes: c40069cb7b ("drm: add mmap() to drm_gem_object_funcs")
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024191859.31700-1-robh@kernel.org
2019-10-29 13:29:21 -05:00
Gerd Hoffmann 1bf01e1e35 drm/shmem: drop VM_IO
VM_IO is wrong here: shmem uses normal RAM, not I/O memory.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-5-kraxel@redhat.com
2019-10-17 13:59:16 +02:00
Gerd Hoffmann 5da932604d drm/shmem: drop VM_DONTDUMP
Not obvious why this is needed.  According to Daniel Vetter this is most
likely a historic artefact dating back to the days when drm drivers
exposed hardware registers as mmap'able gem objects, and dumping had to
avoid touching those registers.  shmem gem objects surely don't need that ...

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-4-kraxel@redhat.com
2019-10-17 13:59:16 +02:00
Gerd Hoffmann 0be8958936 drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
Switch gem shmem helper to the new mmap() workflow,
from &gem_driver.fops.mmap to &drm_gem_object_funcs.mmap.

v2: Fix vm_flags and vm_page_prot handling.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-3-kraxel@redhat.com
2019-10-17 13:59:16 +02:00
Rob Herring dfbc7a46b9 drm/shmem: Use mutex_trylock in drm_gem_shmem_purge
Lockdep reports a circular locking dependency with pages_lock taken in
the shrinker callback. The deadlock can't actually happen with current
users, at least, as a BO will never be purgeable when pages_lock is held.
To be safe, let's use mutex_trylock() instead and bail if a BO is locked
already.
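
The resulting purge entry point is roughly (a minimal sketch with a
hypothetical name, assuming the pages_lock and purge helpers of this period):

#include <drm/drm_gem_shmem_helper.h>

/*
 * Sketch: back off instead of sleeping on pages_lock, so a shrinker
 * running from reclaim can never wait on a lock that is held while
 * allocating memory.
 */
static bool shmem_purge_sketch(struct drm_gem_shmem_object *shmem)
{
        if (!mutex_trylock(&shmem->pages_lock))
                return false;   /* busy, let the shrinker move on */

        drm_gem_shmem_purge_locked(&shmem->base);
        mutex_unlock(&shmem->pages_lock);

        return true;
}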

WARNING: possible circular locking dependency detected
5.3.0-rc1+ #100 Tainted: G             L
------------------------------------------------------
kswapd0/171 is trying to acquire lock:
000000009b9823fd (&shmem->pages_lock){+.+.}, at: drm_gem_shmem_purge+0x20/0x40

but task is already holding lock:
00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}:
       fs_reclaim_acquire.part.18+0x34/0x40
       fs_reclaim_acquire+0x20/0x28
       __kmalloc_node+0x6c/0x4c0
       kvmalloc_node+0x38/0xa8
       drm_gem_get_pages+0x80/0x1d0
       drm_gem_shmem_get_pages+0x58/0xa0
       drm_gem_shmem_get_pages_sgt+0x48/0xd0
       panfrost_mmu_map+0x38/0xf8 [panfrost]
       panfrost_gem_open+0xc0/0xe8 [panfrost]
       drm_gem_handle_create_tail+0xe8/0x198
       drm_gem_handle_create+0x3c/0x50
       panfrost_gem_create_with_handle+0x70/0xa0 [panfrost]
       panfrost_ioctl_create_bo+0x48/0x80 [panfrost]
       drm_ioctl_kernel+0xb8/0x110
       drm_ioctl+0x244/0x3f0
       do_vfs_ioctl+0xbc/0x910
       ksys_ioctl+0x78/0xa8
       __arm64_sys_ioctl+0x1c/0x28
       el0_svc_common.constprop.0+0x90/0x168
       el0_svc_handler+0x28/0x78
       el0_svc+0x8/0xc

-> #0 (&shmem->pages_lock){+.+.}:
       __lock_acquire+0xa2c/0x1d70
       lock_acquire+0xdc/0x228
       __mutex_lock+0x8c/0x800
       mutex_lock_nested+0x1c/0x28
       drm_gem_shmem_purge+0x20/0x40
       panfrost_gem_shrinker_scan+0xc0/0x180 [panfrost]
       do_shrink_slab+0x208/0x500
       shrink_slab+0x10c/0x2c0
       shrink_node+0x28c/0x4d8
       balance_pgdat+0x2c8/0x570
       kswapd+0x22c/0x638
       kthread+0x128/0x130
       ret_from_fork+0x10/0x18

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&shmem->pages_lock);
                               lock(fs_reclaim);
  lock(&shmem->pages_lock);

 *** DEADLOCK ***

3 locks held by kswapd0/171:
 #0: 00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40
 #1: 00000000ceb37808 (shrinker_rwsem){++++}, at: shrink_slab+0xbc/0x2c0
 #2: 00000000f31efa81 (&pfdev->shrinker_lock){+.+.}, at: panfrost_gem_shrinker_scan+0x34/0x180 [panfrost]

Fixes: 17acb9f35e ("drm/shmem: Add madvise state and purge helpers")
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823021216.5862-6-robh@kernel.org
2019-08-23 11:54:45 -05:00
Rob Herring 4fa3d66f13 drm/shmem: Do dma_unmap_sg before purging pages
Calling dma_unmap_sg() in drm_gem_shmem_free_object() is too late if the
backing pages have already been released by the shrinker. The result is
the following abort:

Unable to handle kernel paging request at virtual address ffff8000098ed000
Mem abort info:
  ESR = 0x96000147
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000147
  CM = 1, WnR = 1
swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000002f51000
[ffff8000098ed000] pgd=00000000401f8003, pud=00000000401f7003, pmd=00000000401b1003, pte=00e80000098ed712
Internal error: Oops: 96000147 [#1] SMP
Modules linked in: panfrost gpu_sched
CPU: 5 PID: 902 Comm: gnome-shell Not tainted 5.3.0-rc1+ #95
Hardware name: 96boards Rock960 (DT)
pstate: 40000005 (nZcv daif -PAN -UAO)
pc : __dma_inv_area+0x40/0x58
lr : arch_sync_dma_for_cpu+0x28/0x30
sp : ffff00001321ba30
x29: ffff00001321ba30 x28: ffff00001321bd08
x27: 0000000000000000 x26: 0000000000000009
x25: 0000ffffc1f86170 x24: 0000000000000000
x23: 0000000000000000 x22: 0000000000000000
x21: 0000000000021000 x20: ffff80003bb2d810
x19: 00000000098ed000 x18: 0000000000000000
x17: 0000000000000000 x16: ffff800023fd9480
x15: 0000000000000000 x14: 0000000000000000
x13: 0000000000000000 x12: 0000000000000000
x11: 00000000fffb9fff x10: 0000000000000000
x9 : 0000000000000000 x8 : ffff800023fd9c18
x7 : 0000000000000000 x6 : 00000000ffffffff
x5 : 0000000000000000 x4 : 0000000000021000
Purging 5693440 bytes
x3 : 000000000000003f x2 : 0000000000000040
x1 : ffff80000990e000 x0 : ffff8000098ed000
Call trace:
 __dma_inv_area+0x40/0x58
 dma_direct_sync_single_for_cpu+0x7c/0x80
 dma_direct_unmap_page+0x80/0x88
 dma_direct_unmap_sg+0x54/0x80
 drm_gem_shmem_free_object+0xfc/0x108
 panfrost_gem_free_object+0x118/0x128 [panfrost]
 drm_gem_object_free+0x18/0x90
 drm_gem_object_put_unlocked+0x58/0x80
 drm_gem_object_handle_put_unlocked+0x64/0xb0
 drm_gem_object_release_handle+0x70/0x98
 drm_gem_handle_delete+0x64/0xb0
 drm_gem_close_ioctl+0x28/0x38
 drm_ioctl_kernel+0xb8/0x110
 drm_ioctl+0x244/0x3f0
 do_vfs_ioctl+0xbc/0x910
 ksys_ioctl+0x78/0xa8
 __arm64_sys_ioctl+0x1c/0x28
 el0_svc_common.constprop.0+0x88/0x150
 el0_svc_handler+0x28/0x78
 el0_svc+0x8/0xc
 Code: 8a230000 54000060 d50b7e20 14000002 (d5087620)

Fixes: 17acb9f35e ("drm/shmem: Add madvise state and purge helpers")
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823021216.5862-5-robh@kernel.org
2019-08-23 11:54:40 -05:00
Rob Herring 3bf5189d93 drm/shmem: Put pages independent of a SG table being set
If a driver does its own management of pages, the shmem helper object's
pages array could be allocated when an SG table is not. There's not
really any good reason to tie putting the pages to having an SG table when
freeing the object, so just put the pages if the pages array is populated.

Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190808222200.13176-3-robh@kernel.org
2019-08-12 14:18:42 -06:00
Rob Herring 17acb9f35e drm/shmem: Add madvise state and purge helpers
Add support to the shmem GEM helpers for tracking madvise state and
purging pages. This is based on the msm implementation.

The BO provides a list_head, but the list management is handled outside
of the shmem helpers as there are different locking requirements.

Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190805143358.21245-1-robh@kernel.org
2019-08-08 15:54:10 -06:00
Sam Ravnborg d3ea256aa4 drm: direct include of drm.h in drm_gem_shmem_helper.c
Do not rely on including drm.h from drm_file.h,
as the include in drm_file.h will be dropped.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Reviewed-by: Sean Paul <sean@poorly.run>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190718161507.2047-8-sam@ravnborg.org
2019-07-19 23:24:17 +02:00
Boris Brezillon be7d9f05c5 drm/gem_shmem: Use a writecombine mapping for ->vaddr
Right now, the BO is mapped as a cached region when ->vmap() is called
and the underlying object is not a dmabuf.
Doing that makes cache management a bit more complicated (you'd need
to call dma_map/unmap_sg() on the ->sgt field every time the BO is about
to be passed to the GPU/CPU), so let's map the BO with writecombine
attributes instead (as done in most drivers).

Fixes: 2194a63a81 ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190529065121.13485-1-boris.brezillon@collabora.com
2019-06-10 09:14:01 -06:00
Dan Carpenter 3a3fe6e766 drm: shmem: Off by one in drm_gem_shmem_fault()
The shmem->pages[] array has "num_pages" elements, so the > should be >=
to prevent reading beyond the end of the array.  The shmem->pages[]
array is allocated in drm_gem_shmem_prime_import_sg_table().

Fixes: 2194a63a81 ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322064125.GA12551@kadam
2019-04-01 10:44:34 -07:00
Noralf Trønnes 2194a63a81 drm: Add library for shmem backed GEM objects
This adds a library for shmem backed GEM objects.

v8:
- export drm_gem_shmem_create_with_handle
- call mapping_set_gfp_mask to set default zone to GFP_HIGHUSER
- Add helper drm_gem_shmem_get_pages_sgt()

v7:
- Use write-combine for mmap instead. This is the more common
  case. (robher)

v6:
- Fix uninitialized variable issue in an error path (anholt).
- Add a drm_gem_shmem_vm_open() to the fops to get proper refcounting
  of the pages (anholt).

v5:
- Drop drm_gem_shmem_prime_mmap() (Daniel Vetter)
- drm_gem_shmem_mmap(): Subtract drm_vma_node_start() to get the real
  vma->vm_pgoff
- drm_gem_shmem_fault(): Use vmf->pgoff now that vma->vm_pgoff is correct

v4:
- Drop cache modes (Thomas Hellstrom)
- Add a GEM attached vtable

v3:
- Grammar (Sam Ravnborg)
- s/drm_gem_shmem_put_pages_unlocked/drm_gem_shmem_put_pages_locked/
  (Sam Ravnborg)
- Add debug output in error path (Sam Ravnborg)

Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20190313004344.24169-1-robh@kernel.org
2019-03-14 12:06:44 -07:00