drm/i915: fall through pwrite_gtt_slow to the shmem slow path

The gtt_pwrite slowpath grabs the userspace memory with
get_user_pages. This will not work for non-page-backed memory, like a
gtt mmapped gem object. Hence fall through to the shmem paths if we hit
-EFAULT in the gtt paths.

Now the shmem paths have exactly the same problem, but this way we
only need to rearrange the code in one write path.
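
For illustration only, here is a minimal, self-contained sketch of the control
flow the patch aims for; the helper names (gtt_pwrite, shmem_pwrite) and types
are hypothetical stand-ins, not the actual i915 functions:

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct gem_obj;	/* stand-in for struct drm_i915_gem_object */

/* Hypothetical write paths: the gtt path may rely on get_user_pages(),
 * the shmem path copies through the CPU mapping instead. */
int gtt_pwrite(struct gem_obj *obj, const void *user_buf, size_t len);
int shmem_pwrite(struct gem_obj *obj, const void *user_buf, size_t len);

int pwrite_with_fallback(struct gem_obj *obj, const void *user_buf,
			 size_t len, bool bound_in_gtt)
{
	int ret;

	if (bound_in_gtt) {
		ret = gtt_pwrite(obj, user_buf, len);
		if (ret != -EFAULT)
			return ret;
		/* -EFAULT can mean user_buf is not page-backed (e.g. a
		 * gtt mmap), so retry through the shmem path below. */
	}

	return shmem_pwrite(obj, user_buf, len);
}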

v2: v1 accidentally fell back to shmem pwrite for phys objects. Fixed.

v3: Make the code flow around phys_pwrite clearer, as suggested by Chris
Wilson.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Daniel Vetter 2011-12-14 13:57:30 +01:00
parent ea16a3cdb9
commit 5c0480f21f
1 changed file with 21 additions and 12 deletions

@@ -996,9 +996,12 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
* pread/pwrite currently are reading and writing from the CPU
* perspective, requiring manual detiling by the client.
*/
if (obj->phys_obj)
if (obj->phys_obj) {
ret = i915_gem_phys_pwrite(dev, obj, args, file);
else if (obj->gtt_space &&
goto out;
}
if (obj->gtt_space &&
obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
ret = i915_gem_object_pin(obj, 0, true);
if (ret)
@@ -1018,7 +1021,14 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
out_unpin:
i915_gem_object_unpin(obj);
} else {
if (ret != -EFAULT)
goto out;
/* Fall through to the shmfs paths because the gtt paths might
* fail with non-page-backed user pointers (e.g. gtt mappings
* when moving data between textures). */
}
ret = i915_gem_object_set_to_cpu_domain(obj, 1);
if (ret)
goto out;
@@ -1028,7 +1038,6 @@ out_unpin:
ret = i915_gem_shmem_pwrite_fast(dev, obj, args, file);
if (ret == -EFAULT)
ret = i915_gem_shmem_pwrite_slow(dev, obj, args, file);
}
out:
drm_gem_object_unreference(&obj->base);