Merge tag 'drm-misc-next-2018-11-21' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v4.21, part 2:

UAPI Changes:
- Remove syncobj timeline support from drm.

Cross-subsystem Changes:
- Document canvas provider node in the DT bindings.
- Improve documentation for TPO TPG110 DT bindings.

Core Changes:
- Use explicit state in drm atomic functions.
- Add panel quirk for new GPD Win2 firmware.
- Add DRM_FORMAT_XYUV8888.
- Set the default import/export functions in prime to drm_gem_prime_import/export.
- Add a separate drm_gem_object_funcs to stop relying on the dev->driver->*gem* functions.
- Make sure that tinydrm also sets the virtual address on imported buffers.

Driver Changes:
- Support active-low data enable signal in sun4i.
- Fix scaling in vc4.
- Use canvas provider node in meson.
- Remove unused variables in sti, qxl and cirrus.
- Add overlay plane support and primary plane scaling to meson.
- i2c fixes in drm/bridge/sii902x.
- Fix mailbox read size in rockchip.
- Spelling fix in panel/s6d16d0.
- Remove unnecessary null check from qxl_bo_unref.
- Remove unused arguments from qxl_bo_pin.
- Fix qxl cursor pinning.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/9c0409e3-a85f-d2af-b4eb-baf1eb8bbae4@linux.intel.com
Dave Airlie 2018-11-22 12:54:33 +10:00
commit b239499f92
67 changed files with 2324 additions and 849 deletions


@ -67,6 +67,8 @@ Required properties:
Optional properties:
- power-domains: Optional phandle to associated power domain as described in
the file ../power/power_domain.txt
- amlogic,canvas: phandle to canvas provider node as described in the file
../soc/amlogic/amlogic,canvas.txt
Required nodes:


@ -1,47 +1,70 @@
TPO TPG110 Panel
================
This binding builds on the DPI bindings, adding a few properties
as a superset of a DPI. See panel-dpi.txt for the required DPI
bindings.
This panel driver is a component that acts as an intermediary
between an RGB output and a variety of panels. The panel
driver is strapped up in electronics to the desired resolution
and other properties, and has a control interface over 3WIRE
SPI. By talking to the TPG110 over SPI, the strapped properties
can be discovered and the hardware is therefore mostly
self-describing.
+--------+
SPI -> | TPO | -> physical display
RGB -> | TPG110 |
+--------+
If some electrical strap or alternate resolution is desired,
this can be set up by taking software control of the display
over the SPI interface. The interface can also adjust
for properties of the display such as gamma correction and
certain electrical driving levels.
The TPG110 does not know the physical dimensions of the panel
connected, so this needs to be specified in the device tree.
It requires a GPIO line for control of its reset line.
The serial protocol has line names that resemble I2C but the
protocol is not I2C but 3WIRE SPI.
Required properties:
- compatible : "tpo,tpg110"
- compatible : one of:
"ste,nomadik-nhk15-display", "tpo,tpg110"
"tpo,tpg110"
- grestb-gpios : panel reset GPIO
- scen-gpios : serial control enable GPIO
- scl-gpios : serial control clock line GPIO
- sda-gpios : serial control data line GPIO
- width-mm : see display/panel/panel-common.txt
- height-mm : see display/panel/panel-common.txt
Required nodes:
- Video port for DPI input, see panel-dpi.txt
- Panel timing for DPI setup, see panel-dpi.txt
The device needs to be a child of an SPI bus, see
spi/spi-bus.txt. The SPI child must set the following
properties:
- spi-3wire
- spi-max-frequency = <3000000>;
as these are characteristics of this device.
The device node can contain one 'port' child node with one child
'endpoint' node, according to the bindings defined in
media/video-interfaces.txt. This node should describe panel's video bus.
Example
-------
panel {
compatible = "tpo,tpg110", "panel-dpi";
grestb-gpios = <&stmpe_gpio44 5 GPIO_ACTIVE_LOW>;
scen-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
scl-gpios = <&gpio0 5 GPIO_ACTIVE_HIGH>;
sda-gpios = <&gpio0 4 GPIO_ACTIVE_HIGH>;
panel: display@0 {
compatible = "tpo,tpg110";
reg = <0>;
spi-3wire;
/* 320 ns min period ~= 3 MHz */
spi-max-frequency = <3000000>;
/* Width and height from data sheet */
width-mm = <116>;
height-mm = <87>;
grestb-gpios = <&foo_gpio 5 GPIO_ACTIVE_LOW>;
backlight = <&bl>;
port {
nomadik_clcd_panel: endpoint {
remote-endpoint = <&nomadik_clcd_pads>;
remote-endpoint = <&foo>;
};
};
panel-timing {
clock-frequency = <33200000>;
hactive = <800>;
hback-porch = <216>;
hfront-porch = <40>;
hsync-len = <1>;
vactive = <480>;
vback-porch = <35>;
vfront-porch = <10>;
vsync-len = <1>;
};
};


@ -234,6 +234,19 @@ efficient.
Contact: Daniel Vetter
Defaults for .gem_prime_import and export
-----------------------------------------
Most drivers don't need to set drm_driver->gem_prime_import and
->gem_prime_export now that drm_gem_prime_import() and drm_gem_prime_export()
are the default.
struct drm_gem_object_funcs
---------------------------
GEM objects can now have a function table instead of having the callbacks on the
DRM driver struct. This is now the preferred way and drivers can be moved over.
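As a rough usage sketch (not part of this patch: the callbacks are the CMA
helpers this same series adds in drm_gem_cma_helper.c, and the foo_ prefix
is a placeholder):

#include <drm/drm_gem.h>
#include <drm/drm_gem_cma_helper.h>

/* One table can be shared by all GEM objects of a driver; the core
 * helpers (drm_gem_pin(), drm_gem_vmap(), ...) dispatch through
 * obj->funcs before falling back to the dev->driver->gem_* hooks. */
static const struct drm_gem_object_funcs foo_gem_funcs = {
	.free = drm_gem_cma_free_object,
	.print_info = drm_gem_cma_print_info,
	.get_sg_table = drm_gem_cma_prime_get_sg_table,
	.vmap = drm_gem_cma_prime_vmap,
	.vm_ops = &drm_gem_cma_vm_ops,
};

/* Attach the table once, when the object is created: */
static void foo_gem_object_init(struct drm_gem_object *obj)
{
	obj->funcs = &foo_gem_funcs;
}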
Core refactorings
=================


@ -95,6 +95,7 @@ config DRM_SII902X
depends on OF
select DRM_KMS_HELPER
select REGMAP_I2C
select I2C_MUX
---help---
Silicon Image sii902x bridge chip driver.


@ -1,4 +1,6 @@
/*
* Copyright (C) 2018 Renesas Electronics
*
* Copyright (C) 2016 Atmel
* Bo Shen <voice.shen@atmel.com>
*
@ -21,6 +23,7 @@
*/
#include <linux/gpio/consumer.h>
#include <linux/i2c-mux.h>
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/regmap.h>
@ -86,8 +89,49 @@ struct sii902x {
struct drm_bridge bridge;
struct drm_connector connector;
struct gpio_desc *reset_gpio;
struct i2c_mux_core *i2cmux;
};
static int sii902x_read_unlocked(struct i2c_client *i2c, u8 reg, u8 *val)
{
union i2c_smbus_data data;
int ret;
ret = __i2c_smbus_xfer(i2c->adapter, i2c->addr, i2c->flags,
I2C_SMBUS_READ, reg, I2C_SMBUS_BYTE_DATA, &data);
if (ret < 0)
return ret;
*val = data.byte;
return 0;
}
static int sii902x_write_unlocked(struct i2c_client *i2c, u8 reg, u8 val)
{
union i2c_smbus_data data;
data.byte = val;
return __i2c_smbus_xfer(i2c->adapter, i2c->addr, i2c->flags,
I2C_SMBUS_WRITE, reg, I2C_SMBUS_BYTE_DATA,
&data);
}
static int sii902x_update_bits_unlocked(struct i2c_client *i2c, u8 reg, u8 mask,
u8 val)
{
int ret;
u8 status;
ret = sii902x_read_unlocked(i2c, reg, &status);
if (ret)
return ret;
status &= ~mask;
status |= val & mask;
return sii902x_write_unlocked(i2c, reg, status);
}
static inline struct sii902x *bridge_to_sii902x(struct drm_bridge *bridge)
{
return container_of(bridge, struct sii902x, bridge);
@ -135,41 +179,11 @@ static const struct drm_connector_funcs sii902x_connector_funcs = {
static int sii902x_get_modes(struct drm_connector *connector)
{
struct sii902x *sii902x = connector_to_sii902x(connector);
struct regmap *regmap = sii902x->regmap;
u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24;
struct device *dev = &sii902x->i2c->dev;
unsigned long timeout;
unsigned int retries;
unsigned int status;
struct edid *edid;
int num = 0;
int ret;
int num = 0, ret;
ret = regmap_update_bits(regmap, SII902X_SYS_CTRL_DATA,
SII902X_SYS_CTRL_DDC_BUS_REQ,
SII902X_SYS_CTRL_DDC_BUS_REQ);
if (ret)
return ret;
timeout = jiffies +
msecs_to_jiffies(SII902X_I2C_BUS_ACQUISITION_TIMEOUT_MS);
do {
ret = regmap_read(regmap, SII902X_SYS_CTRL_DATA, &status);
if (ret)
return ret;
} while (!(status & SII902X_SYS_CTRL_DDC_BUS_GRTD) &&
time_before(jiffies, timeout));
if (!(status & SII902X_SYS_CTRL_DDC_BUS_GRTD)) {
dev_err(dev, "failed to acquire the i2c bus\n");
return -ETIMEDOUT;
}
ret = regmap_write(regmap, SII902X_SYS_CTRL_DATA, status);
if (ret)
return ret;
edid = drm_get_edid(connector, sii902x->i2c->adapter);
edid = drm_get_edid(connector, sii902x->i2cmux->adapter[0]);
drm_connector_update_edid_property(connector, edid);
if (edid) {
num = drm_add_edid_modes(connector, edid);
@ -181,42 +195,6 @@ static int sii902x_get_modes(struct drm_connector *connector)
if (ret)
return ret;
/*
* Sometimes the I2C bus can stall after failure to use the
* EDID channel. Retry a few times to see if things clear
* up, else continue anyway.
*/
retries = 5;
do {
ret = regmap_read(regmap, SII902X_SYS_CTRL_DATA,
&status);
retries--;
} while (ret && retries);
if (ret)
dev_err(dev, "failed to read status (%d)\n", ret);
ret = regmap_update_bits(regmap, SII902X_SYS_CTRL_DATA,
SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD, 0);
if (ret)
return ret;
timeout = jiffies +
msecs_to_jiffies(SII902X_I2C_BUS_ACQUISITION_TIMEOUT_MS);
do {
ret = regmap_read(regmap, SII902X_SYS_CTRL_DATA, &status);
if (ret)
return ret;
} while (status & (SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD) &&
time_before(jiffies, timeout));
if (status & (SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD)) {
dev_err(dev, "failed to release the i2c bus\n");
return -ETIMEDOUT;
}
return num;
}
@ -366,6 +344,121 @@ static irqreturn_t sii902x_interrupt(int irq, void *data)
return IRQ_HANDLED;
}
/*
* The purpose of sii902x_i2c_bypass_select is to enable the pass through
* mode of the HDMI transmitter. Do not use regmap from within this function,
* only use sii902x_*_unlocked functions to read/modify/write registers.
* We are holding the parent adapter lock here, keep this in mind before
* adding more i2c transactions.
*
* Also, since SII902X_SYS_CTRL_DATA is used with regmap_update_bits elsewhere
* in this driver, we need to make sure that we only touch 0x1A[2:1] from
* within sii902x_i2c_bypass_select and sii902x_i2c_bypass_deselect, and that
* we leave the remaining bits as we have found them.
*/
static int sii902x_i2c_bypass_select(struct i2c_mux_core *mux, u32 chan_id)
{
struct sii902x *sii902x = i2c_mux_priv(mux);
struct device *dev = &sii902x->i2c->dev;
unsigned long timeout;
u8 status;
int ret;
ret = sii902x_update_bits_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
SII902X_SYS_CTRL_DDC_BUS_REQ,
SII902X_SYS_CTRL_DDC_BUS_REQ);
if (ret)
return ret;
timeout = jiffies +
msecs_to_jiffies(SII902X_I2C_BUS_ACQUISITION_TIMEOUT_MS);
do {
ret = sii902x_read_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
&status);
if (ret)
return ret;
} while (!(status & SII902X_SYS_CTRL_DDC_BUS_GRTD) &&
time_before(jiffies, timeout));
if (!(status & SII902X_SYS_CTRL_DDC_BUS_GRTD)) {
dev_err(dev, "Failed to acquire the i2c bus\n");
return -ETIMEDOUT;
}
return sii902x_write_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
status);
}
/*
* The purpose of sii902x_i2c_bypass_deselect is to disable the pass through
* mode of the HDMI transmitter. Do not use regmap from within this function,
* only use sii902x_*_unlocked functions to read/modify/write registers.
* We are holding the parent adapter lock here, keep this in mind before
* adding more i2c transactions.
*
* Also, since SII902X_SYS_CTRL_DATA is used with regmap_update_bits elsewhere
* in this driver, we need to make sure that we only touch 0x1A[2:1] from
* within sii902x_i2c_bypass_select and sii902x_i2c_bypass_deselect, and that
* we leave the remaining bits as we have found them.
*/
static int sii902x_i2c_bypass_deselect(struct i2c_mux_core *mux, u32 chan_id)
{
struct sii902x *sii902x = i2c_mux_priv(mux);
struct device *dev = &sii902x->i2c->dev;
unsigned long timeout;
unsigned int retries;
u8 status;
int ret;
/*
* When the HDMI transmitter is in pass through mode, we need an
* (undocumented) additional delay between STOP and START conditions
* to guarantee the bus won't get stuck.
*/
udelay(30);
/*
* Sometimes the I2C bus can stall after failure to use the
* EDID channel. Retry a few times to see if things clear
* up, else continue anyway.
*/
retries = 5;
do {
ret = sii902x_read_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
&status);
retries--;
} while (ret && retries);
if (ret) {
dev_err(dev, "failed to read status (%d)\n", ret);
return ret;
}
ret = sii902x_update_bits_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD, 0);
if (ret)
return ret;
timeout = jiffies +
msecs_to_jiffies(SII902X_I2C_BUS_ACQUISITION_TIMEOUT_MS);
do {
ret = sii902x_read_unlocked(sii902x->i2c, SII902X_SYS_CTRL_DATA,
&status);
if (ret)
return ret;
} while (status & (SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD) &&
time_before(jiffies, timeout));
if (status & (SII902X_SYS_CTRL_DDC_BUS_REQ |
SII902X_SYS_CTRL_DDC_BUS_GRTD)) {
dev_err(dev, "failed to release the i2c bus\n");
return -ETIMEDOUT;
}
return 0;
}
static int sii902x_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@ -375,6 +468,13 @@ static int sii902x_probe(struct i2c_client *client,
u8 chipid[4];
int ret;
ret = i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_BYTE_DATA);
if (!ret) {
dev_err(dev, "I2C adapter not suitable\n");
return -EIO;
}
sii902x = devm_kzalloc(dev, sizeof(*sii902x), GFP_KERNEL);
if (!sii902x)
return -ENOMEM;
@ -433,7 +533,15 @@ static int sii902x_probe(struct i2c_client *client,
i2c_set_clientdata(client, sii902x);
return 0;
sii902x->i2cmux = i2c_mux_alloc(client->adapter, dev,
1, 0, I2C_MUX_GATE,
sii902x_i2c_bypass_select,
sii902x_i2c_bypass_deselect);
if (!sii902x->i2cmux)
return -ENOMEM;
sii902x->i2cmux->priv = sii902x;
return i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
}
static int sii902x_remove(struct i2c_client *client)
@ -441,6 +549,7 @@ static int sii902x_remove(struct i2c_client *client)
{
struct sii902x *sii902x = i2c_get_clientdata(client);
i2c_mux_del_adapters(sii902x->i2cmux);
drm_bridge_remove(&sii902x->bridge);
return 0;


@ -169,7 +169,6 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
struct drm_mode_fb_cmd2 mode_cmd;
void *sysram;
struct drm_gem_object *gobj = NULL;
struct cirrus_bo *bo = NULL;
int size, ret;
mode_cmd.width = sizes->surface_width;
@ -185,8 +184,6 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
return ret;
}
bo = gem_to_cirrus_bo(gobj);
sysram = vmalloc(size);
if (!sysram)
return -ENOMEM;


@ -315,9 +315,11 @@ drm_atomic_get_crtc_state(struct drm_atomic_state *state,
}
EXPORT_SYMBOL(drm_atomic_get_crtc_state);
static int drm_atomic_crtc_check(struct drm_crtc *crtc,
struct drm_crtc_state *state)
static int drm_atomic_crtc_check(const struct drm_crtc_state *old_crtc_state,
const struct drm_crtc_state *new_crtc_state)
{
struct drm_crtc *crtc = new_crtc_state->crtc;
/* NOTE: we explicitly don't enforce constraints such as primary
* layer covering entire screen, since that is something we want
* to allow (on hw that supports it). For hw that does not, it
@ -326,7 +328,7 @@ static int drm_atomic_crtc_check(struct drm_crtc *crtc,
* TODO: Add generic modeset state checks once we support those.
*/
if (state->active && !state->enable) {
if (new_crtc_state->active && !new_crtc_state->enable) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] active without enabled\n",
crtc->base.id, crtc->name);
return -EINVAL;
@ -336,14 +338,14 @@ static int drm_atomic_crtc_check(struct drm_crtc *crtc,
* as this is a kernel-internal detail that userspace should never
* be able to trigger. */
if (drm_core_check_feature(crtc->dev, DRIVER_ATOMIC) &&
WARN_ON(state->enable && !state->mode_blob)) {
WARN_ON(new_crtc_state->enable && !new_crtc_state->mode_blob)) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] enabled without mode blob\n",
crtc->base.id, crtc->name);
return -EINVAL;
}
if (drm_core_check_feature(crtc->dev, DRIVER_ATOMIC) &&
WARN_ON(!state->enable && state->mode_blob)) {
WARN_ON(!new_crtc_state->enable && new_crtc_state->mode_blob)) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] disabled with mode blob\n",
crtc->base.id, crtc->name);
return -EINVAL;
@ -359,7 +361,8 @@ static int drm_atomic_crtc_check(struct drm_crtc *crtc,
* and legacy page_flip IOCTL which also reject service on a disabled
* pipe.
*/
if (state->event && !state->active && !crtc->state->active) {
if (new_crtc_state->event &&
!new_crtc_state->active && !old_crtc_state->active) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requesting event but off\n",
crtc->base.id, crtc->name);
return -EINVAL;
@ -489,14 +492,13 @@ drm_atomic_get_plane_state(struct drm_atomic_state *state,
EXPORT_SYMBOL(drm_atomic_get_plane_state);
static bool
plane_switching_crtc(struct drm_atomic_state *state,
struct drm_plane *plane,
struct drm_plane_state *plane_state)
plane_switching_crtc(const struct drm_plane_state *old_plane_state,
const struct drm_plane_state *new_plane_state)
{
if (!plane->state->crtc || !plane_state->crtc)
if (!old_plane_state->crtc || !new_plane_state->crtc)
return false;
if (plane->state->crtc == plane_state->crtc)
if (old_plane_state->crtc == new_plane_state->crtc)
return false;
/* This could be refined, but currently there's no helper or driver code
@ -509,88 +511,95 @@ plane_switching_crtc(struct drm_atomic_state *state,
/**
* drm_atomic_plane_check - check plane state
* @plane: plane to check
* @state: plane state to check
* @old_plane_state: old plane state to check
* @new_plane_state: new plane state to check
*
* Provides core sanity checks for plane state.
*
* RETURNS:
* Zero on success, error code on failure
*/
static int drm_atomic_plane_check(struct drm_plane *plane,
struct drm_plane_state *state)
static int drm_atomic_plane_check(const struct drm_plane_state *old_plane_state,
const struct drm_plane_state *new_plane_state)
{
struct drm_plane *plane = new_plane_state->plane;
struct drm_crtc *crtc = new_plane_state->crtc;
const struct drm_framebuffer *fb = new_plane_state->fb;
unsigned int fb_width, fb_height;
int ret;
/* either *both* CRTC and FB must be set, or neither */
if (state->crtc && !state->fb) {
if (crtc && !fb) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] CRTC set but no FB\n",
plane->base.id, plane->name);
return -EINVAL;
} else if (state->fb && !state->crtc) {
} else if (fb && !crtc) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] FB set but no CRTC\n",
plane->base.id, plane->name);
return -EINVAL;
}
/* if disabled, we don't care about the rest of the state: */
if (!state->crtc)
if (!crtc)
return 0;
/* Check whether this plane is usable on this CRTC */
if (!(plane->possible_crtcs & drm_crtc_mask(state->crtc))) {
if (!(plane->possible_crtcs & drm_crtc_mask(crtc))) {
DRM_DEBUG_ATOMIC("Invalid [CRTC:%d:%s] for [PLANE:%d:%s]\n",
state->crtc->base.id, state->crtc->name,
crtc->base.id, crtc->name,
plane->base.id, plane->name);
return -EINVAL;
}
/* Check whether this plane supports the fb pixel format. */
ret = drm_plane_check_pixel_format(plane, state->fb->format->format,
state->fb->modifier);
ret = drm_plane_check_pixel_format(plane, fb->format->format,
fb->modifier);
if (ret) {
struct drm_format_name_buf format_name;
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid pixel format %s, modifier 0x%llx\n",
plane->base.id, plane->name,
drm_get_format_name(state->fb->format->format,
drm_get_format_name(fb->format->format,
&format_name),
state->fb->modifier);
fb->modifier);
return ret;
}
/* Give drivers some help against integer overflows */
if (state->crtc_w > INT_MAX ||
state->crtc_x > INT_MAX - (int32_t) state->crtc_w ||
state->crtc_h > INT_MAX ||
state->crtc_y > INT_MAX - (int32_t) state->crtc_h) {
if (new_plane_state->crtc_w > INT_MAX ||
new_plane_state->crtc_x > INT_MAX - (int32_t) new_plane_state->crtc_w ||
new_plane_state->crtc_h > INT_MAX ||
new_plane_state->crtc_y > INT_MAX - (int32_t) new_plane_state->crtc_h) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid CRTC coordinates %ux%u+%d+%d\n",
plane->base.id, plane->name,
state->crtc_w, state->crtc_h,
state->crtc_x, state->crtc_y);
new_plane_state->crtc_w, new_plane_state->crtc_h,
new_plane_state->crtc_x, new_plane_state->crtc_y);
return -ERANGE;
}
fb_width = state->fb->width << 16;
fb_height = state->fb->height << 16;
fb_width = fb->width << 16;
fb_height = fb->height << 16;
/* Make sure source coordinates are inside the fb. */
if (state->src_w > fb_width ||
state->src_x > fb_width - state->src_w ||
state->src_h > fb_height ||
state->src_y > fb_height - state->src_h) {
if (new_plane_state->src_w > fb_width ||
new_plane_state->src_x > fb_width - new_plane_state->src_w ||
new_plane_state->src_h > fb_height ||
new_plane_state->src_y > fb_height - new_plane_state->src_h) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid source coordinates "
"%u.%06ux%u.%06u+%u.%06u+%u.%06u (fb %ux%u)\n",
plane->base.id, plane->name,
state->src_w >> 16, ((state->src_w & 0xffff) * 15625) >> 10,
state->src_h >> 16, ((state->src_h & 0xffff) * 15625) >> 10,
state->src_x >> 16, ((state->src_x & 0xffff) * 15625) >> 10,
state->src_y >> 16, ((state->src_y & 0xffff) * 15625) >> 10,
state->fb->width, state->fb->height);
new_plane_state->src_w >> 16,
((new_plane_state->src_w & 0xffff) * 15625) >> 10,
new_plane_state->src_h >> 16,
((new_plane_state->src_h & 0xffff) * 15625) >> 10,
new_plane_state->src_x >> 16,
((new_plane_state->src_x & 0xffff) * 15625) >> 10,
new_plane_state->src_y >> 16,
((new_plane_state->src_y & 0xffff) * 15625) >> 10,
fb->width, fb->height);
return -ENOSPC;
}
if (plane_switching_crtc(state->state, plane, state)) {
if (plane_switching_crtc(old_plane_state, new_plane_state)) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] switching CRTC directly\n",
plane->base.id, plane->name);
return -EINVAL;
@ -927,6 +936,8 @@ int
drm_atomic_add_affected_planes(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
const struct drm_crtc_state *old_crtc_state =
drm_atomic_get_old_crtc_state(state, crtc);
struct drm_plane *plane;
WARN_ON(!drm_atomic_get_new_crtc_state(state, crtc));
@ -934,7 +945,7 @@ drm_atomic_add_affected_planes(struct drm_atomic_state *state,
DRM_DEBUG_ATOMIC("Adding all current planes for [CRTC:%d:%s] to %p\n",
crtc->base.id, crtc->name, state);
drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) {
drm_for_each_plane_mask(plane, state->dev, old_crtc_state->plane_mask) {
struct drm_plane_state *plane_state =
drm_atomic_get_plane_state(state, plane);
@ -961,17 +972,19 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
struct drm_device *dev = state->dev;
struct drm_mode_config *config = &dev->mode_config;
struct drm_plane *plane;
struct drm_plane_state *plane_state;
struct drm_plane_state *old_plane_state;
struct drm_plane_state *new_plane_state;
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
struct drm_crtc_state *old_crtc_state;
struct drm_crtc_state *new_crtc_state;
struct drm_connector *conn;
struct drm_connector_state *conn_state;
int i, ret = 0;
DRM_DEBUG_ATOMIC("checking %p\n", state);
for_each_new_plane_in_state(state, plane, plane_state, i) {
ret = drm_atomic_plane_check(plane, plane_state);
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
ret = drm_atomic_plane_check(old_plane_state, new_plane_state);
if (ret) {
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] atomic core check failed\n",
plane->base.id, plane->name);
@ -979,8 +992,8 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
}
}
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
ret = drm_atomic_crtc_check(crtc, crtc_state);
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
ret = drm_atomic_crtc_check(old_crtc_state, new_crtc_state);
if (ret) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] atomic core check failed\n",
crtc->base.id, crtc->name);
@ -1008,8 +1021,8 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
}
if (!state->allow_modeset) {
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
if (drm_atomic_crtc_needs_modeset(crtc_state)) {
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
if (drm_atomic_crtc_needs_modeset(new_crtc_state)) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requires full modeset\n",
crtc->base.id, crtc->name);
return -EINVAL;


@ -81,8 +81,7 @@ int drm_client_init(struct drm_device *dev, struct drm_client_dev *client,
{
int ret;
if (!drm_core_check_feature(dev, DRIVER_MODESET) ||
!dev->driver->dumb_create || !dev->driver->gem_prime_vmap)
if (!drm_core_check_feature(dev, DRIVER_MODESET) || !dev->driver->dumb_create)
return -EOPNOTSUPP;
if (funcs && !try_module_get(funcs->owner))
@ -229,8 +228,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
{
struct drm_device *dev = buffer->client->dev;
if (buffer->vaddr && dev->driver->gem_prime_vunmap)
dev->driver->gem_prime_vunmap(buffer->gem, buffer->vaddr);
drm_gem_vunmap(buffer->gem, buffer->vaddr);
if (buffer->gem)
drm_gem_object_put_unlocked(buffer->gem);
@ -283,9 +281,9 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
* fd_install step out of the driver backend hooks, to make that
* final step optional for internal users.
*/
vaddr = dev->driver->gem_prime_vmap(obj);
if (!vaddr) {
ret = -ENOMEM;
vaddr = drm_gem_vmap(obj);
if (IS_ERR(vaddr)) {
ret = PTR_ERR(vaddr);
goto err_delete;
}


@ -224,6 +224,7 @@ const struct drm_format_info *__drm_format_info(u32 format)
{ .format = DRM_FORMAT_YVYU, .depth = 0, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 2, .vsub = 1, .is_yuv = true },
{ .format = DRM_FORMAT_UYVY, .depth = 0, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 2, .vsub = 1, .is_yuv = true },
{ .format = DRM_FORMAT_VYUY, .depth = 0, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 2, .vsub = 1, .is_yuv = true },
{ .format = DRM_FORMAT_XYUV8888, .depth = 0, .num_planes = 1, .cpp = { 4, 0, 0 }, .hsub = 1, .vsub = 1, .is_yuv = true },
{ .format = DRM_FORMAT_AYUV, .depth = 0, .num_planes = 1, .cpp = { 4, 0, 0 }, .hsub = 1, .vsub = 1, .has_alpha = true, .is_yuv = true },
{ .format = DRM_FORMAT_Y0L0, .depth = 0, .num_planes = 1,
.char_per_block = { 8, 0, 0 }, .block_w = { 2, 0, 0 }, .block_h = { 2, 0, 0 },


@ -257,7 +257,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
struct drm_gem_object *obj = ptr;
struct drm_device *dev = obj->dev;
if (dev->driver->gem_close_object)
if (obj->funcs && obj->funcs->close)
obj->funcs->close(obj, file_priv);
else if (dev->driver->gem_close_object)
dev->driver->gem_close_object(obj, file_priv);
if (drm_core_check_feature(dev, DRIVER_PRIME))
@ -410,7 +412,11 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
if (ret)
goto err_remove;
if (dev->driver->gem_open_object) {
if (obj->funcs && obj->funcs->open) {
ret = obj->funcs->open(obj, file_priv);
if (ret)
goto err_revoke;
} else if (dev->driver->gem_open_object) {
ret = dev->driver->gem_open_object(obj, file_priv);
if (ret)
goto err_revoke;
@ -835,7 +841,9 @@ drm_gem_object_free(struct kref *kref)
container_of(kref, struct drm_gem_object, refcount);
struct drm_device *dev = obj->dev;
if (dev->driver->gem_free_object_unlocked) {
if (obj->funcs) {
obj->funcs->free(obj);
} else if (dev->driver->gem_free_object_unlocked) {
dev->driver->gem_free_object_unlocked(obj);
} else if (dev->driver->gem_free_object) {
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
@ -864,13 +872,13 @@ drm_gem_object_put_unlocked(struct drm_gem_object *obj)
dev = obj->dev;
if (dev->driver->gem_free_object_unlocked) {
kref_put(&obj->refcount, drm_gem_object_free);
} else {
if (dev->driver->gem_free_object) {
might_lock(&dev->struct_mutex);
if (kref_put_mutex(&obj->refcount, drm_gem_object_free,
&dev->struct_mutex))
mutex_unlock(&dev->struct_mutex);
} else {
kref_put(&obj->refcount, drm_gem_object_free);
}
}
EXPORT_SYMBOL(drm_gem_object_put_unlocked);
@ -960,11 +968,14 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
if (obj_size < vma->vm_end - vma->vm_start)
return -EINVAL;
if (!dev->driver->gem_vm_ops)
if (obj->funcs && obj->funcs->vm_ops)
vma->vm_ops = obj->funcs->vm_ops;
else if (dev->driver->gem_vm_ops)
vma->vm_ops = dev->driver->gem_vm_ops;
else
return -EINVAL;
vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_ops = dev->driver->gem_vm_ops;
vma->vm_private_data = obj;
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
@ -1066,6 +1077,86 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
drm_printf_indent(p, indent, "imported=%s\n",
obj->import_attach ? "yes" : "no");
if (obj->dev->driver->gem_print_info)
if (obj->funcs && obj->funcs->print_info)
obj->funcs->print_info(p, indent, obj);
else if (obj->dev->driver->gem_print_info)
obj->dev->driver->gem_print_info(p, indent, obj);
}
/**
* drm_gem_pin - Pin backing buffer in memory
* @obj: GEM object
*
* Make sure the backing buffer is pinned in memory.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_pin(struct drm_gem_object *obj)
{
if (obj->funcs && obj->funcs->pin)
return obj->funcs->pin(obj);
else if (obj->dev->driver->gem_prime_pin)
return obj->dev->driver->gem_prime_pin(obj);
else
return 0;
}
EXPORT_SYMBOL(drm_gem_pin);
/**
* drm_gem_unpin - Unpin backing buffer from memory
* @obj: GEM object
*
* Relax the requirement that the backing buffer is pinned in memory.
*/
void drm_gem_unpin(struct drm_gem_object *obj)
{
if (obj->funcs && obj->funcs->unpin)
obj->funcs->unpin(obj);
else if (obj->dev->driver->gem_prime_unpin)
obj->dev->driver->gem_prime_unpin(obj);
}
EXPORT_SYMBOL(drm_gem_unpin);
/**
* drm_gem_vmap - Map buffer into kernel virtual address space
* @obj: GEM object
*
* Returns:
* A virtual pointer to a newly created GEM object or an ERR_PTR-encoded negative
* error code on failure.
*/
void *drm_gem_vmap(struct drm_gem_object *obj)
{
void *vaddr;
if (obj->funcs && obj->funcs->vmap)
vaddr = obj->funcs->vmap(obj);
else if (obj->dev->driver->gem_prime_vmap)
vaddr = obj->dev->driver->gem_prime_vmap(obj);
else
vaddr = ERR_PTR(-EOPNOTSUPP);
if (!vaddr)
vaddr = ERR_PTR(-ENOMEM);
return vaddr;
}
EXPORT_SYMBOL(drm_gem_vmap);
/**
* drm_gem_vunmap - Remove buffer mapping from kernel virtual address space
* @obj: GEM object
* @vaddr: Virtual address (can be NULL)
*/
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
if (!vaddr)
return;
if (obj->funcs && obj->funcs->vunmap)
obj->funcs->vunmap(obj, vaddr);
else if (obj->dev->driver->gem_prime_vunmap)
obj->dev->driver->gem_prime_vunmap(obj, vaddr);
}
EXPORT_SYMBOL(drm_gem_vunmap);
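A usage sketch (not part of the patch): unlike the old gem_prime_vmap hook,
drm_gem_vmap() reports failure as an ERR_PTR-encoded error rather than NULL,
so callers check it the way the drm_client.c hunks above do:

	void *vaddr;

	vaddr = drm_gem_vmap(obj);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);

	/* ... access the buffer through vaddr ... */

	drm_gem_vunmap(obj, vaddr);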


@ -176,6 +176,7 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
*
* This function frees the backing memory of the CMA GEM object, cleans up the
* GEM object state and frees the memory used to store the object itself.
* If the buffer is imported and the virtual address is set, it is released.
* Drivers using the CMA helpers should set this as their
* &drm_driver.gem_free_object_unlocked callback.
*/
@ -189,6 +190,8 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
cma_obj->vaddr, cma_obj->paddr);
} else if (gem_obj->import_attach) {
if (cma_obj->vaddr)
dma_buf_vunmap(gem_obj->import_attach->dmabuf, cma_obj->vaddr);
drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
}
@ -575,3 +578,86 @@ void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
/* Nothing to do */
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
static const struct drm_gem_object_funcs drm_cma_gem_default_funcs = {
.free = drm_gem_cma_free_object,
.print_info = drm_gem_cma_print_info,
.get_sg_table = drm_gem_cma_prime_get_sg_table,
.vmap = drm_gem_cma_prime_vmap,
.vm_ops = &drm_gem_cma_vm_ops,
};
/**
* drm_cma_gem_create_object_default_funcs - Create a CMA GEM object with a
* default function table
* @dev: DRM device
* @size: Size of the object to allocate
*
* This sets the GEM object functions to the default CMA helper functions.
* This function can be used as the &drm_driver.gem_create_object callback.
*
* Returns:
* A pointer to a allocated GEM object or an error pointer on failure.
*/
struct drm_gem_object *
drm_cma_gem_create_object_default_funcs(struct drm_device *dev, size_t size)
{
struct drm_gem_cma_object *cma_obj;
cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
if (!cma_obj)
return NULL;
cma_obj->base.funcs = &drm_cma_gem_default_funcs;
return &cma_obj->base;
}
EXPORT_SYMBOL(drm_cma_gem_create_object_default_funcs);
/**
* drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
* scatter/gather table and get the virtual address of the buffer
* @dev: DRM device
* @attach: DMA-BUF attachment
* @sgt: Scatter/gather table of pinned pages
*
* This function imports a scatter/gather table using
* drm_gem_cma_prime_import_sg_table() and uses dma_buf_vmap() to get the kernel
* virtual address. This ensures that a CMA GEM object always has its virtual
* address set. This address is released when the object is freed.
*
* This function can be used as the &drm_driver.gem_prime_import_sg_table
* callback. The DRM_GEM_CMA_VMAP_DRIVER_OPS() macro provides a shortcut to set
* the necessary DRM driver operations.
*
* Returns:
* A pointer to a newly created GEM object or an ERR_PTR-encoded negative
* error code on failure.
*/
struct drm_gem_object *
drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *obj;
void *vaddr;
vaddr = dma_buf_vmap(attach->dmabuf);
if (!vaddr) {
DRM_ERROR("Failed to vmap PRIME buffer\n");
return ERR_PTR(-ENOMEM);
}
obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj)) {
dma_buf_vunmap(attach->dmabuf, vaddr);
return obj;
}
cma_obj = to_drm_gem_cma_obj(obj);
cma_obj->vaddr = vaddr;
return obj;
}
EXPORT_SYMBOL(drm_gem_cma_prime_import_sg_table_vmap);
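A hedged usage sketch (the driver name is invented; DRM_GEM_CMA_VMAP_DRIVER_OPS
is the shortcut macro named in the kernel-doc above):

#include <drm/drm_drv.h>
#include <drm/drm_gem_cma_helper.h>

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET |
			   DRIVER_PRIME | DRIVER_ATOMIC,
	/* Points the dumb-buffer, gem_create_object and PRIME
	 * import/export callbacks at the vmap-aware CMA helpers. */
	DRM_GEM_CMA_VMAP_DRIVER_OPS,
	.name = "foo",
	.desc = "illustrative CMA-based driver",
	.date = "20181121",
	.major = 1,
	.minor = 0,
};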


@ -63,7 +63,7 @@ static const struct drm_dmi_panel_orientation_data gpd_win2 = {
.width = 720,
.height = 1280,
.bios_dates = (const char * const []){
"12/07/2017", "05/24/2018", NULL },
"12/07/2017", "05/24/2018", "06/29/2018", NULL },
.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
};


@ -199,7 +199,6 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
{
struct drm_prime_attachment *prime_attach;
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
prime_attach = kzalloc(sizeof(*prime_attach), GFP_KERNEL);
if (!prime_attach)
@ -208,10 +207,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
prime_attach->dir = DMA_NONE;
attach->priv = prime_attach;
if (!dev->driver->gem_prime_pin)
return 0;
return dev->driver->gem_prime_pin(obj);
return drm_gem_pin(obj);
}
EXPORT_SYMBOL(drm_gem_map_attach);
@ -228,7 +224,6 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
{
struct drm_prime_attachment *prime_attach = attach->priv;
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
if (prime_attach) {
struct sg_table *sgt = prime_attach->sgt;
@ -247,8 +242,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
attach->priv = NULL;
}
if (dev->driver->gem_prime_unpin)
dev->driver->gem_prime_unpin(obj);
drm_gem_unpin(obj);
}
EXPORT_SYMBOL(drm_gem_map_detach);
@ -310,7 +304,10 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
if (WARN_ON(prime_attach->dir != DMA_NONE))
return ERR_PTR(-EBUSY);
sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
if (obj->funcs)
sgt = obj->funcs->get_sg_table(obj);
else
sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
if (!IS_ERR(sgt)) {
if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
@ -406,12 +403,13 @@ EXPORT_SYMBOL(drm_gem_dmabuf_release);
void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
{
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
void *vaddr;
if (dev->driver->gem_prime_vmap)
return dev->driver->gem_prime_vmap(obj);
else
return NULL;
vaddr = drm_gem_vmap(obj);
if (IS_ERR(vaddr))
vaddr = NULL;
return vaddr;
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@ -426,10 +424,8 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
{
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
if (dev->driver->gem_prime_vunmap)
dev->driver->gem_prime_vunmap(obj, vaddr);
drm_gem_vunmap(obj, vaddr);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
@ -529,7 +525,12 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
return dmabuf;
}
dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
if (obj->funcs && obj->funcs->export)
dmabuf = obj->funcs->export(obj, flags);
else if (dev->driver->gem_prime_export)
dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
else
dmabuf = drm_gem_prime_export(dev, obj, flags);
if (IS_ERR(dmabuf)) {
/* normally the created dma-buf takes ownership of the ref,
* but if that fails then drop the ref
@ -648,6 +649,43 @@ out_unlock:
}
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
/**
* drm_gem_prime_mmap - PRIME mmap function for GEM drivers
* @obj: GEM object
* @vma: Virtual address range
*
* This function sets up a userspace mapping for PRIME exported buffers using
* the same codepath that is used for regular GEM buffer mapping on the DRM fd.
* The fake GEM offset is added to vma->vm_pgoff and &drm_driver->fops->mmap is
* called to set up the mapping.
*
* Drivers can use this as their &drm_driver.gem_prime_mmap callback.
*/
int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
/* Used by drm_gem_mmap() to lookup the GEM object */
struct drm_file priv = {
.minor = obj->dev->primary,
};
struct file fil = {
.private_data = &priv,
};
int ret;
ret = drm_vma_node_allow(&obj->vma_node, &priv);
if (ret)
return ret;
vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
ret = obj->dev->driver->fops->mmap(&fil, vma);
drm_vma_node_revoke(&obj->vma_node, &priv);
return ret;
}
EXPORT_SYMBOL(drm_gem_prime_mmap);
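Wiring this up is then a single assignment in the driver structure (sketch,
placeholder driver name):

static struct drm_driver foo_driver = {
	/* ... other callbacks ... */
	.gem_prime_mmap = drm_gem_prime_mmap,
};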
/**
* drm_gem_prime_import_dev - core implementation of the import callback
* @dev: drm_device to import into
@ -762,7 +800,10 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
/* never seen this one, need to import */
mutex_lock(&dev->object_name_lock);
obj = dev->driver->gem_prime_import(dev, dma_buf);
if (dev->driver->gem_prime_import)
obj = dev->driver->gem_prime_import(dev, dma_buf);
else
obj = drm_gem_prime_import(dev, dma_buf);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto out_unlock;


@ -56,9 +56,6 @@
#include "drm_internal.h"
#include <drm/drm_syncobj.h>
/* merge normal syncobj to timeline syncobj, the point interval is 1 */
#define DRM_SYNCOBJ_BINARY_POINT 1
struct drm_syncobj_stub_fence {
struct dma_fence base;
spinlock_t lock;
@ -74,29 +71,7 @@ static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
.get_timeline_name = drm_syncobj_stub_fence_get_name,
};
struct drm_syncobj_signal_pt {
struct dma_fence_array *fence_array;
u64 value;
struct list_head list;
};
static DEFINE_SPINLOCK(signaled_fence_lock);
static struct dma_fence signaled_fence;
static struct dma_fence *drm_syncobj_get_stub_fence(void)
{
spin_lock(&signaled_fence_lock);
if (!signaled_fence.ops) {
dma_fence_init(&signaled_fence,
&drm_syncobj_stub_fence_ops,
&signaled_fence_lock,
0, 0);
dma_fence_signal_locked(&signaled_fence);
}
spin_unlock(&signaled_fence_lock);
return dma_fence_get(&signaled_fence);
}
/**
* drm_syncobj_find - lookup and reference a sync object.
* @file_private: drm file private pointer
@ -123,27 +98,6 @@ struct drm_syncobj *drm_syncobj_find(struct drm_file *file_private,
}
EXPORT_SYMBOL(drm_syncobj_find);
static struct dma_fence *
drm_syncobj_find_signal_pt_for_point(struct drm_syncobj *syncobj,
uint64_t point)
{
struct drm_syncobj_signal_pt *signal_pt;
if ((syncobj->type == DRM_SYNCOBJ_TYPE_TIMELINE) &&
(point <= syncobj->timeline))
return drm_syncobj_get_stub_fence();
list_for_each_entry(signal_pt, &syncobj->signal_pt_list, list) {
if (point > signal_pt->value)
continue;
if ((syncobj->type == DRM_SYNCOBJ_TYPE_BINARY) &&
(point != signal_pt->value))
continue;
return dma_fence_get(&signal_pt->fence_array->base);
}
return NULL;
}
static void drm_syncobj_add_callback_locked(struct drm_syncobj *syncobj,
struct drm_syncobj_cb *cb,
drm_syncobj_func_t func)
@ -152,158 +106,53 @@ static void drm_syncobj_add_callback_locked(struct drm_syncobj *syncobj,
list_add_tail(&cb->node, &syncobj->cb_list);
}
static void drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
struct dma_fence **fence,
struct drm_syncobj_cb *cb,
drm_syncobj_func_t func)
static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
struct dma_fence **fence,
struct drm_syncobj_cb *cb,
drm_syncobj_func_t func)
{
u64 pt_value = 0;
int ret;
WARN_ON(*fence);
*fence = drm_syncobj_fence_get(syncobj);
if (*fence)
return 1;
if (syncobj->type == DRM_SYNCOBJ_TYPE_BINARY) {
/*BINARY syncobj always wait on last pt */
pt_value = syncobj->signal_point;
if (pt_value == 0)
pt_value += DRM_SYNCOBJ_BINARY_POINT;
}
mutex_lock(&syncobj->cb_mutex);
spin_lock(&syncobj->pt_lock);
*fence = drm_syncobj_find_signal_pt_for_point(syncobj, pt_value);
spin_unlock(&syncobj->pt_lock);
if (!*fence)
spin_lock(&syncobj->lock);
/* We've already tried once to get a fence and failed. Now that we
* have the lock, try one more time just to be sure we don't add a
* callback when a fence has already been set.
*/
if (syncobj->fence) {
*fence = dma_fence_get(rcu_dereference_protected(syncobj->fence,
lockdep_is_held(&syncobj->lock)));
ret = 1;
} else {
*fence = NULL;
drm_syncobj_add_callback_locked(syncobj, cb, func);
mutex_unlock(&syncobj->cb_mutex);
}
static void drm_syncobj_remove_callback(struct drm_syncobj *syncobj,
struct drm_syncobj_cb *cb)
{
mutex_lock(&syncobj->cb_mutex);
list_del_init(&cb->node);
mutex_unlock(&syncobj->cb_mutex);
}
static void drm_syncobj_init(struct drm_syncobj *syncobj)
{
spin_lock(&syncobj->pt_lock);
syncobj->timeline_context = dma_fence_context_alloc(1);
syncobj->timeline = 0;
syncobj->signal_point = 0;
init_waitqueue_head(&syncobj->wq);
INIT_LIST_HEAD(&syncobj->signal_pt_list);
spin_unlock(&syncobj->pt_lock);
}
static void drm_syncobj_fini(struct drm_syncobj *syncobj)
{
struct drm_syncobj_signal_pt *signal_pt = NULL, *tmp;
spin_lock(&syncobj->pt_lock);
list_for_each_entry_safe(signal_pt, tmp,
&syncobj->signal_pt_list, list) {
list_del(&signal_pt->list);
dma_fence_put(&signal_pt->fence_array->base);
kfree(signal_pt);
ret = 0;
}
spin_unlock(&syncobj->pt_lock);
}
spin_unlock(&syncobj->lock);
static int drm_syncobj_create_signal_pt(struct drm_syncobj *syncobj,
struct dma_fence *fence,
u64 point)
{
struct drm_syncobj_signal_pt *signal_pt =
kzalloc(sizeof(struct drm_syncobj_signal_pt), GFP_KERNEL);
struct drm_syncobj_signal_pt *tail_pt;
struct dma_fence **fences;
int num_fences = 0;
int ret = 0, i;
if (!signal_pt)
return -ENOMEM;
if (!fence)
goto out;
fences = kmalloc_array(sizeof(void *), 2, GFP_KERNEL);
if (!fences) {
ret = -ENOMEM;
goto out;
}
fences[num_fences++] = dma_fence_get(fence);
/* timeline syncobj must take this dependency */
if (syncobj->type == DRM_SYNCOBJ_TYPE_TIMELINE) {
spin_lock(&syncobj->pt_lock);
if (!list_empty(&syncobj->signal_pt_list)) {
tail_pt = list_last_entry(&syncobj->signal_pt_list,
struct drm_syncobj_signal_pt, list);
fences[num_fences++] =
dma_fence_get(&tail_pt->fence_array->base);
}
spin_unlock(&syncobj->pt_lock);
}
signal_pt->fence_array = dma_fence_array_create(num_fences, fences,
syncobj->timeline_context,
point, false);
if (!signal_pt->fence_array) {
ret = -ENOMEM;
goto fail;
}
spin_lock(&syncobj->pt_lock);
if (syncobj->signal_point >= point) {
DRM_WARN("A later signal is ready!");
spin_unlock(&syncobj->pt_lock);
goto exist;
}
signal_pt->value = point;
list_add_tail(&signal_pt->list, &syncobj->signal_pt_list);
syncobj->signal_point = point;
spin_unlock(&syncobj->pt_lock);
wake_up_all(&syncobj->wq);
return 0;
exist:
dma_fence_put(&signal_pt->fence_array->base);
fail:
for (i = 0; i < num_fences; i++)
dma_fence_put(fences[i]);
kfree(fences);
out:
kfree(signal_pt);
return ret;
}
static void drm_syncobj_garbage_collection(struct drm_syncobj *syncobj)
void drm_syncobj_add_callback(struct drm_syncobj *syncobj,
struct drm_syncobj_cb *cb,
drm_syncobj_func_t func)
{
struct drm_syncobj_signal_pt *signal_pt, *tmp, *tail_pt;
spin_lock(&syncobj->pt_lock);
tail_pt = list_last_entry(&syncobj->signal_pt_list,
struct drm_syncobj_signal_pt,
list);
list_for_each_entry_safe(signal_pt, tmp,
&syncobj->signal_pt_list, list) {
if (syncobj->type == DRM_SYNCOBJ_TYPE_BINARY &&
signal_pt == tail_pt)
continue;
if (dma_fence_is_signaled(&signal_pt->fence_array->base)) {
syncobj->timeline = signal_pt->value;
list_del(&signal_pt->list);
dma_fence_put(&signal_pt->fence_array->base);
kfree(signal_pt);
} else {
/*signal_pt is in order in list, from small to big, so
* the later must not be signal either */
break;
}
}
spin_unlock(&syncobj->pt_lock);
spin_lock(&syncobj->lock);
drm_syncobj_add_callback_locked(syncobj, cb, func);
spin_unlock(&syncobj->lock);
}
void drm_syncobj_remove_callback(struct drm_syncobj *syncobj,
struct drm_syncobj_cb *cb)
{
spin_lock(&syncobj->lock);
list_del_init(&cb->node);
spin_unlock(&syncobj->lock);
}
/**
* drm_syncobj_replace_fence - replace fence in a sync object.
* @syncobj: Sync object to replace fence in
@ -316,30 +165,28 @@ void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
u64 point,
struct dma_fence *fence)
{
u64 pt_value = point;
struct dma_fence *old_fence;
struct drm_syncobj_cb *cur, *tmp;
drm_syncobj_garbage_collection(syncobj);
if (syncobj->type == DRM_SYNCOBJ_TYPE_BINARY) {
if (!fence) {
drm_syncobj_fini(syncobj);
drm_syncobj_init(syncobj);
return;
}
pt_value = syncobj->signal_point +
DRM_SYNCOBJ_BINARY_POINT;
}
drm_syncobj_create_signal_pt(syncobj, fence, pt_value);
if (fence) {
struct drm_syncobj_cb *cur, *tmp;
LIST_HEAD(cb_list);
if (fence)
dma_fence_get(fence);
mutex_lock(&syncobj->cb_mutex);
spin_lock(&syncobj->lock);
old_fence = rcu_dereference_protected(syncobj->fence,
lockdep_is_held(&syncobj->lock));
rcu_assign_pointer(syncobj->fence, fence);
if (fence != old_fence) {
list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node) {
list_del_init(&cur->node);
cur->func(syncobj, cur);
}
mutex_unlock(&syncobj->cb_mutex);
}
spin_unlock(&syncobj->lock);
dma_fence_put(old_fence);
}
EXPORT_SYMBOL(drm_syncobj_replace_fence);
@ -362,64 +209,6 @@ static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
return 0;
}
static int
drm_syncobj_point_get(struct drm_syncobj *syncobj, u64 point, u64 flags,
struct dma_fence **fence)
{
int ret = 0;
if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
ret = wait_event_interruptible(syncobj->wq,
point <= syncobj->signal_point);
if (ret < 0)
return ret;
}
spin_lock(&syncobj->pt_lock);
*fence = drm_syncobj_find_signal_pt_for_point(syncobj, point);
if (!*fence)
ret = -EINVAL;
spin_unlock(&syncobj->pt_lock);
return ret;
}
/**
* drm_syncobj_search_fence - lookup and reference the fence in a sync object or
* in a timeline point
* @syncobj: sync object pointer
* @point: timeline point
* @flags: DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT or not
* @fence: out parameter for the fence
*
* if flags is DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT, the function will block
* here until specific timeline points is reached.
* if not, you need a submit thread and block in userspace until all future
* timeline points have materialized, only then you can submit to the kernel,
* otherwise, function will fail to return fence.
*
* Returns 0 on success or a negative error value on failure. On success @fence
* contains a reference to the fence, which must be released by calling
* dma_fence_put().
*/
int drm_syncobj_search_fence(struct drm_syncobj *syncobj, u64 point,
u64 flags, struct dma_fence **fence)
{
u64 pt_value = point;
if (!syncobj)
return -ENOENT;
drm_syncobj_garbage_collection(syncobj);
if (syncobj->type == DRM_SYNCOBJ_TYPE_BINARY) {
/*BINARY syncobj always wait on last pt */
pt_value = syncobj->signal_point;
if (pt_value == 0)
pt_value += DRM_SYNCOBJ_BINARY_POINT;
}
return drm_syncobj_point_get(syncobj, pt_value, flags, fence);
}
EXPORT_SYMBOL(drm_syncobj_search_fence);
/**
* drm_syncobj_find_fence - lookup and reference the fence in a sync object
* @file_private: drm file private pointer
@ -429,7 +218,7 @@ EXPORT_SYMBOL(drm_syncobj_search_fence);
* @fence: out parameter for the fence
*
* This is just a convenience function that combines drm_syncobj_find() and
* drm_syncobj_lookup_fence().
* drm_syncobj_fence_get().
*
* Returns 0 on success or a negative error value on failure. On success @fence
* contains a reference to the fence, which must be released by calling
@ -440,11 +229,16 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
struct dma_fence **fence)
{
struct drm_syncobj *syncobj = drm_syncobj_find(file_private, handle);
int ret;
int ret = 0;
ret = drm_syncobj_search_fence(syncobj, point, flags, fence);
if (syncobj)
drm_syncobj_put(syncobj);
if (!syncobj)
return -ENOENT;
*fence = drm_syncobj_fence_get(syncobj);
if (!*fence) {
ret = -EINVAL;
}
drm_syncobj_put(syncobj);
return ret;
}
EXPORT_SYMBOL(drm_syncobj_find_fence);
@ -460,7 +254,7 @@ void drm_syncobj_free(struct kref *kref)
struct drm_syncobj *syncobj = container_of(kref,
struct drm_syncobj,
refcount);
drm_syncobj_fini(syncobj);
drm_syncobj_replace_fence(syncobj, 0, NULL);
kfree(syncobj);
}
EXPORT_SYMBOL(drm_syncobj_free);
@ -489,13 +283,7 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
kref_init(&syncobj->refcount);
INIT_LIST_HEAD(&syncobj->cb_list);
spin_lock_init(&syncobj->pt_lock);
mutex_init(&syncobj->cb_mutex);
if (flags & DRM_SYNCOBJ_CREATE_TYPE_TIMELINE)
syncobj->type = DRM_SYNCOBJ_TYPE_TIMELINE;
else
syncobj->type = DRM_SYNCOBJ_TYPE_BINARY;
drm_syncobj_init(syncobj);
spin_lock_init(&syncobj->lock);
if (flags & DRM_SYNCOBJ_CREATE_SIGNALED) {
ret = drm_syncobj_assign_null_handle(syncobj);
@ -778,8 +566,7 @@ drm_syncobj_create_ioctl(struct drm_device *dev, void *data,
return -EOPNOTSUPP;
/* no valid flags yet */
if (args->flags & ~(DRM_SYNCOBJ_CREATE_SIGNALED |
DRM_SYNCOBJ_CREATE_TYPE_TIMELINE))
if (args->flags & ~DRM_SYNCOBJ_CREATE_SIGNALED)
return -EINVAL;
return drm_syncobj_create_as_handle(file_private,
@ -872,8 +659,9 @@ static void syncobj_wait_syncobj_func(struct drm_syncobj *syncobj,
struct syncobj_wait_entry *wait =
container_of(cb, struct syncobj_wait_entry, syncobj_cb);
drm_syncobj_search_fence(syncobj, 0, 0, &wait->fence);
/* This happens inside the syncobj lock */
wait->fence = dma_fence_get(rcu_dereference_protected(syncobj->fence,
lockdep_is_held(&syncobj->lock)));
wake_up_process(wait->task);
}
@ -899,8 +687,7 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
signaled_count = 0;
for (i = 0; i < count; ++i) {
entries[i].task = current;
drm_syncobj_search_fence(syncobjs[i], 0, 0,
&entries[i].fence);
entries[i].fence = drm_syncobj_fence_get(syncobjs[i]);
if (!entries[i].fence) {
if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
continue;
@ -931,9 +718,6 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
for (i = 0; i < count; ++i) {
if (entries[i].fence)
continue;
drm_syncobj_fence_get_or_add_callback(syncobjs[i],
&entries[i].fence,
&entries[i].syncobj_cb,
@ -1165,13 +949,12 @@ drm_syncobj_reset_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
for (i = 0; i < args->count_handles; i++) {
drm_syncobj_fini(syncobjs[i]);
drm_syncobj_init(syncobjs[i]);
}
for (i = 0; i < args->count_handles; i++)
drm_syncobj_replace_fence(syncobjs[i], 0, NULL);
drm_syncobj_array_free(syncobjs, args->count_handles);
return ret;
return 0;
}
int


@ -2157,7 +2157,7 @@ await_fence_array(struct i915_execbuffer *eb,
if (!(flags & I915_EXEC_FENCE_WAIT))
continue;
drm_syncobj_search_fence(syncobj, 0, 0, &fence);
fence = drm_syncobj_fence_get(syncobj);
if (!fence)
return -EINVAL;


@ -7,6 +7,7 @@ config DRM_MESON
select DRM_GEM_CMA_HELPER
select VIDEOMODE_HELPERS
select REGMAP_MMIO
select MESON_CANVAS
config DRM_MESON_DW_HDMI
tristate "HDMI Synopsys Controller support for Amlogic Meson Display"


@ -1,5 +1,5 @@
meson-drm-y := meson_drv.o meson_plane.o meson_crtc.o meson_venc_cvbs.o
meson-drm-y += meson_viu.o meson_vpp.o meson_venc.o meson_vclk.o meson_canvas.o
meson-drm-y += meson_viu.o meson_vpp.o meson_venc.o meson_vclk.o meson_canvas.o meson_overlay.o
obj-$(CONFIG_DRM_MESON) += meson-drm.o
obj-$(CONFIG_DRM_MESON_DW_HDMI) += meson_dw_hdmi.o


@ -39,6 +39,7 @@
#define CANVAS_WIDTH_HBIT 0
#define CANVAS_HEIGHT_BIT 9
#define CANVAS_BLKMODE_BIT 24
#define CANVAS_ENDIAN_BIT 26
#define DMC_CAV_LUT_ADDR 0x50 /* 0x14 offset in data sheet */
#define CANVAS_LUT_WR_EN (0x2 << 8)
#define CANVAS_LUT_RD_EN (0x1 << 8)
@ -47,7 +48,8 @@ void meson_canvas_setup(struct meson_drm *priv,
uint32_t canvas_index, uint32_t addr,
uint32_t stride, uint32_t height,
unsigned int wrap,
unsigned int blkmode)
unsigned int blkmode,
unsigned int endian)
{
unsigned int val;
@ -60,7 +62,8 @@ void meson_canvas_setup(struct meson_drm *priv,
CANVAS_WIDTH_HBIT) |
(height << CANVAS_HEIGHT_BIT) |
(wrap << 22) |
(blkmode << CANVAS_BLKMODE_BIT));
(blkmode << CANVAS_BLKMODE_BIT) |
(endian << CANVAS_ENDIAN_BIT));
regmap_write(priv->dmc, DMC_CAV_LUT_ADDR,
CANVAS_LUT_WR_EN | canvas_index);


@ -23,6 +23,9 @@
#define __MESON_CANVAS_H
#define MESON_CANVAS_ID_OSD1 0x4e
#define MESON_CANVAS_ID_VD1_0 0x60
#define MESON_CANVAS_ID_VD1_1 0x61
#define MESON_CANVAS_ID_VD1_2 0x62
/* Canvas configuration. */
#define MESON_CANVAS_WRAP_NONE 0x00
@ -33,10 +36,16 @@
#define MESON_CANVAS_BLKMODE_32x32 0x01
#define MESON_CANVAS_BLKMODE_64x64 0x02
#define MESON_CANVAS_ENDIAN_SWAP16 0x1
#define MESON_CANVAS_ENDIAN_SWAP32 0x3
#define MESON_CANVAS_ENDIAN_SWAP64 0x7
#define MESON_CANVAS_ENDIAN_SWAP128 0xf
void meson_canvas_setup(struct meson_drm *priv,
uint32_t canvas_index, uint32_t addr,
uint32_t stride, uint32_t height,
unsigned int wrap,
unsigned int blkmode);
unsigned int blkmode,
unsigned int endian);
#endif /* __MESON_CANVAS_H */


@ -25,6 +25,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/bitfield.h>
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
@ -98,6 +99,10 @@ static void meson_crtc_atomic_enable(struct drm_crtc *crtc,
writel(crtc_state->mode.hdisplay,
priv->io_base + _REG(VPP_POSTBLEND_H_SIZE));
/* VD1 Preblend vertical start/end */
writel(FIELD_PREP(GENMASK(11, 0), 2303),
priv->io_base + _REG(VPP_PREBLEND_VD1_V_START_END));
writel_bits_relaxed(VPP_POSTBLEND_ENABLE, VPP_POSTBLEND_ENABLE,
priv->io_base + _REG(VPP_MISC));
@ -110,11 +115,17 @@ static void meson_crtc_atomic_disable(struct drm_crtc *crtc,
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
struct meson_drm *priv = meson_crtc->priv;
DRM_DEBUG_DRIVER("\n");
priv->viu.osd1_enabled = false;
priv->viu.osd1_commit = false;
priv->viu.vd1_enabled = false;
priv->viu.vd1_commit = false;
/* Disable VPP Postblend */
writel_bits_relaxed(VPP_POSTBLEND_ENABLE, 0,
writel_bits_relaxed(VPP_OSD1_POSTBLEND | VPP_VD1_POSTBLEND |
VPP_VD1_PREBLEND | VPP_POSTBLEND_ENABLE, 0,
priv->io_base + _REG(VPP_MISC));
if (crtc->state->event && !crtc->state->active) {
@ -149,6 +160,7 @@ static void meson_crtc_atomic_flush(struct drm_crtc *crtc,
struct meson_drm *priv = meson_crtc->priv;
priv->viu.osd1_commit = true;
priv->viu.vd1_commit = true;
}
static const struct drm_crtc_helper_funcs meson_crtc_helper_funcs = {
@ -177,26 +189,37 @@ void meson_crtc_irq(struct meson_drm *priv)
priv->io_base + _REG(VIU_OSD1_BLK0_CFG_W3));
writel_relaxed(priv->viu.osd1_blk0_cfg[4],
priv->io_base + _REG(VIU_OSD1_BLK0_CFG_W4));
writel_relaxed(priv->viu.osd_sc_ctrl0,
priv->io_base + _REG(VPP_OSD_SC_CTRL0));
writel_relaxed(priv->viu.osd_sc_i_wh_m1,
priv->io_base + _REG(VPP_OSD_SCI_WH_M1));
writel_relaxed(priv->viu.osd_sc_o_h_start_end,
priv->io_base + _REG(VPP_OSD_SCO_H_START_END));
writel_relaxed(priv->viu.osd_sc_o_v_start_end,
priv->io_base + _REG(VPP_OSD_SCO_V_START_END));
writel_relaxed(priv->viu.osd_sc_v_ini_phase,
priv->io_base + _REG(VPP_OSD_VSC_INI_PHASE));
writel_relaxed(priv->viu.osd_sc_v_phase_step,
priv->io_base + _REG(VPP_OSD_VSC_PHASE_STEP));
writel_relaxed(priv->viu.osd_sc_h_ini_phase,
priv->io_base + _REG(VPP_OSD_HSC_INI_PHASE));
writel_relaxed(priv->viu.osd_sc_h_phase_step,
priv->io_base + _REG(VPP_OSD_HSC_PHASE_STEP));
writel_relaxed(priv->viu.osd_sc_h_ctrl0,
priv->io_base + _REG(VPP_OSD_HSC_CTRL0));
writel_relaxed(priv->viu.osd_sc_v_ctrl0,
priv->io_base + _REG(VPP_OSD_VSC_CTRL0));
/* If output is interlace, make use of the Scaler */
if (priv->viu.osd1_interlace) {
struct drm_plane *plane = priv->primary_plane;
struct drm_plane_state *state = plane->state;
struct drm_rect dest = {
.x1 = state->crtc_x,
.y1 = state->crtc_y,
.x2 = state->crtc_x + state->crtc_w,
.y2 = state->crtc_y + state->crtc_h,
};
meson_vpp_setup_interlace_vscaler_osd1(priv, &dest);
} else
meson_vpp_disable_interlace_vscaler_osd1(priv);
meson_canvas_setup(priv, MESON_CANVAS_ID_OSD1,
priv->viu.osd1_addr, priv->viu.osd1_stride,
priv->viu.osd1_height, MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR);
if (priv->canvas)
meson_canvas_config(priv->canvas, priv->canvas_id_osd1,
priv->viu.osd1_addr, priv->viu.osd1_stride,
priv->viu.osd1_height, MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR, 0);
else
meson_canvas_setup(priv, MESON_CANVAS_ID_OSD1,
priv->viu.osd1_addr, priv->viu.osd1_stride,
priv->viu.osd1_height, MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR, 0);
/* Enable OSD1 */
writel_bits_relaxed(VPP_OSD1_POSTBLEND, VPP_OSD1_POSTBLEND,
@ -205,6 +228,206 @@ void meson_crtc_irq(struct meson_drm *priv)
priv->viu.osd1_commit = false;
}
/* Update the VD1 registers */
if (priv->viu.vd1_enabled && priv->viu.vd1_commit) {
switch (priv->viu.vd1_planes) {
case 3:
if (priv->canvas)
meson_canvas_config(priv->canvas,
priv->canvas_id_vd1_2,
priv->viu.vd1_addr2,
priv->viu.vd1_stride2,
priv->viu.vd1_height2,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
else
meson_canvas_setup(priv, MESON_CANVAS_ID_VD1_2,
priv->viu.vd1_addr2,
priv->viu.vd1_stride2,
priv->viu.vd1_height2,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
/* fallthrough */
case 2:
if (priv->canvas)
meson_canvas_config(priv->canvas,
priv->canvas_id_vd1_1,
priv->viu.vd1_addr1,
priv->viu.vd1_stride1,
priv->viu.vd1_height1,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
else
meson_canvas_setup(priv, MESON_CANVAS_ID_VD1_1,
priv->viu.vd1_addr1,
priv->viu.vd1_stride1,
priv->viu.vd1_height1,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
/* fallthrough */
case 1:
if (priv->canvas)
meson_canvas_config(priv->canvas,
priv->canvas_id_vd1_0,
priv->viu.vd1_addr0,
priv->viu.vd1_stride0,
priv->viu.vd1_height0,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
else
meson_canvas_setup(priv, MESON_CANVAS_ID_VD1_0,
priv->viu.vd1_addr0,
priv->viu.vd1_stride0,
priv->viu.vd1_height0,
MESON_CANVAS_WRAP_NONE,
MESON_CANVAS_BLKMODE_LINEAR,
MESON_CANVAS_ENDIAN_SWAP64);
};
writel_relaxed(priv->viu.vd1_if0_gen_reg,
priv->io_base + _REG(VD1_IF0_GEN_REG));
writel_relaxed(priv->viu.vd1_if0_gen_reg,
priv->io_base + _REG(VD2_IF0_GEN_REG));
writel_relaxed(priv->viu.vd1_if0_gen_reg2,
priv->io_base + _REG(VD1_IF0_GEN_REG2));
writel_relaxed(priv->viu.viu_vd1_fmt_ctrl,
priv->io_base + _REG(VIU_VD1_FMT_CTRL));
writel_relaxed(priv->viu.viu_vd1_fmt_ctrl,
priv->io_base + _REG(VIU_VD2_FMT_CTRL));
writel_relaxed(priv->viu.viu_vd1_fmt_w,
priv->io_base + _REG(VIU_VD1_FMT_W));
writel_relaxed(priv->viu.viu_vd1_fmt_w,
priv->io_base + _REG(VIU_VD2_FMT_W));
writel_relaxed(priv->viu.vd1_if0_canvas0,
priv->io_base + _REG(VD1_IF0_CANVAS0));
writel_relaxed(priv->viu.vd1_if0_canvas0,
priv->io_base + _REG(VD1_IF0_CANVAS1));
writel_relaxed(priv->viu.vd1_if0_canvas0,
priv->io_base + _REG(VD2_IF0_CANVAS0));
writel_relaxed(priv->viu.vd1_if0_canvas0,
priv->io_base + _REG(VD2_IF0_CANVAS1));
writel_relaxed(priv->viu.vd1_if0_luma_x0,
priv->io_base + _REG(VD1_IF0_LUMA_X0));
writel_relaxed(priv->viu.vd1_if0_luma_x0,
priv->io_base + _REG(VD1_IF0_LUMA_X1));
writel_relaxed(priv->viu.vd1_if0_luma_x0,
priv->io_base + _REG(VD2_IF0_LUMA_X0));
writel_relaxed(priv->viu.vd1_if0_luma_x0,
priv->io_base + _REG(VD2_IF0_LUMA_X1));
writel_relaxed(priv->viu.vd1_if0_luma_y0,
priv->io_base + _REG(VD1_IF0_LUMA_Y0));
writel_relaxed(priv->viu.vd1_if0_luma_y0,
priv->io_base + _REG(VD1_IF0_LUMA_Y1));
writel_relaxed(priv->viu.vd1_if0_luma_y0,
priv->io_base + _REG(VD2_IF0_LUMA_Y0));
writel_relaxed(priv->viu.vd1_if0_luma_y0,
priv->io_base + _REG(VD2_IF0_LUMA_Y1));
writel_relaxed(priv->viu.vd1_if0_chroma_x0,
priv->io_base + _REG(VD1_IF0_CHROMA_X0));
writel_relaxed(priv->viu.vd1_if0_chroma_x0,
priv->io_base + _REG(VD1_IF0_CHROMA_X1));
writel_relaxed(priv->viu.vd1_if0_chroma_x0,
priv->io_base + _REG(VD2_IF0_CHROMA_X0));
writel_relaxed(priv->viu.vd1_if0_chroma_x0,
priv->io_base + _REG(VD2_IF0_CHROMA_X1));
writel_relaxed(priv->viu.vd1_if0_chroma_y0,
priv->io_base + _REG(VD1_IF0_CHROMA_Y0));
writel_relaxed(priv->viu.vd1_if0_chroma_y0,
priv->io_base + _REG(VD1_IF0_CHROMA_Y1));
writel_relaxed(priv->viu.vd1_if0_chroma_y0,
priv->io_base + _REG(VD2_IF0_CHROMA_Y0));
writel_relaxed(priv->viu.vd1_if0_chroma_y0,
priv->io_base + _REG(VD2_IF0_CHROMA_Y1));
writel_relaxed(priv->viu.vd1_if0_repeat_loop,
priv->io_base + _REG(VD1_IF0_RPT_LOOP));
writel_relaxed(priv->viu.vd1_if0_repeat_loop,
priv->io_base + _REG(VD2_IF0_RPT_LOOP));
writel_relaxed(priv->viu.vd1_if0_luma0_rpt_pat,
priv->io_base + _REG(VD1_IF0_LUMA0_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_luma0_rpt_pat,
priv->io_base + _REG(VD2_IF0_LUMA0_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_luma0_rpt_pat,
priv->io_base + _REG(VD1_IF0_LUMA1_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_luma0_rpt_pat,
priv->io_base + _REG(VD2_IF0_LUMA1_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_chroma0_rpt_pat,
priv->io_base + _REG(VD1_IF0_CHROMA0_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_chroma0_rpt_pat,
priv->io_base + _REG(VD2_IF0_CHROMA0_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_chroma0_rpt_pat,
priv->io_base + _REG(VD1_IF0_CHROMA1_RPT_PAT));
writel_relaxed(priv->viu.vd1_if0_chroma0_rpt_pat,
priv->io_base + _REG(VD2_IF0_CHROMA1_RPT_PAT));
writel_relaxed(0, priv->io_base + _REG(VD1_IF0_LUMA_PSEL));
writel_relaxed(0, priv->io_base + _REG(VD1_IF0_CHROMA_PSEL));
writel_relaxed(0, priv->io_base + _REG(VD2_IF0_LUMA_PSEL));
writel_relaxed(0, priv->io_base + _REG(VD2_IF0_CHROMA_PSEL));
writel_relaxed(priv->viu.vd1_range_map_y,
priv->io_base + _REG(VD1_IF0_RANGE_MAP_Y));
writel_relaxed(priv->viu.vd1_range_map_cb,
priv->io_base + _REG(VD1_IF0_RANGE_MAP_CB));
writel_relaxed(priv->viu.vd1_range_map_cr,
priv->io_base + _REG(VD1_IF0_RANGE_MAP_CR));
writel_relaxed(0x78404,
priv->io_base + _REG(VPP_SC_MISC));
writel_relaxed(priv->viu.vpp_pic_in_height,
priv->io_base + _REG(VPP_PIC_IN_HEIGHT));
writel_relaxed(priv->viu.vpp_postblend_vd1_h_start_end,
priv->io_base + _REG(VPP_POSTBLEND_VD1_H_START_END));
writel_relaxed(priv->viu.vpp_blend_vd2_h_start_end,
priv->io_base + _REG(VPP_BLEND_VD2_H_START_END));
writel_relaxed(priv->viu.vpp_postblend_vd1_v_start_end,
priv->io_base + _REG(VPP_POSTBLEND_VD1_V_START_END));
writel_relaxed(priv->viu.vpp_blend_vd2_v_start_end,
priv->io_base + _REG(VPP_BLEND_VD2_V_START_END));
writel_relaxed(priv->viu.vpp_hsc_region12_startp,
priv->io_base + _REG(VPP_HSC_REGION12_STARTP));
writel_relaxed(priv->viu.vpp_hsc_region34_startp,
priv->io_base + _REG(VPP_HSC_REGION34_STARTP));
writel_relaxed(priv->viu.vpp_hsc_region4_endp,
priv->io_base + _REG(VPP_HSC_REGION4_ENDP));
writel_relaxed(priv->viu.vpp_hsc_start_phase_step,
priv->io_base + _REG(VPP_HSC_START_PHASE_STEP));
writel_relaxed(priv->viu.vpp_hsc_region1_phase_slope,
priv->io_base + _REG(VPP_HSC_REGION1_PHASE_SLOPE));
writel_relaxed(priv->viu.vpp_hsc_region3_phase_slope,
priv->io_base + _REG(VPP_HSC_REGION3_PHASE_SLOPE));
writel_relaxed(priv->viu.vpp_line_in_length,
priv->io_base + _REG(VPP_LINE_IN_LENGTH));
writel_relaxed(priv->viu.vpp_preblend_h_size,
priv->io_base + _REG(VPP_PREBLEND_H_SIZE));
writel_relaxed(priv->viu.vpp_vsc_region12_startp,
priv->io_base + _REG(VPP_VSC_REGION12_STARTP));
writel_relaxed(priv->viu.vpp_vsc_region34_startp,
priv->io_base + _REG(VPP_VSC_REGION34_STARTP));
writel_relaxed(priv->viu.vpp_vsc_region4_endp,
priv->io_base + _REG(VPP_VSC_REGION4_ENDP));
writel_relaxed(priv->viu.vpp_vsc_start_phase_step,
priv->io_base + _REG(VPP_VSC_START_PHASE_STEP));
writel_relaxed(priv->viu.vpp_vsc_ini_phase,
priv->io_base + _REG(VPP_VSC_INI_PHASE));
writel_relaxed(priv->viu.vpp_vsc_phase_ctrl,
priv->io_base + _REG(VPP_VSC_PHASE_CTRL));
writel_relaxed(priv->viu.vpp_hsc_phase_ctrl,
priv->io_base + _REG(VPP_HSC_PHASE_CTRL));
writel_relaxed(0x42, priv->io_base + _REG(VPP_SCALE_COEF_IDX));
/* Enable VD1 */
writel_bits_relaxed(VPP_VD1_PREBLEND | VPP_VD1_POSTBLEND |
VPP_COLOR_MNG_ENABLE,
VPP_VD1_PREBLEND | VPP_VD1_POSTBLEND |
VPP_COLOR_MNG_ENABLE,
priv->io_base + _REG(VPP_MISC));
priv->viu.vd1_commit = false;
}
drm_crtc_handle_vblank(priv->crtc);
spin_lock_irqsave(&priv->drm->event_lock, flags);


@ -41,6 +41,7 @@
#include "meson_drv.h"
#include "meson_plane.h"
#include "meson_overlay.h"
#include "meson_crtc.h"
#include "meson_venc_cvbs.h"
@ -208,24 +209,51 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
goto free_drm;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc");
if (!res) {
ret = -EINVAL;
goto free_drm;
}
/* Simply ioremap since it may be a shared register zone */
regs = devm_ioremap(dev, res->start, resource_size(res));
if (!regs) {
ret = -EADDRNOTAVAIL;
goto free_drm;
}
priv->canvas = meson_canvas_get(dev);
if (!IS_ERR(priv->canvas)) {
ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_osd1);
if (ret)
goto free_drm;
ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_0);
if (ret) {
meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
goto free_drm;
}
ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_1);
if (ret) {
meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
goto free_drm;
}
ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_2);
if (ret) {
meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_1);
goto free_drm;
}
} else {
priv->canvas = NULL;
priv->dmc = devm_regmap_init_mmio(dev, regs,
&meson_regmap_config);
if (IS_ERR(priv->dmc)) {
dev_err(&pdev->dev, "Couldn't create the DMC regmap\n");
ret = PTR_ERR(priv->dmc);
goto free_drm;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc");
if (!res) {
ret = -EINVAL;
goto free_drm;
}
/* Simply ioremap since it may be a shared register zone */
regs = devm_ioremap(dev, res->start, resource_size(res));
if (!regs) {
ret = -EADDRNOTAVAIL;
goto free_drm;
}
priv->dmc = devm_regmap_init_mmio(dev, regs,
&meson_regmap_config);
if (IS_ERR(priv->dmc)) {
dev_err(&pdev->dev, "Couldn't create the DMC regmap\n");
ret = PTR_ERR(priv->dmc);
goto free_drm;
}
}
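/*
 * Editor's aside: the four meson_canvas_alloc() calls above unwind by
 * hand on failure. A more compact equivalent (a sketch, not what this
 * commit does) would loop over the ids and free whatever was already
 * allocated:
 *
 *	u8 *ids[] = { &priv->canvas_id_osd1, &priv->canvas_id_vd1_0,
 *		      &priv->canvas_id_vd1_1, &priv->canvas_id_vd1_2 };
 *	int i, j;
 *
 *	for (i = 0; i < ARRAY_SIZE(ids); i++) {
 *		ret = meson_canvas_alloc(priv->canvas, ids[i]);
 *		if (!ret)
 *			continue;
 *		for (j = 0; j < i; j++)
 *			meson_canvas_free(priv->canvas, *ids[j]);
 *		goto free_drm;
 *	}
 */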
priv->vsync_irq = platform_get_irq(pdev, 0);
@ -264,6 +292,10 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
if (ret)
goto free_drm;
ret = meson_overlay_create(priv);
if (ret)
goto free_drm;
ret = meson_crtc_create(priv);
if (ret)
goto free_drm;
@ -300,6 +332,14 @@ static int meson_drv_bind(struct device *dev)
static void meson_drv_unbind(struct device *dev)
{
struct drm_device *drm = dev_get_drvdata(dev);
struct meson_drm *priv = drm->dev_private;
if (priv->canvas) {
meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_1);
meson_canvas_free(priv->canvas, priv->canvas_id_vd1_2);
}
drm_dev_unregister(drm);
drm_kms_helper_poll_fini(drm);


@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/of.h>
#include <linux/soc/amlogic/meson-canvas.h>
#include <drm/drmP.h>
struct meson_drm {
@ -31,9 +32,16 @@ struct meson_drm {
struct regmap *dmc;
int vsync_irq;
struct meson_canvas *canvas;
u8 canvas_id_osd1;
u8 canvas_id_vd1_0;
u8 canvas_id_vd1_1;
u8 canvas_id_vd1_2;
struct drm_device *drm;
struct drm_crtc *crtc;
struct drm_plane *primary_plane;
struct drm_plane *overlay_plane;
/* Components Data */
struct {
@ -45,6 +53,64 @@ struct meson_drm {
uint32_t osd1_addr;
uint32_t osd1_stride;
uint32_t osd1_height;
uint32_t osd_sc_ctrl0;
uint32_t osd_sc_i_wh_m1;
uint32_t osd_sc_o_h_start_end;
uint32_t osd_sc_o_v_start_end;
uint32_t osd_sc_v_ini_phase;
uint32_t osd_sc_v_phase_step;
uint32_t osd_sc_h_ini_phase;
uint32_t osd_sc_h_phase_step;
uint32_t osd_sc_h_ctrl0;
uint32_t osd_sc_v_ctrl0;
bool vd1_enabled;
bool vd1_commit;
unsigned int vd1_planes;
uint32_t vd1_if0_gen_reg;
uint32_t vd1_if0_luma_x0;
uint32_t vd1_if0_luma_y0;
uint32_t vd1_if0_chroma_x0;
uint32_t vd1_if0_chroma_y0;
uint32_t vd1_if0_repeat_loop;
uint32_t vd1_if0_luma0_rpt_pat;
uint32_t vd1_if0_chroma0_rpt_pat;
uint32_t vd1_range_map_y;
uint32_t vd1_range_map_cb;
uint32_t vd1_range_map_cr;
uint32_t viu_vd1_fmt_w;
uint32_t vd1_if0_canvas0;
uint32_t vd1_if0_gen_reg2;
uint32_t viu_vd1_fmt_ctrl;
uint32_t vd1_addr0;
uint32_t vd1_addr1;
uint32_t vd1_addr2;
uint32_t vd1_stride0;
uint32_t vd1_stride1;
uint32_t vd1_stride2;
uint32_t vd1_height0;
uint32_t vd1_height1;
uint32_t vd1_height2;
uint32_t vpp_pic_in_height;
uint32_t vpp_postblend_vd1_h_start_end;
uint32_t vpp_postblend_vd1_v_start_end;
uint32_t vpp_hsc_region12_startp;
uint32_t vpp_hsc_region34_startp;
uint32_t vpp_hsc_region4_endp;
uint32_t vpp_hsc_start_phase_step;
uint32_t vpp_hsc_region1_phase_slope;
uint32_t vpp_hsc_region3_phase_slope;
uint32_t vpp_line_in_length;
uint32_t vpp_preblend_h_size;
uint32_t vpp_vsc_region12_startp;
uint32_t vpp_vsc_region34_startp;
uint32_t vpp_vsc_region4_endp;
uint32_t vpp_vsc_start_phase_step;
uint32_t vpp_vsc_ini_phase;
uint32_t vpp_vsc_phase_ctrl;
uint32_t vpp_hsc_phase_ctrl;
uint32_t vpp_blend_vd2_h_start_end;
uint32_t vpp_blend_vd2_v_start_end;
} viu;
struct {


@ -0,0 +1,586 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (C) 2018 BayLibre, SAS
* Author: Neil Armstrong <narmstrong@baylibre.com>
* Copyright (C) 2015 Amlogic, Inc. All rights reserved.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/bitfield.h>
#include <linux/platform_device.h>
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_rect.h>
#include "meson_overlay.h"
#include "meson_vpp.h"
#include "meson_viu.h"
#include "meson_canvas.h"
#include "meson_registers.h"
/* VD1_IF0_GEN_REG */
#define VD_URGENT_CHROMA BIT(28)
#define VD_URGENT_LUMA BIT(27)
#define VD_HOLD_LINES(lines) FIELD_PREP(GENMASK(24, 19), lines)
#define VD_DEMUX_MODE_RGB BIT(16)
#define VD_BYTES_PER_PIXEL(val) FIELD_PREP(GENMASK(15, 14), val)
#define VD_CHRO_RPT_LASTL_CTRL BIT(6)
#define VD_LITTLE_ENDIAN BIT(4)
#define VD_SEPARATE_EN BIT(1)
#define VD_ENABLE BIT(0)
/* VD1_IF0_CANVAS0 */
#define CANVAS_ADDR2(addr) FIELD_PREP(GENMASK(23, 16), addr)
#define CANVAS_ADDR1(addr) FIELD_PREP(GENMASK(15, 8), addr)
#define CANVAS_ADDR0(addr) FIELD_PREP(GENMASK(7, 0), addr)
/* VD1_IF0_LUMA_X0 VD1_IF0_CHROMA_X0 */
#define VD_X_START(value) FIELD_PREP(GENMASK(14, 0), value)
#define VD_X_END(value) FIELD_PREP(GENMASK(30, 16), value)
/* VD1_IF0_LUMA_Y0 VD1_IF0_CHROMA_Y0 */
#define VD_Y_START(value) FIELD_PREP(GENMASK(12, 0), value)
#define VD_Y_END(value) FIELD_PREP(GENMASK(28, 16), value)
/* VD1_IF0_GEN_REG2 */
#define VD_COLOR_MAP(value) FIELD_PREP(GENMASK(1, 0), value)
/* VIU_VD1_FMT_CTRL */
#define VD_HORZ_Y_C_RATIO(value) FIELD_PREP(GENMASK(22, 21), value)
#define VD_HORZ_FMT_EN BIT(20)
#define VD_VERT_RPT_LINE0 BIT(16)
#define VD_VERT_INITIAL_PHASE(value) FIELD_PREP(GENMASK(11, 8), value)
#define VD_VERT_PHASE_STEP(value) FIELD_PREP(GENMASK(7, 1), value)
#define VD_VERT_FMT_EN BIT(0)
/* VPP_POSTBLEND_VD1_H_START_END */
#define VD_H_END(value) FIELD_PREP(GENMASK(11, 0), value)
#define VD_H_START(value) FIELD_PREP(GENMASK(27, 16), value)
/* VPP_POSTBLEND_VD1_V_START_END */
#define VD_V_END(value) FIELD_PREP(GENMASK(11, 0), value)
#define VD_V_START(value) FIELD_PREP(GENMASK(27, 16), value)
/* VPP_BLEND_VD2_V_START_END */
#define VD2_V_END(value) FIELD_PREP(GENMASK(11, 0), value)
#define VD2_V_START(value) FIELD_PREP(GENMASK(27, 16), value)
/* VIU_VD1_FMT_W */
#define VD_V_WIDTH(value) FIELD_PREP(GENMASK(11, 0), value)
#define VD_H_WIDTH(value) FIELD_PREP(GENMASK(27, 16), value)
/* VPP_HSC_REGION12_STARTP VPP_HSC_REGION34_STARTP */
#define VD_REGION24_START(value) FIELD_PREP(GENMASK(11, 0), value)
#define VD_REGION13_END(value) FIELD_PREP(GENMASK(27, 16), value)
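/*
 * Editor's aside: FIELD_PREP() shifts a value into the field described
 * by its GENMASK(). A worked example with the macros above:
 *
 *	VD_X_START(30)  == FIELD_PREP(GENMASK(14, 0), 30)    == 0x0000001e
 *	VD_X_END(1949)  == FIELD_PREP(GENMASK(30, 16), 1949) == 0x079d0000
 *	VD_X_START(30) | VD_X_END(1949)                      == 0x079d001e
 *
 * i.e. a single register word carrying the window [30, 1949].
 */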
struct meson_overlay {
struct drm_plane base;
struct meson_drm *priv;
};
#define to_meson_overlay(x) container_of(x, struct meson_overlay, base)
#define FRAC_16_16(mult, div) (((mult) << 16) / (div))
static int meson_overlay_atomic_check(struct drm_plane *plane,
struct drm_plane_state *state)
{
struct drm_crtc_state *crtc_state;
if (!state->crtc)
return 0;
crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
return drm_atomic_helper_check_plane_state(state, crtc_state,
FRAC_16_16(1, 5),
FRAC_16_16(5, 1),
true, true);
}
/* Takes a fixed 16.16 number and converts it to integer. */
static inline int64_t fixed16_to_int(int64_t value)
{
return value >> 16;
}
static const uint8_t skip_tab[6] = {
0x24, 0x04, 0x68, 0x48, 0x28, 0x08,
};
static void meson_overlay_get_vertical_phase(unsigned int ratio_y, int *phase,
int *repeat, bool interlace)
{
int offset_in = 0;
int offset_out = 0;
int repeat_skip = 0;
if (!interlace && ratio_y > (1 << 18))
offset_out = (1 * ratio_y) >> 10;
while ((offset_in + (4 << 8)) <= offset_out) {
repeat_skip++;
offset_in += 4 << 8;
}
*phase = (offset_out - offset_in) >> 2;
if (*phase > 0x100)
repeat_skip++;
*phase = *phase & 0xff;
if (repeat_skip > 5)
repeat_skip = 5;
*repeat = skip_tab[repeat_skip];
}
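/*
 * Editor's aside, a worked case: for a 2x vertical upscale,
 * ratio_y == (h_in << 18) / video_height == 1 << 17, which is not
 * above 1 << 18, so offset_out stays 0, the while loop never runs,
 * *phase == 0 and *repeat == skip_tab[0] == 0x24. Line skipping only
 * kicks in once the progressive source is scaled down below 1:1.
 */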
static void meson_overlay_setup_scaler_params(struct meson_drm *priv,
struct drm_plane *plane,
bool interlace_mode)
{
struct drm_crtc_state *crtc_state = priv->crtc->state;
int video_top, video_left, video_width, video_height;
struct drm_plane_state *state = plane->state;
unsigned int vd_start_lines, vd_end_lines;
unsigned int hd_start_lines, hd_end_lines;
unsigned int crtc_height, crtc_width;
unsigned int vsc_startp, vsc_endp;
unsigned int hsc_startp, hsc_endp;
unsigned int crop_top, crop_left;
int vphase, vphase_repeat_skip;
unsigned int ratio_x, ratio_y;
int temp_height, temp_width;
unsigned int w_in, h_in;
int temp, start, end;
if (!crtc_state) {
DRM_ERROR("Invalid crtc_state\n");
return;
}
crtc_height = crtc_state->mode.vdisplay;
crtc_width = crtc_state->mode.hdisplay;
w_in = fixed16_to_int(state->src_w);
h_in = fixed16_to_int(state->src_h);
crop_top = fixed16_to_int(state->src_y);
crop_left = fixed16_to_int(state->src_x);
video_top = state->crtc_y;
video_left = state->crtc_x;
video_width = state->crtc_w;
video_height = state->crtc_h;
DRM_DEBUG("crtc_width %d crtc_height %d interlace %d\n",
crtc_width, crtc_height, interlace_mode);
DRM_DEBUG("w_in %d h_in %d crop_top %d crop_left %d\n",
w_in, h_in, crop_top, crop_left);
DRM_DEBUG("video top %d left %d width %d height %d\n",
video_top, video_left, video_width, video_height);
ratio_x = (w_in << 18) / video_width;
ratio_y = (h_in << 18) / video_height;
if (ratio_x * video_width < (w_in << 18))
ratio_x++;
DRM_DEBUG("ratio x 0x%x y 0x%x\n", ratio_x, ratio_y);
meson_overlay_get_vertical_phase(ratio_y, &vphase, &vphase_repeat_skip,
interlace_mode);
DRM_DEBUG("vphase 0x%x skip %d\n", vphase, vphase_repeat_skip);
/* Vertical */
start = video_top + video_height / 2 - ((h_in << 17) / ratio_y);
end = (h_in << 18) / ratio_y + start - 1;
if (video_top < 0 && start < 0)
vd_start_lines = (-(start) * ratio_y) >> 18;
else if (start < video_top)
vd_start_lines = ((video_top - start) * ratio_y) >> 18;
else
vd_start_lines = 0;
if (video_top < 0)
temp_height = min_t(unsigned int,
video_top + video_height - 1,
crtc_height - 1);
else
temp_height = min_t(unsigned int,
video_top + video_height - 1,
crtc_height - 1) - video_top + 1;
temp = vd_start_lines + (temp_height * ratio_y >> 18);
vd_end_lines = (temp <= (h_in - 1)) ? temp : (h_in - 1);
vd_start_lines += crop_top;
vd_end_lines += crop_top;
/*
* TOFIX: Input frames are handled and scaled like progressive frames,
* proper handling of interlaced field input frames need to be figured
* out using the proper framebuffer flags set by userspace.
*/
if (interlace_mode) {
start >>= 1;
end >>= 1;
}
vsc_startp = max_t(int, start,
max_t(int, 0, video_top));
vsc_endp = min_t(int, end,
min_t(int, crtc_height - 1,
video_top + video_height - 1));
DRM_DEBUG("vsc startp %d endp %d start_lines %d end_lines %d\n",
vsc_startp, vsc_endp, vd_start_lines, vd_end_lines);
/* Horizontal */
start = video_left + video_width / 2 - ((w_in << 17) / ratio_x);
end = (w_in << 18) / ratio_x + start - 1;
if (video_left < 0 && start < 0)
hd_start_lines = (-(start) * ratio_x) >> 18;
else if (start < video_left)
hd_start_lines = ((video_left - start) * ratio_x) >> 18;
else
hd_start_lines = 0;
if (video_left < 0)
temp_width = min_t(unsigned int,
video_left + video_width - 1,
crtc_width - 1);
else
temp_width = min_t(unsigned int,
video_left + video_width - 1,
crtc_width - 1) - video_left + 1;
temp = hd_start_lines + (temp_width * ratio_x >> 18);
hd_end_lines = (temp <= (w_in - 1)) ? temp : (w_in - 1);
priv->viu.vpp_line_in_length = hd_end_lines - hd_start_lines + 1;
hsc_startp = max_t(int, start, max_t(int, 0, video_left));
hsc_endp = min_t(int, end, min_t(int, crtc_width - 1,
video_left + video_width - 1));
hd_start_lines += crop_left;
hd_end_lines += crop_left;
DRM_DEBUG("hsc startp %d endp %d start_lines %d end_lines %d\n",
hsc_startp, hsc_endp, hd_start_lines, hd_end_lines);
priv->viu.vpp_vsc_start_phase_step = ratio_y << 6;
priv->viu.vpp_vsc_ini_phase = vphase << 8;
priv->viu.vpp_vsc_phase_ctrl = (1 << 13) | (4 << 8) |
vphase_repeat_skip;
priv->viu.vd1_if0_luma_x0 = VD_X_START(hd_start_lines) |
VD_X_END(hd_end_lines);
priv->viu.vd1_if0_chroma_x0 = VD_X_START(hd_start_lines >> 1) |
VD_X_END(hd_end_lines >> 1);
priv->viu.viu_vd1_fmt_w =
VD_H_WIDTH(hd_end_lines - hd_start_lines + 1) |
VD_V_WIDTH(hd_end_lines/2 - hd_start_lines/2 + 1);
priv->viu.vd1_if0_luma_y0 = VD_Y_START(vd_start_lines) |
VD_Y_END(vd_end_lines);
priv->viu.vd1_if0_chroma_y0 = VD_Y_START(vd_start_lines >> 1) |
VD_Y_END(vd_end_lines >> 1);
priv->viu.vpp_pic_in_height = h_in;
priv->viu.vpp_postblend_vd1_h_start_end = VD_H_START(hsc_startp) |
VD_H_END(hsc_endp);
priv->viu.vpp_blend_vd2_h_start_end = VD_H_START(hd_start_lines) |
VD_H_END(hd_end_lines);
priv->viu.vpp_hsc_region12_startp = VD_REGION13_END(0) |
VD_REGION24_START(hsc_startp);
priv->viu.vpp_hsc_region34_startp =
VD_REGION13_END(hsc_startp) |
VD_REGION24_START(hsc_endp - hsc_startp);
priv->viu.vpp_hsc_region4_endp = hsc_endp - hsc_startp;
priv->viu.vpp_hsc_start_phase_step = ratio_x << 6;
priv->viu.vpp_hsc_region1_phase_slope = 0;
priv->viu.vpp_hsc_region3_phase_slope = 0;
priv->viu.vpp_hsc_phase_ctrl = (1 << 21) | (4 << 16);
priv->viu.vpp_line_in_length = hd_end_lines - hd_start_lines + 1;
priv->viu.vpp_preblend_h_size = hd_end_lines - hd_start_lines + 1;
priv->viu.vpp_postblend_vd1_v_start_end = VD_V_START(vsc_startp) |
VD_V_END(vsc_endp);
priv->viu.vpp_blend_vd2_v_start_end =
VD2_V_START((vd_end_lines + 1) >> 1) |
VD2_V_END(vd_end_lines);
priv->viu.vpp_vsc_region12_startp = 0;
priv->viu.vpp_vsc_region34_startp =
VD_REGION13_END(vsc_endp - vsc_startp) |
VD_REGION24_START(vsc_endp - vsc_startp);
priv->viu.vpp_vsc_region4_endp = vsc_endp - vsc_startp;
priv->viu.vpp_vsc_start_phase_step = ratio_y << 6;
}
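/*
 * Editor's aside: the ratios above are 2.18 fixed point. For an
 * assumed 1920-pixel-wide source displayed in a 1280-pixel-wide
 * window, ratio_x == (1920 << 18) / 1280 == 0x60000, i.e. 1.5, and
 * the phase step written to the hardware is ratio_x << 6 (the same
 * value with 24 fractional bits).
 */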
static void meson_overlay_atomic_update(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct meson_overlay *meson_overlay = to_meson_overlay(plane);
struct drm_plane_state *state = plane->state;
struct drm_framebuffer *fb = state->fb;
struct meson_drm *priv = meson_overlay->priv;
struct drm_gem_cma_object *gem;
unsigned long flags;
bool interlace_mode;
DRM_DEBUG_DRIVER("\n");
/* Fallback if canvas provider is not available */
if (!priv->canvas) {
priv->canvas_id_vd1_0 = MESON_CANVAS_ID_VD1_0;
priv->canvas_id_vd1_1 = MESON_CANVAS_ID_VD1_1;
priv->canvas_id_vd1_2 = MESON_CANVAS_ID_VD1_2;
}
interlace_mode = state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE;
spin_lock_irqsave(&priv->drm->event_lock, flags);
priv->viu.vd1_if0_gen_reg = VD_URGENT_CHROMA |
VD_URGENT_LUMA |
VD_HOLD_LINES(9) |
VD_CHRO_RPT_LASTL_CTRL |
VD_ENABLE;
/* Setup scaler params */
meson_overlay_setup_scaler_params(priv, plane, interlace_mode);
priv->viu.vd1_if0_repeat_loop = 0;
priv->viu.vd1_if0_luma0_rpt_pat = interlace_mode ? 8 : 0;
priv->viu.vd1_if0_chroma0_rpt_pat = interlace_mode ? 8 : 0;
priv->viu.vd1_range_map_y = 0;
priv->viu.vd1_range_map_cb = 0;
priv->viu.vd1_range_map_cr = 0;
/* Default values for RGB888/YUV444 */
priv->viu.vd1_if0_gen_reg2 = 0;
priv->viu.viu_vd1_fmt_ctrl = 0;
switch (fb->format->format) {
/* TOFIX DRM_FORMAT_RGB888 should be supported */
case DRM_FORMAT_YUYV:
priv->viu.vd1_if0_gen_reg |= VD_BYTES_PER_PIXEL(1);
priv->viu.vd1_if0_canvas0 =
CANVAS_ADDR2(priv->canvas_id_vd1_0) |
CANVAS_ADDR1(priv->canvas_id_vd1_0) |
CANVAS_ADDR0(priv->canvas_id_vd1_0);
priv->viu.viu_vd1_fmt_ctrl = VD_HORZ_Y_C_RATIO(1) | /* /2 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(16) | /* /2 */
VD_VERT_FMT_EN;
break;
case DRM_FORMAT_NV12:
case DRM_FORMAT_NV21:
priv->viu.vd1_if0_gen_reg |= VD_SEPARATE_EN;
priv->viu.vd1_if0_canvas0 =
CANVAS_ADDR2(priv->canvas_id_vd1_1) |
CANVAS_ADDR1(priv->canvas_id_vd1_1) |
CANVAS_ADDR0(priv->canvas_id_vd1_0);
if (fb->format->format == DRM_FORMAT_NV12)
priv->viu.vd1_if0_gen_reg2 = VD_COLOR_MAP(1);
else
priv->viu.vd1_if0_gen_reg2 = VD_COLOR_MAP(2);
priv->viu.viu_vd1_fmt_ctrl = VD_HORZ_Y_C_RATIO(1) | /* /2 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(8) | /* /4 */
VD_VERT_FMT_EN;
break;
case DRM_FORMAT_YUV444:
case DRM_FORMAT_YUV422:
case DRM_FORMAT_YUV420:
case DRM_FORMAT_YUV411:
case DRM_FORMAT_YUV410:
priv->viu.vd1_if0_gen_reg |= VD_SEPARATE_EN;
priv->viu.vd1_if0_canvas0 =
CANVAS_ADDR2(priv->canvas_id_vd1_2) |
CANVAS_ADDR1(priv->canvas_id_vd1_1) |
CANVAS_ADDR0(priv->canvas_id_vd1_0);
switch (fb->format->format) {
case DRM_FORMAT_YUV422:
priv->viu.viu_vd1_fmt_ctrl =
VD_HORZ_Y_C_RATIO(1) | /* /2 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(16) | /* /2 */
VD_VERT_FMT_EN;
break;
case DRM_FORMAT_YUV420:
priv->viu.viu_vd1_fmt_ctrl =
VD_HORZ_Y_C_RATIO(1) | /* /2 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(8) | /* /4 */
VD_VERT_FMT_EN;
break;
case DRM_FORMAT_YUV411:
priv->viu.viu_vd1_fmt_ctrl =
VD_HORZ_Y_C_RATIO(2) | /* /4 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(16) | /* /2 */
VD_VERT_FMT_EN;
break;
case DRM_FORMAT_YUV410:
priv->viu.viu_vd1_fmt_ctrl =
VD_HORZ_Y_C_RATIO(2) | /* /4 */
VD_HORZ_FMT_EN |
VD_VERT_RPT_LINE0 |
VD_VERT_INITIAL_PHASE(12) |
VD_VERT_PHASE_STEP(8) | /* /4 */
VD_VERT_FMT_EN;
break;
}
break;
}
/* Update Canvas with buffer address */
priv->viu.vd1_planes = drm_format_num_planes(fb->format->format);
switch (priv->viu.vd1_planes) {
case 3:
gem = drm_fb_cma_get_gem_obj(fb, 2);
priv->viu.vd1_addr2 = gem->paddr + fb->offsets[2];
priv->viu.vd1_stride2 = fb->pitches[2];
priv->viu.vd1_height2 =
drm_format_plane_height(fb->height,
fb->format->format, 2);
DRM_DEBUG("plane 2 addr 0x%x stride %d height %d\n",
priv->viu.vd1_addr2,
priv->viu.vd1_stride2,
priv->viu.vd1_height2);
/* fallthrough */
case 2:
gem = drm_fb_cma_get_gem_obj(fb, 1);
priv->viu.vd1_addr1 = gem->paddr + fb->offsets[1];
priv->viu.vd1_stride1 = fb->pitches[1];
priv->viu.vd1_height1 =
drm_format_plane_height(fb->height,
fb->format->format, 1);
DRM_DEBUG("plane 1 addr 0x%x stride %d height %d\n",
priv->viu.vd1_addr1,
priv->viu.vd1_stride1,
priv->viu.vd1_height1);
/* fallthrough */
case 1:
gem = drm_fb_cma_get_gem_obj(fb, 0);
priv->viu.vd1_addr0 = gem->paddr + fb->offsets[0];
priv->viu.vd1_stride0 = fb->pitches[0];
priv->viu.vd1_height0 =
drm_format_plane_height(fb->height,
fb->format->format, 0);
DRM_DEBUG("plane 0 addr 0x%x stride %d height %d\n",
priv->viu.vd1_addr0,
priv->viu.vd1_stride0,
priv->viu.vd1_height0);
}
priv->viu.vd1_enabled = true;
spin_unlock_irqrestore(&priv->drm->event_lock, flags);
DRM_DEBUG_DRIVER("\n");
}
static void meson_overlay_atomic_disable(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct meson_overlay *meson_overlay = to_meson_overlay(plane);
struct meson_drm *priv = meson_overlay->priv;
DRM_DEBUG_DRIVER("\n");
priv->viu.vd1_enabled = false;
/* Disable VD1 */
writel_bits_relaxed(VPP_VD1_POSTBLEND | VPP_VD1_PREBLEND, 0,
priv->io_base + _REG(VPP_MISC));
}
static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
.atomic_check = meson_overlay_atomic_check,
.atomic_disable = meson_overlay_atomic_disable,
.atomic_update = meson_overlay_atomic_update,
};
static const struct drm_plane_funcs meson_overlay_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = drm_plane_cleanup,
.reset = drm_atomic_helper_plane_reset,
.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};
static const uint32_t supported_drm_formats[] = {
DRM_FORMAT_YUYV,
DRM_FORMAT_NV12,
DRM_FORMAT_NV21,
DRM_FORMAT_YUV444,
DRM_FORMAT_YUV422,
DRM_FORMAT_YUV420,
DRM_FORMAT_YUV411,
DRM_FORMAT_YUV410,
};
int meson_overlay_create(struct meson_drm *priv)
{
struct meson_overlay *meson_overlay;
struct drm_plane *plane;
DRM_DEBUG_DRIVER("\n");
meson_overlay = devm_kzalloc(priv->drm->dev, sizeof(*meson_overlay),
GFP_KERNEL);
if (!meson_overlay)
return -ENOMEM;
meson_overlay->priv = priv;
plane = &meson_overlay->base;
drm_universal_plane_init(priv->drm, plane, 0xFF,
&meson_overlay_funcs,
supported_drm_formats,
ARRAY_SIZE(supported_drm_formats),
NULL,
DRM_PLANE_TYPE_OVERLAY, "meson_overlay_plane");
drm_plane_helper_add(plane, &meson_overlay_helper_funcs);
priv->overlay_plane = plane;
DRM_DEBUG_DRIVER("\n");
return 0;
}


@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Copyright (C) 2018 BayLibre, SAS
* Author: Neil Armstrong <narmstrong@baylibre.com>
*/
#ifndef __MESON_OVERLAY_H
#define __MESON_OVERLAY_H
#include "meson_drv.h"
int meson_overlay_create(struct meson_drm *priv);
#endif /* __MESON_OVERLAY_H */


@ -24,6 +24,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/bitfield.h>
#include <linux/platform_device.h>
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
@ -39,12 +40,50 @@
#include "meson_canvas.h"
#include "meson_registers.h"
/* OSD_SCI_WH_M1 */
#define SCI_WH_M1_W(w) FIELD_PREP(GENMASK(28, 16), w)
#define SCI_WH_M1_H(h) FIELD_PREP(GENMASK(12, 0), h)
/* OSD_SCO_H_START_END */
/* OSD_SCO_V_START_END */
#define SCO_HV_START(start) FIELD_PREP(GENMASK(27, 16), start)
#define SCO_HV_END(end) FIELD_PREP(GENMASK(11, 0), end)
/* OSD_SC_CTRL0 */
#define SC_CTRL0_PATH_EN BIT(3)
#define SC_CTRL0_SEL_OSD1 BIT(2)
/* OSD_VSC_CTRL0 */
#define VSC_BANK_LEN(value) FIELD_PREP(GENMASK(2, 0), value)
#define VSC_TOP_INI_RCV_NUM(value) FIELD_PREP(GENMASK(6, 3), value)
#define VSC_TOP_RPT_L0_NUM(value) FIELD_PREP(GENMASK(9, 8), value)
#define VSC_BOT_INI_RCV_NUM(value) FIELD_PREP(GENMASK(14, 11), value)
#define VSC_BOT_RPT_L0_NUM(value) FIELD_PREP(GENMASK(17, 16), value)
#define VSC_PROG_INTERLACE BIT(23)
#define VSC_VERTICAL_SCALER_EN BIT(24)
/* OSD_VSC_INI_PHASE */
#define VSC_INI_PHASE_BOT(bottom) FIELD_PREP(GENMASK(31, 16), bottom)
#define VSC_INI_PHASE_TOP(top) FIELD_PREP(GENMASK(15, 0), top)
/* OSD_HSC_CTRL0 */
#define HSC_BANK_LENGTH(value) FIELD_PREP(GENMASK(2, 0), value)
#define HSC_INI_RCV_NUM0(value) FIELD_PREP(GENMASK(6, 3), value)
#define HSC_RPT_P0_NUM0(value) FIELD_PREP(GENMASK(9, 8), value)
#define HSC_HORIZ_SCALER_EN BIT(22)
/* VPP_OSD_VSC_PHASE_STEP */
/* VPP_OSD_HSC_PHASE_STEP */
#define SC_PHASE_STEP(value) FIELD_PREP(GENMASK(27, 0), value)
struct meson_plane {
struct drm_plane base;
struct meson_drm *priv;
};
#define to_meson_plane(x) container_of(x, struct meson_plane, base)
#define FRAC_16_16(mult, div) (((mult) << 16) / (div))
static int meson_plane_atomic_check(struct drm_plane *plane,
struct drm_plane_state *state)
{
@ -57,10 +96,15 @@ static int meson_plane_atomic_check(struct drm_plane *plane,
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
/*
* Only allow :
* - Upscaling up to 5x, vertical and horizontal
* - Final coordinates must match crtc size
*/
return drm_atomic_helper_check_plane_state(state, crtc_state,
FRAC_16_16(1, 5),
DRM_PLANE_HELPER_NO_SCALING,
DRM_PLANE_HELPER_NO_SCALING,
true, true);
false, true);
}
/* Takes a fixed 16.16 number and converts it to integer. */
@ -74,22 +118,20 @@ static void meson_plane_atomic_update(struct drm_plane *plane,
{
struct meson_plane *meson_plane = to_meson_plane(plane);
struct drm_plane_state *state = plane->state;
struct drm_framebuffer *fb = state->fb;
struct drm_rect dest = drm_plane_state_dest(state);
struct meson_drm *priv = meson_plane->priv;
struct drm_framebuffer *fb = state->fb;
struct drm_gem_cma_object *gem;
struct drm_rect src = {
.x1 = (state->src_x),
.y1 = (state->src_y),
.x2 = (state->src_x + state->src_w),
.y2 = (state->src_y + state->src_h),
};
struct drm_rect dest = {
.x1 = state->crtc_x,
.y1 = state->crtc_y,
.x2 = state->crtc_x + state->crtc_w,
.y2 = state->crtc_y + state->crtc_h,
};
unsigned long flags;
int vsc_ini_rcv_num, vsc_ini_rpt_p0_num;
int vsc_bot_rcv_num, vsc_bot_rpt_p0_num;
int hsc_ini_rcv_num, hsc_ini_rpt_p0_num;
int hf_phase_step, vf_phase_step;
int src_w, src_h, dst_w, dst_h;
int bot_ini_phase;
int hf_bank_len;
int vf_bank_len;
u8 canvas_id_osd1;
/*
* Update Coordinates
@ -104,8 +146,13 @@ static void meson_plane_atomic_update(struct drm_plane *plane,
(0xFF << OSD_GLOBAL_ALPHA_SHIFT) |
OSD_BLK0_ENABLE;
if (priv->canvas)
canvas_id_osd1 = priv->canvas_id_osd1;
else
canvas_id_osd1 = MESON_CANVAS_ID_OSD1;
/* Set up BLK0 to point to the right canvas */
priv->viu.osd1_blk0_cfg[0] = ((MESON_CANVAS_ID_OSD1 << OSD_CANVAS_SEL) |
priv->viu.osd1_blk0_cfg[0] = ((canvas_id_osd1 << OSD_CANVAS_SEL) |
OSD_ENDIANNESS_LE);
/* On GXBB, Use the old non-HDR RGB2YUV converter */
@ -137,23 +184,115 @@ static void meson_plane_atomic_update(struct drm_plane *plane,
break;
};
if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) {
priv->viu.osd1_interlace = true;
/* Default scaler parameters */
vsc_bot_rcv_num = 0;
vsc_bot_rpt_p0_num = 0;
hf_bank_len = 4;
vf_bank_len = 4;
if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) {
vsc_bot_rcv_num = 6;
vsc_bot_rpt_p0_num = 2;
}
hsc_ini_rcv_num = hf_bank_len;
vsc_ini_rcv_num = vf_bank_len;
hsc_ini_rpt_p0_num = (hf_bank_len / 2) - 1;
vsc_ini_rpt_p0_num = (vf_bank_len / 2) - 1;
src_w = fixed16_to_int(state->src_w);
src_h = fixed16_to_int(state->src_h);
dst_w = state->crtc_w;
dst_h = state->crtc_h;
/*
* When the output is interlaced, the OSD must switch between
* each field using the INTERLACE_SEL_ODD (0) of VIU_OSD1_BLK0_CFG_W0
* at each vsync.
* But the vertical scaler can provide such functionality if it
* is configured for 2:1 scaling with interlace options enabled.
*/
if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) {
dest.y1 /= 2;
dest.y2 /= 2;
} else
priv->viu.osd1_interlace = false;
dst_h /= 2;
}
hf_phase_step = ((src_w << 18) / dst_w) << 6;
vf_phase_step = (src_h << 20) / dst_h;
if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)
bot_ini_phase = ((vf_phase_step / 2) >> 4);
else
bot_ini_phase = 0;
vf_phase_step = (vf_phase_step << 4);
/* In interlaced mode, scaler is always active */
if (src_h != dst_h || src_w != dst_w) {
priv->viu.osd_sc_i_wh_m1 = SCI_WH_M1_W(src_w - 1) |
SCI_WH_M1_H(src_h - 1);
priv->viu.osd_sc_o_h_start_end = SCO_HV_START(dest.x1) |
SCO_HV_END(dest.x2 - 1);
priv->viu.osd_sc_o_v_start_end = SCO_HV_START(dest.y1) |
SCO_HV_END(dest.y2 - 1);
/* Enable OSD Scaler */
priv->viu.osd_sc_ctrl0 = SC_CTRL0_PATH_EN | SC_CTRL0_SEL_OSD1;
} else {
priv->viu.osd_sc_i_wh_m1 = 0;
priv->viu.osd_sc_o_h_start_end = 0;
priv->viu.osd_sc_o_v_start_end = 0;
priv->viu.osd_sc_ctrl0 = 0;
}
/* In interlaced mode, vertical scaler is always active */
if (src_h != dst_h) {
priv->viu.osd_sc_v_ctrl0 =
VSC_BANK_LEN(vf_bank_len) |
VSC_TOP_INI_RCV_NUM(vsc_ini_rcv_num) |
VSC_TOP_RPT_L0_NUM(vsc_ini_rpt_p0_num) |
VSC_VERTICAL_SCALER_EN;
if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)
priv->viu.osd_sc_v_ctrl0 |=
VSC_BOT_INI_RCV_NUM(vsc_bot_rcv_num) |
VSC_BOT_RPT_L0_NUM(vsc_bot_rpt_p0_num) |
VSC_PROG_INTERLACE;
priv->viu.osd_sc_v_phase_step = SC_PHASE_STEP(vf_phase_step);
priv->viu.osd_sc_v_ini_phase = VSC_INI_PHASE_BOT(bot_ini_phase);
} else {
priv->viu.osd_sc_v_ctrl0 = 0;
priv->viu.osd_sc_v_phase_step = 0;
priv->viu.osd_sc_v_ini_phase = 0;
}
/* Horizontal scaler is only used if width does not match */
if (src_w != dst_w) {
priv->viu.osd_sc_h_ctrl0 =
HSC_BANK_LENGTH(hf_bank_len) |
HSC_INI_RCV_NUM0(hsc_ini_rcv_num) |
HSC_RPT_P0_NUM0(hsc_ini_rpt_p0_num) |
HSC_HORIZ_SCALER_EN;
priv->viu.osd_sc_h_phase_step = SC_PHASE_STEP(hf_phase_step);
priv->viu.osd_sc_h_ini_phase = 0;
} else {
priv->viu.osd_sc_h_ctrl0 = 0;
priv->viu.osd_sc_h_phase_step = 0;
priv->viu.osd_sc_h_ini_phase = 0;
}
/*
* The format of these registers is (x2 << 16 | x1),
* where x2 is exclusive.
* e.g. +30x1920 would be (1919 << 16) | 30
*/
priv->viu.osd1_blk0_cfg[1] = ((fixed16_to_int(src.x2) - 1) << 16) |
fixed16_to_int(src.x1);
priv->viu.osd1_blk0_cfg[2] = ((fixed16_to_int(src.y2) - 1) << 16) |
fixed16_to_int(src.y1);
priv->viu.osd1_blk0_cfg[1] =
((fixed16_to_int(state->src.x2) - 1) << 16) |
fixed16_to_int(state->src.x1);
priv->viu.osd1_blk0_cfg[2] =
((fixed16_to_int(state->src.y2) - 1) << 16) |
fixed16_to_int(state->src.y1);
priv->viu.osd1_blk0_cfg[3] = ((dest.x2 - 1) << 16) | dest.x1;
priv->viu.osd1_blk0_cfg[4] = ((dest.y2 - 1) << 16) | dest.y1;
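/*
 * Editor's aside: the "(end - 1) << 16 | start" packing above could be
 * factored into a helper, e.g. (hypothetical, not part of this commit):
 *
 *	static inline u32 meson_pack_start_end(u32 start, u32 end_excl)
 *	{
 *		return ((end_excl - 1) << 16) | start;
 *	}
 *
 * so meson_pack_start_end(30, 1950) == (1949 << 16) | 30.
 */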


@ -286,6 +286,7 @@
#define VIU_OSD1_MATRIX_COEF22_30 0x1a9d
#define VIU_OSD1_MATRIX_COEF31_32 0x1a9e
#define VIU_OSD1_MATRIX_COEF40_41 0x1a9f
#define VD1_IF0_GEN_REG3 0x1aa7
#define VIU_OSD1_EOTF_CTL 0x1ad4
#define VIU_OSD1_EOTF_COEF00_01 0x1ad5
#define VIU_OSD1_EOTF_COEF02_10 0x1ad6
@ -297,6 +298,7 @@
#define VIU_OSD1_OETF_CTL 0x1adc
#define VIU_OSD1_OETF_LUT_ADDR_PORT 0x1add
#define VIU_OSD1_OETF_LUT_DATA_PORT 0x1ade
#define AFBC_ENABLE 0x1ae0
/* vpp */
#define VPP_DUMMY_DATA 0x1d00
@ -349,6 +351,7 @@
#define VPP_VD2_PREBLEND BIT(15)
#define VPP_OSD1_PREBLEND BIT(16)
#define VPP_OSD2_PREBLEND BIT(17)
#define VPP_COLOR_MNG_ENABLE BIT(28)
#define VPP_OFIFO_SIZE 0x1d27
#define VPP_FIFO_STATUS 0x1d28
#define VPP_SMOKE_CTRL 0x1d29


@ -329,6 +329,21 @@ void meson_viu_init(struct meson_drm *priv)
0xff << OSD_REPLACE_SHIFT,
priv->io_base + _REG(VIU_OSD2_CTRL_STAT2));
/* Disable VD1 AFBC */
/* di_mif0_en=0 mif0_to_vpp_en=0 di_mad_en=0 */
writel_bits_relaxed(0x7 << 16, 0,
priv->io_base + _REG(VIU_MISC_CTRL0));
/* afbc vd1 set=0 */
writel_bits_relaxed(BIT(20), 0,
priv->io_base + _REG(VIU_MISC_CTRL0));
writel_relaxed(0, priv->io_base + _REG(AFBC_ENABLE));
writel_relaxed(0x00FF00C0,
priv->io_base + _REG(VD1_IF0_LUMA_FIFO_SIZE));
writel_relaxed(0x00FF00C0,
priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
priv->viu.osd1_enabled = false;
priv->viu.osd1_commit = false;
priv->viu.osd1_interlace = false;


@ -51,52 +51,6 @@ void meson_vpp_setup_mux(struct meson_drm *priv, unsigned int mux)
writel(mux, priv->io_base + _REG(VPU_VIU_VENC_MUX_CTRL));
}
/*
* When the output is interlaced, the OSD must switch between
* each field using the INTERLACE_SEL_ODD (0) of VIU_OSD1_BLK0_CFG_W0
* at each vsync.
* But the vertical scaler can provide such functionality if it
* is configured for 2:1 scaling with interlace options enabled.
*/
void meson_vpp_setup_interlace_vscaler_osd1(struct meson_drm *priv,
struct drm_rect *input)
{
writel_relaxed(BIT(3) /* Enable scaler */ |
BIT(2), /* Select OSD1 */
priv->io_base + _REG(VPP_OSD_SC_CTRL0));
writel_relaxed(((drm_rect_width(input) - 1) << 16) |
(drm_rect_height(input) - 1),
priv->io_base + _REG(VPP_OSD_SCI_WH_M1));
/* 2:1 scaling */
writel_relaxed(((input->x1) << 16) | (input->x2),
priv->io_base + _REG(VPP_OSD_SCO_H_START_END));
writel_relaxed(((input->y1 >> 1) << 16) | (input->y2 >> 1),
priv->io_base + _REG(VPP_OSD_SCO_V_START_END));
/* 2:1 scaling values */
writel_relaxed(BIT(16), priv->io_base + _REG(VPP_OSD_VSC_INI_PHASE));
writel_relaxed(BIT(25), priv->io_base + _REG(VPP_OSD_VSC_PHASE_STEP));
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_HSC_CTRL0));
writel_relaxed((4 << 0) /* osd_vsc_bank_length */ |
(4 << 3) /* osd_vsc_top_ini_rcv_num0 */ |
(1 << 8) /* osd_vsc_top_rpt_p0_num0 */ |
(6 << 11) /* osd_vsc_bot_ini_rcv_num0 */ |
(2 << 16) /* osd_vsc_bot_rpt_p0_num0 */ |
BIT(23) /* osd_prog_interlace */ |
BIT(24), /* Enable vertical scaler */
priv->io_base + _REG(VPP_OSD_VSC_CTRL0));
}
void meson_vpp_disable_interlace_vscaler_osd1(struct meson_drm *priv)
{
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_SC_CTRL0));
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_VSC_CTRL0));
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_HSC_CTRL0));
}
static unsigned int vpp_filter_coefs_4point_bspline[] = {
0x15561500, 0x14561600, 0x13561700, 0x12561800,
0x11551a00, 0x11541b00, 0x10541c00, 0x0f541d00,
@ -122,6 +76,31 @@ static void meson_vpp_write_scaling_filter_coefs(struct meson_drm *priv,
priv->io_base + _REG(VPP_OSD_SCALE_COEF));
}
static const uint32_t vpp_filter_coefs_bicubic[] = {
0x00800000, 0x007f0100, 0xff7f0200, 0xfe7f0300,
0xfd7e0500, 0xfc7e0600, 0xfb7d0800, 0xfb7c0900,
0xfa7b0b00, 0xfa7a0dff, 0xf9790fff, 0xf97711ff,
0xf87613ff, 0xf87416fe, 0xf87218fe, 0xf8701afe,
0xf76f1dfd, 0xf76d1ffd, 0xf76b21fd, 0xf76824fd,
0xf76627fc, 0xf76429fc, 0xf7612cfc, 0xf75f2ffb,
0xf75d31fb, 0xf75a34fb, 0xf75837fa, 0xf7553afa,
0xf8523cfa, 0xf8503ff9, 0xf84d42f9, 0xf84a45f9,
0xf84848f8
};
static void meson_vpp_write_vd_scaling_filter_coefs(struct meson_drm *priv,
const unsigned int *coefs,
bool is_horizontal)
{
int i;
writel_relaxed(is_horizontal ? BIT(8) : 0,
priv->io_base + _REG(VPP_SCALE_COEF_IDX));
for (i = 0; i < 33; i++)
writel_relaxed(coefs[i],
priv->io_base + _REG(VPP_SCALE_COEF));
}
void meson_vpp_init(struct meson_drm *priv)
{
/* set dummy data default YUV black */
@ -150,17 +129,34 @@ void meson_vpp_init(struct meson_drm *priv)
/* Force all planes off */
writel_bits_relaxed(VPP_OSD1_POSTBLEND | VPP_OSD2_POSTBLEND |
VPP_VD1_POSTBLEND | VPP_VD2_POSTBLEND, 0,
VPP_VD1_POSTBLEND | VPP_VD2_POSTBLEND |
VPP_VD1_PREBLEND | VPP_VD2_PREBLEND, 0,
priv->io_base + _REG(VPP_MISC));
/* Setup default VD settings */
writel_relaxed(4096,
priv->io_base + _REG(VPP_PREBLEND_VD1_H_START_END));
writel_relaxed(4096,
priv->io_base + _REG(VPP_BLEND_VD2_H_START_END));
/* Disable Scalers */
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_SC_CTRL0));
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_VSC_CTRL0));
writel_relaxed(0, priv->io_base + _REG(VPP_OSD_HSC_CTRL0));
writel_relaxed(4 | (4 << 8) | BIT(15),
priv->io_base + _REG(VPP_SC_MISC));
writel_relaxed(1, priv->io_base + _REG(VPP_VADJ_CTRL));
/* Write in the proper filter coefficients. */
meson_vpp_write_scaling_filter_coefs(priv,
vpp_filter_coefs_4point_bspline, false);
meson_vpp_write_scaling_filter_coefs(priv,
vpp_filter_coefs_4point_bspline, true);
/* Write the VD proper filter coefficients. */
meson_vpp_write_vd_scaling_filter_coefs(priv, vpp_filter_coefs_bicubic,
false);
meson_vpp_write_vd_scaling_filter_coefs(priv, vpp_filter_coefs_bicubic,
true);
}


@ -96,7 +96,7 @@ static int s6d16d0_prepare(struct drm_panel *panel)
ret = mipi_dsi_dcs_set_tear_on(dsi,
MIPI_DSI_DCS_TEAR_MODE_VBLANK);
if (ret) {
DRM_DEV_ERROR(s6->dev, "failed to enble vblank TE (%d)\n",
DRM_DEV_ERROR(s6->dev, "failed to enable vblank TE (%d)\n",
ret);
return ret;
}


@ -622,10 +622,14 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_kunmap;
ret = qxl_release_reserve_list(release, true);
ret = qxl_bo_pin(cursor_bo);
if (ret)
goto out_free_bo;
ret = qxl_release_reserve_list(release, true);
if (ret)
goto out_unpin;
ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
if (ret)
goto out_backoff;
@ -670,15 +674,17 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
qxl_release_fence_buffer_objects(release);
if (old_cursor_bo)
qxl_bo_unref(&old_cursor_bo);
if (old_cursor_bo != NULL)
qxl_bo_unpin(old_cursor_bo);
qxl_bo_unref(&old_cursor_bo);
qxl_bo_unref(&cursor_bo);
return;
out_backoff:
qxl_release_backoff_reserve_list(release);
out_unpin:
qxl_bo_unpin(cursor_bo);
out_free_bo:
qxl_bo_unref(&cursor_bo);
out_kunmap:
@ -757,7 +763,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
}
}
ret = qxl_bo_pin(user_bo, QXL_GEM_DOMAIN_CPU, NULL);
ret = qxl_bo_pin(user_bo);
if (ret)
return ret;
@ -1104,7 +1110,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
}
qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
ret = qxl_bo_pin(qdev->monitors_config_bo, QXL_GEM_DOMAIN_VRAM, NULL);
ret = qxl_bo_pin(qdev->monitors_config_bo);
if (ret)
return ret;

View File

@ -247,8 +247,7 @@ void qxl_draw_opaque_fb(const struct qxl_fb_image *qxl_fb_image,
qxl_release_fence_buffer_objects(release);
out_free_palette:
if (palette_bo)
qxl_bo_unref(&palette_bo);
qxl_bo_unref(&palette_bo);
out_free_image:
qxl_image_free_objects(qdev, dimage);
out_free_drawable:

View File

@ -111,7 +111,7 @@ static int qxlfb_create_pinned_object(struct qxl_device *qdev,
qbo->surf.stride = mode_cmd->pitches[0];
qbo->surf.format = SPICE_SURFACE_FMT_32_xRGB;
ret = qxl_bo_pin(qbo, QXL_GEM_DOMAIN_SURFACE, NULL);
ret = qxl_bo_pin(qbo);
if (ret) {
goto out_unref;
}


@ -313,10 +313,8 @@ error:
void qxl_device_fini(struct qxl_device *qdev)
{
if (qdev->current_release_bo[0])
qxl_bo_unref(&qdev->current_release_bo[0]);
if (qdev->current_release_bo[1])
qxl_bo_unref(&qdev->current_release_bo[1]);
qxl_bo_unref(&qdev->current_release_bo[0]);
qxl_bo_unref(&qdev->current_release_bo[1]);
flush_work(&qdev->gc_work);
qxl_ring_free(qdev->command_ring);
qxl_ring_free(qdev->cursor_ring);


@ -186,13 +186,9 @@ void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
struct qxl_bo *bo, void *pmap)
{
struct ttm_mem_type_manager *man = &bo->tbo.bdev->man[bo->tbo.mem.mem_type];
struct io_mapping *map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
else if (bo->tbo.mem.mem_type == TTM_PL_PRIV)
map = qdev->surface_mapping;
else
if ((bo->tbo.mem.mem_type != TTM_PL_VRAM) &&
(bo->tbo.mem.mem_type != TTM_PL_PRIV))
goto fallback;
io_mapping_unmap_atomic(pmap);
@ -200,7 +196,7 @@ void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
(void) ttm_mem_io_lock(man, false);
ttm_mem_io_free(bo->tbo.bdev, &bo->tbo.mem);
ttm_mem_io_unlock(man);
return ;
return;
fallback:
qxl_bo_kunmap(bo);
}
@ -220,7 +216,7 @@ struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo)
return bo;
}
static int __qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
static int __qxl_bo_pin(struct qxl_bo *bo)
{
struct ttm_operation_ctx ctx = { false, false };
struct drm_device *ddev = bo->gem_base.dev;
@ -228,16 +224,12 @@ static int __qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
if (bo->pin_count) {
bo->pin_count++;
if (gpu_addr)
*gpu_addr = qxl_bo_gpu_offset(bo);
return 0;
}
qxl_ttm_placement_from_domain(bo, domain, true);
qxl_ttm_placement_from_domain(bo, bo->type, true);
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (likely(r == 0)) {
bo->pin_count = 1;
if (gpu_addr != NULL)
*gpu_addr = qxl_bo_gpu_offset(bo);
}
if (unlikely(r != 0))
dev_err(ddev->dev, "%p pin failed\n", bo);
@ -270,7 +262,7 @@ static int __qxl_bo_unpin(struct qxl_bo *bo)
* beforehand, use the internal version directly __qxl_bo_pin.
*
*/
int qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
int qxl_bo_pin(struct qxl_bo *bo)
{
int r;
@ -278,7 +270,7 @@ int qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
if (r)
return r;
r = __qxl_bo_pin(bo, bo->type, NULL);
r = __qxl_bo_pin(bo);
qxl_bo_unreserve(bo);
return r;
}
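/*
 * Editor's aside: with the domain and gpu_addr arguments dropped, the
 * bo is pinned into the placement implied by bo->type, and call sites
 * migrate mechanically, mirroring the hunks above:
 *
 *	- ret = qxl_bo_pin(user_bo, QXL_GEM_DOMAIN_CPU, NULL);
 *	+ ret = qxl_bo_pin(user_bo);
 */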


@ -97,7 +97,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int pa
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo);
extern void qxl_bo_unref(struct qxl_bo **bo);
extern int qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr);
extern int qxl_bo_pin(struct qxl_bo *bo);
extern int qxl_bo_unpin(struct qxl_bo *bo);
extern void qxl_ttm_placement_from_domain(struct qxl_bo *qbo, u32 domain, bool pinned);
extern bool qxl_ttm_bo_is_qxl_bo(struct ttm_buffer_object *bo);


@ -427,8 +427,6 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release)
struct ttm_buffer_object *bo;
struct ttm_bo_global *glob;
struct ttm_bo_device *bdev;
struct ttm_bo_driver *driver;
struct qxl_bo *qbo;
struct ttm_validate_buffer *entry;
struct qxl_device *qdev;
@ -449,14 +447,12 @@ void qxl_release_fence_buffer_objects(struct qxl_release *release)
release->id | 0xf0000000, release->base.seqno);
trace_dma_fence_emit(&release->base);
driver = bdev->driver;
glob = bdev->glob;
spin_lock(&glob->lru_lock);
list_for_each_entry(entry, &release->bos, head) {
bo = entry->bo;
qbo = to_qxl_bo(bo);
reservation_object_add_shared_fence(bo->resv, &release->base);
ttm_bo_add_to_lru(bo);


@ -147,7 +147,7 @@ static int cdn_dp_mailbox_validate_receive(struct cdn_dp_device *dp,
}
static int cdn_dp_mailbox_read_receive(struct cdn_dp_device *dp,
u8 *buff, u8 buff_size)
u8 *buff, u16 buff_size)
{
u32 i;
int ret;
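/*
 * Editor's aside: with a u8 buff_size the length wraps for transfers
 * of 256 bytes or more ((u8)256 == 0), so widening the parameter to
 * u16 keeps large mailbox reads intact.
 */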


@ -252,10 +252,8 @@ int sti_crtc_vblank_cb(struct notifier_block *nb,
struct sti_compositor *compo;
struct drm_crtc *crtc = data;
struct sti_mixer *mixer;
struct sti_private *priv;
unsigned int pipe;
priv = crtc->dev->dev_private;
pipe = drm_crtc_index(crtc);
compo = container_of(nb, struct sti_compositor, vtg_vblank_nb[pipe]);
mixer = compo->mixer[pipe];


@ -478,8 +478,11 @@ static void sun4i_tcon0_mode_set_lvds(struct sun4i_tcon *tcon,
}
static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
const struct drm_encoder *encoder,
const struct drm_display_mode *mode)
{
struct drm_connector *connector = sun4i_tcon_get_connector(encoder);
struct drm_display_info display_info = connector->display_info;
unsigned int bp, hsync, vsync;
u8 clk_delay;
u32 val = 0;
@ -491,8 +494,7 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
sun4i_tcon0_mode_set_common(tcon, mode);
/* Set dithering if needed */
if (tcon->panel)
sun4i_tcon0_mode_set_dithering(tcon, tcon->panel->connector);
sun4i_tcon0_mode_set_dithering(tcon, connector);
/* Adjust clock delay */
clk_delay = sun4i_tcon_get_clk_delay(mode, 0);
@ -541,6 +543,9 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
if (mode->flags & DRM_MODE_FLAG_PVSYNC)
val |= SUN4I_TCON0_IO_POL_VSYNC_POSITIVE;
if (display_info.bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
/*
* On A20 and similar SoCs, the only way to achieve Positive Edge
* (Rising Edge) is setting dclk clock phase to 2/3 (240°).
@ -556,20 +561,16 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
* Following code is a way to avoid quirks all around TCON
* and DOTCLOCK drivers.
*/
if (tcon->panel) {
struct drm_panel *panel = tcon->panel;
struct drm_connector *connector = panel->connector;
struct drm_display_info display_info = connector->display_info;
if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_POSEDGE)
clk_set_phase(tcon->dclk, 240);
if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_POSEDGE)
clk_set_phase(tcon->dclk, 240);
if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_NEGEDGE)
clk_set_phase(tcon->dclk, 0);
}
if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_NEGEDGE)
clk_set_phase(tcon->dclk, 0);
regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
SUN4I_TCON0_IO_POL_HSYNC_POSITIVE | SUN4I_TCON0_IO_POL_VSYNC_POSITIVE,
SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |
SUN4I_TCON0_IO_POL_DE_NEGATIVE,
val);
/* Map output pins to channel 0 */
@ -684,7 +685,7 @@ void sun4i_tcon_mode_set(struct sun4i_tcon *tcon,
sun4i_tcon0_mode_set_lvds(tcon, encoder, mode);
break;
case DRM_MODE_ENCODER_NONE:
sun4i_tcon0_mode_set_rgb(tcon, mode);
sun4i_tcon0_mode_set_rgb(tcon, encoder, mode);
sun4i_tcon_set_mux(tcon, 0, encoder);
break;
case DRM_MODE_ENCODER_TVDAC:


@ -116,6 +116,7 @@
#define SUN4I_TCON0_IO_POL_REG 0x88
#define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase) ((phase & 3) << 28)
#define SUN4I_TCON0_IO_POL_DE_NEGATIVE BIT(27)
#define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE BIT(25)
#define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE BIT(24)


@ -36,77 +36,6 @@
* and registers the DRM device using devm_tinydrm_register().
*/
/**
* tinydrm_gem_cma_prime_import_sg_table - Produce a CMA GEM object from
* another driver's scatter/gather table of pinned pages
* @drm: DRM device to import into
* @attach: DMA-BUF attachment
* @sgt: Scatter/gather table of pinned pages
*
* This function imports a scatter/gather table exported via DMA-BUF by
* another driver using drm_gem_cma_prime_import_sg_table(). It sets the
* kernel virtual address on the CMA object. Drivers should use this as their
* &drm_driver->gem_prime_import_sg_table callback if they need the virtual
* address. tinydrm_gem_cma_free_object() should be used in combination with
* this function.
*
* Returns:
* A pointer to a newly created GEM object or an ERR_PTR-encoded negative
* error code on failure.
*/
struct drm_gem_object *
tinydrm_gem_cma_prime_import_sg_table(struct drm_device *drm,
struct dma_buf_attachment *attach,
struct sg_table *sgt)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *obj;
void *vaddr;
vaddr = dma_buf_vmap(attach->dmabuf);
if (!vaddr) {
DRM_ERROR("Failed to vmap PRIME buffer\n");
return ERR_PTR(-ENOMEM);
}
obj = drm_gem_cma_prime_import_sg_table(drm, attach, sgt);
if (IS_ERR(obj)) {
dma_buf_vunmap(attach->dmabuf, vaddr);
return obj;
}
cma_obj = to_drm_gem_cma_obj(obj);
cma_obj->vaddr = vaddr;
return obj;
}
EXPORT_SYMBOL(tinydrm_gem_cma_prime_import_sg_table);
/**
* tinydrm_gem_cma_free_object - Free resources associated with a CMA GEM
* object
* @gem_obj: GEM object to free
*
* This function frees the backing memory of the CMA GEM object, cleans up the
* GEM object state and frees the memory used to store the object itself using
* drm_gem_cma_free_object(). It also handles PRIME buffers which have the kernel
* virtual address set by tinydrm_gem_cma_prime_import_sg_table(). Drivers
* can use this as their &drm_driver->gem_free_object_unlocked callback.
*/
void tinydrm_gem_cma_free_object(struct drm_gem_object *gem_obj)
{
if (gem_obj->import_attach) {
struct drm_gem_cma_object *cma_obj;
cma_obj = to_drm_gem_cma_obj(gem_obj);
dma_buf_vunmap(gem_obj->import_attach->dmabuf, cma_obj->vaddr);
cma_obj->vaddr = NULL;
}
drm_gem_cma_free_object(gem_obj);
}
EXPORT_SYMBOL_GPL(tinydrm_gem_cma_free_object);
static struct drm_framebuffer *
tinydrm_fb_create(struct drm_device *drm, struct drm_file *file_priv,
const struct drm_mode_fb_cmd2 *mode_cmd)


@ -9,12 +9,18 @@
#include <linux/backlight.h>
#include <linux/dma-buf.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/spi/spi.h>
#include <linux/swab.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_print.h>
#include <drm/tinydrm/tinydrm.h>
#include <drm/tinydrm/tinydrm-helpers.h>
#include <uapi/drm/drm.h>
static unsigned int spi_max;
module_param(spi_max, uint, 0400);

View File

@ -16,7 +16,7 @@
#include <linux/property.h>
#include <linux/spi/spi.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_modeset_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
@ -188,7 +188,7 @@ DEFINE_DRM_GEM_CMA_FOPS(hx8357d_fops);
static struct drm_driver hx8357d_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_ATOMIC,
.fops = &hx8357d_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.debugfs_init = mipi_dbi_debugfs_init,
.name = "hx8357d",
.desc = "HX8357D",

View File

@ -20,7 +20,8 @@
#include <linux/spi/spi.h>
#include <video/mipi_display.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
@ -367,7 +368,7 @@ static struct drm_driver ili9225_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,
.fops = &ili9225_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.name = "ili9225",
.desc = "Ilitek ILI9225",
.date = "20171106",

View File

@ -15,7 +15,7 @@
#include <linux/property.h>
#include <linux/spi/spi.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_modeset_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
@ -144,7 +144,7 @@ DEFINE_DRM_GEM_CMA_FOPS(ili9341_fops);
static struct drm_driver ili9341_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_ATOMIC,
.fops = &ili9341_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.debugfs_init = mipi_dbi_debugfs_init,
.name = "ili9341",
.desc = "Ilitek ILI9341",

View File

@ -17,9 +17,9 @@
#include <linux/regulator/consumer.h>
#include <linux/spi/spi.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_modeset_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_modeset_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
#include <video/mipi_display.h>
@ -153,7 +153,7 @@ static struct drm_driver mi0283qt_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,
.fops = &mi0283qt_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.debugfs_init = mipi_dbi_debugfs_init,
.name = "mi0283qt",
.desc = "Multi-Inno MI0283QT",

View File

@ -9,15 +9,19 @@
* (at your option) any later version.
*/
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
#include <linux/debugfs.h>
#include <linux/dma-buf.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/regulator/consumer.h>
#include <linux/spi/spi.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
#include <uapi/drm/drm.h>
#include <video/mipi_display.h>
#define MIPI_DBI_MAX_SPI_READ_SPEED 2000000 /* 2MHz */

View File

@ -26,6 +26,8 @@
#include <linux/spi/spi.h>
#include <linux/thermal.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/tinydrm.h>
#include <drm/tinydrm/tinydrm-helpers.h>
@ -882,7 +884,7 @@ static struct drm_driver repaper_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,
.fops = &repaper_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.name = "repaper",
.desc = "Pervasive Displays RePaper e-ink panels",
.date = "20170405",

View File

@ -17,7 +17,8 @@
#include <linux/spi/spi.h>
#include <video/mipi_display.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
@ -303,7 +304,7 @@ static struct drm_driver st7586_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,
.fops = &st7586_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.debugfs_init = mipi_dbi_debugfs_init,
.name = "st7586",
.desc = "Sitronix ST7586",

View File

@ -14,7 +14,7 @@
#include <linux/spi/spi.h>
#include <video/mipi_display.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/tinydrm/mipi-dbi.h>
#include <drm/tinydrm/tinydrm-helpers.h>
@ -119,7 +119,7 @@ static struct drm_driver st7735r_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,
.fops = &st7735r_fops,
TINYDRM_GEM_DRIVER_OPS,
DRM_GEM_CMA_VMAP_DRIVER_OPS,
.debugfs_init = mipi_dbi_debugfs_init,
.name = "st7735r",
.desc = "Sitronix ST7735R",

View File

@ -129,12 +129,12 @@ static const struct hvs_format *vc4_get_hvs_format(u32 drm_format)
static enum vc4_scaling_mode vc4_get_scaling_mode(u32 src, u32 dst)
{
if (dst > src)
return VC4_SCALING_PPF;
else if (dst < src)
return VC4_SCALING_TPZ;
else
if (dst == src)
return VC4_SCALING_NONE;
if (3 * dst >= 2 * src)
return VC4_SCALING_PPF;
else
return VC4_SCALING_TPZ;
}
static bool plane_enabled(struct drm_plane_state *state)
@ -341,12 +341,14 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
vc4_get_scaling_mode(vc4_state->src_h[1],
vc4_state->crtc_h);
/* YUV conversion requires that horizontal scaling be enabled,
* even on a plane that's otherwise 1:1. Looks like only PPF
* works in that case, so let's pick that one.
/* YUV conversion requires that horizontal scaling be enabled
* on the UV plane even if vc4_get_scaling_mode() returned
* VC4_SCALING_NONE (which can happen when the down-scaling
* ratio is 0.5). Let's force it to VC4_SCALING_PPF in this
* case.
*/
if (vc4_state->is_unity)
vc4_state->x_scaling[0] = VC4_SCALING_PPF;
if (vc4_state->x_scaling[1] == VC4_SCALING_NONE)
vc4_state->x_scaling[1] = VC4_SCALING_PPF;
} else {
vc4_state->is_yuv = false;
vc4_state->x_scaling[1] = VC4_SCALING_NONE;
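
A few worked ratios for the new vc4_get_scaling_mode() logic (illustrative numbers, not from the patch):

	/* dst == src          -> VC4_SCALING_NONE (unity, scaler bypassed)
	 * src=900,  dst=600   -> 3*600 == 2*900  -> VC4_SCALING_PPF
	 * src=1000, dst=500   -> 3*500 <  2*1000 -> VC4_SCALING_TPZ
	 * src=640,  dst=1280  -> upscaling       -> VC4_SCALING_PPF
	 *
	 * PPF now covers all upscaling plus downscaling to 2/3 of the
	 * source; the trapezoid (TPZ) scaler takes over below that ratio.
	 */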

View File

@ -47,8 +47,8 @@
#define DRIVER_DATE "0"
#define DRIVER_MAJOR 0
#define DRIVER_MINOR 0
#define DRIVER_PATCHLEVEL 1
#define DRIVER_MINOR 1
#define DRIVER_PATCHLEVEL 0
/* virtgpu_drm_bus.c */
int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev);
@ -131,6 +131,7 @@ struct virtio_gpu_framebuffer {
int x1, y1, x2, y2; /* dirty rect */
spinlock_t dirty_lock;
uint32_t hw_res_handle;
struct virtio_gpu_fence *fence;
};
#define to_virtio_gpu_framebuffer(x) \
container_of(x, struct virtio_gpu_framebuffer, base)
@ -346,6 +347,9 @@ void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);
/* virtio_gpu_fence.c */
struct virtio_gpu_fence *virtio_gpu_fence_alloc(
struct virtio_gpu_device *vgdev);
void virtio_gpu_fence_cleanup(struct virtio_gpu_fence *fence);
int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_ctrl_hdr *cmd_hdr,
struct virtio_gpu_fence **fence);

View File

@ -67,6 +67,28 @@ static const struct dma_fence_ops virtio_fence_ops = {
.timeline_value_str = virtio_timeline_value_str,
};
struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev)
{
struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
GFP_ATOMIC);
if (!fence)
return fence;
fence->drv = drv;
dma_fence_init(&fence->f, &virtio_fence_ops, &drv->lock, drv->context, 0);
return fence;
}
void virtio_gpu_fence_cleanup(struct virtio_gpu_fence *fence)
{
if (!fence)
return;
dma_fence_put(&fence->f);
}
int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_ctrl_hdr *cmd_hdr,
struct virtio_gpu_fence **fence)
@ -74,15 +96,8 @@ int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
unsigned long irq_flags;
*fence = kmalloc(sizeof(struct virtio_gpu_fence), GFP_ATOMIC);
if ((*fence) == NULL)
return -ENOMEM;
spin_lock_irqsave(&drv->lock, irq_flags);
(*fence)->drv = drv;
(*fence)->seq = ++drv->sync_seq;
dma_fence_init(&(*fence)->f, &virtio_fence_ops, &drv->lock,
drv->context, (*fence)->seq);
dma_fence_get(&(*fence)->f);
list_add_tail(&(*fence)->node, &drv->fences);
spin_unlock_irqrestore(&drv->lock, irq_flags);
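
Callers of the split-out API follow an alloc/emit/cleanup pattern. A minimal sketch, with the failure condition as a placeholder:

	struct virtio_gpu_fence *fence;

	fence = virtio_gpu_fence_alloc(vgdev);	/* kzalloc + dma_fence_init */
	if (!fence)
		return -ENOMEM;

	if (setup_failed) {			/* placeholder condition */
		virtio_gpu_fence_cleanup(fence);	/* drops the dma_fence ref */
		return -EINVAL;
	}

	/* happy path: the submit helper ends up in virtio_gpu_fence_emit(),
	 * which assigns the seqno under drv->lock and queues the fence */
	virtio_gpu_cmd_submit(vgdev, buf, size, ctx_id, &fence);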

View File

@ -28,6 +28,7 @@
#include <drm/drmP.h>
#include <drm/virtgpu_drm.h>
#include <drm/ttm/ttm_execbuf_util.h>
#include <linux/sync_file.h>
#include "virtgpu_drv.h"
@ -105,7 +106,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
struct drm_gem_object *gobj;
struct virtio_gpu_fence *fence;
struct virtio_gpu_fence *out_fence;
struct virtio_gpu_object *qobj;
int ret;
uint32_t *bo_handles = NULL;
@ -114,11 +115,46 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct ttm_validate_buffer *buflist = NULL;
int i;
struct ww_acquire_ctx ticket;
struct sync_file *sync_file;
int in_fence_fd = exbuf->fence_fd;
int out_fence_fd = -1;
void *buf;
if (vgdev->has_virgl_3d == false)
return -ENOSYS;
if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
return -EINVAL;
exbuf->fence_fd = -1;
if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
struct dma_fence *in_fence;
in_fence = sync_file_get_fence(in_fence_fd);
if (!in_fence)
return -EINVAL;
/*
* Wait if the fence is from a foreign context, or if the fence
* array contains any fence from a foreign context.
*/
ret = 0;
if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
ret = dma_fence_wait(in_fence, true);
dma_fence_put(in_fence);
if (ret)
return ret;
}
if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
if (out_fence_fd < 0)
return out_fence_fd;
}
INIT_LIST_HEAD(&validate_list);
if (exbuf->num_bo_handles) {
@ -128,26 +164,22 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
sizeof(struct ttm_validate_buffer),
GFP_KERNEL | __GFP_ZERO);
if (!bo_handles || !buflist) {
kvfree(bo_handles);
kvfree(buflist);
return -ENOMEM;
ret = -ENOMEM;
goto out_unused_fd;
}
user_bo_handles = (void __user *)(uintptr_t)exbuf->bo_handles;
if (copy_from_user(bo_handles, user_bo_handles,
exbuf->num_bo_handles * sizeof(uint32_t))) {
ret = -EFAULT;
kvfree(bo_handles);
kvfree(buflist);
return ret;
goto out_unused_fd;
}
for (i = 0; i < exbuf->num_bo_handles; i++) {
gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
if (!gobj) {
kvfree(bo_handles);
kvfree(buflist);
return -ENOENT;
ret = -ENOENT;
goto out_unused_fd;
}
qobj = gem_to_virtio_gpu_obj(gobj);
@ -156,6 +188,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
list_add(&buflist[i].head, &validate_list);
}
kvfree(bo_handles);
bo_handles = NULL;
}
ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
@ -168,22 +201,48 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
ret = PTR_ERR(buf);
goto out_unresv;
}
virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
vfpriv->ctx_id, &fence);
ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
out_fence = virtio_gpu_fence_alloc(vgdev);
if (!out_fence) {
ret = -ENOMEM;
goto out_memdup;
}
if (out_fence_fd >= 0) {
sync_file = sync_file_create(&out_fence->f);
if (!sync_file) {
dma_fence_put(&out_fence->f);
ret = -ENOMEM;
goto out_memdup;
}
exbuf->fence_fd = out_fence_fd;
fd_install(out_fence_fd, sync_file->file);
}
virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
vfpriv->ctx_id, &out_fence);
ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
/* fence the command bo */
virtio_gpu_unref_list(&validate_list);
kvfree(buflist);
dma_fence_put(&fence->f);
return 0;
out_memdup:
kfree(buf);
out_unresv:
ttm_eu_backoff_reservation(&ticket, &validate_list);
out_free:
virtio_gpu_unref_list(&validate_list);
out_unused_fd:
kvfree(bo_handles);
kvfree(buflist);
if (out_fence_fd >= 0)
put_unused_fd(out_fence_fd);
return ret;
}
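
The out-fence fd handling above follows the usual reserve-early, install-late idiom, so userspace never sees a half-initialized fd; schematically:

	int fd = get_unused_fd_flags(O_CLOEXEC);	/* reserve a number only */
	if (fd < 0)
		return fd;
	/* anything here may still fail: put_unused_fd(fd) on unwind */
	sync_file = sync_file_create(&out_fence->f);
	/* ... */
	fd_install(fd, sync_file->file);	/* publish once nothing can fail */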
@ -283,11 +342,17 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
rc_3d.nr_samples = cpu_to_le32(rc->nr_samples);
rc_3d.flags = cpu_to_le32(rc->flags);
fence = virtio_gpu_fence_alloc(vgdev);
if (!fence) {
ret = -ENOMEM;
goto fail_backoff;
}
virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &rc_3d, NULL);
ret = virtio_gpu_object_attach(vgdev, qobj, &fence);
if (ret) {
ttm_eu_backoff_reservation(&ticket, &validate_list);
goto fail_unref;
virtio_gpu_fence_cleanup(fence);
goto fail_backoff;
}
ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
}
@ -312,6 +377,8 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
dma_fence_put(&fence->f);
}
return 0;
fail_backoff:
ttm_eu_backoff_reservation(&ticket, &validate_list);
fail_unref:
if (vgdev->has_virgl_3d) {
virtio_gpu_unref_list(&validate_list);
@ -374,6 +441,12 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
goto out_unres;
convert_to_hw_box(&box, &args->box);
fence = virtio_gpu_fence_alloc(vgdev);
if (!fence) {
ret = -ENOMEM;
goto out_unres;
}
virtio_gpu_cmd_transfer_from_host_3d
(vgdev, qobj->hw_res_handle,
vfpriv->ctx_id, offset, args->level,
@ -423,6 +496,11 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
(vgdev, qobj, offset,
box.w, box.h, box.x, box.y, NULL);
} else {
fence = virtio_gpu_fence_alloc(vgdev);
if (!fence) {
ret = -ENOMEM;
goto out_unres;
}
virtio_gpu_cmd_transfer_to_host_3d
(vgdev, qobj,
vfpriv ? vfpriv->ctx_id : 0, offset,

View File

@ -55,10 +55,11 @@ static void virtio_gpu_config_changed_work_func(struct work_struct *work)
static int virtio_gpu_context_create(struct virtio_gpu_device *vgdev,
uint32_t nlen, const char *name)
{
int handle = ida_alloc_min(&vgdev->ctx_id_ida, 1, GFP_KERNEL);
int handle = ida_alloc(&vgdev->ctx_id_ida, GFP_KERNEL);
if (handle < 0)
return handle;
handle += 1;
virtio_gpu_cmd_context_create(vgdev, handle, nlen, name);
return handle;
}
@ -67,7 +68,7 @@ static void virtio_gpu_context_destroy(struct virtio_gpu_device *vgdev,
uint32_t ctx_id)
{
virtio_gpu_cmd_context_destroy(vgdev, ctx_id);
ida_free(&vgdev->ctx_id_ida, ctx_id);
ida_free(&vgdev->ctx_id_ida, ctx_id - 1);
}
static void virtio_gpu_init_vq(struct virtio_gpu_queue *vgvq,
@ -266,8 +267,10 @@ int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file)
get_task_comm(dbgname, current);
id = virtio_gpu_context_create(vgdev, strlen(dbgname), dbgname);
if (id < 0)
if (id < 0) {
kfree(vfpriv);
return id;
}
vfpriv->ctx_id = id;
file->driver_priv = vfpriv;
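
The ida_alloc() conversion keeps the IDA zero-based internally while the wire protocol stays 1-based (id 0 presumably remains reserved on the host side). The mapping in both directions:

	/* allocate: IDA returns 0, 1, 2, ... -> context ids 1, 2, 3, ... */
	handle = ida_alloc(&vgdev->ctx_id_ida, GFP_KERNEL);
	if (handle < 0)
		return handle;
	handle += 1;

	/* release: shift back before returning the slot to the IDA */
	ida_free(&vgdev->ctx_id_ida, ctx_id - 1);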

View File

@ -25,16 +25,21 @@
#include "virtgpu_drv.h"
static void virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid)
{
int handle = ida_alloc_min(&vgdev->resource_ida, 1, GFP_KERNEL);
*resid = handle;
int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
if (handle < 0)
return handle;
*resid = handle + 1;
return 0;
}
static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
{
ida_free(&vgdev->resource_ida, id);
ida_free(&vgdev->resource_ida, id - 1);
}
static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
@ -94,7 +99,11 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
bo = kzalloc(sizeof(struct virtio_gpu_object), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
if (ret < 0) {
kfree(bo);
return ret;
}
size = roundup(size, PAGE_SIZE);
ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, size);
if (ret != 0) {

View File

@ -137,6 +137,41 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
plane->state->src_h >> 16);
}
static int virtio_gpu_cursor_prepare_fb(struct drm_plane *plane,
struct drm_plane_state *new_state)
{
struct drm_device *dev = plane->dev;
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_object *bo;
if (!new_state->fb)
return 0;
vgfb = to_virtio_gpu_framebuffer(new_state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
if (bo && bo->dumb && (plane->state->fb != new_state->fb)) {
vgfb->fence = virtio_gpu_fence_alloc(vgdev);
if (!vgfb->fence)
return -ENOMEM;
}
return 0;
}
static void virtio_gpu_cursor_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct virtio_gpu_framebuffer *vgfb;
if (!plane->state->fb)
return;
vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
if (vgfb->fence)
virtio_gpu_fence_cleanup(vgfb->fence);
}
static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
@ -144,7 +179,6 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_output *output = NULL;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_fence *fence = NULL;
struct virtio_gpu_object *bo = NULL;
uint32_t handle;
int ret = 0;
@ -170,13 +204,13 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
(vgdev, bo, 0,
cpu_to_le32(plane->state->crtc_w),
cpu_to_le32(plane->state->crtc_h),
0, 0, &fence);
0, 0, &vgfb->fence);
ret = virtio_gpu_object_reserve(bo, false);
if (!ret) {
reservation_object_add_excl_fence(bo->tbo.resv,
&fence->f);
dma_fence_put(&fence->f);
fence = NULL;
&vgfb->fence->f);
dma_fence_put(&vgfb->fence->f);
vgfb->fence = NULL;
virtio_gpu_object_unreserve(bo);
virtio_gpu_object_wait(bo, false);
}
@ -218,6 +252,8 @@ static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
};
static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
.prepare_fb = virtio_gpu_cursor_prepare_fb,
.cleanup_fb = virtio_gpu_cursor_cleanup_fb,
.atomic_check = virtio_gpu_plane_atomic_check,
.atomic_update = virtio_gpu_cursor_plane_update,
};
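
Taken together, the cursor fence now rides in the framebuffer wrapper across the atomic helper lifecycle; in outline:

	/* .prepare_fb:    vgfb->fence = virtio_gpu_fence_alloc(vgdev)
	 *                 (only for dumb BOs when the fb actually changes)
	 * .atomic_update: emit the transfer, attach vgfb->fence to the BO's
	 *                 reservation object, then dma_fence_put() and clear it
	 * .cleanup_fb:    virtio_gpu_fence_cleanup(vgfb->fence) catches paths
	 *                 where the update never consumed the fence
	 */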

View File

@ -896,9 +896,9 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj)
{
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
struct virtio_gpu_fence *fence;
if (use_dma_api && obj->mapped) {
struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev);
/* detach backing and wait for the host process it ... */
virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, &fence);
dma_fence_wait(&fence->f, true);

View File

@ -471,6 +471,8 @@ struct drm_driver {
* @gem_prime_export:
*
* export GEM -> dmabuf
*
* This defaults to drm_gem_prime_export() if not set.
*/
struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
struct drm_gem_object *obj, int flags);
@ -478,6 +480,8 @@ struct drm_driver {
* @gem_prime_import:
*
* import dmabuf -> GEM
*
* This defaults to drm_gem_prime_import() if not set.
*/
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
struct dma_buf *dma_buf);
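
With these defaults, a driver that wants stock PRIME behaviour can leave both hooks unset; a minimal sketch (driver name hypothetical):

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
	/* .gem_prime_import / .gem_prime_export left NULL: the core now
	 * falls back to drm_gem_prime_import() and drm_gem_prime_export() */
	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
};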

View File

@ -38,6 +38,121 @@
#include <drm/drm_vma_manager.h>
struct drm_gem_object;
/**
* struct drm_gem_object_funcs - GEM object functions
*/
struct drm_gem_object_funcs {
/**
* @free:
*
* Destructor for drm_gem_objects.
*
* This callback is mandatory.
*/
void (*free)(struct drm_gem_object *obj);
/**
* @open:
*
* Called upon GEM handle creation.
*
* This callback is optional.
*/
int (*open)(struct drm_gem_object *obj, struct drm_file *file);
/**
* @close:
*
* Called upon GEM handle release.
*
* This callback is optional.
*/
void (*close)(struct drm_gem_object *obj, struct drm_file *file);
/**
* @print_info:
*
* If a driver subclasses struct &drm_gem_object, it can implement this
* optional hook for printing additional driver specific info.
*
* drm_printf_indent() should be used in the callback passing it the
* indent argument.
*
* This callback is called from drm_gem_print_info().
*
* This callback is optional.
*/
void (*print_info)(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *obj);
/**
* @export:
*
* Export backing buffer as a &dma_buf.
* If this is not set drm_gem_prime_export() is used.
*
* This callback is optional.
*/
struct dma_buf *(*export)(struct drm_gem_object *obj, int flags);
/**
* @pin:
*
* Pin backing buffer in memory.
*
* This callback is optional.
*/
int (*pin)(struct drm_gem_object *obj);
/**
* @unpin:
*
* Unpin backing buffer.
*
* This callback is optional.
*/
void (*unpin)(struct drm_gem_object *obj);
/**
* @get_sg_table:
*
* Returns a Scatter-Gather table representation of the buffer.
* Used when exporting a buffer.
*
* This callback is mandatory if buffer export is supported.
*/
struct sg_table *(*get_sg_table)(struct drm_gem_object *obj);
/**
* @vmap:
*
* Returns a virtual address for the buffer.
*
* This callback is optional.
*/
void *(*vmap)(struct drm_gem_object *obj);
/**
* @vunmap:
*
* Releases the address previously returned by @vmap.
*
* This callback is optional.
*/
void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
/**
* @vm_ops:
*
* Virtual memory operations used with mmap.
*
* This is optional but necessary for mmap support.
*/
const struct vm_operations_struct *vm_ops;
};
/**
* struct drm_gem_object - GEM buffer object
*
@ -146,6 +261,17 @@ struct drm_gem_object {
* simply leave it as NULL.
*/
struct dma_buf_attachment *import_attach;
/**
* @funcs:
*
* Optional GEM object functions. If this is set, it will be used instead of the
* corresponding &drm_driver GEM callbacks.
*
* New drivers should use this.
*
*/
const struct drm_gem_object_funcs *funcs;
};
/**
@ -293,4 +419,9 @@ int drm_gem_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle);
int drm_gem_pin(struct drm_gem_object *obj);
void drm_gem_unpin(struct drm_gem_object *obj);
void *drm_gem_vmap(struct drm_gem_object *obj);
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
#endif /* __DRM_GEM_H__ */
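
A sketch of a driver adopting the per-object table instead of the &drm_driver callbacks, wired to the existing CMA helpers (driver names hypothetical):

static const struct drm_gem_object_funcs foo_gem_funcs = {
	.free = drm_gem_cma_free_object,		/* mandatory */
	.print_info = drm_gem_cma_print_info,
	.get_sg_table = drm_gem_cma_prime_get_sg_table,
	.vmap = drm_gem_cma_prime_vmap,
	.vunmap = drm_gem_cma_prime_vunmap,
	.vm_ops = &drm_gem_cma_vm_ops,
};

/* in the object constructor: */
	obj->funcs = &foo_gem_funcs;	/* the core prefers obj->funcs over
					 * the legacy dev->driver->gem_* hooks */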

View File

@ -103,4 +103,28 @@ int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
struct drm_gem_object *
drm_cma_gem_create_object_default_funcs(struct drm_device *dev, size_t size);
/**
* DRM_GEM_CMA_VMAP_DRIVER_OPS - CMA GEM driver operations ensuring a virtual
* address on the buffer
*
* This macro provides a shortcut for setting the default GEM operations in the
* &drm_driver structure for drivers that need the virtual address also on
* imported buffers.
*/
#define DRM_GEM_CMA_VMAP_DRIVER_OPS \
.gem_create_object = drm_cma_gem_create_object_default_funcs, \
.dumb_create = drm_gem_cma_dumb_create, \
.prime_handle_to_fd = drm_gem_prime_handle_to_fd, \
.prime_fd_to_handle = drm_gem_prime_fd_to_handle, \
.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table_vmap, \
.gem_prime_mmap = drm_gem_prime_mmap
struct drm_gem_object *
drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *drm,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
#endif /* __DRM_GEM_CMA_HELPER_H__ */
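
Usage matches the tinydrm conversions earlier in this series: the driver swaps TINYDRM_GEM_DRIVER_OPS for the new macro (a sketch with hypothetical names):

static struct drm_driver foo_tiny_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
			   DRIVER_ATOMIC,
	.fops = &foo_fops,
	DRM_GEM_CMA_VMAP_DRIVER_OPS,	/* create_object, dumb_create,
					 * vmap-on-import, prime mmap */
	.name = "foo",
};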

View File

@ -70,6 +70,7 @@ struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle, uint32_t flags,
int *prime_fd);
int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);

View File

@ -30,15 +30,10 @@
struct drm_syncobj_cb;
enum drm_syncobj_type {
DRM_SYNCOBJ_TYPE_BINARY,
DRM_SYNCOBJ_TYPE_TIMELINE
};
/**
* struct drm_syncobj - sync object.
*
* This structure defines a generic sync object which is timeline based.
* This structure defines a generic sync object which wraps a &dma_fence.
*/
struct drm_syncobj {
/**
@ -46,42 +41,21 @@ struct drm_syncobj {
*/
struct kref refcount;
/**
* @type: indicate syncobj type
* @fence:
* NULL or a pointer to the fence bound to this object.
*
* This field should not be used directly. Use drm_syncobj_fence_get()
* and drm_syncobj_replace_fence() instead.
*/
enum drm_syncobj_type type;
struct dma_fence __rcu *fence;
/**
* @wq: wait signal operation work queue
* @cb_list: List of callbacks to call when the &fence gets replaced.
*/
wait_queue_head_t wq;
/**
* @timeline_context: fence context used by timeline
*/
u64 timeline_context;
/**
* @timeline: syncobj timeline value, which indicates point is signaled.
*/
u64 timeline;
/**
* @signal_point: which indicates the latest signaler point.
*/
u64 signal_point;
/**
* @signal_pt_list: signaler point list.
*/
struct list_head signal_pt_list;
/**
* @cb_list: List of callbacks to call when the &fence gets replaced.
*/
struct list_head cb_list;
/**
* @pt_lock: Protects pt list.
* @lock: Protects &cb_list and write-locks &fence.
*/
spinlock_t pt_lock;
/**
* @cb_mutex: Protects syncobj cb list.
*/
struct mutex cb_mutex;
spinlock_t lock;
/**
* @file: A file backing for this syncobj.
*/
@ -94,7 +68,7 @@ typedef void (*drm_syncobj_func_t)(struct drm_syncobj *syncobj,
/**
* struct drm_syncobj_cb - callback for drm_syncobj_add_callback
* @node: used by drm_syncobj_add_callback to append this struct to
* &drm_syncobj.cb_list
* @func: drm_syncobj_func_t to call
*
* This struct will be initialized by drm_syncobj_add_callback, additional
@ -132,6 +106,29 @@ drm_syncobj_put(struct drm_syncobj *obj)
kref_put(&obj->refcount, drm_syncobj_free);
}
/**
* drm_syncobj_fence_get - get a reference to a fence in a sync object
* @syncobj: sync object.
*
* This acquires an additional reference to &drm_syncobj.fence contained in
* @syncobj, if not NULL. It is illegal to call this without already holding
* a reference. No locks required.
*
* Returns:
* Either the fence of @syncobj or NULL if there's none.
*/
static inline struct dma_fence *
drm_syncobj_fence_get(struct drm_syncobj *syncobj)
{
struct dma_fence *fence;
rcu_read_lock();
fence = dma_fence_get_rcu_safe(&syncobj->fence);
rcu_read_unlock();
return fence;
}
struct drm_syncobj *drm_syncobj_find(struct drm_file *file_private,
u32 handle);
void drm_syncobj_replace_fence(struct drm_syncobj *syncobj, u64 point,
@ -145,7 +142,5 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
int drm_syncobj_get_handle(struct drm_file *file_private,
struct drm_syncobj *syncobj, u32 *handle);
int drm_syncobj_get_fd(struct drm_syncobj *syncobj, int *p_fd);
int drm_syncobj_search_fence(struct drm_syncobj *syncobj, u64 point, u64 flags,
struct dma_fence **fence);
#endif
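
A typical consumer of the inline helper above, as a sketch (error handling abbreviated):

	struct dma_fence *fence;

	fence = drm_syncobj_fence_get(syncobj);	/* RCU-safe extra reference */
	if (!fence)
		return -EINVAL;		/* no fence currently bound */

	ret = dma_fence_wait(fence, true);	/* interruptible wait */
	dma_fence_put(fence);			/* drop our reference */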

View File

@ -10,10 +10,15 @@
#ifndef __LINUX_TINYDRM_H
#define __LINUX_TINYDRM_H
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <linux/mutex.h>
#include <drm/drm_simple_kms_helper.h>
struct drm_clip_rect;
struct drm_driver;
struct drm_file;
struct drm_framebuffer;
struct drm_framebuffer_funcs;
/**
* struct tinydrm_device - tinydrm device
*/
@ -53,27 +58,6 @@ pipe_to_tinydrm(struct drm_simple_display_pipe *pipe)
return container_of(pipe, struct tinydrm_device, pipe);
}
/**
* TINYDRM_GEM_DRIVER_OPS - default tinydrm gem operations
*
* This macro provides a shortcut for setting the tinydrm GEM operations in
* the &drm_driver structure.
*/
#define TINYDRM_GEM_DRIVER_OPS \
.gem_free_object_unlocked = tinydrm_gem_cma_free_object, \
.gem_print_info = drm_gem_cma_print_info, \
.gem_vm_ops = &drm_gem_cma_vm_ops, \
.prime_handle_to_fd = drm_gem_prime_handle_to_fd, \
.prime_fd_to_handle = drm_gem_prime_fd_to_handle, \
.gem_prime_import = drm_gem_prime_import, \
.gem_prime_export = drm_gem_prime_export, \
.gem_prime_get_sg_table = drm_gem_cma_prime_get_sg_table, \
.gem_prime_import_sg_table = tinydrm_gem_cma_prime_import_sg_table, \
.gem_prime_vmap = drm_gem_cma_prime_vmap, \
.gem_prime_vunmap = drm_gem_cma_prime_vunmap, \
.gem_prime_mmap = drm_gem_cma_prime_mmap, \
.dumb_create = drm_gem_cma_dumb_create
/**
* TINYDRM_MODE - tinydrm display mode
* @hd: Horizontal resolution, width
@ -97,11 +81,6 @@ pipe_to_tinydrm(struct drm_simple_display_pipe *pipe)
.type = DRM_MODE_TYPE_DRIVER, \
.clock = 1 /* pass validation */
void tinydrm_gem_cma_free_object(struct drm_gem_object *gem_obj);
struct drm_gem_object *
tinydrm_gem_cma_prime_import_sg_table(struct drm_device *drm,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
int devm_tinydrm_init(struct device *parent, struct tinydrm_device *tdev,
const struct drm_framebuffer_funcs *fb_funcs,
struct drm_driver *driver);

View File

@ -717,7 +717,6 @@ struct drm_prime_handle {
struct drm_syncobj_create {
__u32 handle;
#define DRM_SYNCOBJ_CREATE_SIGNALED (1 << 0)
#define DRM_SYNCOBJ_CREATE_TYPE_TIMELINE (1 << 1)
__u32 flags;
};

View File

@ -151,6 +151,7 @@ extern "C" {
#define DRM_FORMAT_VYUY fourcc_code('V', 'Y', 'U', 'Y') /* [31:0] Y1:Cb0:Y0:Cr0 8:8:8:8 little endian */
#define DRM_FORMAT_AYUV fourcc_code('A', 'Y', 'U', 'V') /* [31:0] A:Y:Cb:Cr 8:8:8:8 little endian */
#define DRM_FORMAT_XYUV8888 fourcc_code('X', 'Y', 'U', 'V') /* [31:0] X:Y:Cb:Cr 8:8:8:8 little endian */
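
To make the packing concrete (an illustrative value, not from the header):

	/* example: X=0x00 Y=0x52 Cb=0x5a Cr=0xf0
	 *   packed 32-bit value:             0x00525af0
	 *   bytes in memory (little endian): f0 5a 52 00  (Cr, Cb, Y, X)
	 */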
/*
* packed YCbCr420 2x2 tiled formats

View File

@ -47,6 +47,13 @@ extern "C" {
#define DRM_VIRTGPU_WAIT 0x08
#define DRM_VIRTGPU_GET_CAPS 0x09
#define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01
#define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02
#define VIRTGPU_EXECBUF_FLAGS (\
VIRTGPU_EXECBUF_FENCE_FD_IN |\
VIRTGPU_EXECBUF_FENCE_FD_OUT |\
0)
struct drm_virtgpu_map {
__u64 offset; /* use for mmap system call */
__u32 handle;
@ -54,12 +61,12 @@ struct drm_virtgpu_map {
};
struct drm_virtgpu_execbuffer {
__u32 flags; /* for future use */
__u32 flags;
__u32 size;
__u64 command; /* void* */
__u64 bo_handles;
__u32 num_bo_handles;
__u32 pad;
__s32 fence_fd; /* in/out fence fd (see VIRTGPU_EXECBUF_FENCE_FD_IN/OUT) */
};
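
From userspace the repurposed fence_fd field works in both directions; a hedged sketch (libdrm ioctl wrapper, handles and command buffer assumed set up elsewhere):

	struct drm_virtgpu_execbuffer exbuf = {
		.flags = VIRTGPU_EXECBUF_FENCE_FD_IN |
			 VIRTGPU_EXECBUF_FENCE_FD_OUT,
		.size = cmd_size,
		.command = (uintptr_t)cmd_buf,
		.bo_handles = (uintptr_t)handles,
		.num_bo_handles = num_handles,
		.fence_fd = in_fence_fd,	/* waited on before execution */
	};

	if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf))
		return -errno;

	/* with FENCE_FD_OUT set, the kernel replaced fence_fd with a
	 * sync_file fd that signals once the commands complete on the host */
	wait_on_fd(exbuf.fence_fd);	/* placeholder: poll/epoll/sync wait */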
#define VIRTGPU_PARAM_3D_FEATURES 1 /* do we have 3D features in the hw */
@ -137,7 +144,7 @@ struct drm_virtgpu_get_caps {
DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)
#define DRM_IOCTL_VIRTGPU_EXECBUFFER \
DRM_IOW(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
struct drm_virtgpu_execbuffer)
#define DRM_IOCTL_VIRTGPU_GETPARAM \