Currently the OF DMA code uses a spin lock to protect the of_dma_list from
concurrent access and a per-controller reference count to protect the controller
from being freed while a request operation is in progress. If
of_dma_controller_free() is called for a controller whose reference count is not
zero, it will return -EBUSY and not remove the controller. This is fine as far
as it goes, but it leaves open the question of what the caller of
of_dma_controller_free() is supposed to do if the controller couldn't be freed.
The only viable solution for the caller is to spin on of_dma_controller_free()
until it returns success.
E.g.:

    do {
        ret = of_dma_controller_free(dev->of_node);
    } while (ret == -EBUSY);
This is rather ugly and unnecessary, and none of the current users of
of_dma_controller_free() check its return value anyway. Instead, protect the
list with a mutex. The mutex is held for as long as a request operation is in
progress, so if of_dma_controller_free() is called while a request operation is
in progress it will be put to sleep and only wake up once the request operation
has finished.
This means that it is no longer possible to register or unregister OF DMA
controllers from a context where it's not possible to sleep. But I doubt that
we'll ever need this.
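As a rough illustration of the new scheme (a sketch only, not the exact patch;
the lock name and the of_dma structure fields are assumed here for
illustration), unregistering now simply blocks on the mutex until any in-flight
request operation has released it:

    static LIST_HEAD(of_dma_list);
    static DEFINE_MUTEX(of_dma_lock);

    void of_dma_controller_free(struct device_node *np)
    {
        struct of_dma *ofdma;

        /* Sleeps here if a request operation currently holds the mutex. */
        mutex_lock(&of_dma_lock);

        list_for_each_entry(ofdma, &of_dma_list, of_dma_controllers) {
            if (ofdma->of_node == np) {
                list_del(&ofdma->of_dma_controllers);
                kfree(ofdma);
                break;
            }
        }

        mutex_unlock(&of_dma_lock);
    }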
Also rename of_dma_get_controller back to of_dma_find_controller.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
None of the OF DMA functions modify the name, so this patch changes the name
arguments to be constant. Most drivers will probably request DMA channels using
a constant name.
Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the current implementation of the OF DMA helpers, read-copy-update (RCU)
linked lists are being used for storing and accessing the DMA controller data.
This part of the implementation is based upon V2 of the DMA helpers by Nicolas [1].
During a recent review of RCU, it became apparent that the code is missing the
required rcu_read_lock()/unlock() calls as well as synchronisation calls before
freeing any memory protected by RCU.
Having looked into adding the appropriate RCU calls to protect the DMA data it
became apparent that with the current DMA helper implementation, using RCU is
not as attractive as it may have been before. The main reasons being that ...
1. We need to protect the DMA data around calls to the xlate function.
2. The of_dma_simple_xlate() function calls the DMA engine function
dma_request_channel() which employs a mutex and so could sleep.
3. The RCU read-side critical sections must not sleep and so we cannot hold
an RCU read lock around the xlate function.
Therefore, instead of using RCU, an alternative for this use-case is to employ
a simple spinlock in conjunction with a usage count variable to keep track of
how many current users of the DMA data structure there are. With this
implementation, the DMA data cannot be freed until all current users of the
DMA data are finished.
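A minimal sketch of that scheme follows (the use_count field and helper names
are illustrative, not necessarily the exact ones in the patch). The xlate
function is called between the get and the put, outside the spinlock, so it is
free to sleep, and of_dma_controller_free() can return -EBUSY while use_count
is non-zero:

    static LIST_HEAD(of_dma_list);
    static DEFINE_SPINLOCK(of_dma_lock);

    /* Look up the controller and take a reference so it cannot be freed. */
    static struct of_dma *of_dma_get_controller(struct device_node *np)
    {
        struct of_dma *ofdma;

        spin_lock(&of_dma_lock);
        list_for_each_entry(ofdma, &of_dma_list, of_dma_controllers) {
            if (ofdma->of_node == np) {
                ofdma->use_count++;
                spin_unlock(&of_dma_lock);
                return ofdma;
            }
        }
        spin_unlock(&of_dma_lock);
        return NULL;
    }

    /* Drop the reference taken by of_dma_get_controller(). */
    static void of_dma_put_controller(struct of_dma *ofdma)
    {
        spin_lock(&of_dma_lock);
        ofdma->use_count--;
        spin_unlock(&of_dma_lock);
    }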
This patch is based upon the DMA helpers fix for potential deadlock [2].
[1] http://article.gmane.org/gmane.linux.ports.arm.omap/73622
[2] http://marc.info/?l=linux-arm-kernel&m=134859982520984&w=2
Signed-off-by: Jon Hunter <jon-hunter@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
This is based upon the work by Benoit Cousson [1] and Nicolas Ferre [2]
to add some basic helpers to retrieve a DMA controller device_node and the
DMA request/channel information.
Aim of DMA helpers
- The purpose of device-tree is to describe the capabilities of the hardware.
Thinking about DMA controllers purely from the context of the hardware to
begin with, we can describe a device in terms of a DMA controller as
follows ...
1. Number of DMA controllers
2. Number of channels (may be physical or logical)
3. Mapping of DMA requests signals to DMA controller
4. Number of DMA interrupts
5. Mapping of DMA interrupts to channels
- With the above in mind, the aim of the DT DMA helper functions is to extract
the above information from the DT and provide it to the appropriate driver.
However, due to the vast number of DMA controllers, not all of which use a
common driver (such as DMA Engine), this is not a trivial task. In previous
discussions on this topic the following concerns have been raised ...
1. How does the binding support devices with multiple DMA controllers?
2. How do we support both legacy DMA controllers not using DMA Engine and
those that do use DMA Engine?
3. When using with DMA Engine how do we support the various
implementations where the opaque filter function parameter differs
between implementations?
4. How do we handle DMA channels that are identified with a string
versus an integer?
- Hence the design of the DMA helpers has to accommodate the above or reach
agreement on what can or should be supported.
Design of DMA helpers
1. Registering DMA controllers
In the case of DMA controllers that are using DMA Engine, requesting a
channel is performed by calling the following function.
    struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                         dma_filter_fn filter_fn,
                                         void *filter_param);
The mask variable is used to match the type of the DMA controller in a list
of controllers. The filter_fn and filter_param are used to identify the
required DMA channel, and a handle to the DMA channel of type dma_chan is
returned. From the examples I have seen, the mask and filter_fn are constant
for a given DMA controller and therefore we can specify these as controller
specific data when registering the DMA controller with the device-tree DMA
helpers.
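For reference, a typical non-device-tree request with DMA Engine looks roughly
like the following; the filter function and its parameter are specific to the
controller driver and the names below are purely illustrative:

    dma_cap_mask_t mask;
    struct dma_chan *chan;

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);

    /* filter_fn and the request signal number are controller specific */
    chan = dma_request_channel(mask, my_dma_filter_fn, &my_request_signal);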
The filter_param variable is of an unknown type and is typically specific
to the DMA engine implementation for a given DMA controller. To allow some
flexibility in the type and formatting of this filter_param we employ an
xlate function to translate the device-tree binding information into the
appropriate format. The xlate function used for a DMA controller can also be
specified when registering the DMA controller with the device-tree DMA helpers.
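As an illustration, a simple xlate function along the lines of
of_dma_simple_xlate() can translate the phandle arguments into a filter
parameter and hand them to dma_request_channel(); the of_dma_data field used
below to reach the registered controller data is an assumption about the
of_dma structure:

    static struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
                                                struct of_dma *ofdma)
    {
        struct of_dma_filter_info *info = ofdma->of_dma_data;

        if (!info || !info->filter_fn)
            return NULL;

        /* single cell containing the DMA request signal for this controller */
        if (dma_spec->args_count != 1)
            return NULL;

        return dma_request_channel(info->dma_cap, info->filter_fn,
                                   &dma_spec->args[0]);
    }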
Based upon the above, a function for registering the DMA controller with the
DMA helpers now looks like the following. The data variable is used to pass a
pointer to DMA controller specific data used by the xlate function.
    int of_dma_controller_register(struct device_node *np,
                                   struct dma_chan *(*of_dma_xlate)
                                   (struct of_phandle_args *, struct of_dma *),
                                   void *data)
For example, in the case where DMA engine is used, we define the following
structure (that stores the DMA engine capability mask and filter function)
and pass this to the data variable in the above function.
    struct of_dma_filter_info {
        dma_cap_mask_t dma_cap;
        dma_filter_fn filter_fn;
    };
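Putting the two together, a DMA controller driver using DMA Engine could
register itself roughly as follows from its probe() function (the filter
function name is illustrative):

    static struct of_dma_filter_info my_dma_info = {
        .filter_fn = my_dma_filter_fn,
    };

    /* in the controller driver's probe() */
    dma_cap_zero(my_dma_info.dma_cap);
    dma_cap_set(DMA_SLAVE, my_dma_info.dma_cap);

    err = of_dma_controller_register(pdev->dev.of_node,
                                     of_dma_simple_xlate, &my_dma_info);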
2. Representing and requesting channel information
Please see the dma binding documentation included in this patch for a
description of how DMA controllers and client information should be
represented with device-tree. For more information on how this binding
came about please see [3]. In addition to this, feedback received from
the Linux kernel summit showed a consensus (among those who attended) to
use a name to identify DMA client information [4].
A DMA channel can be requested by calling the following function, where name
is a required parameter used for identifying a DMA channel. This function
has been designed to return a structure of type dma_chan to work with the
DMA engine driver. Note that if DMA engine is used then drivers should be
using the DMA engine API dma_request_slave_channel() (implemented in part 2
of this series, "dmaengine: add helper function to request a slave DMA
channel") which will in turn call the below function if device-tree is
present. The aim being to have a common DMA engine interface regardless of
whether device tree is being used.
    struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
                                                  char *name)
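A client driver (or, when DMA Engine is used, the dma_request_slave_channel()
wrapper mentioned above) then requests its channel by name, roughly as below;
this assumes the function returns NULL on failure:

    struct dma_chan *rx_chan;

    /* "rx" must match a channel name in the client's device tree node */
    rx_chan = of_dma_request_slave_channel(dev->of_node, "rx");
    if (!rx_chan)
        dev_err(dev, "failed to request RX DMA channel\n");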
3. Supporting legacy devices not using DMA Engine
These devices present a problem, as there may not be a uniform way to easily
support them with regard to device tree. Ideally, these should be migrated
to DMA engine. However, if this is not possible, then they should still be
able to use this binding; the only constraint imposed by this implementation
is that when requesting a DMA channel via of_dma_request_slave_channel(), it
will return a type of dma_chan.
This implementation has been tested on OMAP4430 using the kernel v3.6-rc5. I
have validated that MMC is working on the PANDA board with this implementation.
My development branch for testing on OMAP can be found here [5].
v6: - minor corrections in DMA binding documentation
v5: - minor update to binding documentation
- added loop to exhaustively search for a slave channel in the case where
there could be alternative channels available
v4: - revert the removal of xlate function from v3
- update the proposed binding format and APIs based upon discussions [3]
v3: - avoid passing an xlate function and instead pass DMA engine parameters
- define number of dma channels and requests in dma-controller node
v2: - remove of_dma_to_resource API
- make property #dma-cells required (no fallback anymore)
- another check in of_dma_xlate_onenumbercell() function
[1] http://article.gmane.org/gmane.linux.drivers.devicetree/12022
[2] http://article.gmane.org/gmane.linux.ports.arm.omap/73622
[3] http://marc.info/?l=linux-omap&m=133582085008539&w=2
[4] http://pad.linaro.org/arm-mini-summit-2012
[5] https://github.com/jonhunter/linux/tree/dev-dt-dma
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Benoit Cousson <b-cousson@ti.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <djbw@fb.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Jon Hunter <jon-hunter@ti.com>
Reviewed-by: Stephen Warren <swarren@wwwdotorg.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>