As files move around, links pointing to their previous locations break.
Fix those references.
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Jonathan Corbet <corbet@lwn.net>
The DMAengine core has checks for mandatory operations and for operations
implied by the advertised capabilities, but they use BUG_ON. Replace these
with error returns and log the errors gracefully.
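A minimal sketch of the intended shape (illustrative only, not the verbatim
mainline hunk; the real checks live in dma_async_device_register() and cover
many capability/operation pairs):
static int check_device_ops(struct dma_device *device)
{
	/* Illustrative: one capability/operation pair of several */
	if (dma_has_cap(DMA_MEMCPY, device->cap_mask) &&
	    !device->device_prep_dma_memcpy) {
		dev_err(device->dev,
			"Device claims capability DMA_MEMCPY, but op is not defined\n");
		return -EIO;
	}

	return 0;
}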
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are no in-kernel consumers of the DMA_SG operation. Remove the
operation, the now-dead code, and the test code in dmatest.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Cc: Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: Li Yang <leoyang.li@nxp.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This fixes the following warning when building with clang and
CONFIG_DMA_ENGINE_RAID=n:
drivers/dma/dmaengine.c:1102:11: error: array index 2 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
return &unmap_pool[2];
^ ~
drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
static struct dmaengine_unmap_pool unmap_pool[] = {
^
drivers/dma/dmaengine.c:1104:11: error: array index 3 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
return &unmap_pool[3];
^ ~
drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
static struct dmaengine_unmap_pool unmap_pool[] = {
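The usual shape of such a fix, sketched here in the context of
drivers/dma/dmaengine.c under the assumption that the higher-order pool
entries only exist when CONFIG_DMA_ENGINE_RAID is enabled (simplified, not
the verbatim hunk):
static struct dmaengine_unmap_pool *__get_unmap_pool(int nr)
{
	switch (get_count_order(nr)) {
	case 0 ... 1:
		return &unmap_pool[0];
#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
	/* These entries are only compiled into unmap_pool[] when RAID is
	 * enabled, so guard the references the same way as the array. */
	case 2 ... 4:
		return &unmap_pool[1];
	case 5 ... 7:
		return &unmap_pool[2];
	case 8:
		return &unmap_pool[3];
#endif
	default:
		BUG();
		return NULL;
	}
}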
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dmaengine currently uses an IDR to allocate DMA IDs, but it only needs
to know whether IDs are in use or not; the ID to pointer functionality
of the IDR is unused. That means it can use the more space-efficient IDA.
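A minimal sketch of what such a conversion looks like, assuming the IDA
"simple" API; the helper names are illustrative, the real hunk touches the
dmaengine register/unregister paths:
#include <linux/idr.h>
#include <linux/dmaengine.h>

static DEFINE_IDA(dma_ida);	/* replaces the former DEFINE_IDR()/idr_alloc() use */

static int example_get_dma_id(struct dma_device *device)
{
	int id = ida_simple_get(&dma_ida, 0, 0, GFP_KERNEL);

	if (id < 0)
		return id;

	device->dev_id = id;	/* only the id is needed, no id-to-pointer lookup */
	return 0;
}

static void example_release_dma_id(struct dma_device *device)
{
	ida_simple_remove(&dma_ida, device->dev_id);
}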
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IS_ENABLED() macro checks whether a Kconfig symbol has been enabled,
either built-in or as a module. Use that macro instead of open-coding the
same check.
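For illustration, a before/after sketch with placeholder names
(example_setup_dma() and the probe function are hypothetical, not from the
patch):
#include <linux/kconfig.h>
#include <linux/platform_device.h>

static int example_setup_dma(struct platform_device *pdev)
{
	return 0;	/* placeholder */
}

static int example_probe(struct platform_device *pdev)
{
	int ret = 0;

	/*
	 * Open-coded form this kind of cleanup removes:
	 *
	 * #if defined(CONFIG_DMA_ENGINE) || defined(CONFIG_DMA_ENGINE_MODULE)
	 *	ret = example_setup_dma(pdev);
	 * #endif
	 */
	if (IS_ENABLED(CONFIG_DMA_ENGINE))	/* true when built-in or module */
		ret = example_setup_dma(pdev);

	return ret;
}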
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When checking for capabilities, recognize slave support when either the
DMA_SLAVE or the DMA_CYCLIC bit is set. If we don't do that, users can't get
working DMA support from engines that have only one of those bits set.
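A rough sketch of the relaxed test (illustrative helper; the actual change
sits inside dma_get_slave_caps() and may read slightly differently):
#include <linux/dmaengine.h>

static bool example_dev_has_slave_caps(struct dma_device *device)
{
	/* A controller is slave-capable if it advertises either bit */
	return dma_has_cap(DMA_SLAVE, device->cap_mask) ||
	       dma_has_cap(DMA_CYCLIC, device->cap_mask);
}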
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit ef859312c3 ("dmaengine: core: Use dev_ functions for debug and
error prints") wasn't quite right in __dma_request_channel() by claiming
that all pr_ prints have a valid DMA channel pointer. Obviously this is not
true, as __dma_request_channel() is looking for a channel and returns NULL
if it does not find one.
Prevent this potential NULL pointer dereference by reverting back to
pr_debug().
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dma_get_slave_caps() API only checked for the slave capability, but
we use slave capabilities for cyclic DMA operations as well, so we
should add the cyclic case here too.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dmaengine kerneldoc, struct dma_chan always has a non-NULL
pointer to the DMA device, and a test in dma_async_device_register()
validates that the DMA device also points to a struct device.
All pr_ prints except one in dma_channel_table_init() have a valid DMA
channel or DMA device pointer available, which allows converting them to
dev_ functions and thus showing the associated DMA device.
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds max_burst to dma_get_slave_caps so that clients
can query the burst capability of the slave DMA controller.
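A hedged client-side example of reading the new field (the helper name and
fallback value are illustrative):
#include <linux/dmaengine.h>

static u32 example_query_max_burst(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return 1;	/* no capability info available, assume single beats */

	return caps.max_burst;	/* largest burst length the controller supports */
}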
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The two API functions can cover most, if not all, current APIs used to
request a channel. With minimal effort, dmaengine drivers, platforms and
dmaengine user drivers can be converted to use the two functions.
struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask);
This requests any channel matching the given capabilities; it can be
used to request a channel for memcpy, memset, xor, etc. where no hardware
synchronization is needed.
struct dma_chan *dma_request_chan(struct device *dev, const char *name);
This requests a slave channel. dma_request_chan() will try to find the
channel via DT or ACPI; in case the kernel booted in non-DT/ACPI mode,
it will use a filter lookup table and retrieve the needed information from
the dma_slave_map provided by the DMA drivers.
This legacy mode needs changes in platform code and in dmaengine drivers;
after that, the dmaengine user drivers can be converted.
For each dmaengine driver an array of DMA device name, slave name and the
parameter for the filter function needs to be added:
static const struct dma_slave_map da830_edma_map[] = {
{ "davinci-mcasp.0", "rx", EDMA_FILTER_PARAM(0, 0) },
{ "davinci-mcasp.0", "tx", EDMA_FILTER_PARAM(0, 1) },
{ "davinci-mcasp.1", "rx", EDMA_FILTER_PARAM(0, 2) },
{ "davinci-mcasp.1", "tx", EDMA_FILTER_PARAM(0, 3) },
{ "davinci-mcasp.2", "rx", EDMA_FILTER_PARAM(0, 4) },
{ "davinci-mcasp.2", "tx", EDMA_FILTER_PARAM(0, 5) },
{ "spi_davinci.0", "rx", EDMA_FILTER_PARAM(0, 14) },
{ "spi_davinci.0", "tx", EDMA_FILTER_PARAM(0, 15) },
{ "da830-mmc.0", "rx", EDMA_FILTER_PARAM(0, 16) },
{ "da830-mmc.0", "tx", EDMA_FILTER_PARAM(0, 17) },
{ "spi_davinci.1", "rx", EDMA_FILTER_PARAM(0, 18) },
{ "spi_davinci.1", "tx", EDMA_FILTER_PARAM(0, 19) },
};
This information is going to be needed by the dmaengine driver, so
modification to the platform_data is needed, and the driver map should be
added to the pdata of the DMA driver:
da8xx_edma0_pdata.slave_map = da830_edma_map;
da8xx_edma0_pdata.slavecnt = ARRAY_SIZE(da830_edma_map);
The DMA driver then needs to configure the needed device -> filter_fn
mapping before it registers with dma_async_device_register():
ecc->dma_slave.filter_map.map = info->slave_map;
ecc->dma_slave.filter_map.mapcnt = info->slavecnt;
ecc->dma_slave.filter_map.fn = edma_filter_fn;
When neither DT nor ACPI lookup is available, dma_request_chan() will
try to match the requester's device name against the filter_map's list of
device names; when a match is found, it will use the information from the
dma_slave_map to get the channel with the dma_get_channel() internal
function.
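A hedged client-side sketch of the two new calls; the "rx" channel name and
the error handling style are illustrative:
#include <linux/dmaengine.h>
#include <linux/err.h>

static int example_request_channels(struct device *dev)
{
	struct dma_chan *rx, *any;
	dma_cap_mask_t mask;

	/* Slave channel, resolved via DT, ACPI or the dma_slave_map table */
	rx = dma_request_chan(dev, "rx");
	if (IS_ERR(rx))
		return PTR_ERR(rx);

	/* Any channel matching the capability mask, e.g. for memcpy offload */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	any = dma_request_chan_by_mask(&mask);
	if (IS_ERR(any)) {
		dma_release_channel(rx);
		return PTR_ERR(any);
	}

	dma_release_channel(any);
	dma_release_channel(rx);
	return 0;
}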
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Channel matching with private_candidate() is used in two paths; the error
checking is slightly different in them and they also duplicate code.
Move the code into find_candidate() to provide consistent execution and to
allow us to reuse this mode of channel lookup later.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If mask is NULL skip the mask matching against the DMA device capabilities.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the current state, the capability of transfer reuse can neither be
set by a slave dmaengine driver, nor used by a client driver, because
the capability is not available to dma_get_slave_caps().
Fix this by adding a way to declare the capability.
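A hedged sketch of how a client can query the capability once it is exposed
(the helper name is illustrative; the field follows struct dma_slave_caps):
#include <linux/dmaengine.h>

static bool example_chan_can_reuse_desc(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return false;

	return caps.descriptor_reuse;	/* set only if the driver declares it */
}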
Fixes: 272420214d ("dmaengine: Add DMA_CTRL_REUSE")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMAengine API has a long standing race condition that is inherent to
the API itself. Calling dmaengine_terminate_all() is supposed to stop and
abort any pending or active transfers that have previously been submitted.
Unfortunately it is possible that this operation races against a currently
running (or with some drivers also scheduled) completion callback.
Since the API allows dmaengine_terminate_all() to be called from atomic
context as well as from within a completion callback it is not possible to
synchronize to the execution of the completion callback from within
dmaengine_terminate_all() itself.
This means that a user of the DMAengine API does not know when it is safe
to free resources used in the completion callback, which can result in a
use-after-free race condition.
This patch addresses the issue by introducing an explicit synchronization
primitive to the DMAengine API called dmaengine_synchronize().
The existing dmaengine_terminate_all() is deprecated in favor of
dmaengine_terminate_sync() and dmaengine_terminate_async(). The former
aborts all pending and active transfers and synchronizes to the current
context, meaning it will wait until all running completion callbacks have
finished. This means it is only possible to call this function from
non-atomic context. The latter function does not synchronize, but can still
be used in atomic context or from within a completion callback. It has to be
followed up by dmaengine_synchronize() before a client can free the
resources used in a completion callback.
In addition to this the semantics of the device_terminate_all() callback
are slightly relaxed by this patch. It is now OK for a driver to only
schedule the termination of the active transfer, but does not necessarily
have to wait until the DMA controller has completely stopped. The driver
must ensure though that the controller has stopped and no longer accesses
any memory when the device_synchronize() callback returns.
This was in part done since most drivers do not pay attention to this
anyway at the moment and to emphasize that this needs to be done when the
device_synchronize() callback is implemented. But it also helps with
implementing support for devices where stopping the controller can require
operations that may sleep.
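A hedged usage sketch of the new primitives from a client driver's point of
view (the function name is illustrative):
#include <linux/dmaengine.h>

static void example_stop_dma(struct dma_chan *chan)
{
	/* Safe in atomic context; completion callbacks may still be running */
	dmaengine_terminate_async(chan);

	/*
	 * Later, from sleepable context: wait until all callbacks have
	 * finished, after which the resources they use may be freed ...
	 */
	dmaengine_synchronize(chan);

	/*
	 * ... or, when sleeping is allowed at the call site, do both steps
	 * in one go with dmaengine_terminate_sync(chan).
	 */
}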
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This time we have a very typical update which is mostly fixes and updates to
drivers and no new drivers.
- Biggest change is coming from Peter for edma cleanup, which even caused
some last-minute regressions; things seem settled now
- idma64 and dw updates
- ioatdma updates
- module autoload fixes for various drivers
- scatter gather support for hdmac
Merge tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have a very typical update which is mostly fixes and
updates to drivers and no new drivers.
- the biggest change is coming from Peter for edma cleanup, which even
caused some last-minute regressions; things seem settled now
- idma64 and dw updates
- ioatdma updates
- module autoload fixes for various drivers
- scatter gather support for hdmac"
* tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (77 commits)
dmaengine: edma: Add dummy driver skeleton for edma3-tptc
Revert "ARM: DTS: am33xx: Use the new DT bindings for the eDMA3"
Revert "ARM: DTS: am437x: Use the new DT bindings for the eDMA3"
dmaengine: dw: some Intel devices has no memcpy support
dmaengine: dw: platform: provide platform data for Intel
dmaengine: dw: don't override platform data with autocfg
dmaengine: hdmac: Add scatter-gathered memset support
dmaengine: hdmac: factorise memset descriptor allocation
dmaengine: virt-dma: Fix kernel-doc annotations
ARM: DTS: am437x: Use the new DT bindings for the eDMA3
ARM: DTS: am33xx: Use the new DT bindings for the eDMA3
dmaengine: edma: New device tree binding
dmaengine: Kconfig: edma: Select TI_DMA_CROSSBAR in case of ARCH_OMAP
dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx
dmaengine: edma: Merge the of parsing functions
dmaengine: edma: Do not allocate memory for edma_rsv_info in case of DT boot
dmaengine: edma: Refactor the dma device and channel struct initialization
dmaengine: edma: Get qDMA channel information from HW also
dmaengine: edma: Merge map_dmach_to_queue into assign_channel_eventq
dmaengine: edma: Correct PaRAM access function names (_parm_ to _param_)
...
dma_release_channel() decrements the privatecnt counter and almost all dma_get*
functions increment it, with the exception of dma_get_slave_channel().
In most cases this does not cause an issue since normally the channel is not
requested and released, but if a driver requests a DMA channel via
dma_get_slave_channel() and then releases it, privatecnt will be
unbalanced, which will prevent, for example, getting a channel for memcpy.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch increments the privatecnt value and sets DMA_PRIVATE in the device
caps in the dma_request_slave_channel() function. This is needed to keep
the privatecnt increment/decrement balance.
As dma_release_channel() decrements the privatecnt counter, we need
to increment it when a channel is requested. Otherwise privatecnt drops
into negative values after a few dma_release_channel() calls.
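Roughly the shape of the fixed function (a simplified sketch, not the
verbatim mainline code):
struct dma_chan *dma_request_slave_channel(struct device *dev, const char *name)
{
	struct dma_chan *ch = dma_request_slave_channel_reason(dev, name);

	if (IS_ERR(ch))
		return NULL;

	/* Balance the decrement done later in dma_release_channel() */
	dma_cap_set(DMA_PRIVATE, ch->device->cap_mask);
	ch->device->privatecnt++;

	return ch;
}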
Reported-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Robert Baldyga <r.baldyga@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This time we have support for a few new devices, a few new features and odd
fixes spread through the subsystem.
New devices added:
- support for CSRatlas7 dma controller
- Allwinner H3(sun8i) controller
- TI DMA crossbar driver on DRA7x
- new pxa driver
New features added:
- memset support is brought back now that we have a user in the xdmac controller
- interleaved transfers support different source and destination strides
- supporting DMA routers and configuration thru DT
- support for reusing descriptors
- xdmac memset and interleaved transfer support
- hdmac support for interleaved transfers
- omap-dma support for memcpy
Others:
- Constify platform_device_id
- mv_xor fixes and improvements
Merge tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have support for few new devices, few new features and
odd fixes spread thru the subsystem.
New devices added:
- support for CSRatlas7 dma controller
- Allwinner H3(sun8i) controller
- TI DMA crossbar driver on DRA7x
- new pxa driver
New features added:
- memset support is brought back now that we have a user in the xdmac controller
- interleaved transfers support different source and destination strides
- supporting DMA routers and configuration thru DT
- support for reusing descriptors
- xdmac memset and interleaved transfer support
- hdmac support for interleaved transfers
- omap-dma support for memcpy
Others:
- Constify platform_device_id
- mv_xor fixes and improvements"
* tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
dmaengine: xgene: fix file permission
dmaengine: fsl-edma: clear pending interrupts on initialization
dmaengine: xdmac: Add memset support
Documentation: dmaengine: document DMA_CTRL_ACK
dmaengine: virt-dma: don't always free descriptor upon completion
dmaengine: Revert "drivers/dma: remove unused support for MEMSET operations"
dmaengine: hdmac: Implement interleaved transfers
dmaengine: Move icg helpers to global header
dmaengine: mv_xor: improve descriptors list handling and reduce locking
dmaengine: mv_xor: Enlarge descriptor pool size
dmaengine: mv_xor: add support for a38x command in descriptor mode
dmaengine: mv_xor: Rename function for consistent naming
dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
dmaengine: pl330: fix wording in mcbufsz message
dmaengine: sirf: add CSRatlas7 SoC support
dmaengine: xgene-dma: Fix "incorrect type in assignement" warnings
dmaengine: fix kernel-doc documentation
dmaengine: pxa_dma: add support for legacy transition
dmaengine: pxa_dma: add debug information
dmaengine: pxa: add pxa dmaengine driver
...
This reverts commit 48a9db462d.
Some platforms actually need support for the memset operations. Bring it back.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some drivers implement only the pause operation (no resuming). An example is
pl330, where pause is needed for reading the residue. pl330 does not support
the resume operation, so the transfer must be stopped after a pause.
However, for slaves this is always exposed as "pause and resume", which
introduces subtle errors on the Odroid U3 board (Exynos4412 with pl330).
After adding the pause function to the pl330 driver, audio playback
(utilizing DMA) gets choppy after some time (approximately 24 hours).
Fix this by exposing "cmd_pause" if and only if pause and resume are
implemented.
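A minimal sketch of the tightened capability report (the real change sits in
dma_get_slave_caps(); the helper below is illustrative):
#include <linux/dmaengine.h>

static bool example_can_report_pause(struct dma_device *device)
{
	/* Advertise cmd_pause only when the transfer can also be resumed */
	return device->device_pause && device->device_resume;
}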
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reported-by: gabriel@unseen.is
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: <stable@vger.kernel.org>
Fixes: 88987d2c75 ("dmaengine: pl330: add DMA_PAUSE feature")
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
DMA routers are transparent devices used to mux DMA requests from
peripherals to DMA controllers. They are used when the SoC integrates more
devices with DMA requests than its DMA controllers can handle.
DRA7x is one example of such an SoC, where the sDMA can handle 128 DMA request
lines, but at SoC level there are 205 DMA requests.
The of_dma_router will be registered as an of_dma_controller with a special
xlate function and additional parameters. The driver for the router is
responsible for crafting the dma_spec (in the of_dma_route_allocate callback)
which can be used to request a DMA channel from the real DMA controller.
This way the router can be transparent to the system while remaining generic
enough to be used in different environments.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Channels allocated via dma_get_any_slave_channel were not increasing
the counter tracking private allocations. When these channels were
released, privatecnt may erroneously fall to zero. The DMA device
would then lose its DMA_PRIVATE cap and fail to allocate future private
channels (via private_candidate) as any allocations still outstanding
would incorrectly be seen as public allocations.
Signed-off-by: Christopher Freeman <cfreeman@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This reverts commit ecc19d1786.
It added a new warning to try to encourage driver writers to set the
device capabilities properly, but drivers haven't been updated and in the
meantime it just generates a scary message that users cannot actually
do anything about.
Warnings like these are appropriate if you actually expect to fix the
code that causes them. They are not appropriate for releases.
Requested-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The Free Software Foundation's mailing address has moved in the past, and some
of the addresses here are outdated. Remove them from file headers, since the
COPYING file in the kernel sources includes it.
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit 7bced39751 ("net_dma: simple removal") removed the net_dma
support entirely, net_dma_find_channel() has no users left. Remove the
function.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The function is too big to be a static inline.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For the slave caps retrieval to be really useful, most drivers need to
implement it.
Hence, we need to be slightly more aggressive and trigger a warning at
registration time for drivers that don't fill in their caps info, in order to
encourage them to implement it.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In order to migrate the drivers without triggering a BUG_ON for the
already-converted ones, which would cause bisectability issues, we need to
remove that check before removing the device_control function entirely.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Nowadays, some drivers don't have anything in their channel allocation
callbacks anymore.
Remove the BUG_ON if those callbacks aren't implemented, in order to allow
drivers to not implement them.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_chan_get() uses a rather interesting error-handling scheme and code path.
Change it to something more usual in the kernel.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The free_percpu() function tests whether its argument is NULL and then
returns immediately. Thus the test around the call is not needed.
This issue was detected by using the Coccinelle software.
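The pattern being cleaned up, shown with a placeholder per-CPU pointer:
#include <linux/percpu.h>

static void example_cleanup(int __percpu *counter)
{
	/*
	 * Redundant form removed by such patches:
	 *
	 *	if (counter)
	 *		free_percpu(counter);
	 *
	 * free_percpu() already returns immediately for NULL, so a plain
	 * call is enough.
	 */
	free_percpu(counter);
}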
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
1/ Step down as dmaengine maintainer, see commit 08223d80df "dmaengine
maintainer update"
2/ Removal of net_dma, as it has been marked 'broken' since 3.13 (commit
7787380336 "net_dma: mark broken"), without reports of performance
regression.
3/ Miscellaneous fixes
Merge tag 'dmaengine-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine
Pull dmaengine updates from Dan Williams:
"Even though this has fixes marked for -stable, given the size and the
needed conflict resolutions this is 3.18-rc1/merge-window material.
These patches have been languishing in my tree for a long while. The
fact that I do not have the time to do proper/prompt maintenance of
this tree is a primary factor in the decision to step down as
dmaengine maintainer. That and the fact that the bulk of drivers/dma/
activity is going through Vinod these days.
The net_dma removal has not been in -next. It has developed simple
conflicts against mainline and net-next (for-3.18).
Continuing thanks to Vinod for staying on top of drivers/dma/.
Summary:
1/ Step down as dmaengine maintainer, see commit 08223d80df
"dmaengine maintainer update"
2/ Removal of net_dma, as it has been marked 'broken' since 3.13
(commit 7787380336 "net_dma: mark broken"), without reports of
performance regression.
3/ Miscellaneous fixes"
* tag 'dmaengine-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine:
net: make tcp_cleanup_rbuf private
net_dma: revert 'copied_early'
net_dma: simple removal
dmaengine maintainer update
dmatest: prevent memory leakage on error path in thread
ioat: Use time_before_jiffies()
dmaengine: fix xor sources continuation
dma: mv_xor: Rename __mv_xor_slot_cleanup() to mv_xor_slot_cleanup()
dma: mv_xor: Remove all callers of mv_xor_slot_cleanup()
dma: mv_xor: Remove unneeded mv_xor_clean_completed_slots() call
ioat: Use pci_enable_msix_exact() instead of pci_enable_msix()
drivers: dma: Include appropriate header file in dca.c
drivers: dma: Mark functions as static in dma_v3.c
dma: mv_xor: Add DMA API error checks
ioat/dca: Use dev_is_pci() to check whether it is pci device
Per commit "77873803363c net_dma: mark broken" net_dma is no longer used
and there is no plan to fix it.
This is the mechanical removal of the code inside CONFIG_NET_DMA ifdef guards.
Reverting the remainder of the net_dma-induced changes is deferred to
subsequent patches.
Marked for stable due to Roman's report of a memory leak in
dma_pin_iovec_pages():
https://lkml.org/lkml/2014/9/3/177
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: David Whipple <whipple@securedatainnovations.ch>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: <stable@vger.kernel.org>
Reported-by: Roman Gushchin <klamm@yandex-team.ru>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The count used in dmaengine_get_unmap_data() may not be the same as the
count computed in dmaengine_unmap(), which causes the data to be freed into
the wrong pool.
This patch fixes the issue by keeping the map count in the unmap_data
structure and using that count to get the pool.
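A simplified sketch of the idea, assuming the pool helpers and the map_cnt
field as found in drivers/dma/dmaengine.c (not the verbatim hunk; helper
names are illustrative):
static struct dmaengine_unmap_data *
example_get_unmap_data(int nr, gfp_t flags)
{
	struct dmaengine_unmap_data *unmap;

	unmap = mempool_alloc(__get_unmap_pool(nr)->pool, flags);
	if (!unmap)
		return NULL;

	unmap->map_cnt = nr;	/* remember the count used to pick the pool ... */
	return unmap;
}

static void example_put_unmap_data(struct dmaengine_unmap_data *unmap)
{
	/* ... and reuse it here instead of recomputing from to/from/bidi counts */
	mempool_free(unmap, __get_unmap_pool(unmap->map_cnt)->pool);
}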
Cc: <stable@vger.kernel.org>
Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Currently acpi_dma_request_slave_chan_by_index() and
acpi_dma_request_slave_chan_by_name() return either the requested channel or NULL.
This patch converts them to return an appropriate error code instead of NULL in
case of an unsuccessful request.
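A hedged caller-side sketch of the new contract (the "tx" channel name and
the error handling are illustrative):
#include <linux/acpi_dma.h>
#include <linux/err.h>

static int example_use_acpi_chan(struct device *dev)
{
	struct dma_chan *chan = acpi_dma_request_slave_chan_by_name(dev, "tx");

	if (IS_ERR(chan))
		return PTR_ERR(chan);	/* a real error code instead of NULL, e.g. -ENODEV */

	dma_release_channel(chan);
	return 0;
}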
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is the branch where we usually queue up cleanup efforts, moving
drivers out of the architecture directory, header file restructuring,
etc. Sometimes they tangle with new development so it's hard to keep it
strictly to cleanups.
Some of the things included in this branch are:
* Atmel SAMA5 conversion to common clock
* Reset framework conversion for tegra platforms
- Some of this depends on tegra clock driver reworks that are shared with Mike
Turquette's clk tree.
* Tegra DMA refactoring, which are shared branches with the DMA tree.
* Removal of some header files on exynos to prepare for multiplatform
Merge tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC cleanups from Olof Johansson:
"This is the branch where we usually queue up cleanup efforts, moving
drivers out of the architecture directory, header file restructuring,
etc. Sometimes they tangle with new development so it's hard to keep
it strictly to cleanups.
Some of the things included in this branch are:
* Atmel SAMA5 conversion to common clock
* Reset framework conversion for tegra platforms
- Some of this depends on tegra clock driver reworks that are shared
with Mike Turquette's clk tree.
* Tegra DMA refactoring, which are shared branches with the DMA tree.
* Removal of some header files on exynos to prepare for
multiplatform"
* tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (169 commits)
ARM: mvebu: move Armada 370/XP specific definitions to armada-370-xp.h
ARM: mvebu: remove prototypes of non-existing functions from common.h
ARM: mvebu: move ARMADA_XP_MAX_CPUS to armada-370-xp.h
serial: sh-sci: Rework baud rate calculation
serial: sh-sci: Compute overrun_bit without using baud rate algo
serial: sh-sci: Remove unused GPIO request code
serial: sh-sci: Move overrun_bit and error_mask fields out of pdata
serial: sh-sci: Support resources passed through platform resources
serial: sh-sci: Don't check IRQ in verify port operation
serial: sh-sci: Set the UPF_FIXED_PORT flag
serial: sh-sci: Remove duplicate interrupt check in verify port op
serial: sh-sci: Simplify baud rate calculation algorithms
serial: sh-sci: Remove baud rate calculation algorithm 5
serial: sh-sci: Sort headers alphabetically
ARM: EXYNOS: Kill exynos_pm_late_initcall()
ARM: EXYNOS: Consolidate selection of PM_GENERIC_DOMAINS for Exynos4
ARM: at91: switch Calao QIL-A9260 board to DT
clk: at91: fix pmc_clk_ids data type attriubte
PM / devfreq: use inclusion <mach/map.h> instead of <plat/map-s5p.h>
ARM: EXYNOS: remove <mach/regs-clock.h> for exynos
...
Merging in external dependencies for the Tegra DMA and reset controller
refactoring from external trees.
Per Stephen Warren, the stability of these branches has been negotiated
with the relevant parties (Vinod/Mark/Mike).
* depends/asoc-dma:
ASoC: dmaengine: fix deferred probe detection
ASoC: dmaengine: support deferred probe for DMA channels
dma: add channel request API that supports deferred probe
ASoC: dmaengine: add custom DMA config to snd_dmaengine_pcm_config
ASoC: don't leak on error in snd_dmaengine_pcm_register
ASoC: restructure dmaengine_pcm_request_chan_of()
ASoC: generic-dmaengine-pcm: Set BATCH flag when residue reporting is not supported
ASoC: Add resource managed snd_dmaengine_pcm_register()
* depends/dma-of:
dma: add dma_get_any_slave_channel(), for use in of_xlate()
* depends/tegra-clk: (42 commits)
clk: tegra: fix __clk_lookup() return value checks
clk: tegra: Do not print errors for clk_round_rate()
clk: tegra: Initialize DSI low-power clocks
clk: tegra: add FUSE clock device
clk: tegra: Properly setup PWM clock on Tegra30
clk: tegra: Initialize secondary gr3d clock on Tegra30
clk: tegra114: Initialize clocks needed for HDMI
clk: tegra124: add suspend/resume function for tegra_cpu_car_ops
clk: tegra124: add wait_for_reset and disable_clock for tegra_cpu_car_ops
clk: tegra124: Add support for Tegra124 clocks
clk: tegra124: Add new peripheral clocks
clk: tegra124: Add common clk IDs to clk-id.h
clk: tegra: add TEGRA_PERIPH_NO_GATE
clk: tegra: add locking to periph clks
clk: tegra: Add periph regs bank X
clk: tegra: Add support for PLLSS
clk: tegra: move tegra20 to common infra
clk: tegra: move tegra30 to common infra
clk: tegra: introduce common gen4 super clock
clk: tegra: move PMC, fixed clocks to common files
...
Signed-off-by: Olof Johansson <olof@lixom.net>
The higher order mempools support raid operations, and we want to
disable them when raid support is not enabled. Making them conditional
on ASYNC_TX_DMA is not sufficient as other users (specifically dmatest)
will also issue raid operations. Make raid drivers explicitly request
that the core carry the higher order pools.
Reported-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>