The dmac_reset() callback of the pl330_info struct is always set to NULL, so
remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330_chanstatus struct is completely unused, so remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The settings for destination and source cache control are exactly the same. This
patch removes the duplicated enum and uses a single enum for both destination and
source cache control.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330 driver has the custom pl330_reqtype enum which has the same possible
settings as the generic dma_transfer_direction enum. Switching over to the
generic enum internally makes it possible to directly initialize it from the
transfer request direction.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To be able to see debug messages during boot, enable the debug settings
from Kconfig also for drivers in subdirectories.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the zeroing function instead of dma_alloc_coherent & memset(,0,)
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for end of transaction (EOT) and notify when done (NWD)
hardware descriptor flags.
The EOT flag requests that the peripheral assert an end of transaction interrupt
when that descriptor is complete. It also results in a special signaling protocol
that is used between the attached peripheral and the core using the DMA
controller. Clients will specify DMA_PREP_INTERRUPT to enable this flag.
The NWD flag requests that the peripheral wait until the data has been fully
processed by the peripheral before moving on to the next descriptor. Clients
will specify DMA_PREP_FENCE to enable this flag.
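For reference, a minimal sketch of how a client would request these behaviours
through the generic dmaengine API (channel, scatterlist and error handling setup
are assumed to exist elsewhere):

    /* DMA_PREP_INTERRUPT maps to EOT, DMA_PREP_FENCE maps to NWD */
    struct dma_async_tx_descriptor *txd;
    dma_cookie_t cookie;
    unsigned long flags = DMA_PREP_INTERRUPT | DMA_PREP_FENCE;

    txd = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV, flags);
    if (!txd)
        return -ENOMEM;
    cookie = dmaengine_submit(txd);
    dma_async_issue_pending(chan);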
Signed-off-by: Andy Gross <agross@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix a potential risk when NET_DMA and ASYNC_TX are both enabled. Async_tx lacks
support for the current release process of dma descriptors: all descriptors are
released regardless of whether they have been acked by async_tx, so there is a
potential race condition when the dma engine is used by other clients (e.g. when
NET_DMA is enabled to offload TCP).
In our case, a race condition arises when both talitos and the dmaengine are used
to offload xor, because the napi scheduler syncs all pending requests in the dma
channels. This disturbs raid operations because ack_tx is not checked in fsl dma:
a not-yet-acked descriptor that was just submitted as a dependent tx gets freed,
and this freed descriptor triggers
BUG_ON(async_tx_test_ack(depend_tx)) in async_tx_submit().
TASK = ee1a94a0[1390] 'md0_raid5' THREAD: ecf40000 CPU: 0
GPR00: 00000001 ecf41ca0 ee1a94a0 0000003f 00000001 c00593e4 00000000 00000001
GPR08: 00000000 a7a7a7a7 00000001 00000002 42028042 100a38d4 ed576d98 00000000
GPR16: ed5a11b0 00000000 2b162000 00000200 00000000 2d555000 ed3015e8 c15a7aa0
GPR24: 00000000 c155fc40 00000000 ecb63220 ecf41d28 ef640bb0 ef640c30 ecf41ca0
NIP [c02b048c] async_tx_submit+0x6c/0x2b4
LR [c02b068c] async_tx_submit+0x26c/0x2b4
Call Trace:
[ecf41ca0] [c02b068c] async_tx_submit+0x26c/0x2b4 (unreliable)
[ecf41cd0] [c02b0a4c] async_memcpy+0x240/0x25c
[ecf41d20] [c0421064] async_copy_data+0xa0/0x17c
[ecf41d70] [c0421cf4] __raid_run_ops+0x874/0xe10
[ecf41df0] [c0426ee4] handle_stripe+0x820/0x25e8
[ecf41e90] [c0429080] raid5d+0x3d4/0x5b4
[ecf41f40] [c04329b8] md_thread+0x138/0x16c
[ecf41f90] [c008277c] kthread+0x8c/0x90
[ecf41ff0] [c0011630] kernel_thread+0x4c/0x68
Another modification in this patch concerns how completed descriptors are
determined. There is a potential risk caused by exception interrupts: when an
interrupt is raised, all descriptors in the ld_running list are considered
completed. This works fine under normal conditions, but if an exception occurs
it does not work as expected. Hardware state should not be inferred from a s/w
list; the right way is to read the current descriptor address register to find
the last completed descriptor. If an interrupt is raised by an error, the
descriptors in ld_running should not be considered finished, otherwise these
unfinished descriptors will be released wrongly.
A simple way to reproduce:
Enable dmatest first, then insert some bad descriptors which can trigger
Programming Error interrupts before the good descriptors. Last, the good
descriptors will be freed before they are processed because of the exception
interrupt.
Note: the bad descriptors are only for simulating an exception interrupt. This
case illustrates the potential risk in the current fsl-dma driver very well.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Using spin_lock_irqsave() is a stronger locking mechanism than is required
throughout the driver; the minimum locking required should be used instead.
With irqsave, interrupts are turned off and the flags are saved, which is
unnecessary here.
This patch changes all instances of spin_lock_irqsave() to spin_lock_bh(). All
manipulation of protected fields is done using tasklet context or weaker, which
makes spin_lock_bh() the correct choice.
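A sketch of the resulting pattern (the lock name is illustrative):

    /* was: spin_lock_irqsave(&chan->desc_lock, flags); */
    spin_lock_bh(&chan->desc_lock);
    /* descriptor lists are only touched from tasklet context or weaker */
    spin_unlock_bh(&chan->desc_lock);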
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This resolves a number of merge issues with changes in this tree and
Linus's tree at the same time.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch implements the DMA Engine API for the DMA controller on MIC X100
Coprocessors. The DMA h/w is shared between host and card s/w.
Channels 0 to 3 are used by the host and 4 to 7 are used by the card.
Since the DMA device doesn't show up as a PCIe device, a virtual bus called mic
bus is created and virtual devices are added on that bus to follow the device model.
Allowed dma transfer directions are host to card, card to host and card to card.
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Reviewed-by: Nikhil Rao <nikhil.rao@intel.com>
Reviewed-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Siva Yerramreddy <yshivakrishna@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch adds DT support to the Audio DMAC peri peri driver.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
[horms+renesas@verge.net.au: Do not add trailing blank line to rcar-audmapp.txt]
[horms+renesas@verge.net.au: squashed patch to add NULL terminator to audmapp_of_match]
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Also add a few definitions that were missing.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
The current audmapp driver keeps an audmapp_slave_config
for each channel, but the only necessary information is "chcr".
The current style (keeping audmapp_slave_config) is
not a good match for DT support.
Keep "chcr" instead of audmapp_slave_config.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
The current .set_slave callback did nothing,
since it assumed the src/dst addresses come from platform settings.
But that isn't a good match for DT probing.
This patch extends the .set_slave callback to address this issue.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
eDMA can be configured for a 3-byte word size for source and destination.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
I received a report this morning from one of the Novena developers that
the behaviour of the iMX6 ASoC codec driver (using imx-pcm-dma.c) was
sub-optimal under high system load.
While there are issues relating to system load remaining, upon reviewing
the ASoC imx-pcm-dma.c driver, it was noticed that it was not using the
residue support, because SDMA doesn't support it. This has the effect
that SDMA has to make multiple calls into the ASoC and ALSA code, one
for each period.
Since ALSA's snd_pcm_elapsed() does not need to be called multiple times
and it is entirely sufficient to call it once to update ALSA with the
current buffer position via the pointer method, we can do better here.
We can also avoid stopping the DMA entirely, just like real cyclic DMA
implementations behave. While this means that we replay some old samples,
this is a nicer behaviour than having audio stop and restart.
The changes to achieve this are relatively minor - imx-sdma.c can track
where the DMA is to the nearest descriptor boundary - it does this
already when deciding how many callbacks to issue. In doing this,
buf_tail always points at the descriptor which will complete next.
The residue is defined by the bytes remaining to the end of the buffer,
when the buffer is viewed as a single block of memory [start...end].
So, when we start out, there's a full buffer worth of residue, and this
counts down as we approach the end of the buffer, eventually becoming
zero at the end, before returning to the full buffer worth when we
wrap back to the start.
Moving the walking of the descriptors into the interrupt handler means
that we can update the BD_DONE flag at interrupt time, thus avoiding
a delayed tasklet stopping the cyclic DMA.
This means that the residue can be calculated from (total descriptors -
buf_tail) * descriptor size. This is what the change below does. We
update imx-pcm-dma.c to remove the NO_RESIDUE flag since we now provide
the residue.
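A sketch of the resulting computation in the tx_status path (field names
assumed, not taken from the patch itself):

    /* residue counts down from a full buffer to zero as buf_tail advances */
    residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len;
    dma_set_residue(txstate, residue);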
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a 0-length packet is received on the bus, desc->pd0 yields 1,
which confuses the driver's users. This information is clearly wrong
and not in accordance with the datasheet, but it has been observed on an
AM335x board and is very reproducible.
Fix this by looking at bit 19 in PD2 of the completed packet. This bit
will tell us if a zero-length packet was received on a queue. If it's
set, ignore the value in PD0 and report a total length of 0 instead.
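A minimal sketch of the check; the bit name and the PD0 length mask are
assumptions for illustration, not the driver's actual definitions:

    /* PD2_ZERO_LEN and PD0_LEN_MASK are illustrative names */
    u32 len = desc->pd0 & PD0_LEN_MASK;   /* reads back as 1 for a 0-length packet */
    if (desc->pd2 & BIT(19))              /* zero-length packet indicator */
        len = 0;                          /* trust PD2, not PD0 */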
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dynamic stack allocation in the kernel is considered bad, as the kernel stack
is small, and we get warnings on a few archs as reported by the kbuild test robot:
>> drivers/dma/sh/shdma-base.c:671:32: sparse: Variable length array is used.
>> drivers/dma/sh/shdma-base.c:701:1: warning: 'shdma_prep_dma_cyclic' uses
>> dynamic stack allocation [enabled by default]
Fix this by making a static array of 32, which should be sufficient for
shdma_prep_dma_cyclic, whose only in-kernel user is audio; 32 periods for
audio seems quite sufficient at the moment.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As documented in Documentation/printk-formats.txt, we should use the %zu/%zx
specifiers for size_t type variables for the code to compile on different
architectures. This was uncovered as COMPILE_TEST has recently been enabled for
this driver:
drivers/dma/sh/shdma-base.c: In function 'shdma_prep_dma_cyclic':
>> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of
>> type 'int', but argument 4 has type 'size_t' [-Wformat=]
__func__, buf_len, period_len, slave_id);
>> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of
>> type 'int', but argument 5 has type 'size_t' [-Wformat=]
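The fix is to print these with the size_t specifier, e.g. (a sketch, not the
exact message from the driver):

    dev_warn(dev, "%s: invalid buffer length %zu or period length %zu\n",
             __func__, buf_len, period_len);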
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The kbuild test robot reports that shdma_prep_dma_cyclic should be static, since
the symbol is not declared; a quick check reveals that this is the case:
>> drivers/dma/sh/shdma-base.c:660:32: sparse: symbol 'shdma_prep_dma_cyclic'
>> was not declared. Should it be static?
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'drivers-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc into next
Pull ARM SoC driver changes from Olof Johansson:
"SoC-near driver changes that we're merging through our tree. Mostly
because they depend on other changes we have staged, but in some cases
because the driver maintainers preferred that we did it this way.
This contains a largeish cleanup series of the omap_l3_noc bus driver,
cpuidle rework for Exynos, some reset driver conversions and a long
branch of TI EDMA fixes and cleanups, with more to come next release.
The TI EDMA cleanups is a shared branch with the dmaengine tree, with
a handful of Davinci-specific fixes on top.
After discussion at last year's KS (and some more on the mailing
lists), we are here adding a drivers/soc directory. The purpose of
this is to keep per-vendor shared code that's needed by different
drivers but that doesn't fit into the MFD (nor drivers/platform)
model. We expect to keep merging contents for this hierarchy through
arm-soc so we can keep an eye on what the vendors keep adding here and
not making it a free-for-all to shove in crazy stuff"
* tag 'drivers-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (101 commits)
cpufreq: exynos: Fix driver compilation with ARCH_MULTIPLATFORM
tty: serial: msm: Remove direct access to GSBI
power: reset: keystone-reset: introduce keystone reset driver
Documentation: dt: add bindings for keystone pll control controller
Documentation: dt: add bindings for keystone reset driver
soc: qcom: fix of_device_id table
ARM: EXYNOS: Fix kernel panic when unplugging CPU1 on exynos
ARM: EXYNOS: Move the driver to drivers/cpuidle directory
ARM: EXYNOS: Cleanup all unneeded headers from cpuidle.c
ARM: EXYNOS: Pass the AFTR callback to the platform_data
ARM: EXYNOS: Move S5P_CHECK_SLEEP into pm.c
ARM: EXYNOS: Move the power sequence call in the cpu_pm notifier
ARM: EXYNOS: Move the AFTR state function into pm.c
ARM: EXYNOS: Encapsulate the AFTR code into a function
ARM: EXYNOS: Disable cpuidle for exynos5440
ARM: EXYNOS: Encapsulate boot vector code into a function for cpuidle
ARM: EXYNOS: Pass wakeup mask parameter to function for cpuidle
ARM: EXYNOS: Remove ifdef for scu_enable in pm
ARM: EXYNOS: Move scu_enable in the cpu_pm notifier
ARM: EXYNOS: Use the cpu_pm notifier for pm
...
The APBX-DMA block is also found on MX6Q/MX6DL chips.
Update the help text accordingly.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Many audio interface drivers require support for cyclic transfers to work
correctly, for example the Samsung ASoC DMA driver. This patch adds support
for cyclic transfers to the s3c24xx-dma driver.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Due to a redundant 'break' in the loop, the driver processed only the first chunk.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the cyclic dma tx handler sdma_handle_channel_loop(), the
SDMA channel status is set to either DMA_ERROR or DMA_IN_PROGRESS
based on each period's status. This has the following issues:
1) If one period's status is BD_RROR, the channel status
will be set to DMA_ERROR, but it will be overwritten to DMA_IN_PROGRESS
if the following periods are OK.
2) A DMA client may call sdma_control(DMA_TERMINATE_ALL) to stop the cyclic dma
operation, setting the sdma channel status to DMA_ERROR,
but if this handler is called afterwards, the channel status will again be
overwritten to DMA_IN_PROGRESS. Subsequent dmaengine_prep_dma_cyclic() calls will
then always fail, as the channel status is DMA_IN_PROGRESS.
Since in a cyclic dma tx the channel status is initially set to DMA_IN_PROGRESS,
the driver only needs to change it to DMA_ERROR when something goes wrong
(one period's status is wrong, or the client explicitly stops the transfer).
Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pull slave-dmaengine fixes from Vinod Koul:
"We have three small fixes.
First one from Andy reverts the devm_request_irq usage as we need to ensure
the tasklet is killed after irq is freed, so we need to do free irq in
our code. Other two from Arnd are fixing the compilation issue in
omap and sa11x0 drivers with ARM randconfigs"
* 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: sa11x0: remove broken #ifdef
dmaengine: omap: hide filter_fn for built-in drivers
dmaengine: dw: went back to plain {request,free}_irq() calls
Commit 4828b493 introduced COMPILE_TEST for this driver and this caused a compile
failure on alpha, as kzalloc wasn't available for this arch via the included
headers, so explicitly include slab.h.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_async_device_register() may return a non-zero error code. In such a case we
have to follow the error path.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted the
probe function to use devm_* helpers and simultaneously brought a regression.
We have to 1) call clk_disable_unprepare() on the error path, and 2) check the error
code of clk_prepare_enable(). The first part was done in the original code, the
second one is an update.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The hclk signal is a bus clock, which means we have to have it enabled during
access to the DMA controller. This patch makes sure that we enable the clock
before accessing the device, even though it currently works on Intel hardware.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pch_dma driver is for a companion chip to the Intel Atom E600
series processors. These are 32-bit x86 processors so the driver is
only needed on X86_32. Add COMPILE_TEST as an alternative, so that the
driver can still be build-tested elsewhere.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Introduce support for slave s/g transfer preparation and the associated
device control callback in the MPC512x DMA controller driver, which adds
support for data transfers between memory and peripheral I/O to the
previously supported mem-to-mem transfers.
Signed-off-by: Alexander Popov <a13xp0p0v88@gmail.com>
[fixed subsystem name]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The count used by get_unmap_data() may not be the same as the count computed
in dmaengine_unmap(), which causes data to be freed into the wrong pool.
This patch fixes the issue by keeping the map count in the unmap_data
structure and using that count to select the pool.
Cc: <stable@vger.kernel.org>
Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
We need to use writel() instead of writel_relaxed() when starting
a channel, to ensure all the descriptors have been flushed before
the activation.
While at it, remove the unneeded read-modify-write and make the
code simpler.
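A sketch of the distinction (register and bit names are assumed):

    /* writel() orders the preceding descriptor writes before the doorbell;
     * writel_relaxed() gives no such guarantee */
    writel(XOR_ACTIVATE, base + XOR_ACTIVATION_REG);   /* no read-modify-write needed */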
Cc: <stable@vger.kernel.org>
Signed-off-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The sa11x0_dma_pm_ops structure unconditionally references sa11x0_dma_resume
and sa11x0_dma_suspend, which currently breaks if CONFIG_PM_SLEEP
is disabled.
There is probably a better way to remove the reference in this
case, but the safe choice is to have the suspend/resume code always
built in the driver.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: dmaengine@vger.kernel.org
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Don't use the DEFINE_PCI_DEVICE_TABLE macro, because this macro
is deprecated.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted the
probe function to use devm_* helpers and simultaneously brought a regression. We
need to ensure the irq is disabled, then ensure that no more tasklets are
scheduled, and only then is it safe to use tasklet_kill().
The free_irq() will ensure that the irq is disabled and also wait till all
scheduled interrupts are executed by invoking synchronize_irq(). So we need to
only do tasklet_kill() after invoking free_irq().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Clients may still be active in the early phase of system PM, thus we
need to move the suspend operations to the late system PM phase.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A channel can accommodate more than one transaction, each consisting of
multiple descriptors, the last of which has the DCMD_ENDIRQEN bit set.
In order to report the channel's residue, we hence have to walk the
list of running descriptors, look for those which match the cookie,
and then try to find the descriptor which defines upper and lower
boundaries that embrace the current transport pointer. Once it is found,
walk forward until we find the descriptor that tells us about the end of
a transaction via a set DCMD_ENDIRQEN bit.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Make sure to handle register context save/restore when needed from
system PM callbacks.
Previously we solely trusted the device to reside in an inactive state
while the system suspend callback was invoked, which is just too
optimistic.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Converting to the PM macros lets us simplify and remove some code.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
While probing, don't rely on CONFIG_PM_RUNTIME to be configured.
Instead, let's power up the device and make it fully operational.
Update the runtime PM status to reflect the active state.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The runtime PM resume callback needs to be executed while holding the
spinlock; make sure to maintain this for the pause operation as well.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit 4501fe61 "dma: dw: Add suspend and resume handling for PCI mode
DW_DMAC." introduced system power management callbacks. Following commit
f78c4cff "PM / Sleep: Add macro to define common late/early system PM
callbacks" we have a nice macro to set up the dev_pm_ops structure. This patch
converts the driver to use that macro.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no need to use the *_noirq versions of the suspend and resume PM callbacks.
suspend_late / resume_early suit better (this was discussed in [1]) and in the
future could be used for runtime PM support.
[1] http://www.spinics.net/lists/kernel/msg1650974.html
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix the mpc_dma_probe() error path and mpc_dma_remove(): manually free IRQs and
dispose of IRQ mappings before devm_* takes care of other resources.
Moreover, replace devm_request_irq() with request_irq(): there is no need to use
the managed variant, because the original code always frees the IRQ manually with
devm_free_irq(). Replace devm_free_irq() with free_irq() accordingly.
Signed-off-by: Alexander Popov <a13xp0p0v88@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
MPC512x and MPC8308 have similar DMA controllers, but are independent SoCs.
DMA controller driver should have separate 'compatible' values for these SoCs.
Signed-off-by: Alexander Popov <a13xp0p0v88@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Concentrate the specific code for MPC8308 in the 'if' branch
and handle MPC512x in the 'else' branch.
This modification only reorders instructions but doesn't change behaviour.
Signed-off-by: Alexander Popov <a13xp0p0v88@gmail.com>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
These functions will be modified in the next patch in the series. Moving them in
a patch separate from the changes makes review easier.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are several places where descriptors are freed using identical code.
This patch puts this code into a function to reduce code duplication.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Delete the DMA_INTERRUPT attribute, because fsldma doesn't support this function;
an exception will be thrown if talitos is used to offload xor at the same time.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Methods of accessing DMA controller registers are inconsistent: some registers
are accessed by DMA_IN/OUT directly, while others are accessed by the get/set_*
functions, which are wrappers around DMA_IN/OUT; the BCR register is even read
with get_bcr but written with DMA_OUT.
This patch unifies the inconsistent methods: all registers are now accessed via
get/set_*.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some code calls chan_dbg wrapped in FSL_DMA_LD_DEBUG guards. Such a macro is really
unnecessary, because chan_dbg is a wrapper around dev_dbg and we already have the
corresponding DEBUG macro to switch dev_dbg on/off; most of the other code also
calls chan_dbg directly without using FSL_DMA_LD_DEBUG.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds cyclic transfer support and enables dmaengine_prep_dma_cyclic().
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
[reflowed changelog for readability]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Current shdma uses "last", which indicates the last desc that needs to have a
callback function. But that desc's chunk count is always 1, so we can use it as
the finder.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
[reflowed changelog for readability]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use sizeof(*var) instead of sizeof(type) when calling devm_k*alloc().
This avoids using the wrong type as was done to allocate the physical
channels array.
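A sketch of the preferred pattern (names assumed):

    /* sizeof(*pchans) always tracks the pointee type, so the allocation
     * cannot silently use the wrong struct */
    pchans = devm_kcalloc(dev, nr_channels, sizeof(*pchans), GFP_KERNEL);
    if (!pchans)
        return -ENOMEM;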
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As the physical channel and virtual channel point to each other,
pchan->phy->vchan is always equal to pchan. Simplify the code
accordingly.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is the driver for the AXI Video Direct Memory Access (AXI
VDMA) core, which is a soft Xilinx IP core that provides high-
bandwidth direct memory access between memory and AXI4-Stream
type video target peripherals. The core provides efficient two
dimensional DMA operations with independent asynchronous read
and write channel operation.
This module works on Zynq (ARM Based SoC) and Microblaze platforms.
Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
Reviewed-by: Levente Kurusa <levex@linux.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
edma param struct is now within an edma_pset struct introduced in Thomas
Gleixner's edma tx status series. Update the memcpy function accordingly.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The granular residue accounting code uses certain variables specifically
for residue accounting. Document these in the structure declaration.
Also move around some elements and group them together.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The first slot in the ParamRAM of EDMA holds the current active
subtransfer. Depending on the direction we read either the source or
the destination address from there. In the internal psets we have the
address of the buffer(s).
In the cyclic case we only use the internal pset[0] which holds the
start address of the circular buffer and calculate the remaining room
to the end of the buffer.
In the SG case we read the current address and compare it to the
internal psets address and length.
- If the current address is outside of this range, the pset has been
processed already and we mark it done, update the residue_stat value
and process the next set. That avoids that we need to walk all
processed psets for every invocation of tx_status.
- If its inside the range we know that we look at the current active
set and stop the walk.
- In case of intermediate transfers we update the stats in the
interrupt callback function before starting the next batch of
transfers. The tx_status callback and the interrupt callback are
serialized via vchan.lock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Hunk #2 in original patch manually applied]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For granular accounting we need to store the direction and the
information for the individual psets:
- source or destination address, depending on direction
- length
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Preparatory patch to support finer grained accounting.
Move the edma_params array out of edma_desc so we can add further per
pset data to it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Fixed up hunk #3 in original patch to apply]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It's likely that the caller investigates the status of a currently
active descriptor. Make that simple check first and only rummage in the
vchan list if that fails.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The residue reporting in edma_tx_status() is just broken. It blindly
walks the psets and recalculates the length of the transfer from the
hardware parameters. For cyclic transfers it adds the link pset, which
results in interestingly large residues. For non-cyclic it adds the
dummy pset, which is stupid as well.
Aside of that it's silly to walk through the pset params when the per
descriptor residue is known at the point of creating it.
Store the information in edma_desc and use it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It helps to identify issues if we have some information regarding the
channel with which the event is associated.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The vchan lock in edma_callback is acquired in hard interrupt context. As
interrupts are already disabled, there's no point in save/restoring interrupt
mask bit or cpsr flags.
Get rid of flags local variable and use spin_lock instead of spin_lock_irqsave.
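A sketch of the change:

    /* edma_callback() runs in hard-irq context, so interrupts are already off */
    spin_lock(&echan->vchan.lock);      /* was: spin_lock_irqsave(..., flags) */
    /* complete finished descriptors, start the next transfer */
    spin_unlock(&echan->vchan.lock);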
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We add DMA memcpy support to the EDMA driver. Successful tests performed using
dmatest kernel module. Copy alignment is set to DMA_SLAVE_BUSWIDTH_4_BYTES and
users must ensure length is aligned so that copy is performed fully.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In case of an unsupported direction it is better to also print the direction.
It is unlikely, but in such an event it helps with the debugging.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
prep_slave_sg and prep_dma_cyclic callbacks have mostly same failure cases
with the same texts printed in case we hit them. It helps when debugging if
we know exactly which callback generated the errors.
At the same time change the debug level for descriptor allocation failure
from dbg to err since all other error cases are dev_err and this failure is
similarly fatal as the other ones.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With the callback implemented omap-dma can provide information to client
drivers regarding supported address widths, directions, residue
granularity, etc.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Do not print the paRAM information when verbose debugging is not requested, and
also reduce the number of lines printed in edma_prep_dma_cyclic().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Indicate that the edma dmaengine driver has support for cyclic mode.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pause/Resume can be used by the audio stack when the stream is paused/resumed.
The edma platform code has support for this, and the legacy audio stack used
it.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a client asks for maxburst = 0 it is basically the same case as asking for
maxburst = 1, since in both cases ASYNC needs to be used and the eDMA is
expected to write/read one word per DMA request.
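A sketch of the normalization (using the standard dma_slave_config field):

    /* maxburst == 0 and maxburst == 1 both mean one word per DMA request */
    burst = cfg->src_maxburst ? cfg->src_maxburst : 1;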
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Because some drivers depend on DMA, change the initcall order to subsys_initcall.
Signed-off-by: Yuan Yao <yao.yuan@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The ">" here should be ">=" or we are one step beyond the end of the
sdma->channels[] array.
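A sketch of the off-by-one (the bound name is illustrative): valid indices run
from 0 to n - 1, so the check must reject equality too:

    if (request >= NR_CHANNELS)   /* ">" wrongly accepted request == NR_CHANNELS */
        return NULL;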
Fixes: 2e041c9462 ('dmaengine: sirf: enable generic dt binding for dma channels')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
VIDEO_TIMBERDALE selects TIMB_DMA which itself depends on
MFD_TIMBERDALE, so VIDEO_TIMBERDALE should either select or depend on
MFD_TIMBERDALE as well. I chose to make it depend on it because I
think it makes more sense and it is consistent with what other options
are doing.
Adding a "|| HAS_IOMEM" to the TIMB_DMA dependencies silenced the
kconfig warning about unmet direct dependencies but it was wrong:
without MFD_TIMBERDALE, TIMB_DMA is useless as the driver has no
device to bind to.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code to handle any length SG lists calls edma_resume()
even before edma_start() is called. This is incorrect
because edma_resume() enables edma events on the channel
after which the CPU (in edma_start) cannot clear posted
events by writing to ECR (per the EDMA user's guide).
Because of this, EDMA transfers fail to start if for
some reason there is a pending EDMA event registered
even before EDMA transfers are started. This can happen if
an EDMA event is a byproduct of device initialization.
Fix this by calling edma_resume() only if it is not the
first batch of MAX_NR_SG elements.
Without this patch, MMC/SD fails to function on the DA850 EVM
with DMA. The behaviour is triggered by a specific IP, which
can explain why the issue was not reported before
(for example with MMC/SD on AM335x).
Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.
Cc: stable@vger.kernel.org # v3.12.x+
Cc: Joel Fernandes <joelf@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Tested-by: Jon Ringle <jringle@gridpoint.com>
Tested-by: Alexander Holler <holler@ahsoftware.de>
Reported-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that mv_xor_slot_cleanup() has no remaining callers, we remove it
and rename __mv_xor_slot_cleanup() to mv_xor_slot_cleanup().
We take this opportunity to add a comment that makes it clear that the
channel spinlock should be held before calling mv_xor_slot_cleanup().
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In order to simplify the code, remove all the calls to the locked
mv_xor_slot_cleanup() and instead use the unlocked version only.
It's less error-prone to have just one function and to require the caller
to ensure proper locking.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In mv_xor_status(), we are currently calling mv_xor_clean_completed_slots()
when the transaction is complete (the cookie status is DMA_COMPLETE).
However, a completed status means that mv_xor_slot_cleanup() was called,
which cleans the completed slots.
In other words, there's nothing to cleanup for a completed transaction in
mv_xor_status(). Remove the unneeded call to mv_xor_clean_completed_slots().
Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
As result of deprecation of MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block() all drivers
using these two interfaces need to be updated to use the
new pci_enable_msi_range() or pci_enable_msi_exact()
and pci_enable_msix_range() or pci_enable_msix_exact()
interfaces.
Function pci_enable_msix() returns a tri-state value while
pci_enable_msi_exact() is a canonical zero/-errno variant.
The former is being phased out in favor of the latter.
In case of 'ioat' there (should be) no difference.
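A sketch of a new-style call (entry setup omitted; not the exact code used in
ioat):

    /* returns the number of vectors allocated within [minvec, maxvec], or -errno */
    err = pci_enable_msix_range(pdev, msix_entries, 1, msixcnt);
    if (err < 0)
        goto msi;   /* fall back to MSI / legacy interrupts */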
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Include the appropriate header file dma_v2.h in ioat/dca.c, because the
functions ioat2_dca_init() and ioat3_dca_init() have their function
declarations in dma_v2.h.
This eliminates the following warning in ioat/dca.c:
drivers/dma/ioat/dca.c:410:22: warning: no previous prototype for ‘ioat2_dca_init’ [-Wmissing-prototypes]
drivers/dma/ioat/dca.c:624:22: warning: no previous prototype for ‘ioat3_dca_init’ [-Wmissing-prototypes]
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mark the functions ioat3_prep_xor_val(), ioat3_prep_pq_val() and
ioat3_prep_pqxor_val() as static in dma_v3.c because they are not used
outside this file.
This eliminates the following warnings in dma_v3.c:
drivers/dma/ioat/dma_v3.c:741:1: warning: no previous prototype for ‘ioat3_prep_xor_val’ [-Wmissing-prototypes]
drivers/dma/ioat/dma_v3.c:1092:1: warning: no previous prototype for ‘ioat3_prep_pq_val’ [-Wmissing-prototypes]
drivers/dma/ioat/dma_v3.c:1134:1: warning: no previous prototype for ‘ioat3_prep_pqxor_val’ [-Wmissing-prototypes]
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This commit adds proper error checking for various DMA API calls,
as reported by DMA_API_DEBUG=y.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the standard PCI macro dev_is_pci() instead of directly comparing against
pci_bus_type to check whether a device is a PCI device.
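For reference, the replacement pattern:

    /* was: if (dev->bus == &pci_bus_type) */
    if (dev_is_pci(dev))
        pdev = to_pci_dev(dev);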
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Pull slave-dmaengine updates from Vinod Koul:
- New driver for Qcom bam dma
- New driver for RCAR peri-peri
- New driver for FSL eDMA
- Various odd fixes and updates thru the subsystem
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
dmaengine: add Qualcomm BAM dma driver
shdma: add R-Car Audio DMAC peri peri driver
dmaengine: sirf: enable generic dt binding for dma channels
dma: omap-dma: Implement device_slave_caps callback
dmaengine: qcom_bam_dma: Add device tree binding
dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
dma: dw: allocate memory in two stages in probe
Add new line to test result strings produced in verbose mode
dmaengine: pch_dma: use tasklet_kill in teardown
dmaengine: at_hdmac: use tasklet_kill in teardown
dma: cppi41: start tear down only if channel is busy
usb: musb: musb_cppi41: Dont reprogram DMA if tear down is initiated
dmaengine: s3c24xx-dma: make phy->irq signed for error handling
dma: imx-dma: Add missing module owner field
dma: imx-dma: Replace printk with dev_*
dma: fsl-edma: fix static checker warning of NULL dereference
dma: Remove comment about embedding dma_slave_config into custom structs
dma: mmp_tdma: move to generic device tree binding
dma: mmp_pdma: add IRQF_SHARED when request irq
dma: edma: Fix memory leak in edma_prep_dma_cyclic()
...
Add the DMA engine driver for the QCOM Bus Access Manager (BAM) DMA controller
found in the MSM 8x74 platforms.
Each BAM DMA device is associated with a specific on-chip peripheral. Each
channel provides a uni-directional data transfer engine that is capable of
transferring data between the peripheral and system memory (System mode), or
between two peripherals (BAM2BAM).
The initial release of this driver only supports slave transfers between
peripherals and system memory.
Signed-off-by: Andy Gross <agross@codeaurora.org>
Tested-by: Stanimir Varbanov <svarbanov@mm-sol.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We can move the handling of the DMA synchronisation control out of the
prepare functions; this can be pre-calculated when the DMA channel has
been allocated, so we don't need to duplicate this in both prepare
functions.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the interrupt handling for OMAP2+ into omap-dma, rather than using
the legacy support in the platform code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Export the DMA register information from the SoC specific data, such
that we can access the registers directly in omap-dma.c, mapping the
register region ourselves as well.
Rather than calculating the DMA channel register in its entirety for
each access, we pre-calculate an offset base address for the allocated
DMA channel and then just use the appropriate register offset.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide a function to read the CSAC/CDAC register, working around the
OMAP 3.2/3.3 erratum (which requires two reads of the register if the
first returned zero).
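A sketch of the workaround (accessor name assumed):

    /* OMAP 3.2/3.3 erratum: the first CSAC/CDAC read may return 0 */
    u32 val = omap_dma_chan_read(c, reg);
    if (!val)
        val = omap_dma_chan_read(c, reg);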
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide a pair of channel register accessors, and a pair of global
accessors for non-channel specific registers.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't need to read-modify-write the CCR register; we already know
what value it should contain at this point. Use the cached CCR value
when setting the enable bit.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't need to issue a barrier for every segment of a DMA transfer;
doing this just once per descriptor will do.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the clnk_ctrl setup to the preparation functions, saving its
value in the omap_desc. This only needs to be set once per descriptor,
not for each segment, so set it in omap_dma_start_desc() rather than
omap_dma_start().
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The only thing which changes is which registers are written, so put this
in local variables instead. This results in smaller code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate clearing of the channel status register, rather than open
coding the same functionality in two places.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since we record the CCR register in the dma transaction, we can move the
processing of the iframe buffering errata out of the omap_dma_start().
Move it to the preparation functions.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide our own set of more complete register definitions; this allows
us to get rid of the meaningless 1 << n constants scattered throughout
this code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate the setup of the channel control register. Prepare the
basic value in the preparation of the DMA descriptor, and write it into
the register upon descriptor execution.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate the setup of the channel source destination parameters
register. This way, we calculate the required CSDP value when we setup
a transfer descriptor, and only write it to the device registers once
when we start the descriptor.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Read the current DMA position from the hardware directly rather than via
arch/arm/plat-omap/dma.c.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Program the non-cyclic mode DMA start/stop directly, rather than via
arch/arm/plat-omap/dma.c.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no need to keep writing registers which don't change value in
omap_dma_start_sg(). Move this into omap_dma_start_desc() and merge
the register updates together.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Program the transfer parameters directly into the hardware, rather
than using the functions in arch/arm/plat-omap/dma.c.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide and use a hook to obtain the underlying DMA platform operations
so that omap-dma.c can access the hardware more directly without
involving the legacy DMA driver.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use devm_kzalloc() to allocate the omap_dmadev structure so that we don't need
complex error cleanup paths.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move to support of_dma_request_slave_channel() and dma_request_slave_channel().
We add an xlate() to let dma clients find the right dma_chan via the generic
"dmas" properties in dts.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With the callback implemented omap-dma can provide information to client
drivers regarding supported address widths, directions, residue
granularity, etc.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is to disable/enable the DW_DMAC hw during late suspend/early resume.
Since the DMA is providing a service to other clients (e.g. SPI, HSUART),
we need to ensure the DMA suspends after the clients and resumes
before the clients are active.
Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This makes the probe() function a little bit clearer.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Prevents test result strings from being output on the same line. The issue
happens with verbose and multi-iteration modes enabled.
Signed-off-by: Jerome Blin <jerome.blin@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As discussed in [1], tasklet_disable is not a proper function for teardown.
We need to ensure the irq is disabled, then ensure that no more tasklets are
scheduled, and only then is it safe to use tasklet_kill().
Here in the pch dma driver we need to use free_irq() before tasklet_kill(). So move
up the free_irq(), which will ensure that the irq is disabled and also wait till
all scheduled interrupts are executed by invoking synchronize_irq().
[1]: http://lwn.net/Articles/588457/
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As discussed in [1], tasklet_disable is not a proper function for teardown.
We need to ensure the irq is disabled, then ensure that no more tasklets are
scheduled, and only then is it safe to use tasklet_kill().
Here in the at_hdmac driver we use free_irq() before tasklet_kill(). The free_irq()
will ensure that the irq is disabled and also wait till all scheduled interrupts
are executed by invoking synchronize_irq(). So we only need to do tasklet_kill()
after invoking free_irq().
[1]: http://lwn.net/Articles/588457/
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Start the channel tear down only if the channel is busy, else just
bail out. In some cases it's seen that by the time the tear down is
initiated the cppi has already completed the DMA, especially in ISOCH transfers.
Signed-off-by: George Cherian <george.cherian@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is a bug in s3c24xx_dma_probe() where we do:
phy->irq = platform_get_irq(pdev, i);
if (phy->irq < 0) {
The problem is that "phy->irq" is unsigned so the error handling doesn't
work. I have changed it to signed.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the dev_* message logging API instead of raw printk.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The static checker reports following warning:
drivers/dma/fsl-edma.c:732 fsl_edma_xlate()
error: we previously assumed 'chan' could be null (see line 737)
Changing the loop cursor inside the iteration may result in a
NULL dereference when dma_get_slave_channel() fails but the loop
continues. So use list_for_each_entry_safe() instead of
list_for_each_entry() to guard against this.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jingchang Lu <b35083@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
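A sketch of the pattern with hypothetical foo_* names (the channel-matching detail is illustrative only): the second cursor keeps the iteration valid even though chan is overwritten by the dma_get_slave_channel() return value inside the loop:
#include <linux/dmaengine.h>
#include <linux/of_dma.h>

/* hypothetical: does this channel match the DT specifier? */
static bool foo_chan_matches(struct dma_chan *chan,
			     struct of_phandle_args *dma_spec)
{
	return chan->chan_id == dma_spec->args[0];
}

static struct dma_chan *foo_of_xlate(struct of_phandle_args *dma_spec,
				     struct of_dma *ofdma)
{
	struct dma_device *ddev = ofdma->of_dma_data;
	struct dma_chan *chan, *_chan;

	list_for_each_entry_safe(chan, _chan, &ddev->channels, device_node) {
		if (!foo_chan_matches(chan, dma_spec))
			continue;

		/* may return NULL; the _safe cursor keeps the walk intact */
		chan = dma_get_slave_channel(chan);
		if (chan)
			return chan;
	}

	return NULL;
}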
This patch makes the mmp_tdma controller able to provide DMA
resources in DT environments by adding a dma xlate function that
hooks into the generic DMA device tree helpers. DMA clients then only
need to call dma_request_slave_channel() to request a DMA channel
from dmaengine.
Signed-off-by: Nenghua Cao <nhcao@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
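From the client side, requesting a channel through the generic helpers then looks roughly like the sketch below; the "rx" name, DT snippet and error handling are illustrative assumptions:
#include <linux/dmaengine.h>
#include <linux/platform_device.h>

/*
 * Matches a client DT node such as:
 *   dmas = <&tdma 1>;
 *   dma-names = "rx";
 */
static int foo_client_request_dma(struct platform_device *pdev,
				  struct dma_chan **out)
{
	struct dma_chan *chan;

	chan = dma_request_slave_channel(&pdev->dev, "rx");
	if (!chan)
		return -ENODEV;	/* provider missing or channel unavailable */

	*out = chan;
	return 0;
}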
Some SoCs that use mmp_pdma have several dma controllers
sharing the same irq.
So add IRQF_SHARED to the flags when requesting the irq, so that
multiple controllers can share the same irq line.
Signed-off-by: Chao Xie <chao.xie@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
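A minimal sketch of a shared interrupt registration and the handler discipline that goes with it (the register offset and foo_* names are hypothetical):
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/platform_device.h>

struct foo_dmadev {
	void __iomem	*base;
	int		irq;
};

static irqreturn_t foo_dma_irq(int irq, void *dev_id)
{
	struct foo_dmadev *fdev = dev_id;
	u32 status = readl(fdev->base + 0x30);	/* hypothetical status reg */

	if (!status)
		return IRQ_NONE;	/* not ours: let other sharers handle it */

	writel(status, fdev->base + 0x30);	/* ack and handle */
	return IRQ_HANDLED;
}

static int foo_dma_request_irq(struct platform_device *pdev,
			       struct foo_dmadev *fdev)
{
	/* IRQF_SHARED lets several controllers register on the same line */
	return devm_request_irq(&pdev->dev, fdev->irq, foo_dma_irq,
				IRQF_SHARED, dev_name(&pdev->dev), fdev);
}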
Fix a memory leak in the edma_prep_dma_cyclic() error handling path.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The structure isn't used outside of its compilation unit. Make it
static.
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Several functions and variables are used on SH_CPU4 or ARM only. Guard
their declarations with conditional compilation directives to avoid
warnings.
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the %zu and %pad printk specifiers to print size_t and dma_addr_t
variables, and cast pointers to uintptr_t instead of unsigned int where
applicable. This fixes warnings on platforms where pointers and/or
dma_addr_t have a different size than int.
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
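For illustration, the kind of conversion involved, sketched with hypothetical messages rather than the driver's actual ones; note that %pad takes a pointer to the dma_addr_t:
#include <linux/device.h>
#include <linux/dma-mapping.h>

static void foo_log_transfer(struct device *dev, void *cpu_addr,
			     dma_addr_t dma_addr, size_t len)
{
	/* before: breaks on 64-bit dma_addr_t / pointers wider than int
	 * dev_dbg(dev, "buf %x dma %x len %u\n",
	 *	   (unsigned int)cpu_addr, (unsigned int)dma_addr,
	 *	   (unsigned int)len);
	 */

	/* after: %p for pointers, %pad for dma_addr_t (by reference), %zu for size_t */
	dev_dbg(dev, "buf %p dma %pad len %zu\n", cpu_addr, &dma_addr, len);
}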
Pull slave-dma fixes from Vinod Koul:
"This request brings you two small fixes. First one for fixing
dereference of freed descriptor and second for fixing sdma bindings
for it to work for imx25.
I was planning to send this about 10days ago but then I had to proceed
on my paternity leave and didnt get chance to send this. Now got a
bit of time from dady duties :)"
* 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
dma: sdma: Add imx25 compatible
dma: ste_dma40: don't dereference free:d descriptor
Since commit 7787380336 "net_dma: mark broken" we no longer pin dma
engines active for the network-receive-offload use case. As a result
the ->free_chan_resources() that occurs after the driver self test no
longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A
late firing irq can lead to ksoftirqd spinning indefinitely due to the
tasklet_disable() performed by ->free_chan_resources(). Only
->alloc_chan_resources() can clear this condition in affected kernels.
This problem has been present since commit 3e037454bc "I/OAT: Add
support for MSI and MSI-X" in 2.6.24, but is now exposed. Given the
NET_DMA use case is deprecated we can revisit moving the driver to use
threaded irqs. For now, just tear down the irq and tasklet properly by:
1/ Disable the irq from triggering the tasklet
2/ Disable the irq from re-arming
3/ Flush inflight interrupts
4/ Flush the timer
5/ Flush inflight tasklets
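In driver-agnostic form, the sequence above looks roughly like the sketch below; foo_mask_chan_interrupts() is a hypothetical stand-in for whatever device-specific register writes stop the hardware from raising and re-arming the interrupt:
#include <linux/interrupt.h>
#include <linux/timer.h>

struct foo_chan {
	int			irq;
	struct timer_list	timer;		/* watchdog/cleanup timer */
	struct tasklet_struct	cleanup_task;
};

static void foo_mask_chan_interrupts(struct foo_chan *c);	/* hw-specific */

static void foo_quiesce_channel(struct foo_chan *c)
{
	foo_mask_chan_interrupts(c);	/* 1/ + 2/ stop the irq source and its re-arming */
	synchronize_irq(c->irq);	/* 3/ flush handlers already in flight */
	del_timer_sync(&c->timer);	/* 4/ flush the timer */
	tasklet_kill(&c->cleanup_task);	/* 5/ flush and forbid further tasklets */
}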
References:
https://lkml.org/lkml/2014/1/27/282
https://lkml.org/lkml/2014/2/19/672
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: <stable@vger.kernel.org>
Reported-by: Mike Galbraith <bitbucket@online.de>
Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
Tested-by: Mike Galbraith <bitbucket@online.de>
Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Previously, imx25 did not work without a firmware.
This patch adds a DT compatible to pass the correct data with the
default script addresses for imx25.
Also add the imx25 compatible to the list of compatibles in the binding
documentation.
Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for the Freescale enhanced direct memory access (eDMA) controller.
This module can be found on Vybrid and LS-1 SoCs.
Signed-off-by: Alison Wang <b18965@freescale.com>
Signed-off-by: Jingchang Lu <b35083@freescale.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are a couple of leftovers in the comment blocks. This patch updates the
comments accordingly.
There is no functional change.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It appears that in the DMA40 driver the DMA tasklet will very
often dereference memory for a descriptor just freed from the
DMA40 slab. Nothing happens because no other part of the driver
has yet had a chance to claim this memory, but it's really
nasty to dereference freed memory, so let's check the flag
before the descriptor is freed and store it in a bool variable.
Cc: stable@vger.kernel.org
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
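The pattern, in sketch form with hypothetical foo_* names: copy whatever is still needed out of the descriptor into locals before it is returned to the pool, and only use the locals afterwards:
#include <linux/dmaengine.h>

/* hypothetical descriptor wrapper and free routine */
struct foo_desc {
	struct dma_async_tx_descriptor	txd;
};

static void foo_free_desc(struct foo_desc *d);

static void foo_complete_desc(struct foo_desc *d)
{
	/* snapshot everything we need while the memory is still valid */
	bool callback_active = !!(d->txd.flags & DMA_PREP_INTERRUPT);
	dma_async_tx_callback callback = d->txd.callback;
	void *param = d->txd.callback_param;

	foo_free_desc(d);	/* d must not be dereferenced after this */

	if (callback_active && callback)
		callback(param);
}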
Enabling some of the mvebu platforms in the multiplatform config for ARM
enabled these drivers, which also triggered a bunch of warnings when LPAE
is enabled (thus making phys_addr_t 64-bit).
Most of the changes switch printk formats, but some also convert what
used to be array-based pointer arithmetic into arithmetic done directly
on the address types.
The warnings were:
drivers/dma/mv_xor.c: In function 'mv_xor_tx_submit':
drivers/dma/mv_xor.c:500:3: warning: format '%x' expects argument of type
'unsigned int', but argument 4 has type 'dma_addr_t' [-Wformat]
drivers/dma/mv_xor.c: In function 'mv_xor_alloc_chan_resources':
drivers/dma/mv_xor.c:553:13: warning: cast to pointer from integer of
different size [-Wint-to-pointer-cast]
drivers/dma/mv_xor.c:555:4: warning: cast from pointer to integer of
different size [-Wpointer-to-int-cast]
drivers/dma/mv_xor.c: In function 'mv_xor_prep_dma_memcpy':
drivers/dma/mv_xor.c:584:2: warning: format '%x' expects argument of type
'unsigned int', but argument 5 has type 'dma_addr_t' [-Wformat]
drivers/dma/mv_xor.c:584:2: warning: format '%x' expects argument of type
'unsigned int', but argument 6 has type 'dma_addr_t' [-Wformat]
drivers/dma/mv_xor.c: In function 'mv_xor_prep_dma_xor':
drivers/dma/mv_xor.c:628:2: warning: format '%u' expects argument of type
'unsigned int', but argument 7 has type 'dma_addr_t' [-Wformat]
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Olof Johansson <olof@lixom.net>