Add the wq lock in the cdev open and release calls. This fixes
race conditions observed in the open and close routines.
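For illustration, a minimal sketch of the locking pattern described above
(all names here are hypothetical, not the actual idxd code):

#include <linux/mutex.h>
#include <linux/errno.h>

struct example_wq {
        struct mutex wq_lock;           /* serializes open vs. release */
        int client_count;
};

/* Hypothetical open path: take the wq lock around the state check and
 * update so that a concurrent release cannot race with it. */
static int example_cdev_open(struct example_wq *wq)
{
        int rc = 0;

        mutex_lock(&wq->wq_lock);
        if (wq->client_count > 0)       /* e.g. a dedicated wq already opened */
                rc = -EBUSY;
        else
                wq->client_count++;
        mutex_unlock(&wq->wq_lock);

        return rc;
}

static void example_cdev_release(struct example_wq *wq)
{
        mutex_lock(&wq->wq_lock);
        wq->client_count--;
        mutex_unlock(&wq->wq_lock);
}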
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/159285824892.64944.2905413694915141834.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
event_id0 is defined as 'unsigned int', so it is always greater than or
equal to zero.
Remove the unneeded comparisons to fix the following W=1 build
warning:
drivers/dma/imx-sdma.c: In function 'sdma_free_chan_resources':
drivers/dma/imx-sdma.c:1334:23: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
1334 | if (sdmac->event_id0 >= 0)
| ^~
drivers/dma/imx-sdma.c: In function 'sdma_config':
drivers/dma/imx-sdma.c:1635:23: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
1635 | if (sdmac->event_id0 >= 0) {
|
Fixes: 25962e1a7f ("dmaengine: imx-sdma: Fix the event id check to include RX event for UART6")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
Link: https://lore.kernel.org/r/20200621155730.28766-1-festevam@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The cookie tracking in dmaengine expects all submissions to be completed in
order. Some DMA devices, like Intel DSA, can complete submissions out of
order, especially if configured with a work queue sharing multiple DMA
engines. Add a DMA_OUT_OF_ORDER status that tx_status can return for
those DMA devices. The user should use callbacks to track completion
rather than the DMA cookie. This addresses the issue of dmatest
complaining that descriptors are "busy" when the cookie count goes
backwards due to out-of-order completion. Add a DMA_COMPLETION_NO_ORDER
DMA capability to allow the driver to flag the device's ability to complete
operations out of order.
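As a rough sketch (not the actual idxd code), a driver that completes
descriptors out of order might advertise the capability and report status
as follows, using the DMA_COMPLETION_NO_ORDER and DMA_OUT_OF_ORDER names
introduced here:

#include <linux/dmaengine.h>

static enum dma_status example_tx_status(struct dma_chan *chan,
                                         dma_cookie_t cookie,
                                         struct dma_tx_state *txstate)
{
        /* Cookie ordering is meaningless for this device; callers must
         * rely on descriptor callbacks instead. */
        return DMA_OUT_OF_ORDER;
}

static int example_register(struct dma_device *dma)
{
        /* Flag that completions may arrive out of order. */
        dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
        dma->device_tx_status = example_tx_status;

        return dma_async_device_register(dma);
}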
Reported-by: Swathi Kovvuri <swathi.kovvuri@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Tested-by: Swathi Kovvuri <swathi.kovvuri@intel.com>
Link: https://lore.kernel.org/r/158939557151.20335.12404113976045569870.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On a MMP2, the DMA interrupt is shared by all channels of the peripheral
DMA controller and the audio DMA controller. Both drivers can identify
their interrupts, but only the PDMA driver marks the line shared:
[ 1.185782] mmp-pdma d4000000.dma: initialized 16 channels
[ 1.186808] mmp-tdma d42a0800.adma: IRQ index 1 not found
[ 1.194317] genirq: Flags mismatch irq 64. 00000000 (tdma) vs. 00000080 (pdma)
[ 1.197894] mmp-tdma: probe of d42a0800.adma failed with error -16
Let's turn on IRQF_SHARED in the ADMA driver as well.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200601192252.172773-1-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When there's a single interrupt for all the DMA channels, the
unsuccessful attempt to request separate IRQs emits useless warnings:
[ 1.370381] mmp-pdma d4000000.dma: IRQ index 1 not found
...
[ 1.412398] mmp-pdma d4000000.dma: IRQ index 15 not found
[ 1.418308] mmp-pdma d4000000.dma: initialized 16 channels
Avoid that, treating the IRQs as optional.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200601192337.172869-1-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If a PCI-enumerated controller has a companion device,
register it in the ACPI DMA controllers as well.
Fixes: f7c799e950 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
Depends-on: b685fe26e9 ("dmaengine: dw: platform: Split ACPI helpers to separate module")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200526182416.52805-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the unlikely case when the channel is running (RT enabled) during
alloc_chan_resources, we should use udma_reset_chan() and not
udma_stop(), as the latter tries to initiate a teardown on the channel,
which is not valid at this point.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200527070612.636-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some of the earlier errors should be sent to the error cleanup path to
make sure that the uchan struct is reset, the dma_pool (if allocated) is
released and memcpy channel pairs are released in a correct way.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200527070612.636-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The "ti,udma-atype" property is expected in the UDMA node and not in the
parent navss node.
Fixes: 0ebcf1a274 ("dmaengine: ti: k3-udma: Implement support for atype (for virtualization)")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200527065357.30791-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a regular need in the kernel to provide a way to declare having a
dynamically sized set of trailing elements in a structure. Kernel code should
always use “flexible array members”[1] for these cases. The older style of
one-element or zero-length arrays should no longer be used[2].
[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://github.com/KSPP/linux/issues/21
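A minimal before/after sketch of the conversion these patches apply
(illustrative types only):

struct item {
        int value;
};

/* Before: the old GNU zero-length-array idiom. */
struct old_foo {
        int count;
        struct item entries[0];         /* zero-length array (deprecated) */
};

/* After: a C99 flexible array member; it must be the last member, and
 * applying sizeof to the member is a compile-time error instead of
 * quietly evaluating to zero. */
struct new_foo {
        int count;
        struct item entries[];          /* flexible array member */
};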
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
There is a regular need in the kernel to provide a way to declare having a
dynamically sized set of trailing elements in a structure. Kernel code should
always use “flexible array members”[1] for these cases. The older style of
one-element or zero-length arrays should no longer be used[2].
[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://github.com/KSPP/linux/issues/21
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Since commit 84af7a6194 ("checkpatch: kconfig: prefer 'help' over
'---help---'"), the number of '---help---' has been gradually
decreasing, but there are still more than 2400 instances.
This commit finishes the conversion. While I touched the lines,
I also fixed the indentation.
There are a variety of indentation styles found.
a) 4 spaces + '---help---'
b) 7 spaces + '---help---'
c) 8 spaces + '---help---'
d) 1 space + 1 tab + '---help---'
e) 1 tab + '---help---' (correct indentation)
f) 1 tab + 1 space + '---help---'
g) 1 tab + 2 spaces + '---help---'
In order to convert all of them to 1 tab + 'help', I ran the
following command:
$ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Bunch of updates to drivers like dmatest, dw-edma, ioat,
mmp-tdma and k3-udma along with Renesas binding update to json-schema
Merge tag 'dmaengine-5.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"A fairly small dmaengine update which includes mostly driver updates
(dmatest, dw-edma, ioat, mmp-tdma and k3-udma) along with Renesas
binding update to json-schema"
* tag 'dmaengine-5.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (39 commits)
dmaengine: imx-sdma: initialize all script addresses
dmaengine: ti: k3-udma: Use proper return code in alloc_chan_resources
dmaengine: ti: k3-udma: Remove udma_chan.in_ring_cnt
dmaengine: ti: k3-udma: Add missing dma_sync call for rx flush descriptor
dmaengine: at_xdmac: Replace zero-length array with flexible-array
dmaengine: at_hdmac: Replace zero-length array with flexible-array
dmaengine: qcom: bam_dma: Replace zero-length array with flexible-array
dmaengine: ti: k3-udma: Use PTR_ERR_OR_ZERO() to simplify code
dmaengine: moxart-dma: Drop pointless static qualifier in moxart_probe()
dmaengine: sf-pdma: Simplify the error handling path in 'sf_pdma_probe()'
dmaengine: qcom_hidma: use true,false for bool variable
dmaengine: dw-edma: support local dma device transfer semantics
dmaengine: Fix doc strings to satisfy validation script
dmaengine: Include dmaengine.h into dmaengine.c
dmaengine: dmatest: Describe members of struct dmatest_info
dmaengine: dmatest: Describe members of struct dmatest_params
dmaengine: dmatest: Allow negative timeout value to specify infinite wait
Revert "dmaengine: dmatest: timeout value of -1 should specify infinite wait"
dmaengine: stm32-dma: direct mode support through device tree
dt-bindings: dma: add direct mode support through device tree in stm32-dma
...
Commit b53611fb1c ("dmaengine: tegra210-adma: Fix crash during probe")
has moved some code in the probe function and reordered the error handling
path accordingly.
However, a goto has been missed.
Fix it and goto the right label if 'dma_async_device_register()' fails, so
that all resources are released.
Fixes: b53611fb1c ("dmaengine: tegra210-adma: Fix crash during probe")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200516214205.276266-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The script addresses array increases with each new version. The driver
initializes the array to -EINVAL initially, but only up to the size
of the v1 array. Initialize the additional addresses for the newer
versions as well. Without this, uninitialized values in the newer arrays
are treated as valid.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
Link: https://lore.kernel.org/r/20200513060405.18685-1-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In udma_alloc_chan_resources(), if the channel is not willing to stop then
the function should return with an error code.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200512134519.5642-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The TR mode rx flush descriptor did not have a dma_sync_single_for_device()
call to make sure that the DMA sees the correct information.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200512134544.5839-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
cppi5_tr_csf_set() clears previously set Configuration Specific Flags.
Setting the EOP flag clears the SUPR_EVT flag for the last TR which is not
desirable as we do not want to have events from the TR.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200512134531.5742-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200507190046.GA15298@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200507190038.GA15272@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Reviewed-by: Jeffrey Hugo <jhugo@codeaurora.org>
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20200508210707.GA24136@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fixes coccicheck warnings:
drivers/dma/ti/k3-udma.c:1294:1-3: WARNING: PTR_ERR_OR_ZERO can be used
drivers/dma/ti/k3-udma.c:1311:1-3: WARNING: PTR_ERR_OR_ZERO can be used
drivers/dma/ti/k3-udma.c:1376:1-3: WARNING: PTR_ERR_OR_ZERO can be used
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Samuel Zou <zou_wei@huawei.com>
Link: https://lore.kernel.org/r/1588757146-38858-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to have the 'void __iomem *dma_base_addr' variable
static, since a new value is always assigned before it is used.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20200505101353.195446-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to explicitly free memory that has been 'devm_kzalloc'ed.
Simplify the probe function accordingly.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Green Wan <green.wan@sifive.com>
Reviewed-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20200501100824.126534-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following coccicheck warning:
drivers/dma/qcom/hidma.c:553:1-17: WARNING: Assignment of 0/1 to bool
variable
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Acked-by: Sinan Kaya <okaya@kernel.org>
Link: https://lore.kernel.org/r/20200504113406.41530-1-yanaijie@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In case dmatest is built-in and no channel was configured, the test
doesn't run, reporting:
dmatest: Could not start test, no channels configured
even though the description of the "channel" parameter claims that the
default is any.
Add the default channel back, as it used to be, rather than rejecting a
test with no channel configuration.
Fixes: d53513d5dc ("dmaengine: dmatest: Add support for multi channel testing")
Reported-by: Dijil Mohan <Dijil.Mohan@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Link: https://lore.kernel.org/r/20200429071522.58148-1-vladimir.murzin@arm.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current implementation may miss completions after we unmask the
interrupt. In order to make sure we process all completions, we need to:
1. Do an MMIO read from the device as a barrier to ensure that all PCI
writes for completions have arrived.
2. Check for any additional completions that we missed.
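For illustration only, the pattern looks roughly like this (all names and
the register offset are hypothetical, not the actual idxd code):

#include <linux/io.h>

struct example_ctx {
        void __iomem *mmio_base;        /* device register window */
};

#define EXAMPLE_STATUS_REG      0x00    /* hypothetical register offset */

static void example_unmask_irq(struct example_ctx *ctx) { /* device specific */ }
static void example_process_completions(struct example_ctx *ctx) { /* device specific */ }

static void example_unmask_and_recheck(struct example_ctx *ctx)
{
        example_unmask_irq(ctx);

        /* MMIO read as a barrier: all PCI writes carrying completion
         * records issued before this read are visible afterwards. */
        (void)ioread32(ctx->mmio_base + EXAMPLE_STATUS_REG);

        /* Pick up completions that raced with the masked window. */
        example_process_completions(ctx);
}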
Fixes: 8f47d1a5e5 ("dmaengine: idxd: connect idxd to dmaengine subsystem")
Reported-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158834641769.35613.1341160109892008587.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When the kernel is built with lockdep support and the owl-dma driver is
used, the following message is shown:
[ 2.496939] INFO: trying to register non-static key.
[ 2.501889] the code is fine but needs lockdep annotation.
[ 2.507357] turning off the locking correctness validator.
[ 2.512834] CPU: 0 PID: 12 Comm: kworker/0:1 Not tainted 5.6.3+ #15
[ 2.519084] Hardware name: Generic DT based system
[ 2.523878] Workqueue: events_freezable mmc_rescan
[ 2.528681] [<801127f0>] (unwind_backtrace) from [<8010da58>] (show_stack+0x10/0x14)
[ 2.536420] [<8010da58>] (show_stack) from [<8080fbe8>] (dump_stack+0xb4/0xe0)
[ 2.543645] [<8080fbe8>] (dump_stack) from [<8017efa4>] (register_lock_class+0x6f0/0x718)
[ 2.551816] [<8017efa4>] (register_lock_class) from [<8017b7d0>] (__lock_acquire+0x78/0x25f0)
[ 2.560330] [<8017b7d0>] (__lock_acquire) from [<8017e5e4>] (lock_acquire+0xd8/0x1f4)
[ 2.568159] [<8017e5e4>] (lock_acquire) from [<80831fb0>] (_raw_spin_lock_irqsave+0x3c/0x50)
[ 2.576589] [<80831fb0>] (_raw_spin_lock_irqsave) from [<8051b5fc>] (owl_dma_issue_pending+0xbc/0x120)
[ 2.585884] [<8051b5fc>] (owl_dma_issue_pending) from [<80668cbc>] (owl_mmc_request+0x1b0/0x390)
[ 2.594655] [<80668cbc>] (owl_mmc_request) from [<80650ce0>] (mmc_start_request+0x94/0xbc)
[ 2.602906] [<80650ce0>] (mmc_start_request) from [<80650ec0>] (mmc_wait_for_req+0x64/0xd0)
[ 2.611245] [<80650ec0>] (mmc_wait_for_req) from [<8065aa10>] (mmc_app_send_scr+0x10c/0x144)
[ 2.619669] [<8065aa10>] (mmc_app_send_scr) from [<80659b3c>] (mmc_sd_setup_card+0x4c/0x318)
[ 2.628092] [<80659b3c>] (mmc_sd_setup_card) from [<80659f0c>] (mmc_sd_init_card+0x104/0x430)
[ 2.636601] [<80659f0c>] (mmc_sd_init_card) from [<8065a3e0>] (mmc_attach_sd+0xcc/0x16c)
[ 2.644678] [<8065a3e0>] (mmc_attach_sd) from [<8065301c>] (mmc_rescan+0x3ac/0x40c)
[ 2.652332] [<8065301c>] (mmc_rescan) from [<80143244>] (process_one_work+0x2d8/0x780)
[ 2.660239] [<80143244>] (process_one_work) from [<80143730>] (worker_thread+0x44/0x598)
[ 2.668323] [<80143730>] (worker_thread) from [<8014b5f8>] (kthread+0x148/0x150)
[ 2.675708] [<8014b5f8>] (kthread) from [<801010b4>] (ret_from_fork+0x14/0x20)
[ 2.682912] Exception stack(0xee8fdfb0 to 0xee8fdff8)
[ 2.687954] dfa0: 00000000 00000000 00000000 00000000
[ 2.696118] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 2.704277] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
The obvious fix would be to use 'spin_lock_init()' on 'pchan->lock'
before attempting to call 'spin_lock_irqsave()' in 'owl_dma_get_pchan()'.
However, according to Manivannan Sadhasivam, 'pchan->lock' was supposed
to only protect 'pchan->vchan' while 'od->lock' does a similar job in
'owl_dma_terminate_pchan()'.
Therefore, this patch substitutes 'pchan->lock' with 'od->lock' and
removes the 'lock' attribute in 'owl_dma_pchan' struct.
Fixes: 47e20577c2 ("dmaengine: Add Actions Semi Owl family S900 DMA driver")
Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Acked-by: Andreas Färber <afaerber@suse.de>
Link: https://lore.kernel.org/r/c6e6cdaca252b5364bd294093673951036488cf0.1588439073.git.cristian.ciocaltea@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Modify dw_edma_device_transfer() to also support the semantics of dma
device transfer for additional use cases involving pcitest utility as a
local initiator.
For its original use case, dw-edma supported the semantics of dma device
transfer from the perspective of a remote initiator who is located across
the PCIe bus from dma channel hardware.
To a remote initiator, DMA_DEV_TO_MEM means using a remote dma WRITE
channel to transfer from remote memory to local memory. A WRITE channel
would be employed on the remote device in order to move the contents of
remote memory to the bus destined for local memory.
To a remote initiator, DMA_MEM_TO_DEV means using a remote dma READ
channel to transfer from local memory to remote memory. A READ channel
would be employed on the remote device in order to move the contents of
local memory to the bus destined for remote memory.
From the perspective of a local dma initiator who is co-located on the
same side of the PCIe bus as the dma channel hardware, the semantics of
dma device transfer are flipped.
To a local initiator, DMA_DEV_TO_MEM means using a local dma READ channel
to transfer from remote memory to local memory. A READ channel would be
employed on the local device in order to move the contents of remote
memory to the bus destined for local memory.
To a local initiator, DMA_MEM_TO_DEV means using a local dma WRITE channel
to transfer from local memory to remote memory. A WRITE channel would be
employed on the local device in order to move the contents of local memory
to the bus destined for remote memory.
To support local dma initiators, dw_edma_device_transfer() is modified to
now examine the direction field of struct dma_slave_config for the channel
which initiators can configure by calling dmaengine_slave_config().
If direction is configured as either DMA_DEV_TO_MEM or DMA_MEM_TO_DEV,
local initiator semantics are used. If direction is a value other than
DMA_DEV_TO_MEM or DMA_MEM_TO_DEV, then remote initiator semantics are
used. This should maintain backward compatibility with the original use
case of dw-edma.
The dw-edma-test utility is an example of a remote initiator. From reading
its patch, dw-edma-test does not specifically set the direction field of
struct dma_slave_config. Since dw_edma_device_transfer() also does not
check the direction field of struct dma_slave_config, it seems safe to use
this convention in dw-edma to support both local and remote initiator
semantics.
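As a sketch of the local-initiator convention described above (placeholder
addresses and flags, not the actual dw-edma or pcitest code):

#include <linux/dmaengine.h>
#include <linux/errno.h>

/* Pull 'len' bytes from a remote PCIe address into a local DMA buffer
 * using a local READ channel, i.e. DMA_DEV_TO_MEM semantics. */
static int example_local_read(struct dma_chan *chan, dma_addr_t local_buf,
                              phys_addr_t remote_addr, size_t len)
{
        struct dma_slave_config cfg = {
                .direction = DMA_DEV_TO_MEM,    /* local initiator semantics */
                .src_addr  = remote_addr,       /* remote memory to read from */
        };
        struct dma_async_tx_descriptor *desc;
        int ret;

        ret = dmaengine_slave_config(chan, &cfg);
        if (ret)
                return ret;

        desc = dmaengine_prep_slave_single(chan, local_buf, len,
                                           DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
        if (!desc)
                return -EINVAL;

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);
        return 0;
}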
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Link: https://lore.kernel.org/r/1588122633-1552-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The validation kernel doc script complains about undescribed
function parameters
.../dmaengine.c:155: warning: Function parameter or member 'dev' not described in 'dev_to_dma_chan'
.../dmaengine.c:251: warning: cannot understand function prototype: 'dma_cap_mask_t dma_cap_mask_all; '
.../dmaengine.c:257: warning: cannot understand function prototype: 'struct dma_chan_tbl_ent '
.../dmaengine.c:264: warning: cannot understand function prototype: 'struct dma_chan_tbl_ent __percpu *channel_table[DMA_TX_TYPE_END]; '
.../dmaengine.c:304: warning: Function parameter or member 'chan' not described in 'dma_chan_is_local'
.../dmaengine.c:304: warning: Function parameter or member 'cpu' not described in 'dma_chan_is_local'
.../dmaengine.c:414: warning: Function parameter or member 'chan' not described in 'balance_ref_count'
.../dmaengine.c:447: warning: Function parameter or member 'chan' not described in 'dma_chan_get'
.../dmaengine.c:494: warning: Function parameter or member 'chan' not described in 'dma_chan_put'
Add descriptions to the function parameters and in some cases update
existing text as well.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200429122151.50989-2-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If we do
% echo 1 > /sys/module/dmatest/parameters/run
[ 115.851124] dmatest: Could not start test, no channels configured
% echo dma8chan7 > /sys/module/dmatest/parameters/channel
[ 127.563872] dmatest: Added 1 threads using dma8chan7
% cat /sys/module/dmatest/parameters/wait
... !!! HANG !!! ...
The culprit is the commit 6138f967bc
("dmaengine: dmatest: Use fixed point div to calculate iops")
which makes threads not run immediately, but stay pending until kicked off
by writing to the 'run' node. However, it forgot to adjust the 'wait'
routine to handle the above mentioned case.
In order to fix this, check for really running threads, i.e. with pending
and done flags unset.
It's a pity the culprit commit didn't update the documentation or test all
possible scenarios.
Fixes: 6138f967bc ("dmaengine: dmatest: Use fixed point div to calculate iops")
Cc: Seraj Alijan <seraj.alijan@sondrel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200428113518.70620-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dmatest module parameter 'timeout' is documented as accepting -1 to mean
"infinite timeout". However, an infinite timeout is not advised, nor
possible, since the module parameter is an unsigned int, which won't accept
a negative value. Change the parameter type to a signed integer.
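The change roughly amounts to declaring the parameter signed (a sketch, not
a verbatim copy of dmatest):

#include <linux/module.h>

static int timeout = 3000;      /* signed, so -1 can be passed */
module_param(timeout, int, 0644);
MODULE_PARM_DESC(timeout,
                 "Transfer timeout in msec (default: 3000), -1 for infinite");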
Cc: Gary Hook <Gary.Hook@amd.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200424161147.16895-4-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This reverts commit ed04b7c57c.
While it gives a good description what happens, the approach seems too
confusing. Let's fix it in the following patch.
Cc: Gary Hook <Gary.Hook@amd.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200424161147.16895-3-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Under some circumstances, i.e. when the test is still running and about to
time out and the user runs, for example,
grep -H . /sys/module/dmatest/parameters/*
the iterations parameter is not respected and the test goes on and on until
the user issues
echo 0 > /sys/module/dmatest/parameters/run
This is not what is expected.
The history of this bug is interesting. I thought that the commit
2d88ce76eb ("dmatest: add a 'wait' parameter")
was the culprit, but looking closer at the code I think it simply revealed
logic that was broken from day one, i.e. in the commit
0a2ff57d6f ("dmaengine: dmatest: add a maximum number of test iterations")
which added the iterations parameter.
So, to the point: the check for the thread being stopped, being the first
part of the conjunction, prevents the iterations check from being evaluated.
Thus, we have to always check both conditions to be able to stop after the
given number of iterations.
Since it wasn't visible before the second commit appeared, I add the
respective Fixes tag.
Fixes: 2d88ce76eb ("dmatest: add a 'wait' parameter")
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Link: https://lore.kernel.org/r/20200424161147.16895-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Direct mode or FIFO mode is computed by the stm32-dma driver. Add a way for
the user to force direct mode, by setting bit 2 in the bitfield value
specifying DMA features in the device tree.
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200422102904.1448-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to call 'hidma_debug_uninit()' in the error handling
path. 'hidma_debug_init()' has not been called yet.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/20200427111043.70218-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA synchronization hook checks whether interrupt is raised by testing
corresponding bit in a hardware status register, and thus, clock should
be enabled in this case, otherwise CPU may hang if synchronization is
invoked while Runtime PM is in suspended state. This patch resumes the RPM
during the DMA synchronization process in order to avoid potential
problems. It is a minor clean up of a previous commit, no real problem is
fixed by this patch because currently RPM is always in a resumed state
while DMA is synchronized, although this may change in the future.
Fixes: 6697255f23 ("dmaengine: tegra-apb: Improve DMA synchronization")
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20200426190835.21950-1-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We only support DMA_DEV_TO_MEM and DMA_MEM_TO_DEV. Let's not do
undefined things with other values and reject them.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200424215020.105281-1-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Modify dw_edma_irq_request() to check if a struct msi_desc entry exists
before copying the contents of its struct msi_msg pointer.
Without this sanity check, __get_cached_msi_msg() crashes when invoked by
dw_edma_irq_request() running on a Linux-based PCIe endpoint device. MSI
interrupts are not received by PCIe endpoint devices. If irq_get_msi_desc()
returns null, then there is no cached struct msi_msg to be copied.
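The guard amounts to something like the following sketch (not the literal
dw-edma code):

#include <linux/irq.h>
#include <linux/msi.h>

static void example_read_msi(unsigned int irq, struct msi_msg *msg)
{
        struct msi_desc *desc = irq_get_msi_desc(irq);

        /* On a PCIe endpoint there may be no MSI descriptor at all, so
         * only read the cached message when one actually exists. */
        if (desc)
                __get_cached_msi_msg(desc, msg);
}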
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/1587607101-31914-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When the channel register code was changed to allow hotplug operations,
dynamic indexing wasn't taken into account. When channels are randomly
plugged and unplugged out of order, the serial indexing breaks. Convert
channel indexing to using IDA tracking in order to allow dynamic
assignment. The previous code does not cause any regression bug for
existing channel allocation besides idxd driver since the hotplug usage
case is only used by idxd at this point.
With this change, the chan->idr_ref is also not needed any longer. We can
have a device with no channels registered due to hot plug. The channel
device release code no longer should attempt to free the dma device id on
the last channel release.
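For reference, IDA-based ID management looks roughly like this (a generic
sketch, not the dmaengine core code itself):

#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_IDA(example_chan_ida);

/* IDs are handed out from the lowest free value and may be returned in
 * any order, so unplugging a middle channel does not break the others. */
static int example_alloc_chan_id(void)
{
        return ida_alloc(&example_chan_ida, GFP_KERNEL); /* >= 0 or -errno */
}

static void example_free_chan_id(int id)
{
        ida_free(&example_chan_ida, id);
}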
Fixes: e81274cd6b ("dmaengine: add support to dynamic register/unregister of channels")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Tested-by: Yixin Zhang <yixin.zhang@intel.com>
Link: https://lore.kernel.org/r/158679961260.7674.8485924270472851852.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Remove the unnecessary mod_timer from the timeout handler in case
ioat_cleanup_preamble() is true, for cleaner code.
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Leonid Ravich <Leonid.Ravich@emc.com>
Link: https://lore.kernel.org/r/1587589761-32690-2-git-send-email-leonid.ravich@dell.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A generic SRAM driver for Device Tree enabled platforms will do as
well.
The non-DT drivers that use mmp_tdma to transfer audio samples to and from
the audio SRAM should depend on MMP_SRAM themselves.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200419164912.670973-8-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This makes dma_get_slave_caps() work with the device so that it could
actually be used with soc-generic-dmaengine-pcm.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200419164912.670973-7-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a channel configuration fails, the status of the channel is set to
DEV_ERROR so that an attempt to submit it fails. However, this status
sticks until the heat death of the universe, making it impossible to
recover from the error.
Let's reset it when the channel is released so that further use of the
channel with correct configuration is not impacted.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200419164912.670973-5-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Drop a redundant "mmp_tdma:" from some error messages. The dev_err()
appends mostly the same thing for us:
[ 120.756530] mmp-tdma d42a0800.adma: mmp_tdma: unknown burst size.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200419164912.670973-3-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With an invalid dma_slave_config set previously,
mmp_tdma_prep_dma_cyclic() would detect an error whilst configuring the
channel, but proceed happily on:
[ 120.756530] mmp-tdma d42a0800.adma: mmp_tdma: unknown burst size.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Link: https://lore.kernel.org/r/20200419164912.670973-2-lkundrak@v3.sk
Signed-off-by: Vinod Koul <vkoul@kernel.org>
pd->dma.dev is read in irq handler pd_irq().
However, it is set to pdev->dev after request_irq().
Therefore, set pd->dma.dev to pdev->dev before request_irq() to
avoid data race between pch_dma_probe() and pd_irq().
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
Link: https://lore.kernel.org/r/20200416062335.29223-1-madhuparnabhowmik10@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Requiring a kmalloc of 2M has a high chance of failing in
fragmented memory.
The IOAT ring requires 64k * 64B of memory,
which will be achieved by 8 allocations of 512k
instead of 2 allocations of 2M.
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Leonid Ravich <Leonid.Ravich@emc.com>
Link: https://lore.kernel.org/r/20200416170628.16196-2-leonid.ravich@dell.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Change the macros which assume a chunk size of 2M, which can now be
another size, to prepare for changing the allocation chunk size.
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Leonid Ravich <Leonid.Ravich@emc.com>
Link: https://lore.kernel.org/r/20200416170628.16196-1-leonid.ravich@dell.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Decouple dw-edma-core.c from struct pci_dev as a step toward integration
of dw-edma with pci-epf-test so the latter can initiate dma operations
locally from the endpoint side. A barrier to such integration is the
dependency of dw_edma_probe() and other functions in dw-edma-core.c on
struct pci_dev.
The Synopsys DesignWare dw-edma driver was designed to run on host side
of PCIe link to initiate DMA operations remotely using eDMA channels of
PCIe controller on the endpoint side. This can be inferred from seeing
that dw-edma uses struct pci_dev and accesses hardware registers of dma
channels across the bus using BAR0 and BAR2.
The ops field of struct dw_edma in dw-edma-core.h is currently undefined:
const struct dw_edma_core_ops *ops;
However, the kernel builds without failure even when the dw-edma driver is
enabled. Instead of removing the currently undefined and unused ops field,
define struct dw_edma_core_ops and use the ops field to decouple
dw-edma-core.c from struct pci_dev.
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/1586971629-30196-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMA transfer might finish just after checking the state with
dma_cookie_status, but before the lock is acquired. Not checking
for an empty list in xilinx_dma_tx_status may result in reading
random data or data corruption when desc is written to. This can
be reliably triggered by using dma_sync_wait to wait for DMA
completion.
Signed-off-by: Sebastian von Ohr <vonohr@smaract.com>
Tested-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20200303130518.333-1-vonohr@smaract.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Trace of a test for DMA memcpy domains slipped into the glue layer commit.
The memcpy support should be disabled on the MCU UDMAP.
Fixes: d702419134 ("dmaengine: ti: k3-udma: Add glue layer for non DMAengine users")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200327144228.11101-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It is not possible to compile test the UDMA stack right now due to
dependencies on TI_SCI_PROTOCOL and TI_SCI_INTA_IRQCHIP and their
dependencies.
Remove the COMPILE_TEST until it is actually possible to compile test the
drivers.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20200403141950.9359-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If PCI_MSI is not set, building fails:
drivers/dma/hisi_dma.c: In function ‘hisi_dma_free_irq_vectors’:
drivers/dma/hisi_dma.c:138:2: error: implicit declaration of function ‘pci_free_irq_vectors’;
did you mean ‘pci_alloc_irq_vectors’? [-Werror=implicit-function-declaration]
pci_free_irq_vectors(data);
^~~~~~~~~~~~~~~~~~~~
Make HISI_DMA depends on PCI_MSI to fix this.
Fixes: e9f08b6525 ("dmaengine: hisilicon: Add Kunpeng DMA engine support")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20200328114133.17560-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
- Core:
- Some code cleanup and optimization in core by Andy
- Debugfs support for displaying dmaengine channels by Peter
- Drivers:
- New driver for uniphier-xdmac controller
- Updates to stm32 dma, mdma and dmamux drivers and PM support
- More updates to idxd drivers
- Bunch of changes in tegra-apb driver and cleaning up of pm functions
- Bunch of spelling fixes and Replace zero-length array patches
- Shutdown hook for fsl-dpaa2-qdma driver
- Support for interleaved transfers for ti-edma and virtualization
support for k3-dma driver
- Support for reset and updates in xilinx_dma driver
- Improvements and locking updates in at_hdma driver
Merge tag 'dmaengine-5.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"Core:
- Some code cleanup and optimization in core by Andy
- Debugfs support for displaying dmaengine channels by Peter
Drivers:
- New driver for uniphier-xdmac controller
- Updates to stm32 dma, mdma and dmamux drivers and PM support
- More updates to idxd drivers
- Bunch of changes in tegra-apb driver and cleaning up of pm
functions
- Bunch of spelling fixes and Replace zero-length array patches
- Shutdown hook for fsl-dpaa2-qdma driver
- Support for interleaved transfers for ti-edma and virtualization
support for k3-dma driver
- Support for reset and updates in xilinx_dma driver
- Improvements and locking updates in at_hdma driver"
* tag 'dmaengine-5.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (89 commits)
dt-bindings: dma: renesas,usb-dmac: add r8a77961 support
dmaengine: uniphier-xdmac: Remove redandant error log for platform_get_irq
dmaengine: tegra-apb: Improve DMA synchronization
dmaengine: tegra-apb: Don't save/restore IRQ flags in interrupt handler
dmaengine: tegra-apb: mark PM functions as __maybe_unused
dmaengine: fix spelling mistake "exceds" -> "exceeds"
dmaengine: sprd: Set request pending flag when DMA controller is active
dmaengine: ppc4xx: Use scnprintf() for avoiding potential buffer overflow
dmaengine: idxd: remove global token limit check
dmaengine: idxd: reflect shadow copy of traffic class programming
dmaengine: idxd: Merge definition of dsa_batch_desc into dsa_hw_desc
dmaengine: Create debug directories for DMA devices
dmaengine: ti: k3-udma: Implement custom dbg_summary_show for debugfs
dmaengine: Add basic debugfs support
dmaengine: fsl-dpaa2-qdma: remove set but not used variable 'dpaa2_qdma'
dmaengine: ti: edma: fix null dereference because of a typo in pointer name
dmaengine: fsl-dpaa2-qdma: Adding shutdown hook
dmaengine: uniphier-xdmac: Add UniPhier external DMA controller driver
dt-bindings: dmaengine: Add UniPhier external DMA controller bindings
dmaengine: ti: k3-udma: Implement support for atype (for virtualization)
...
platform_get_irq() prints the error on failure, so there is no need to
have the caller add a log.
Remove the log in uniphier_xdmac_probe() for the platform_get_irq() failure.
Reported-by: kbuild test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20200323171928.424223-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Boot CPU0 always handles DMA interrupts, and under some rare circumstances
it could get stuck in an uninterruptible state for a significant time (like
in the case of KASAN + NFS root). In this case the sibling CPU, which waits
for DMA
transfer completion, will get a DMA transfer timeout. In order to handle
this rare condition, interrupt status needs to be polled until interrupt
is handled.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20200319212321.3297-2-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The interrupt is already disabled while interrupt handler is running, and
thus, there is no need to save/restore the IRQ flags within the spinlock.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20200319212321.3297-1-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When CONFIG_PM is disabled, gcc warns:
drivers/dma/tegra20-apb-dma.c:1587:12: warning: 'tegra_dma_runtime_resume' defined but not used [-Wunused-function]
drivers/dma/tegra20-apb-dma.c:1578:12: warning: 'tegra_dma_runtime_suspend' defined but not used [-Wunused-function]
Mark them as __maybe_unused to fix the warnings,
and also remove unneeded function declarations.
Fixes: ec8a158678 ("dma: tegra: add dmaengine based dma driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200320071337.59756-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All but one of the error handling paths in the 'k3_udma_glue_cfg_rx_flow()'
function 'goto err' and call 'k3_udma_glue_release_rx_flow()'.
This is not correct because this function has a 'channel->flows_ready--;' at
the end, but 'flows_ready' has not been incremented here when we branch to
the error handling path.
In order to keep a correct value in 'flows_ready', un-roll
'k3_udma_glue_release_rx_flow()', simplify it, add some labels and branch
at the correct places when an error is detected.
Doing so, we also NULLify 'flow->udma_rflow' in a path that was lacking it.
Fixes: d702419134 ("dmaengine: ti: k3-udma: Add glue layer for non DMAengine user")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200318191209.1267-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On new Spreadtrum platforms, when the CPU enters idle, it will close
the DMA controllers' clock to save power if the DMA controller is not
busy. Moreover the DMA controller's busy signal depends on the DMA
enable flag and the request pending flag.
When DMA controller starts to transfer data, which means we already
set the DMA enable flag, but now we should also set the request pending
flag, in case the DMA clock is closed accidentally because the CPU
cannot detect the DMA controller's busy signal.
Signed-off-by: Zhenfang Wang <zhenfang.wang@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang7@gmail.com>
Link: https://lore.kernel.org/r/02adbe4364ec436ec2c5bc8fd2386bab98edd884.1584019223.git.baolin.wang7@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The refcount check for dedicated workqueue (dwq) is off by one and allows
more than 1 user to open the char device. Fix check so only a single user
can open the device.
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158403020187.10208.14117394394540710774.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since snprintf() returns the would-be-output size instead of the
actual output size, the succeeding calls may go beyond the given
buffer limit. Fix it by replacing with scnprintf().
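A small sketch of why scnprintf() is safe where snprintf() is not:

#include <linux/kernel.h>

static size_t example_fill(char *buf, size_t size)
{
        size_t off = 0;

        /* scnprintf() returns the number of bytes actually written, so
         * 'off' can never step past 'size' even on truncation. */
        off += scnprintf(buf + off, size - off, "first line\n");
        off += scnprintf(buf + off, size - off, "second line\n");

        /* With snprintf(), truncation could push 'off' beyond 'size' and
         * the next call would write past the end of the buffer. */
        return off;
}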
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200311071606.4485-1-tiwai@suse.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The global token_limit is not tied to group tokens_reserved and
tokens_allowed parameters. Remove the check in order to allow independent
configuration.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158386266911.11066.7545764533072221536.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The traffic classes are set to -1 at initialization until the user programs
them. If the user chooses not to, the driver will program appropriate
defaults. The driver also needs to update the shadowed copies of the values
after doing the programming.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158386263076.10898.4586509576813094559.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Create a placeholder directory for each registered DMA device.
DMA drivers can use the dmaengine_get_debugfs_root() call to get their
debugfs root and can populate it with custom files to aid debugging.
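A hedged sketch of how a driver might use it, assuming the helper takes the
struct dma_device pointer and is declared in the dmaengine-internal header;
the "regs" file and its fops are made up for illustration:

#include <linux/debugfs.h>
#include <linux/dmaengine.h>

static void example_add_debugfs(struct dma_device *dma,
                                const struct file_operations *fops)
{
        /* Assumed signature: returns the per-device debugfs directory. */
        struct dentry *root = dmaengine_get_debugfs_root(dma);

        /* Custom, driver-specific file under the device's directory. */
        debugfs_create_file("regs", 0444, root, dma, fops);
}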
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200306142839.17910-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Via the /sys/kernel/debug/dmaengine/summary users can get information
about the DMA devices and the used channels.
Example output on am654-evm with audio using two channels and after running
dmatest on 4 channels:
dma0 (285c0000.dma-controller): number of channels: 96
dma1 (31150000.dma-controller): number of channels: 267
dma1chan0 | 2b00000.mcasp:tx
dma1chan1 | 2b00000.mcasp:rx
dma1chan2 | in-use
dma1chan3 | in-use
dma1chan4 | in-use
dma1chan5 | in-use
For slave channels we can show the device and the channel name for which a
given channel was requested.
For non-slave devices the only information we know is that the channel is
in use.
DMA drivers can implement the optional dbg_summary_show callback to
provide controller specific information instead of the generic one.
It is easy to extend the generic dmaengine_summary_show() to print
additional information about the used channels.
I have taken the idea from gpiolib and clk subsystems.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200306142839.17910-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dmaengine core warns drivers at registration about a missing
.device_release implementation. The warning is accurate for dmaengine
controllers which support hotplug, but not for the rest.
So reduce this to a debug log.
Link: https://lore.kernel.org/r/20200306135018.2286959-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c: In function dpaa2_qdma_shutdown:
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c:795:28: warning: variable dpaa2_qdma set but not used [-Wunused-but-set-variable]
commit 3e0ca3c38d ("dmaengine: fsl-dpaa2-qdma: Adding shutdown hook")
introduced this; remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20200303131347.28392-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently there is a dereference of the null pointer m_ddev. This appears
to be a typo on the pointer, I believe s_ddev should be used instead.
Fix this by using the correct pointer.
Fixes: eb0249d501 ("dmaengine: ti: edma: Support for interleaved mem to mem transfer")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Addresses-Coverity: ("Explicit null dereferenced")
Link: https://lore.kernel.org/r/20200226185921.351693-1-colin.king@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We need to ensure the DMA engine can be stopped in order for kexec
to start the next kernel.
So add the shutdown operation support.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20200227042841.18358-1-peng.ma@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This adds a driver for the external DMA controller implemented in Socionext
UniPhier SoCs. This driver supports DMA_MEMCPY and DMA_SLAVE modes.
Since this driver does not support transfers whose size is unaligned to the
burst width, 'src_maxburst' or 'dst_maxburst' of dma_slave_config must be 1
to transfer an arbitrary size. If the transfer size is unaligned to the
burst size, the transfer isn't started and the driver displays an error
message.
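A client-side sketch of the constraint described above (the FIFO address and
bus width are placeholders):

#include <linux/dmaengine.h>

static int example_config(struct dma_chan *chan, phys_addr_t fifo_addr)
{
        struct dma_slave_config cfg = {
                .direction      = DMA_DEV_TO_MEM,
                .src_addr       = fifo_addr,
                .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                .src_maxburst   = 1,    /* allows sizes unaligned to the burst width */
        };

        return dmaengine_slave_config(chan, &cfg);
}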
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Link: https://lore.kernel.org/r/1582271550-3403-3-git-send-email-hayashi.kunihiko@socionext.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DT for virtualized hosts has dma-cells == 2, where the second parameter
is the ATYPE for the channel.
In the case of dma-cells == 1 we can configure the ATYPE as 0 (reset value).
The ATYPE defined for j721e are:
0: pointers are physical addresses (no translation)
1: pointers are intermediate addresses (PVU)
2: pointers are virtual addresses (SMMU)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200218143126.11361-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The returned result from the check_vma() function in the cdev ->mmap() call
needs to be handled. Add the check and return the error.
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Reported-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158264926659.9387.14325163515683047959.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On i.MX6UL/ULL and i.MX6SX the DMA event id for the RX channel of
UART6 is '0'. To fix the broken DMA support for UART6, we change
the check for event_id0 to include '0' as a valid id.
Fixes: 1ec1e82f25 ("dmaengine: Add Freescale i.MX SDMA support")
Signed-off-by: Frieder Schrempf <frieder.schrempf@kontron.de>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200225082139.7646-1-frieder.schrempf@kontron.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Technically it is possible that the DMA could be misconfigured in a way that
a cyclic DMA transfer is processed slower than it takes to complete the
cycle. In this case the DMA is aborted with a not very informative message
about the problem; let's improve it.
Suggested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-20-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is nothing arch-specific in the driver's code, so let's enable
compile-testing for the driver.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-18-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tegra APB DMA driver is an Open Firmware driver, so it uses OF alias
naming scheme which overrides MODULE_ALIAS, meaning that MODULE_ALIAS
does nothing and could be removed safely.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-17-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver's removal was fixed by a recent commit and module load/unload
is working well now, tested on Tegra30.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-16-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMA controller shall be released on driver's removal.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-15-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It is enough to check whether the hardware is busy on suspend and to reset
it across suspend-resume because:
1. The channel's configuration is fully re-programmed on each DMA
transfer anyway.
2. Context save-restore of an active channel won't end up well without
pausing the transfer prior to saving the context, but note that every
channel shall be idling at the time of suspend, so save-restore is
not needed at all.
3. The only case where context save-restore may be useful is when a
channel is in a paused state during suspend. But channel pausing
could be supported only on Tegra114+, and this functionality hasn't been
implemented by the driver for years because there is no need for
it in the upstream kernel.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-14-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It's a bit impractical to enable the hardware's clock at the time of DMA
channel allocation because most DMA client drivers allocate a DMA channel
at driver probe time, and thus the DMA clock is kept always enabled in
practice, defeating the whole purpose of runtime PM.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-13-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are a few places in the code which check whether the pending_sg_req
list is empty even though the check has already been done. Let's remove the
duplicated checks to keep the code clean.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-12-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The runtime PM is always available on all Tegra SoCs since the commit
40b2bb1b13 ("ARM: tegra: enforce PM requirement"), so there is no
need to handle the case of unavailable RPM in the code anymore.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-11-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to re-initialize the already initialized variables.
tdc->config_init is false after the driver's probe and after a channel is
freed, so there is no need to re-initialize it on channel allocation.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-10-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch fixes a few dozen coding style problems reported by
checkpatch and prettifies the code where it makes sense.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-9-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to kill the tasklet when the driver's probe fails because
the tasklet can't be scheduled at this time. It is also cleaner to kill the
tasklet on channel freeing rather than on driver removal, otherwise the
tasklet could perform a dummy execution after the channel has been
released, which isn't very nice.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-6-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It's incorrect to check the channel's "busy" state without taking a lock.
That shouldn't cause any real trouble, nevertheless it's always better
not to have any race conditions in the code.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-5-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The ISR tasklet could be kept scheduled after DMA transfer termination;
let's add a synchronization hook which blocks until the tasklet has finished.
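A minimal sketch of such a hook, assuming the driver's existing channel structure and tasklet names; the real patch may differ:
static void tegra_dma_synchronize(struct dma_chan *dc)
{
        struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);

        /* wait for any scheduled tasklet run to finish */
        tasklet_kill(&tdc->tasklet);
}

/* registered once in probe: */
tdma->dma_dev.device_synchronize = tegra_dma_synchronize;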
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-4-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The interrupt handler puts a half-completed DMA descriptor on a free list
and then schedules a tasklet to process the bottom half of the descriptor,
which executes the client's callback. This creates the possibility of
picking up the busy descriptor from the free list. Thus, let's disallow
re-use of a descriptor until it is fully processed.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200209163356.6439-3-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
I was doing some experiments with I2C and noticed that the Tegra APB DMA
driver sometimes crashes after I2C DMA transfer termination. The crash
happens because tegra_dma_terminate_all() bails out immediately if the
pending list is empty, and thus it doesn't release the half-completed
descriptors, which get re-used before the ISR tasklet kicks in.
tegra-i2c 7000c400.i2c: DMA transfer timeout
elants_i2c 0-0010: elants_i2c_irq: failed to read data: -110
------------[ cut here ]------------
WARNING: CPU: 0 PID: 142 at lib/list_debug.c:45 __list_del_entry_valid+0x45/0xac
list_del corruption, ddbaac44->next is LIST_POISON1 (00000100)
Modules linked in:
CPU: 0 PID: 142 Comm: kworker/0:2 Not tainted 5.5.0-rc2-next-20191220-00175-gc3605715758d-dirty #538
Hardware name: NVIDIA Tegra SoC (Flattened Device Tree)
Workqueue: events_freezable_power_ thermal_zone_device_check
[<c010e5c5>] (unwind_backtrace) from [<c010a1c5>] (show_stack+0x11/0x14)
[<c010a1c5>] (show_stack) from [<c0973925>] (dump_stack+0x85/0x94)
[<c0973925>] (dump_stack) from [<c011f529>] (__warn+0xc1/0xc4)
[<c011f529>] (__warn) from [<c011f7e9>] (warn_slowpath_fmt+0x61/0x78)
[<c011f7e9>] (warn_slowpath_fmt) from [<c042497d>] (__list_del_entry_valid+0x45/0xac)
[<c042497d>] (__list_del_entry_valid) from [<c047a87f>] (tegra_dma_tasklet+0x5b/0x154)
[<c047a87f>] (tegra_dma_tasklet) from [<c0124799>] (tasklet_action_common.constprop.0+0x41/0x7c)
[<c0124799>] (tasklet_action_common.constprop.0) from [<c01022ab>] (__do_softirq+0xd3/0x2a8)
[<c01022ab>] (__do_softirq) from [<c0124683>] (irq_exit+0x7b/0x98)
[<c0124683>] (irq_exit) from [<c0168c19>] (__handle_domain_irq+0x45/0x80)
[<c0168c19>] (__handle_domain_irq) from [<c043e429>] (gic_handle_irq+0x45/0x7c)
[<c043e429>] (gic_handle_irq) from [<c0101aa5>] (__irq_svc+0x65/0x94)
Exception stack(0xde2ebb90 to 0xde2ebbd8)
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200209163356.6439-2-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tasklets run with all interrupts enabled. This means that we should
replace all the (already present) spin_lock_irqsave() uses in the tasklet
with spin_lock_irq() to protect against being interrupted by an IRQ which
tries to take the same lock (via calls to device_prep_dma_* for example).
spin_lock() and spin_lock_bh() in tasklets are not enough to protect from
IRQs; update these to spin_lock_irq().
at_xdmac_advance_work() can be called with all interrupts enabled (when
called from the tasklet), or with interrupts disabled (when called from
at_xdmac_issue_pending()). Move the locking into the callers to be able to
use spin_lock_irq() and spin_lock_irqsave() for these cases.
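A minimal sketch of the locking split described above, built around the function names mentioned in the text; the surrounding bodies are illustrative only:
/* tasklet context: IRQs are known to be enabled, plain _irq is enough */
spin_lock_irq(&atchan->lock);
at_xdmac_advance_work(atchan);
spin_unlock_irq(&atchan->lock);

/* issue_pending: caller context unknown, save/restore the IRQ state */
unsigned long flags;

spin_lock_irqsave(&atchan->lock, flags);
at_xdmac_advance_work(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);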
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-10-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need for locking in device_alloc_chan_resources(),
the DMA core takes care of it by using a dma_list_mutex around
the DMA devices.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-8-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The code in question is already in the else branch of
'if (at_xdmac_chan_is_cyclic(atchan))'; drop the redundant check.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-7-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following deadlocks:
1/ atc_handle_cyclic() and atc_chain_complete() called
dmaengine_desc_get_callback_invoke() while wrongly holding the
atchan->lock. Clients can set the callback to dmaengine_terminate_sync(),
which will end up trying to take the same lock, thus a deadlock occurred.
2/ dma_run_dependencies() was called with the atchan->lock held, but the
method calls device_issue_pending(), which tries to take the same lock,
and so a deadlock occurred.
The driver must not hold the lock when invoking the callback or when
running dependencies. Releasing the spinlock within a called function
before calling the callback is not a nice thing to do -> called functions
become non-atomic when called within an atomic region. Thus the lock is
now taken in the child routines wherever it is needed.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-6-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Avoids sleeping without depleting the emergency pool.
The rationale is that in most cases a DMA device is either
offloading an operation that will automatically fall back to
software when the descriptor allocation fails, or we can simply poll
and wait for the DMA device to release some in-use descriptors.
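A minimal sketch of the allocation policy this describes, assuming it corresponds to GFP_NOWAIT (the flag is not named above); the pool and variable names are illustrative:
/* do not sleep, and do not dip into the atomic emergency reserves */
desc = dma_pool_zalloc(atdma->dma_desc_pool, GFP_NOWAIT, &phys);
if (!desc)
        return NULL;    /* caller can poll/retry or fall back to software */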
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-5-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Having a list of descriptors allocated for the channel at
device_alloc_chan_resources() time is a sign of bad free usage.
Return an error and add a debug message in case the channel is not
free from a previous use.
atchan->descs_allocated becomes useless, so get rid of it. Also
drop the error message in atc_desc_get() because it would now
introduce an extra if statement. The callers of atc_desc_get()
already print error messages in case the callee fails, so no one
is hurt.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-3-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need for locking in device_alloc_chan_resources(),
the DMA core takes care of it by using a dma_list_mutex around
the DMA devices.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-2-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All members of the structure are initialized below in the function,
so there is no need to use kzalloc.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-1-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In an overlay application we noticed that the DMA channel node probe order
is inverted, i.e. the s2mm channel is probed first, followed by the mm2s
channel. The reason for this inversion is the fdtoverlay utility, which
uses a function called fdt_add_subnode(*). It stores the subnodes after the
properties; this has the effect of inserting the new subnode before any
others, and the end result is a reversal.
Because of this inverted channel probe order, the node probed first is
assigned index '0', whereas by convention the channel ID should be '0' for
tx and '1' for rx, and a dmatest client using the DT convention fails in
the DMA transfer as the channels are swapped.
To fix the above behavior and make the channel assignment index independent
of probe order, always assign the mm2s channel at index '0' and the s2mm
channel at an IP-specific fixed offset derived from the max_channels
count.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1580388865-9960-3-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Extend dma_config structure to store the max channel count. This input is
used to populate dma device channel nodes at the fixed offset. It serves
as a preparatory patch for removing dma channel DT node order dependency,
added in the subsequent commit.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1580388865-9960-2-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
To avoid a race with vchan_complete, use the race-free way to terminate a
running transfer.
Move the vdesc->node list_del into stm32_dma_start_transfer instead of
stm32_mdma_chan_complete to avoid another race in vchan_dma_desc_free_list.
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-9-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch fixes a 'BUG: sleeping function called from invalid context' in
the stm32_dma_disable_chan function.
The goal of this function is to force channel disable if it has not been
disabled by hardware. This consists of clearing the STM32_DMA_SCR_EN bit
and reading it back as 0 to ensure the channel is well disabled and the
last transfer is over.
In the previous implementation, the waiting loop was based on a
do...while (1) with a call to cond_resched to give the scheduler a chance
to run a higher priority process.
But in some conditions, stm32_dma_disable_chan can be called while
preemption is disabled, on a stm32_dma_stop call for example, so
cond_resched must not be used.
To avoid this, use readl_relaxed_poll_timeout_atomic to poll until the
STM32_DMA_SCR_EN bit is cleared.
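A minimal sketch of such an atomic-context poll; the register offset helper, device helper and timeout values are assumptions for illustration:
u32 scr;
int ret;

ret = readl_relaxed_poll_timeout_atomic(dmadev->base + STM32_DMA_SCR(chan->id),
                                        scr, !(scr & STM32_DMA_SCR_EN),
                                        10, 1000000);
if (ret)
        dev_err(chan2dev(chan), "%s: timeout waiting for channel disable\n",
                __func__);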
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-8-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds dma_set_max_seg_size to define the SG DMA constraint.
This constraint may be taken into account by clients to scatter/gather
their buffers.
Signed-off-by: Ludovic Barre <ludovic.barre@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-6-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Change the STM32 DMA driver to defer its probe operation when a reset
controller is expected but has not been probed yet at the time the DMA
device is probed.
Also change the error traces when failing to get a system resource so that
nothing is printed on failure with deferred probing.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-4-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Remove the reset controller reference from the device instance since it is
only used at probe time.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a DMA problem with the serial ports on i.MX6.
When the following sequence is performed:
1) Open a port
2) Write some data
3) Close the port
4) Open a *different* port
5) Write some data
6) Close the port
The second write sends nothing and the second close hangs.
If the first close() is omitted it works.
Adding logs to the UART driver shows that the DMA is being set up but
the callback is never invoked for the second write.
This used to work in 4.19.
Git bisect leads to:
ad0d92d: "dmaengine: imx-sdma: refine to load context only once"
This commit adds a "context_loaded" flag used to avoid unnecessary context
setups.
However the flag is only reset in sdma_channel_terminate_work(),
which is only invoked in a worker triggered by sdma_terminate_all() IF
there is an active descriptor.
So, if no active descriptor remains when the channel is terminated, the
flag is not reset and, when the channel is later reused, the old context
is used.
Fix the problem by always resetting the flag in sdma_free_chan_resources().
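A minimal sketch of the fix as described, assuming the driver's existing to_sdma_chan() helper; the rest of the teardown in the function is elided:
static void sdma_free_chan_resources(struct dma_chan *chan)
{
        struct sdma_channel *sdmac = to_sdma_chan(chan);

        /* ... existing channel teardown ... */

        /* always force a fresh context load when the channel is reused */
        sdmac->context_loaded = false;
}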
Cc: stable@vger.kernel.org
Signed-off-by: Martin Fuzzey <martin.fuzzey@flowbird.group>
Fixes: ad0d92d7ba ("dmaengine: imx-sdma: refine to load context only once")
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Link: https://lore.kernel.org/r/1580305274-27274-1-git-send-email-martin.fuzzey@flowbird.group
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reset the DMA channel after stop to ensure that pending transfers and FIFOs
in the datapath are flushed or completed. Also clean up the terminate
path and remove the stop for cyclic mode, as after the reset a stop is not
required. This fixes an intermittent data verification failure when the
xilinx dma test client is stressed and loaded/unloaded multiple times.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1580283909-32678-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Change the STM32 DMAMUX driver to defer its probe operation when a
reset controller is expected but has not been probed yet.
Also change the error traces when failing to get a system resource so that
nothing is printed on failure with deferred probing.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200128094158.20361-5-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Remove the reset controller reference from the device instance since it is
only used at probe time.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200128094158.20361-4-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This change ensures the DMAMUX device is reset only once it is clocked,
and that the clock is released in a safe state when the probe operation fails.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200128094158.20361-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
To avoid a race with vchan_complete, use the race-free way to terminate a
running transfer.
Move the vdesc->node list_del into stm32_mdma_start_transfer instead of
stm32_mdma_xfer_end to avoid another race in vchan_dma_desc_free_list.
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200127085334.13163-7-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch changes the error log when failing to get the clock so that it
is not printed on failure with deferred probing.
It also defers probe when a reset controller is expected but has not been
probed yet at the time the MDMA device is probed.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200127085334.13163-5-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch disables the clock in case of error during probe. The unneeded
err_unregister label is renamed err_clk instead.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200127085334.13163-4-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Remove the reset controller reference from the device instance since it is
only used at probe time.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200127085334.13163-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current size_store() function for idxd sysfs does not check the total
wq size. This allows every wq to be configured with the total wq size. Add
a check to make sure the wq sysfs attribute rejects storing a size over the
total wq size.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Jerry Chen <jerry.t.chen@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158213309629.2509.3583411832507185041.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, when an unrecognized wq type is input, we set the wq type to
"none". It really should return an error and not change the existing wq
type that's in the kernel.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158213304803.2290.13336343633425868211.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
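For contrast, a minimal before/after sketch of the conversion the text describes, reusing the example structure above:
/* before: zero-length array (C90 extension) */
struct foo {
        int stuff;
        struct boo array[0];
};

/* after: flexible array member (C99) */
struct foo {
        int stuff;
        struct boo array[];
};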
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200214171302.GA20586@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200214171657.GA25663@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Baolin Wang <baolin.wang7@gmail.com>
Link: https://lore.kernel.org/r/20200214171536.GA24077@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200214171435.GA22930@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The calculation for the limit of reserved tokens did not take into account
the change the user wanted versus the current group's reserved tokens. This
made changing the reserved tokens possible only after setting the value of
the reserved tokens back to 0. Fix the calculation so we can set a non-zero
value for the reserved tokens.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Jerry Chen <jerry.t.chen@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158204471889.37789.7749177228265869168.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When we receive back the descriptor of the terminated transfer the cookie
must be marked as completed to make sure that the accounting is correct.
In udma_tx_status() the status should be marked as completed if the channel
is no longer running (it can only happen if the channel is not yet started
for the first time, or after a channel termination).
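A minimal sketch of that idea inside udma_tx_status(), assuming the driver's udma_is_chan_running() helper; surrounding variables are the usual tx_status arguments:
enum dma_status ret;

ret = dma_cookie_status(chan, cookie, txstate);

/* not started yet or already terminated: nothing can still be in flight */
if (!udma_is_chan_running(uc))
        ret = DMA_COMPLETE;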
Fixes: 25dcb5dd7b ("dmaengine: ti: New driver for K3 UDMA")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-7-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It should be possible to pause, resume and check the pause state of a
channel even if we do not have an active transfer.
udma_is_chan_paused() can trigger a NULL pointer dereference in its current
form when the status is checked while uc->desc is NULL.
Fixes: 25dcb5dd7b ("dmaengine: ti: New driver for K3 UDMA")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-6-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use the generic TR setup function to get the TR counters for both cyclic
and slave_sg transfers.
This way the period_size for cyclic and sg_dma_len() for slave_sg can be
as large as (SZ_64K - 1) * (SZ_64K - 1) and we can handle cases when the
length is >SZ_64K and a prime number.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-5-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Move the TR counter parameter configuration code out from the prep_memcpy
callback to a helper function to allow a generic re-usable code for other
TR based transfers.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a channel is asked to be stopped (teardown) and we do not have an
active descriptor to receive stale data buffered on the remote side, then
the teardown will not complete, as UDMA needs a descriptor to be able to
flush out the DMA pipe.
The peer is trying to push the data to UDMA in teardown, but UDMA is
pushing back because it has no descriptor which would allow it to drain the
data.
The workaround is to create a 1K 'trashcan' to receive the discarded data
and set up descriptors for packet and TR mode channels.
When a channel is stopped and there is no active descriptor, a
descriptor is pushed to the ring for UDMA before the teardown is initiated.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In some cases (McSPI for example) the jiffie and delayed_work based
workaround can cause a big throughput drop.
Switch to a ktime/usleep_range based implementation to be able
to sustain speed for PDMA based peripherals.
Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200214091441.27535-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit 6ebb827f7a ("dmaengine: sun4i: use 'linear_mode' in
sun4i_dma_prep_dma_cyclic") updated the condition but introduced a
semicolon, thus making this statement have no effect, so add the bitwise OR
to fix it.
Fixes: 6ebb827f7a ("dmaengine: sun4i: use 'linear_mode' in sun4i_dma_prep_dma_cyclic")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Link: https://lore.kernel.org/r/20200214044609.2215861-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/sun4i-dma.c: In function sun4i_dma_prep_dma_cyclic:
drivers/dma/sun4i-dma.c:672:24: warning:
variable linear_mode set but not used [-Wunused-but-set-variable]
Commit ffc079a4ac ("dmaengine: sun4i: Add support for cyclic requests with dedicated DMA")
introduced this; explicitly using the value makes the code more readable.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20200207024445.44600-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We can't call kfree(dev) after calling device_register(dev). The "dev"
pointer has to be freed using put_device().
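A minimal sketch of the general driver-core rule being applied here (error handling shortened for illustration):
rc = device_register(dev);
if (rc) {
        /* drop the reference; the release() callback frees dev */
        put_device(dev);
        /* kfree(dev) here would bypass the refcount and double free */
        return rc;
}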
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20200205123248.hmtog7qa2eiqaagh@kili.mountain
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200213003925.GA6906@embeddedor.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200213003535.GA3269@embeddedor.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:
struct foo {
int stuff;
struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that, dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200213003703.GA4177@embeddedor.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
idxd_config_bus_probe() calls try_module_get() but never calls module_put()
when it fails. Thus with every failed attempt, the ref count goes up. Add
module_put() in failure paths.
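A minimal sketch of the balanced pattern described; the probe body is an illustrative stand-in, not the actual idxd code:
if (!try_module_get(THIS_MODULE))
        return -ENXIO;

rc = do_device_enable();        /* illustrative stand-in for the probe work */
if (rc < 0) {
        module_put(THIS_MODULE);        /* previously missing on this path */
        return rc;
}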
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Jerry Chen <jerry.t.chen@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158144296730.41381.12134210685456322434.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/idxd/cdev.c: In function idxd_cdev_open:
drivers/dma/idxd/cdev.c:77:20: warning:
variable idxd_cdev set but not used [-Wunused-but-set-variable]
Commit 42d279f913 ("dmaengine: idxd: add char driver to
expose submission portal to userland") introduced this.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20200210151855.55044-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/idxd/sysfs.c: In function engine_group_id_store:
drivers/dma/idxd/sysfs.c:419:29: warning: variable group set but not used [-Wunused-but-set-variable]
It is not used, so remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20200211135335.55924-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to use goto to jump over the
return chan ? chan : ERR_PTR(-EPROBE_DEFER);
We can just revert the check and return right there.
Do not fail the channel request if the chan->name allocation fails, but
print a warning about it.
Change the dev_err to dev_warn if sysfs_create_link() fails as it is not
fatal.
Only attempt to remove the DMA_SLAVE_NAME symlink if it is created - or it
was attempted to be created.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200131093859.3311-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit 71723a96b8 ("dmaengine: Create symlinks between DMA channels and
slaves") changed the dma_request_chan() function flow in such a way that
it always returns EPROBE_DEFER in case of channels that cannot be found.
This breaks the operation of devices which have optional DMA channels,
as it puts their drivers into an endless deferred probe loop. Fix this by
propagating the proper error value.
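A minimal client-side sketch of an optional DMA channel request that relies on the proper error being propagated; the "rx" name is illustrative:
chan = dma_request_chan(dev, "rx");
if (IS_ERR(chan)) {
        if (PTR_ERR(chan) == -EPROBE_DEFER)
                return -EPROBE_DEFER;   /* provider not ready yet */
        chan = NULL;                    /* channel is optional: fall back to PIO */
}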
Fixes: 71723a96b8 ("dmaengine: Create symlinks between DMA channels and slaves")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200130070834.17537-1-m.szyprowski@samsung.com
[vkoul: fix typo in patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
- remove ioremap_nocache given that it is equivalent to ioremap everywhere
Merge tag 'ioremap-5.6' of git://git.infradead.org/users/hch/ioremap
Pull ioremap updates from Christoph Hellwig:
"Remove the ioremap_nocache API (plus wrappers) that are always
identical to ioremap"
* tag 'ioremap-5.6' of git://git.infradead.org/users/hch/ioremap:
remove ioremap_nocache and devm_ioremap_nocache
MIPS: define ioremap_nocache to ioremap
- Core:
- Support for dynamic channels
- Removal of various slave wrappers
- Make few slave request APIs as private to dmaengine
- Symlinks between channels and slaves
- Support for hotplug of controllers
- Support for metadata_ops for dma_async_tx_descriptor
- Reporting DMA cached data amount
- Virtual dma channel locking updates
- New drivers/device/feature support:
- Driver for Intel data accelerators
- Driver for TI K3 UDMA
- Driver for PLX DMA engine
- Driver for hisilicon Kunpeng DMA engine
- Support for eDMA support for QorIQ LS1028A in fsl edma driver
- Support for cyclic dma in sun4i driver
- Support for X1830 in JZ4780 driver
Merge tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have a bunch of core changes to support dynamic channels,
hotplug of controllers, new apis for metadata ops etc along with new
drivers for Intel data accelerators, TI K3 UDMA, PLX DMA engine and
hisilicon Kunpeng DMA engine. Also usual assorted updates to drivers.
Core:
- Support for dynamic channels
- Removal of various slave wrappers
- Make few slave request APIs as private to dmaengine
- Symlinks between channels and slaves
- Support for hotplug of controllers
- Support for metadata_ops for dma_async_tx_descriptor
- Reporting DMA cached data amount
- Virtual dma channel locking updates
New drivers/device/feature support:
- Driver for Intel data accelerators
- Driver for TI K3 UDMA
- Driver for PLX DMA engine
- Driver for hisilicon Kunpeng DMA engine
- Support for eDMA support for QorIQ LS1028A in fsl edma driver
- Support for cyclic dma in sun4i driver
- Support for X1830 in JZ4780 driver"
* tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (62 commits)
dmaengine: Create symlinks between DMA channels and slaves
dmaengine: hisilicon: Add Kunpeng DMA engine support
dmaengine: idxd: add char driver to expose submission portal to userland
dmaengine: idxd: connect idxd to dmaengine subsystem
dmaengine: idxd: add descriptor manipulation routines
dmaengine: idxd: add sysfs ABI for idxd driver
dmaengine: idxd: add configuration component of driver
dmaengine: idxd: Init and probe for Intel data accelerators
dmaengine: add support to dynamic register/unregister of channels
dmaengine: break out channel registration
x86/asm: add iosubmit_cmds512() based on MOVDIR64B CPU instruction
dmaengine: ti: k3-udma: fix spelling mistake "limted" -> "limited"
dmaengine: s3c24xx-dma: fix spelling mistake "to" -> "too"
dmaengine: Move dma_get_{,any_}slave_channel() to private dmaengine.h
dmaengine: Remove dma_request_slave_channel_compat() wrapper
dmaengine: Remove dma_device_satisfies_mask() wrapper
dt-bindings: fsl-imx-sdma: Add i.MX8MM/i.MX8MN/i.MX8MP compatible string
dmaengine: zynqmp_dma: fix burst length configuration
dmaengine: sun4i: Add support for cyclic requests with dedicated DMA
dmaengine: fsl-qdma: fix duplicated argument to &&
...
Currently it is not easy to find out which DMA channels are in use, and
which slave devices are using which channels.
Fix this by creating two symlinks between the DMA channel and the actual
slave device when a channel is requested:
1. A "slave" symlink from DMA channel to slave device,
2. A "dma:<name>" symlink slave device to DMA channel.
When the channel is released, the symlinks are removed again.
The latter requires keeping track of the slave device and the channel
name in the dma_chan structure.
Note that this is limited to channel request functions for requesting an
exclusive slave channel that take a device pointer (dma_request_chan()
and dma_request_slave_channel*()).
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Link: https://lore.kernel.org/r/20200117153056.31363-1-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds a driver for the HiSilicon Kunpeng DMA engine. This DMA
engine, which is a PCIe iEP, offers 30 channels; each channel has a send
queue, a complete queue and an interrupt to help do tasks. This DMA engine
can do memory copy between memory blocks or between memory and device
buffers.
Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Zhenfa Qiu <qiuzhenfa@hisilicon.com>
Link: https://lore.kernel.org/r/1579155057-80523-1-git-send-email-wangzhou1@hisilicon.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Create a char device region that will allow acquisition of user portals in
order to allow applications to submit DMA operations. A char device will be
created per work queue that gets exposed. The workqueue type "user"
is used to mark a work queue for a user char device. For example, if
workqueue 0 of DSA device 0 is marked for a char device, then a device node
/dev/dsa/wq0.0 will be created.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965026985.73301.976523230037106742.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add plumbing for the dmaengine subsystem connection. The driver registers a
DMA device per DSA device. The channels are dynamically registered when a
workqueue is configured to be of "kernel:dmaengine" type.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965026376.73301.13867988830650740445.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The device is left unconfigured when the driver is loaded. Various
components are configured via the driver sysfs attributes. Once
configuration is done, the device can be enabled by writing the device name
to the bind attribute of the device driver sysfs. Disabling can be done
similarly. The individual work queues can also be enabled and disabled
through the bind/unbind attributes. A constructed hierarchy is created
through the struct device framework in order to provide appropriate
configuration points and device state and status. This hierarchy is
presented off the virtual DSA bus,
i.e. /sys/bus/dsa/...
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965024585.73301.6431413676230150589.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The idxd driver introduces the Intel Data Stream Accelerator [1] that will
be available on future Intel Xeon CPUs. One of the kernel access
points for the driver is through the dmaengine subsystem. It will initially
provide the DMA copy service to the kernel.
Some of the main features introduced with this accelerator
are: shared virtual memory (SVM) support, and descriptor submission using
the Intel CPU instructions movdir64b and enqcmds. There will be additional
accelerator devices that share the same driver, with variations in
capabilities.
This commit introduces the probe and initialization component of the
driver.
[1]: https://software.intel.com/en-us/download/intel-data-streaming-accelerator-preliminary-architecture-specification
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965023991.73301.6186843973135311580.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The functions dma_get_slave_channel() and dma_get_any_slave_channel()
are called from DMA engine drivers only. Hence move their declarations
from the public header file <linux/dmaengine.h> to the private header
file drivers/dma/dmaengine.h.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20200121093311.28639-4-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit aa1e6f1a38 ("dmaengine: kill struct dma_client and
supporting infrastructure") removed the last user of the
dma_device_satisfies_mask() wrapper.
Remove the wrapper, and rename __dma_device_satisfies_mask() to
dma_device_satisfies_mask(), to get rid of one more function starting
with a double underscore.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20200121093311.28639-2-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since the DMA engine expects the burst length register content as
a power-of-2 value, the burst length needs to be converted first.
Additionally, add a burst length range check to avoid corrupting unrelated
register bits.
Signed-off-by: Matthias Fend <matthias.fend@wolfvision.net>
Link: https://lore.kernel.org/r/20200115102249.24398-1-matthias.fend@wolfvision.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, cyclic transfers can be used only with normal DMAs. They
can be used by the pcm_dmaengine module, which is required for implementing
sound with the sun4i-hdmi encoder. This is so because the controller can
accept audio only from a dedicated DMA.
This patch enables them, following the existing style of the
scatter/gather type transfers.
Signed-off-by: Stefan Mavrodiev <stefan@olimex.com>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://lore.kernel.org/r/20200110141140.28527-2-stefan@olimex.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a duplicated argument to && in the function
fsl_qdma_free_chan_resources, which looks like a typo; the pointer
fsl_queue->desc_pool also needs a NULL check, so fix it.
Detected with coccinelle.
Fixes: b092529e0a ("dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs")
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Reviewed-by: Peng Ma <peng.ma@nxp.com>
Tested-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20200120125843.34398-1-chenzhou10@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following warnings by making these symbols static:
drivers/dma/ti/k3-psil-j721e.c:62:16: warning: symbol 'j721e_src_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-j721e.c:172:16: warning: symbol 'j721e_dst_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-j721e.c:216:20: warning: symbol 'j721e_ep_map' was not declared. Should it be static?
CC drivers/dma/ti/k3-psil-j721e.o
drivers/dma/ti/k3-psil-am654.c:52:16: warning: symbol 'am654_src_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-am654.c:127:16: warning: symbol 'am654_dst_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-am654.c:169:20: warning: symbol 'am654_ep_map' was not declared. Should it be static?
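The fix is simply to add the static keyword to the file-local tables; a minimal sketch, with the element type assumed for illustration:
static struct psil_ep j721e_src_ep_map[] = {
        /* ... endpoint entries ... */
};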
Reported-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200121070104.4393-1-peter.ujfalusi@ti.com
[vkoul: updated patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Certain users cannot use the DMAengine API right now due to missing
features in the core. A prime example is networking.
These users can use the glue layer interface to avoid misuse of the
DMAengine API, and when the core gains the needed features they can be
converted to use the generic API.
The most prominent features the glue layer clients are depending on:
- most PSI-L native peripherals use extra rflow ranges on a receive channel,
and depending on the peripheral's configuration, packets from a single
free descriptor ring are going to be received into different receive rings
- it is also possible to have different free descriptor rings per rflow
and an rflow can also support 4 additional free descriptor ring based
on the size of the incoming packet
- out of order completion of descriptors on a channel
- when we have several queues to handle different priority packets the
descriptors will be completed 'out-of-order'
- the notion of prep_slave_sg is not matching with what the streaming type
of operation is demanding for networking
- Streaming type of operation
- Ability to fill the free descriptor ring with descriptors in
anticipation of incoming traffic and when a packet arrives UDMAP will
form a packet and gives it to the client driver
- the descriptors are not backed with exact size data buffers as we don't
know the size of the packet we will receive, but as a generic pool of
buffers to be used by the receive channel
- NAPI type of operation (polling instead of interrupt driven transfer)
- without this we can not sustain gigabit speeds and we need to support NAPI
- not to limit this to networking, but other high performance operations
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-12-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Split patch for review containing: defines, structs, io and low level
functions and interrupt callbacks.
DMA driver for
Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P)
The UDMA-P is intended to perform similar (but significantly upgraded) functions
as the packet-oriented DMA used on previous SoC devices. The UDMA-P module
supports the transmission and reception of various packet types. The UDMA-P is
architected to facilitate the segmentation and reassembly of SoC DMA data
structure compliant packets to/from smaller data blocks that are natively
compatible with the specific requirements of each connected peripheral. Multiple
Tx and Rx channels are provided within the DMA which allow multiple segmentation
or reassembly operations to be ongoing. The DMA controller maintains state
information for each of the channels which allows packet segmentation and
reassembly operations to be time division multiplexed between channels in order
to share the underlying DMA hardware. An external DMA scheduler is used to
control the ordering and rate at which this multiplexing occurs for Transmit
operations. The ordering and rate of Receive operations is indirectly controlled
by the order in which blocks are pushed into the DMA on the Rx PSI-L interface.
The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
channels. Channels in the UDMA-P can be configured to be either Packet-Based or
Third-Party channels on a channel by channel basis.
The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet / TR mode)
- MEM_TO_DEV (Packet / TR mode)
- Cyclic (Packet / TR mode)
- Metadata for descriptors
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-11-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the K3 architecture the DMA operates within threads. One end of the
thread is UDMAP, the other is on the peripheral side.
The UDMAP channel configuration depends on the needs of the remote
endpoint and it can differ from peripheral to peripheral.
This patch adds a database for am654 and j721e and a small API to fetch the
PSI-L endpoint configuration from the database, which should only be used
by the DMA driver(s).
Another API is added for native peripherals to give the possibility to pass
a new configuration for the threads they are using, which is needed to be
able to handle changes caused by different firmware loaded for the
peripheral, for example.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-9-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A DMA hardware can have a big cache or FIFO and the amount of data sitting
in the DMA fabric can be of interest for the clients.
For example, in audio we want to know the delay in the data flow, and in
case the DMA has a significantly large FIFO/cache it can affect the
latency/delay.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-6-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The metadata is best described as side band data or parameters traveling
alongside the data DMAd by the DMA engine. It is data
which is understood by the peripheral and the peripheral driver only; the
DMA engine sees it only as a data block and does not interpret it in any
way.
The metadata can be different per descriptor as it is a parameter for the
data being transferred.
If the DMA supports per descriptor metadata it can implement the attach,
get_ptr/set_len callbacks.
Client drivers must only use either attach or get_ptr/set_len to avoid
misconfiguration.
Client driver can check if a given metadata mode is supported by the
channel during probe time with
dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT);
dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE);
and based on this information can use either mode.
Wrappers are also added for the metadata_ops.
To be used in DESC_METADATA_CLIENT mode:
dmaengine_desc_attach_metadata()
To be used in DESC_METADATA_ENGINE mode:
dmaengine_desc_get_metadata_ptr()
dmaengine_desc_set_metadata_len()
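A minimal client-side sketch of the two modes using the wrappers above; chan, desc, buffer and length variables are placeholders:
if (dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT)) {
        /* client-provided buffer is handed over as-is */
        ret = dmaengine_desc_attach_metadata(desc, my_metadata, my_len);
} else if (dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE)) {
        /* write into the engine-owned buffer, then report the payload size */
        void *ptr = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);

        memcpy(ptr, my_metadata, my_len);
        ret = dmaengine_desc_set_metadata_len(desc, my_len);
}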
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-5-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On prep, a spin lock is taken and the next entry in the circular buffer
is filled. On submit, the valid bit is set in the hardware descriptor
and the lock is released.
The DMA engine is started (if it's not already running) when the client
calls dma_async_issue_pending().
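For reference, the generic client-side sequence this maps onto looks roughly
like the following sketch (not code from this driver):

	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* prep: the driver takes its lock and fills the next ring entry */
	tx = dmaengine_prep_dma_memcpy(chan, dst_dma, src_dma, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	/* submit: the valid bit is set and the lock released */
	cookie = dmaengine_submit(tx);

	/* issue_pending: the engine is started if it is not already running */
	dma_async_issue_pending(chan);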
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-4-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Allocate DMA coherent memory for the ring of DMA descriptors and
program the appropriate hardware registers.
A tasklet is created which is triggered on an interrupt to process
all the finished requests. Additionally, any remaining descriptors
are aborted when the hardware is removed or the resources freed.
Use an RCU pointer to synchronize PCI device unbind.
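A hedged sketch of the coherent ring allocation described above; the
descriptor structure, ring size and registers are hypothetical:

	struct example_hw_desc *ring;
	dma_addr_t ring_dma;
	size_t ring_bytes = EXAMPLE_RING_DESCS * sizeof(*ring);

	ring = dma_alloc_coherent(&pdev->dev, ring_bytes, &ring_dma,
				  GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* Tell the hardware where the ring lives (registers are hypothetical) */
	writel(lower_32_bits(ring_dma), base + EXAMPLE_RING_ADDR_LO);
	writel(upper_32_bits(ring_dma), base + EXAMPLE_RING_ADDR_HI);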
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-3-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some PLX Switches can expose DMA engines via extra PCI functions
on the upstream port. Each function will have one DMA channel.
This patch is just the core PCI driver skeleton and dma
engine registration.
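Roughly, such a skeleton pairs a PCI probe with dmaengine registration; the
following is an illustrative sketch with made-up names, not the actual
driver:

static int example_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
{
	struct dma_device *dma;
	int rc;

	rc = pcim_enable_device(pdev);
	if (rc)
		return rc;

	dma = devm_kzalloc(&pdev->dev, sizeof(*dma), GFP_KERNEL);
	if (!dma)
		return -ENOMEM;

	dma->dev = &pdev->dev;
	dma_cap_set(DMA_MEMCPY, dma->cap_mask);
	/* ...set up the single channel and the mandatory callbacks... */

	return dma_async_device_register(dma);
}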
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-2-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, when the call to dev_get_platdata() returns NULL, the driver
issues a warning and then later dereferences the NULL pointer. Avoid this by
returning -ENODEV when the platform data is NULL and change the warning to
an appropriate error message.
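The resulting pattern is roughly the following sketch (the pdata type is
illustrative):

	struct example_dma_pdata *pdata = dev_get_platdata(&pdev->dev);

	if (!pdata) {
		dev_err(&pdev->dev, "missing platform data\n");
		return -ENODEV;
	}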
Addresses-Coverity: ("Dereference after null check")
Fixes: 211010aeb0 ("dmaengine: ti: omap-dma: Pass sdma auxdata to driver and use it")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
ioremap has provided non-cached semantics by default since the Linux 2.6
days, so remove the additional ioremap_nocache interface.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
For omap2, we need to block idle if SDMA is busy. Let's do this with a
cpu notifier and remove the custom call.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
With the legacy IRQ handling gone, we can now start allocating channels
directly in the dmaengine driver for device tree based SoCs.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
We can now start passing sdma auxdata to the dmaengine driver to start
removing the platform based sdma init.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
We can move the global priority register configuration to the dmaengine
driver and configure it based on the of_device_id match data.
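A hedged sketch of the of_device_id match-data pattern this refers to; the
config struct and compatible string are illustrative:

struct example_dma_cfg {
	u32 rw_priority;
};

static const struct example_dma_cfg omap2420_cfg = {
	.rw_priority = 1,
};

static const struct of_device_id example_of_match[] = {
	{ .compatible = "ti,omap2420-sdma", .data = &omap2420_cfg },
	{ /* sentinel */ },
};

static int example_probe(struct platform_device *pdev)
{
	const struct example_dma_cfg *cfg;

	cfg = of_device_get_match_data(&pdev->dev);
	if (cfg) {
		/* program the global priority register from cfg->rw_priority */
	}

	return 0;
}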
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
If dma_alloc_coherent() returns NULL in ioat_alloc_ring(), ring
allocation must not proceed.
Until now, if the first call to dma_alloc_coherent() in
ioat_alloc_ring() returned NULL, processing could proceed and then fail
with a NULL-pointer dereference further down the line.
Signed-off-by: Alexander Barabash <alexander.barabash@dell.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/75e9c0e84c3345d693c606c64f8b9ab5@x13pwhopdag1307.AMER.DELL.COM
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current descriptor is not on any list of the virtual DMA channel.
If sdma_terminate_all() is called while a descriptor is currently in flight,
that descriptor is never freed. We have to call vchan_terminate_vdesc() on
it to re-add it to the lists.
Now that we also free the currently running descriptor, we can (and actually
have to) remove the current descriptor from its list for the cyclic case as
well.
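The virt-dma pattern being described looks roughly like this terminate_all
sketch (the driver channel structure and to_example_chan() are illustrative):

static int example_terminate_all(struct dma_chan *chan)
{
	struct example_chan *c = to_example_chan(chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&c->vc.lock, flags);
	/* ...stop the hardware channel... */
	if (c->desc) {
		/* Hand the in-flight descriptor back so it gets freed later */
		vchan_terminate_vdesc(&c->desc->vd);
		c->desc = NULL;
	}
	vchan_get_all_descriptors(&c->vc, &head);
	spin_unlock_irqrestore(&c->vc.lock, flags);

	vchan_dma_desc_free_list(&c->vc, &head);

	return 0;
}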
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Robin Gong <yibin.gong@nxp.com>
Link: https://lore.kernel.org/r/20191216105328.15198-10-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In sdma_tx_status() we must first find the current sdma_desc. In cyclic
mode we assume that this can always be found with vchan_find_desc().
This is true because we do not remove the current descriptor from the
desc_issued list:
	/*
	 * Do not delete the node in desc_issued list in cyclic mode, otherwise
	 * the desc allocated will never be freed in vchan_dma_desc_free_list
	 */
	if (!(sdmac->flags & IMX_DMA_SG_LOOP))
		list_del(&vd->node);
We will change this in the next step, so check if the current descriptor is
the desired one also for the cyclic case.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-9-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Rename sdma_disable_channel_async() after the hook it implements, as is done
for all other functions in the SDMA driver.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-8-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_dma_desc_free_list() basically open codes vchan_vdesc_fini() in its
loop body. Call it directly rather than duplicating the code.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-7-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All list operations are protected by &vc->lock. As vchan_vdesc_fini()
is called unlocked, add the missing locking around the list operations.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-6-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() shouldn't be called under a spin_lock. This is done
in two places, once in vchan_terminate_vdesc() and once in
vchan_synchronize(). Instead of freeing the vdesc right away, collect
the aborted vdescs on a separate list and free them along with the other
vdescs. The terminated descriptors are still freed in vchan_synchronize(),
as was done before this patch.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-5-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_dma_desc_free_list() basically open codes vchan_vdesc_fini() in
the loop body. One difference is an additional debug message. As this
isn't overly useful remove it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-4-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Originally freeing descriptors was split into a locked and an unlocked
part. The locked part in vchan_get_all_descriptors() collected all
descriptors on a separate list_head. This was done to allow iterating
over that new list in vchan_dma_desc_free_list() without a lock held.
This became broken in 13bb26ae88 ("dmaengine: virt-dma: don't always
free descriptor upon completion"). With this commit
vchan_dma_desc_free_list() no longer exclusively operates on the
separate list, but starts to put descriptors which can be reused back on
&vc->desc_allocated. This list operation should have been locked, but
wasn't.
In the meantime, drivers started to call vchan_dma_desc_free_list() with
their lock held, so we now have the situation that
vchan_dma_desc_free_list() is called locked from some drivers and unlocked
from others.
To clean this up we have to do two things:
1. Add missing locking in vchan_dma_desc_free_list()
2. Make sure drivers call vchan_dma_desc_free_list() unlocked
This needs to be done atomically, so in this patch the locking is added
and all drivers are fixed.
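The resulting calling convention for drivers is roughly the following
sketch ('c' stands for the driver's channel structure):

	struct virt_dma_chan *vc = &c->vc;
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&vc->lock, flags);
	vchan_get_all_descriptors(vc, &head);
	spin_unlock_irqrestore(&vc->lock, flags);

	/*
	 * Call unlocked; vchan_dma_desc_free_list() now takes vc->lock itself
	 * when it puts reusable descriptors back on &vc->desc_allocated.
	 */
	vchan_dma_desc_free_list(vc, &head);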
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Green Wan <green.wan@sifive.com>
Tested-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191216105328.15198-3-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() can't be called locked. Instead, call
vchan_terminate_vdesc() which delays the freeing of the descriptor to
vchan_synchronize().
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-2-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The error log for a dma_channel_table_init() failure pointed to a mere
"initialization failure", which is not a very helpful message, so print
additional details such as the function name and error code.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We call dma_device_put() and module_put() after invoking the
.device_free_chan_resources callback, but we should also take care of
router devices and invoke this after the .route_free callback. So move it
after .route_free.
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Don't allocate memory using the devm infrastructure and instead call
kfree with the new dmaengine device_release call back. This ensures
the structures are available until the last reference is dropped.
We also need to ensure we call ioat_shutdown() in ioat_remove() so
that all the channels are quiesced and further transactions fail.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20191216190120.21374-6-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Adding a reference count helps drivers properly implement the
unbind-while-in-use case.
References are taken and put every time a channel is allocated or freed.
Once the final reference is put, the device is removed from the
dma_device_list and a release callback function is called to signal
the driver to free the memory.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-5-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
So it can be called by a release function which is needed higher up in
the code. No functional changes intended.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-4-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The module reference is taken to ensure the callbacks still exist
when they are called. If the channel holds the last reference to the
module, the module can disappear before device_free_chan_resources() is
called, which would cause a call into freed memory.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-3-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
dma_chan_to_owner() dereferences the driver from the struct device to
obtain the owner and call module_[get|put](). However, if the backing
device is unbound before the dma_device is unregistered, the driver
will be cleared and this will cause a NULL pointer dereference.
Instead, store a pointer to the owner module in the dma_device struct
so the module reference can be properly put when the channel is put, even
if the backing device was destroyed first.
This change helps to support a safer unbind of DMA engines.
If the dma_device is unregistered in the driver's remove function,
there's no guarantee that there are no existing clients, and a user's
action may trigger the WARN_ONCE in dma_async_device_unregister()
which is unlikely to leave the system in a consistent state.
Instead, a better approach is to allow the backing driver to go away
and fail any subsequent requests to it.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-2-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() frees 'vd', so the subsequent access to vd->tx_result
goes through already-freed memory.
Move the vchan_vdesc_fini() call after the callback invocation to avoid this.
Fixes: 09d5b702b0 ("dmaengine: virt-dma: store result on dma descriptor")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20191220131100.21804-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In some cases we seem to submit two transactions in a row, which
causes us to lose track of the first. If we then cancel the
request, we may still get an interrupt, which dereferences a NULL
ds_run value.
So try to avoid starting a new transaction if the ds_run value
is set.
While this patch avoids the null pointer crash, I've had some
reports of the k3dma driver still getting confused, which
suggests the ds_run/ds_done value handling still isn't quite
right. However, I've not run into an issue recently with it
so I think this patch is worth pushing upstream to avoid the
crash.
Signed-off-by: John Stultz <john.stultz@linaro.org>
[add ss tag]
Link: https://lore.kernel.org/r/20191218190906.6641-1-john.stultz@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix to return negative error code -ENOMEM from the error handling
case instead of 0, as done elsewhere in this function.
Fixes: 2a03c13145 ("dmaengine: ti: edma: add missed operations")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191212114622.127322-1-weiyongjun1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With old DMA code disabled for handling DMA requests for device tree based
SoCs, we can move omap3 specific context save and restore to the dmaengine
driver.
Let's do this by adding cpu_pm notifier handling to save and restore context,
and enable it based on device tree match data. This way we can use the match
data later to configure more SoC specific features later on too.
Note that we only clear the channels in use while the platform code also
clears reserved channels 0 and 1 on high-security SoCs. Based on testing
on n900, this is not needed though and the system idles just fine.
With the dmaengine driver handling context save and restore, we must now
remove the old custom calls for context save and restore.
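For context, a cpu_pm notifier used for context save/restore looks roughly
like the sketch below (callback and helper names are hypothetical):

static int example_cpu_pm_notify(struct notifier_block *nb,
				 unsigned long cmd, void *v)
{
	switch (cmd) {
	case CPU_CLUSTER_PM_ENTER:
		example_save_context();		/* hypothetical helper */
		break;
	case CPU_CLUSTER_PM_EXIT:
		example_restore_context();	/* hypothetical helper */
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block example_cpu_pm_nb = {
	.notifier_call = example_cpu_pm_notify,
};

/*
 * Registered from probe, keyed off the device tree match data:
 * cpu_pm_register_notifier(&example_cpu_pm_nb);
 */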
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
It turns out that the JZ4725B displays the same buggy behaviour as the
JZ4740 that was described in commit f4c255f1a7 ("dmaengine: dma-jz4780:
Break descriptor chains on JZ4740").
Work around it by using the same workaround previously used for the
JZ4740.
Fixes: f4c255f1a7 ("dmaengine: dma-jz4780: Break descriptor chains on
JZ4740")
Cc: <stable@vger.kernel.org>
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20191210165545.59690-1-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver does not check the result of devm_regmap_init_mmio().
Add a check to fix this.
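The added check follows the usual pattern (a sketch; the config name is
illustrative):

	regmap = devm_regmap_init_mmio(&pdev->dev, base, &example_regmap_cfg);
	if (IS_ERR(regmap))
		return PTR_ERR(regmap);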
Fixes: fc15be39a8 ("dmaengine: axi-dmac: add regmap support")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Reviewed-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20191209085711.16001-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It has turned out that it's in general a good idea for dmaengines to allow
DMA requests during the entire dpm_suspend() phase. Therefore, convert the
pl330 driver into using SET_LATE_SYSTEM_SLEEP_PM_OPS.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20191205143746.24873-3-ulf.hansson@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let's drop the boilerplate code in the system suspend/resume callbacks and
convert to use pm_runtime_force_suspend|resume(). This change also has a
nice side effect, as pm_runtime_force_resume() may decide to leave the
device in low power state, when that is feasible, thus avoiding to waste
both time and energy during system resume.
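The resulting dev_pm_ops typically look like this sketch (the runtime PM
callback names are illustrative):

static const struct dev_pm_ops example_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				pm_runtime_force_resume)
	SET_RUNTIME_PM_OPS(example_runtime_suspend,
			   example_runtime_resume, NULL)
};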
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20191205143746.24873-2-ulf.hansson@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver forgets to call pm_runtime_disable and pm_runtime_put_sync in
the probe failure path and in remove.
Add the calls and modify probe failure handling to fix it.
To simplify the fix, the patch adjusts the calling order and merges checks
for devm_kcalloc.
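A hedged sketch of the balancing described, for the remove path (structure
and function names are illustrative):

static int example_remove(struct platform_device *pdev)
{
	struct example_dev *ed = platform_get_drvdata(pdev);

	dma_async_device_unregister(&ed->ddev);
	pm_runtime_put_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);

	return 0;
}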
Fixes: 2b6b3b7420 ("ARM/dmaengine: edma: Merge the two drivers under drivers/dma/")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191124052855.6472-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Adjust indentation from spaces to tab (+optional two spaces) as in
coding style with command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Link: https://lore.kernel.org/r/1574306348-29212-1-git-send-email-krzk@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The placement of the SF_PDMA_REG_BASE() macro causes kernel-doc to pick up
the wrong function declaration. Move it to the header file.
Signed-off-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191118143554.16129-2-green.wan@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are several comments starting with "/**" that are not intended as
kernel-doc function comments. This causes kernel-doc to parse the wrong
strings. Replace "/**" with "/*" to fix them.
Signed-off-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191118143554.16129-1-green.wan@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When devm_kcalloc() fails, the driver forgets to call edma_free_slot().
Replace the direct return with the failure handler to fix it.
Fixes: 1be5336bc7 ("dmaengine: edma: New device tree binding")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191118073802.28424-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver calls of_dma_controller_register in probe but does not free
it in remove.
Add the call to fix it.
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191115083153.12334-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver calls of_dma_controller_register in probe but does not free
it in remove.
Add the call to fix it.
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191115083100.12220-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Spreadtrum Audio compress offload mode will use a 2-stage DMA transfer
to save power. That means we can request 2 DMA channels, one as the source
channel and another as the destination channel. Once the source channel's
transaction is done, it will trigger the destination channel's transaction
automatically by a hardware signal.
In this case, the source channel transfers data from the IRAM buffer to the
DSP FIFO for decoding/encoding; once the IRAM buffer has been emptied by
that transfer, the destination channel starts to transfer data from the DDR
buffer to the IRAM buffer. Since the destination channel uses link-list mode
to fill the IRAM data, and the IRAM buffer is 32K while the DDR buffer can
be as large as 2M, we would need lots of link-list nodes to do a cyclic
transfer. Instead of wasting lots of link-list memory, we can use the wrap
address support to reduce the number of link-list nodes: when the transfer
address reaches the wrap address, it jumps to the wrap_to address specified
by the wrap_to register, so only 2 link-list nodes are needed for a cyclic
transfer of data from DDR to IRAM.
Thus this patch adds wrap address support for this case.
[Baolin Wang changes the commit message]
Signed-off-by: Eric Long <eric.long@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/85a5484bc1f3dd53ce6f92700ad8b35f30a0b096.1571812029.git.baolin.wang@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the probe method dmam_pool_create() is used. Therefore, there is no need
to explicitly call dmam_pool_destroy() in the remove method, as this is
automatically taken care of by devres.
Signed-off-by: Satendra Singh Thakur <sst2005@gmail.com>
Link: https://lore.kernel.org/r/20191109113609.6159-1-sst2005@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, when devm_request_irq() fails, dma_async_device_unregister()
gets called, but this doesn't free the resources allocated by
of_dma_controller_register(). Therefore, call of_dma_controller_free() for
this purpose.
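The fixed error path roughly becomes the following sketch (the handler and
private data names are illustrative):

	ret = devm_request_irq(&pdev->dev, irq, example_isr, 0,
			       dev_name(&pdev->dev), priv);
	if (ret) {
		of_dma_controller_free(pdev->dev.of_node);
		dma_async_device_unregister(&priv->ddev);
		return ret;
	}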
Signed-off-by: Satendra Singh Thakur <sst2005@gmail.com>
Link: https://lore.kernel.org/r/20191109113523.6067-1-sst2005@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_irq() already prints an error message, so the caller need not
do so; remove the error line for platform_get_irq() in this driver.
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20191106163128.1980714-2-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_irq() already prints an error message, so the caller need not
do so; remove the error line for platform_get_irq() in this driver.
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20191106163128.1980714-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The remove callback misses disabling and unpreparing jzdma->clk.
Add a call to clk_disable_unprepare to fix it.
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Acked-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20191104161622.11758-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>