Merge tag 'dmaengine-5.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"Core:
- Some code cleanup and optimization in core by Andy
- Debugfs support for displaying dmaengine channels by Peter
Drivers:
- New driver for uniphier-xdmac controller
- Updates to stm32 dma, mdma and dmamux drivers and PM support
- More updates to idxd drivers
- Bunch of changes in tegra-apb driver and cleaning up of pm
functions
- Bunch of spelling fixes and zero-length array replacement patches
- Shutdown hook for fsl-dpaa2-qdma driver
- Support for interleaved transfers for ti-edma and virtualization
support for k3-dma driver
- Support for reset and updates in xilinx_dma driver
- Improvements and locking updates in at_hdma driver"
* tag 'dmaengine-5.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (89 commits)
dt-bindings: dma: renesas,usb-dmac: add r8a77961 support
dmaengine: uniphier-xdmac: Remove redundant error log for platform_get_irq
dmaengine: tegra-apb: Improve DMA synchronization
dmaengine: tegra-apb: Don't save/restore IRQ flags in interrupt handler
dmaengine: tegra-apb: mark PM functions as __maybe_unused
dmaengine: fix spelling mistake "exceds" -> "exceeds"
dmaengine: sprd: Set request pending flag when DMA controller is active
dmaengine: ppc4xx: Use scnprintf() for avoiding potential buffer overflow
dmaengine: idxd: remove global token limit check
dmaengine: idxd: reflect shadow copy of traffic class programming
dmaengine: idxd: Merge definition of dsa_batch_desc into dsa_hw_desc
dmaengine: Create debug directories for DMA devices
dmaengine: ti: k3-udma: Implement custom dbg_summary_show for debugfs
dmaengine: Add basic debugfs support
dmaengine: fsl-dpaa2-qdma: remove set but not used variable 'dpaa2_qdma'
dmaengine: ti: edma: fix null dereference because of a typo in pointer name
dmaengine: fsl-dpaa2-qdma: Adding shutdown hook
dmaengine: uniphier-xdmac: Add UniPhier external DMA controller driver
dt-bindings: dmaengine: Add UniPhier external DMA controller bindings
dmaengine: ti: k3-udma: Implement support for atype (for virtualization)
...
platform_get_irq() prints an error on failure, so there is no need for the
caller to add its own log.
Remove the log in uniphier_xdmac_probe() for platform_get_irq() failure.
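As a generic illustration of the resulting probe pattern (driver and
function names here are hypothetical, not the actual uniphier-xdmac code):

  static int example_probe(struct platform_device *pdev)
  {
          int irq;

          irq = platform_get_irq(pdev, 0);
          if (irq < 0)
                  return irq;     /* platform_get_irq() already printed the error */

          /* ... request the IRQ and finish probing ... */
          return 0;
  }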
Reported-by: kbuild test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20200323171928.424223-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Boot CPU0 always handles DMA interrupts, and under some rare circumstances
it could get stuck in an uninterruptible state for a significant time (like
in a case of KASAN + NFS root). In this case a sibling CPU, which waits for
DMA transfer completion, will get a DMA transfer timeout. In order to handle
this rare condition, the interrupt status needs to be polled until the
interrupt is handled.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20200319212321.3297-2-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The interrupt is already disabled while the interrupt handler is running,
and thus there is no need to save/restore the IRQ flags within the spinlock.
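A minimal sketch of the pattern, assuming a hypothetical driver structure
(not the actual tegra-apb code): plain spin_lock()/spin_unlock() is enough
inside the handler.

  struct example_dma {
          spinlock_t lock;
          /* ... */
  };

  static irqreturn_t example_dma_isr(int irq, void *dev_id)
  {
          struct example_dma *tdma = dev_id;

          spin_lock(&tdma->lock);         /* was spin_lock_irqsave() */
          /* ... read and handle the channel interrupt status ... */
          spin_unlock(&tdma->lock);       /* was spin_unlock_irqrestore() */

          return IRQ_HANDLED;
  }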
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20200319212321.3297-1-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When CONFIG_PM is disabled, gcc warns:
drivers/dma/tegra20-apb-dma.c:1587:12: warning: 'tegra_dma_runtime_resume' defined but not used [-Wunused-function]
drivers/dma/tegra20-apb-dma.c:1578:12: warning: 'tegra_dma_runtime_suspend' defined but not used [-Wunused-function]
Mark them as __maybe_unused to fix the warnings, and also remove the
unneeded function declarations.
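The fix boils down to annotating the callbacks; a hedged sketch with the
function bodies elided (the dev_pm_ops name may differ in the driver):

  static int __maybe_unused tegra_dma_runtime_suspend(struct device *dev)
  {
          /* ... gate the DMA clock ... */
          return 0;
  }

  static int __maybe_unused tegra_dma_runtime_resume(struct device *dev)
  {
          /* ... ungate the DMA clock ... */
          return 0;
  }

  /* SET_RUNTIME_PM_OPS() compiles away when CONFIG_PM is disabled, which is
   * why the unannotated functions triggered -Wunused-function. */
  static const struct dev_pm_ops tegra_dma_dev_pm_ops = {
          SET_RUNTIME_PM_OPS(tegra_dma_runtime_suspend,
                             tegra_dma_runtime_resume, NULL)
  };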
Fixes: ec8a158678 ("dma: tegra: add dmaengine based dma driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200320071337.59756-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All but one of the error handling paths in the 'k3_udma_glue_cfg_rx_flow()'
function 'goto err' and call 'k3_udma_glue_release_rx_flow()'.
This is not correct because that function does 'channel->flows_ready--;' at
the end, but 'flows_ready' has not been incremented yet when we branch to
the error handling path.
In order to keep a correct value in 'flows_ready', un-roll
'k3_udma_glue_release_rx_flow()', simplify it, add some labels and branch
at the correct places when an error is detected.
Doing so, we also NULLify 'flow->udma_rflow' in a path that was lacking it.
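A generic sketch of the resulting error-handling shape (function names are
made up, not the actual k3-udma-glue helpers):

  static int example_cfg_rx_flow(struct example_chn *chn, u32 flow_idx)
  {
          int ret;

          ret = example_get_rflow(chn, flow_idx);
          if (ret)
                  return ret;

          ret = example_alloc_rings(chn, flow_idx);
          if (ret)
                  goto err_put_rflow;

          ret = example_program_flow(chn, flow_idx);
          if (ret)
                  goto err_free_rings;

          chn->flows_ready++;     /* incremented only after full success */
          return 0;

  err_free_rings:
          example_free_rings(chn, flow_idx);
  err_put_rflow:
          example_put_rflow(chn, flow_idx);       /* also NULLifies the rflow pointer */
          return ret;
  }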
Fixes: d702419134 ("dmaengine: ti: k3-udma: Add glue layer for non DMAengine user")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200318191209.1267-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On new Spreadtrum platforms, when the CPU enters idle it will gate the
DMA controller's clock to save power if the DMA controller is not busy.
The DMA controller's busy signal depends on both the DMA enable flag and
the request pending flag.
When the DMA controller starts to transfer data we have already set the
DMA enable flag, but we should also set the request pending flag;
otherwise the DMA clock may be gated accidentally because the CPU cannot
detect the DMA controller's busy signal.
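A hedged sketch of the idea; the register layout and bit names below are
purely illustrative, not the real sprd-dma definitions:

  #define EXAMPLE_DMA_GLB_EN              BIT(0)  /* illustrative */
  #define EXAMPLE_DMA_GLB_REQ_PEND        BIT(1)  /* illustrative */

  static void example_dma_start(void __iomem *glb_base)
  {
          u32 val = readl(glb_base);

          /* keep the busy signal asserted so idle won't gate the DMA clock */
          val |= EXAMPLE_DMA_GLB_EN | EXAMPLE_DMA_GLB_REQ_PEND;
          writel(val, glb_base);
  }

  static void example_dma_stop(void __iomem *glb_base)
  {
          u32 val = readl(glb_base);

          val &= ~(EXAMPLE_DMA_GLB_EN | EXAMPLE_DMA_GLB_REQ_PEND);
          writel(val, glb_base);
  }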
Signed-off-by: Zhenfang Wang <zhenfang.wang@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang7@gmail.com>
Link: https://lore.kernel.org/r/02adbe4364ec436ec2c5bc8fd2386bab98edd884.1584019223.git.baolin.wang7@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The refcount check for the dedicated workqueue (dwq) is off by one and
allows more than one user to open the char device. Fix the check so that
only a single user can open the device.
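As a generic sketch of this class of fix (names are illustrative, not the
idxd code): the open path must refuse a second opener, i.e. fail as soon as
the count is already non-zero, before taking the new reference.

  static int example_cdev_open(struct inode *inode, struct file *filp)
  {
          struct example_wq *wq = inode_to_wq(inode);     /* hypothetical helper */

          /* a dedicated wq allows exactly one user: reject when count >= 1, not > 1 */
          if (wq->client_count >= 1)
                  return -EBUSY;

          wq->client_count++;
          filp->private_data = wq;
          return 0;
  }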
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158403020187.10208.14117394394540710774.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since snprintf() returns the would-be-output size instead of the
actual output size, the succeeding calls may go beyond the given
buffer limit. Fix it by replacing with scnprintf().
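A minimal generic example of the difference: snprintf() returns the
would-be length, so the running offset can move past the end of the
buffer, while scnprintf() returns what was actually written.

  static void example_fill(char *buf, size_t len, int id, const char *status)
  {
          size_t pos = 0;

          /* with snprintf() the return value may exceed the space left and corrupt 'pos' */
          pos += scnprintf(buf + pos, len - pos, "channel %d: ", id);
          pos += scnprintf(buf + pos, len - pos, "%s\n", status);
  }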
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200311071606.4485-1-tiwai@suse.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The global token_limit is not tied to group tokens_reserved and
tokens_allowed parameters. Remove the check in order to allow independent
configuration.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158386266911.11066.7545764533072221536.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The traffic classes are set to -1 at initialization until the user programs
them. If the user chooses not to, the driver will program appropriate
defaults. The driver also needs to update the shadowed copies of the values
after doing the programming.
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Yixin Zhang <yixin.zhang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158386263076.10898.4586509576813094559.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Create a placeholder directory for each registered DMA device.
DMA drivers can use the dmaengine_get_debugfs_root() call to get their
debugfs root and populate it with custom files to aid debugging.
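A hedged usage sketch (the file name, show routine and driver structure
are hypothetical; only dmaengine_get_debugfs_root() and the standard
debugfs helpers are assumed):

  static int example_regs_show(struct seq_file *s, void *unused)
  {
          /* ... dump controller-specific state ... */
          return 0;
  }
  DEFINE_SHOW_ATTRIBUTE(example_regs);

  static void example_dma_debugfs_init(struct example_dma *ed)
  {
          struct dentry *root = dmaengine_get_debugfs_root(&ed->ddev);

          debugfs_create_file("regs", 0444, root, ed, &example_regs_fops);
  }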
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200306142839.17910-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Via /sys/kernel/debug/dmaengine/summary users can get information
about the DMA devices and the channels in use.
Example output on am654-evm with audio using two channels and after running
dmatest on 4 channels:
dma0 (285c0000.dma-controller): number of channels: 96
dma1 (31150000.dma-controller): number of channels: 267
dma1chan0 | 2b00000.mcasp:tx
dma1chan1 | 2b00000.mcasp:rx
dma1chan2 | in-use
dma1chan3 | in-use
dma1chan4 | in-use
dma1chan5 | in-use
For slave channels we can show the device and the channel name for which
a given channel was requested.
For non-slave devices the only information we know is that the channel is
in use.
DMA drivers can implement the optional dbg_summary_show callback to
provide controller specific information instead of the generic one.
It is easy to extend the generic dmaengine_summary_show() to print
additional information about the used channels.
I have taken the idea from gpiolib and clk subsystems.
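A hedged sketch of a driver-side callback, assuming the
dbg_summary_show(struct seq_file *, struct dma_device *) signature added
by this series (driver names are hypothetical):

  static void example_dbg_summary_show(struct seq_file *s,
                                       struct dma_device *dma_dev)
  {
          struct dma_chan *chan;

          list_for_each_entry(chan, &dma_dev->channels, device_node) {
                  if (chan->client_count)
                          seq_printf(s, " %-13s| in-use\n", dma_chan_name(chan));
          }
  }

  /* at registration time, typically guarded by CONFIG_DEBUG_FS */
  ed->ddev.dbg_summary_show = example_dbg_summary_show;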
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200306142839.17910-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dmaengine core warns drivers that register without a .device_release
implementation. The warning is accurate for dmaengine controllers which
can hotplug, but not for the rest.
So reduce this to a debug log.
Link: https://lore.kernel.org/r/20200306135018.2286959-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c: In function dpaa2_qdma_shutdown:
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c:795:28: warning: variable dpaa2_qdma set but not used [-Wunused-but-set-variable]
commit 3e0ca3c38d ("dmaengine: fsl-dpaa2-qdma: Adding shutdown hook")
introduced this; remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20200303131347.28392-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently there is a dereference of the null pointer m_ddev. This appears
to be a typo in the pointer name; I believe s_ddev should be used instead.
Fix this by using the correct pointer.
Fixes: eb0249d501 ("dmaengine: ti: edma: Support for interleaved mem to mem transfer")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Addresses-Coverity: ("Explicit null dereferenced")
Link: https://lore.kernel.org/r/20200226185921.351693-1-colin.king@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We need to ensure the DMA engine can be stopped in order for kexec
to start the next kernel, so add shutdown operation support.
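As a generic illustration of a shutdown hook (shown on a platform_driver
for simplicity; the actual driver sits on the fsl-mc bus and its names
differ):

  static void example_qdma_shutdown(struct platform_device *pdev)
  {
          struct example_qdma *qdma = platform_get_drvdata(pdev);

          /* quiesce all channels so kexec hands idle DMA hardware to the next kernel */
          example_qdma_halt(qdma);
  }

  static struct platform_driver example_qdma_driver = {
          /* .probe and .remove elided */
          .shutdown       = example_qdma_shutdown,
          .driver         = {
                  .name   = "example-qdma",
          },
  };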
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20200227042841.18358-1-peng.ma@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This adds a driver for the external DMA controller implemented in
Socionext UniPhier SoCs. The driver supports DMA_MEMCPY and DMA_SLAVE
modes.
Since the driver does not support transfer sizes unaligned to the burst
width, 'src_maxburst' or 'dst_maxburst' of dma_slave_config must be 1 to
transfer an arbitrary size. If the transfer size is unaligned to the
burst size, the transfer is not started and the driver prints an error
message.
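From the client's point of view, a hedged sketch of what that constraint
implies (the device address and channel setup are placeholders):

  struct dma_slave_config cfg = { };
  int ret;

  cfg.direction = DMA_DEV_TO_MEM;
  cfg.src_addr = fifo_phys_addr;                  /* hypothetical device FIFO address */
  cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
  cfg.src_maxburst = 1;                           /* needed for arbitrary transfer sizes */

  ret = dmaengine_slave_config(chan, &cfg);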
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Link: https://lore.kernel.org/r/1582271550-3403-3-git-send-email-hayashi.kunihiko@socionext.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DT for virtualized hosts has dma-cells == 2, where the second
parameter is the ATYPE for the channel.
In case of dma-cells == 1 we configure the ATYPE as 0 (the reset value).
The ATYPE defined for j721e are:
0: pointers are physical addresses (no translation)
1: pointers are intermediate addresses (PVU)
2: pointers are virtual addresses (SMMU)
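A rough sketch of how a two-cell specifier could be decoded in the OF
translation path (names and structure are illustrative, not the k3-udma
code; example_request_chan() is a hypothetical helper):

  static struct dma_chan *example_of_xlate(struct of_phandle_args *dma_spec,
                                           struct of_dma *ofdma)
  {
          u32 atype = 0;

          /* #dma-cells == 2 on virtualized hosts: args[0] is the peer thread id,
           * args[1] selects the ATYPE (0: physical, 1: PVU, 2: SMMU) */
          if (dma_spec->args_count == 2) {
                  atype = dma_spec->args[1];
                  if (atype > 2)
                          return NULL;
          }

          return example_request_chan(ofdma->of_dma_data, dma_spec->args[0], atype);
  }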
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200218143126.11361-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The result returned by the check_vma() function in the cdev ->mmap() call
needs to be handled. Add the check and return an error.
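A minimal sketch of the fixed flow (surrounding names are placeholders;
check_vma() itself is the driver's existing helper):

  static int example_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
  {
          struct example_wq *wq = filp->private_data;
          int rc;

          rc = check_vma(wq, vma, __func__);
          if (rc < 0)
                  return rc;      /* previously the return value was ignored */

          /* ... set up the portal mapping ... */
          return 0;
  }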
Fixes: 42d279f913 ("dmaengine: idxd: add char driver to expose submission portal to userland")
Reported-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158264926659.9387.14325163515683047959.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On i.MX6UL/ULL and i.MX6SX the DMA event id for the RX channel of
UART6 is '0'. To fix the broken DMA support for UART6, we change
the check for event_id0 to include '0' as a valid id.
Fixes: 1ec1e82f25 ("dmaengine: Add Freescale i.MX SDMA support")
Signed-off-by: Frieder Schrempf <frieder.schrempf@kontron.de>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200225082139.7646-1-frieder.schrempf@kontron.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Technically it is possible for DMA to be misconfigured in a way that a
cyclic DMA transfer is processed slower than it takes to complete the
cycle; in this case the DMA is aborted with a not very informative
message about the problem. Let's improve it.
Suggested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-20-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is nothing arch-specific in the driver's code, so let's enable
compile-testing for the driver.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-18-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tegra APB DMA driver is an Open Firmware driver, so it uses OF alias
naming scheme which overrides MODULE_ALIAS, meaning that MODULE_ALIAS
does nothing and could be removed safely.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-17-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver's removal was fixed by a recent commit and module load/unload
is working well now, tested on Tegra30.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-16-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMA controller shall be released on driver's removal.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-15-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It is enough to check whether the hardware is busy on suspend and to
reset it across suspend-resume because:
1. Channel's configuration is fully re-programmed on each DMA
transfer anyways.
2. Context save-restore of an active channel won't end up well without
pausing transfer prior to the context's saving, but note that every
channel shall be idling at the time of suspend, so save-restore is
not needed at all.
3. The only case where context save-restore may be useful is when
channel is in a paused state during suspend. But channel's pausing
could be supported only on Tegra114+ and this functionality wasn't
implemented by the driver for years now because there is no need for
it in upstream kernel.
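A rough sketch of what the suspend/resume pair reduces to under these
assumptions (structure and helper names are hypothetical):

  static int __maybe_unused example_dma_pm_suspend(struct device *dev)
  {
          struct example_dma *tdma = dev_get_drvdata(dev);
          unsigned int i;

          /* every channel is expected to be idle here; refuse to suspend otherwise */
          for (i = 0; i < tdma->nr_channels; i++) {
                  if (tdma->channels[i].busy)
                          return -EBUSY;
          }
          return 0;
  }

  static int __maybe_unused example_dma_pm_resume(struct device *dev)
  {
          /* channels are fully reprogrammed per transfer, so a reset is enough */
          return example_dma_hw_init(dev_get_drvdata(dev));
  }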
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-14-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It's a bit impractical to enable the hardware's clock at the time of DMA
channel allocation because most DMA client drivers allocate their DMA
channel at driver probe time, and thus the DMA clock is kept
always-enabled in practice, defeating the whole purpose of runtime PM.
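One way to express the alternative, as a hedged sketch (names are
hypothetical): take the runtime PM reference when a transfer is actually
issued and drop it when the transfer completes, instead of holding it for
the channel's whole lifetime.

  static void example_dma_issue_pending(struct dma_chan *dc)
  {
          struct example_dma_chan *tdc = to_example_chan(dc);

          pm_runtime_get_sync(tdc->tdma->dev);    /* clock enabled only while transferring */
          /* ... start the first pending descriptor ... */
  }

  static void example_dma_transfer_done(struct example_dma_chan *tdc)
  {
          /* ... complete the descriptor ... */
          pm_runtime_put(tdc->tdma->dev);
  }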
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-13-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are a few places in the code which check whether the pending_sg_req
list is empty despite the check already having been done. Let's remove the
duplicated checks to keep the code clean.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-12-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The runtime PM is always available on all Tegra SoCs since the commit
40b2bb1b13 ("ARM: tegra: enforce PM requirement"), so there is no
need to handle the case of unavailable RPM in the code anymore.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-11-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to re-initialize already initialized variables.
tdc->config_init is false after the driver's probe and after a channel's
freeing, so there is no need to re-initialize it on channel allocation.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-10-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch fixes a few dozen coding style problems reported by checkpatch
and prettifies the code where it makes sense.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-9-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need to kill the tasklet when the driver's probe fails because
the tasklet can't be scheduled at that time. It is also cleaner to kill the
tasklet on channel freeing rather than on driver removal; otherwise the
tasklet could perform a dummy execution after the channel has been
released, which isn't very nice.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-6-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It's incorrect to check the channel's "busy" state without taking a lock.
That shouldn't cause any real trouble, nevertheless it's always better
not to have any race conditions in the code.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-5-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The ISR tasklet could still be scheduled after DMA transfer termination;
let's add a synchronization hook which blocks until the tasklet has
finished.
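A minimal sketch of such a hook via the dmaengine device_synchronize
callback (channel structure names are hypothetical):

  static void example_dma_synchronize(struct dma_chan *dc)
  {
          struct example_dma_chan *tdc = to_example_chan(dc);

          /* block until a possibly still-scheduled completion tasklet has run */
          tasklet_kill(&tdc->tasklet);
  }

  /* wired up at registration time */
  tdma->dma_dev.device_synchronize = example_dma_synchronize;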
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20200209163356.6439-4-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The interrupt handler puts a half-completed DMA descriptor on a free list
and then schedules a tasklet to process the bottom half of the descriptor,
which executes the client's callback. This creates the possibility of
picking up a busy descriptor from the free list. Thus, let's disallow
descriptor re-use until it is fully processed.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200209163356.6439-3-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
I was doing some experiments with I2C and noticed that the Tegra APB DMA
driver crashes some time after I2C DMA transfer termination. The crash
happens because tegra_dma_terminate_all() bails out immediately if the
pending list is empty, and thus it doesn't release the half-completed
descriptors, which then get re-used before the ISR tasklet kicks in.
tegra-i2c 7000c400.i2c: DMA transfer timeout
elants_i2c 0-0010: elants_i2c_irq: failed to read data: -110
------------[ cut here ]------------
WARNING: CPU: 0 PID: 142 at lib/list_debug.c:45 __list_del_entry_valid+0x45/0xac
list_del corruption, ddbaac44->next is LIST_POISON1 (00000100)
Modules linked in:
CPU: 0 PID: 142 Comm: kworker/0:2 Not tainted 5.5.0-rc2-next-20191220-00175-gc3605715758d-dirty #538
Hardware name: NVIDIA Tegra SoC (Flattened Device Tree)
Workqueue: events_freezable_power_ thermal_zone_device_check
[<c010e5c5>] (unwind_backtrace) from [<c010a1c5>] (show_stack+0x11/0x14)
[<c010a1c5>] (show_stack) from [<c0973925>] (dump_stack+0x85/0x94)
[<c0973925>] (dump_stack) from [<c011f529>] (__warn+0xc1/0xc4)
[<c011f529>] (__warn) from [<c011f7e9>] (warn_slowpath_fmt+0x61/0x78)
[<c011f7e9>] (warn_slowpath_fmt) from [<c042497d>] (__list_del_entry_valid+0x45/0xac)
[<c042497d>] (__list_del_entry_valid) from [<c047a87f>] (tegra_dma_tasklet+0x5b/0x154)
[<c047a87f>] (tegra_dma_tasklet) from [<c0124799>] (tasklet_action_common.constprop.0+0x41/0x7c)
[<c0124799>] (tasklet_action_common.constprop.0) from [<c01022ab>] (__do_softirq+0xd3/0x2a8)
[<c01022ab>] (__do_softirq) from [<c0124683>] (irq_exit+0x7b/0x98)
[<c0124683>] (irq_exit) from [<c0168c19>] (__handle_domain_irq+0x45/0x80)
[<c0168c19>] (__handle_domain_irq) from [<c043e429>] (gic_handle_irq+0x45/0x7c)
[<c043e429>] (gic_handle_irq) from [<c0101aa5>] (__irq_svc+0x65/0x94)
Exception stack(0xde2ebb90 to 0xde2ebbd8)
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200209163356.6439-2-digetx@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tasklets run with all interrupts enabled. This means that we should
replace all the (already present) spin_lock_irqsave() uses in the tasklet
with spin_lock_irq(), to protect against being interrupted by an IRQ that
tries to take the same lock (via calls to device_prep_dma_* for example).
spin_lock() and spin_lock_bh() in tasklets are not enough to protect from
IRQs; update these to spin_lock_irq().
at_xdmac_advance_work() can be called with all interrupts enabled (when
called from the tasklet) or with interrupts disabled (when called from
at_xdmac_issue_pending). Move the locking into the callers so that
spin_lock_irq() and spin_lock_irqsave() can be used for these cases.
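A generic sketch of the tasklet-side change (names are illustrative):

  /* runs in softirq context with hardware interrupts enabled */
  static void example_tasklet(unsigned long data)
  {
          struct example_chan *chan = (struct example_chan *)data;

          /* spin_lock()/spin_lock_bh() would not keep the device IRQ handler out */
          spin_lock_irq(&chan->lock);
          /* ... complete finished descriptors ... */
          spin_unlock_irq(&chan->lock);
  }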
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-10-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no need for locking in device_alloc_chan_resources(),
the DMA core takes care of it by using a dma_list_mutex around
the DMA devices.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-8-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The code in question is already in the else branch of
'if (at_xdmac_chan_is_cyclic(atchan))'; drop the redundant check.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-7-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following deadlocks:
1/ atc_handle_cyclic() and atc_chain_complete() called
dmaengine_desc_get_callback_invoke() while wrongly holding the
atchan->lock. Clients can set the callback to dmaengine_terminate_sync(),
which will end up trying to take the same lock, thus a deadlock occurred.
2/ dma_run_dependencies() was called with the atchan->lock held, but the
method calls device_issue_pending(), which tries to take the same lock,
so a deadlock occurred.
The driver must not hold the lock when invoking the callback or when
running dependencies. Releasing the spinlock within a called function
before invoking the callback is not a nice thing to do -> called functions
become non-atomic when called within an atomic region. Thus the lock is
now taken in the child routines wherever it is needed.
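A hedged sketch of the resulting completion path (structure names are
illustrative; the dmaengine callback helpers are the core's existing ones):

  static void example_chain_complete(struct example_chan *atchan,
                                     struct example_desc *desc)
  {
          struct dmaengine_desc_callback cb;

          spin_lock_irq(&atchan->lock);
          dmaengine_desc_get_callback(&desc->txd, &cb);
          list_move_tail(&desc->desc_node, &atchan->free_list);
          spin_unlock_irq(&atchan->lock);

          /* the callback may call dmaengine_terminate_sync(): the lock must be dropped */
          dmaengine_desc_callback_invoke(&cb, NULL);
          dma_run_dependencies(&desc->txd);
  }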
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-6-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This avoids sleeping without depleting the emergency pool.
The rationale is that in most cases a DMA device is either offloading an
operation that will automatically fall back to software when the
descriptor allocation fails, or we can simply poll and wait for the DMA
device to release some in-use descriptors.
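In practice this is the allocation flag used in the descriptor prep path;
a minimal sketch (the descriptor type is hypothetical):

  /* prep path: must not sleep, but should not dip into the atomic reserves either */
  desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
  if (!desc)
          return NULL;    /* the client can poll and retry, or fall back to a CPU copy */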
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/20200123140237.125799-5-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>