Add support for probing the newer HW and also organize the MSI-capable
hardware into an array for easier maintenance.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver misses interrupts if two requests are queued up at the same
time, while the interrupt handler is servicing a request that was just
delivered.
The ISR clears the interrupt at the end, but it could be clearing the
interrupt for an outstanding event. Therefore, the second interrupt never
arrives.
Clear the interrupt first and then check for completions.
Also, make sure that request start and interrupt clear do not overlap in
time by using a spinlock.
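For illustration, a minimal sketch of the reordered handler, assuming a
hypothetical device structure and register names (the FOO_*/foo_*
identifiers are illustrative, not the driver's actual ones):

static irqreturn_t foo_dma_irq(int irq, void *data)
{
        struct foo_dev *fdev = data;
        unsigned long flags;
        u32 status;

        /*
         * Ack the cause first, so an event raised while we walk the
         * completions is not cleared away, and serialize against the
         * request-start path with the same lock.
         */
        spin_lock_irqsave(&fdev->lock, flags);
        status = readl(fdev->regs + FOO_INT_STATUS);
        writel(status, fdev->regs + FOO_INT_CLR);
        spin_unlock_irqrestore(&fdev->lock, flags);

        /* Only now look for completed requests. */
        foo_process_completions(fdev);

        return IRQ_HANDLED;
}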
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the error path of jz4740_dma_probe(), call clk_disable_unprepare() to
clean up.
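For illustration, a minimal sketch of the intended unwinding (variable and
label names are illustrative):

        ret = clk_prepare_enable(dmadev->clk);
        if (ret)
                return ret;

        ret = dma_async_device_register(dd);
        if (ret)
                goto err_clk;

        return 0;

err_clk:
        clk_disable_unprepare(dmadev->clk);
        return ret;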
Found by Linux Driver Verification project (linuxtesting.org).
Fixes: 25ce6c35fe ("MIPS: jz4740: Remove custom DMA API")
Signed-off-by: Tobias Jordan <Tobias.Jordan@elektrobit.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Trivial fix to spelling mistake in dev_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations of dmatest can produce
undesirable results (e.g., corrupted spinlocks).
Commit a9df21e34b ("dmaengine: dmatest: warn user when dma test times
out") attempted to WARN the user that the stack was likely corrupted but
did not fix the actual issue.
This patch fixes the issue by pushing the wait queue and callback
structs into the thread structure. If a failure occurs due to time,
dmaengine_terminate_all will force the callback to safely call
wake_up_all() without possibility of using a freed pointer.
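A hedged sketch of the data-structure change (field list abridged): the wait
queue and the done flag live in the long-lived per-thread structure instead
of on the test function's stack, so the DMA callback can never touch freed
stack memory.

struct dmatest_thread {
        struct list_head        node;
        struct dmatest_info     *info;
        struct task_struct      *task;
        struct dma_chan         *chan;
        /* ... */
        wait_queue_head_t       done_wait;      /* was on the stack */
        struct dmatest_done     test_done;      /* was on the stack */
        bool                    done;
};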
Cc: stable@vger.kernel.org
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Fixes: adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Suggested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that READ_ONCE() implies smp_read_barrier_depends(), the
__cleanup() and ioat_abort_descs() functions no longer need their
smp_read_barrier_depends() calls, which this commit removes.
It is actually not entirely clear why this driver ever included
smp_read_barrier_depends() given that it appears to be x86-only and
given that smp_read_barrier_depends() has no effect whatsoever except
on DEC Alpha.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <dmaengine@vger.kernel.org>
Fix ptr_ret.cocci warnings:
drivers/dma/mic_x100_dma.c:483:1-3: WARNING: PTR_ERR_OR_ZERO can be used
Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR
Generated by: scripts/coccinelle/api/ptr_ret.cocci
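The general shape of the transformation, with an illustrative variable name:

        /* before */
        if (IS_ERR(chan))
                return PTR_ERR(chan);
        return 0;

        /* after */
        return PTR_ERR_OR_ZERO(chan);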
Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Even with the introduced vchan_synchronize() we can face a race when
terminating a cyclic transfer.
If terminate_all is called after the interrupt handler has called
vchan_cyclic_callback(), but before the vchan_complete tasklet has run,
vc->cyclic is set to the cyclic descriptor, but the descriptor itself was
already freed up in the driver's terminate_all() callback.
When vchan_complete() is executed it will try to fetch the vc->cyclic
vdesc, but the pointer now points to freed memory, leading to a
(hard to reproduce) kernel crash.
In order to fix this, drivers should:
- call vchan_terminate_vdesc() from their terminate_all callback instead
of calling their free_desc function to free up the descriptor.
- implement the device_synchronize callback and call vchan_synchronize().
This way we can make sure that the descriptor is only freed up after the
vchan callback was executed in a safe manner.
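A minimal sketch of the pattern in a hypothetical driver (the foo_* names
and the cached in-flight descriptor are illustrative), assuming the usual
virt-dma locking:

static int foo_terminate_all(struct dma_chan *chan)
{
        struct foo_chan *c = to_foo_chan(chan);
        unsigned long flags;
        LIST_HEAD(head);

        spin_lock_irqsave(&c->vc.lock, flags);
        foo_stop_hw(c);                 /* hypothetical: halt the channel */
        if (c->desc) {
                /* defer the free instead of calling the free_desc function */
                vchan_terminate_vdesc(&c->desc->vd);
                c->desc = NULL;
        }
        vchan_get_all_descriptors(&c->vc, &head);
        spin_unlock_irqrestore(&c->vc.lock, flags);

        vchan_dma_desc_free_list(&c->vc, &head);
        return 0;
}

static void foo_synchronize(struct dma_chan *chan)
{
        struct foo_chan *c = to_foo_chan(chan);

        vchan_synchronize(&c->vc);
}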
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
vchan_vdesc_fini() can be used to free or reuse a given descriptor
after it has been marked as completed.
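A sketch of how such a helper can look, assuming the existing virt-dma
descriptor lists and the per-channel desc_free callback:

static inline void vchan_vdesc_fini(struct virt_dma_desc *vd)
{
        struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);

        if (dmaengine_desc_test_reuse(&vd->tx))
                list_add(&vd->node, &vc->desc_allocated);  /* keep for reuse */
        else
                vc->desc_free(vd);                         /* really free it */
}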
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
_xt_ is dereferenced before it is NULL-checked, hence there is a
potential NULL pointer dereference.
Fix this by moving the pointer dereference after _xt_ has been NULL
checked.
This issue was detected with the help of Coccinelle.
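The general shape of the fix (simplified; the fields shown come from
struct dma_interleaved_template):

        /* before: xt is dereferenced ... */
        struct data_chunk *first = xt->sgl;

        if (!xt || !xt->numf)           /* ... and only then NULL-checked */
                return NULL;

        /* after: the NULL check comes first */
        if (!xt || !xt->numf)
                return NULL;

        first = xt->sgl;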
Fixes: 4483320e24 ("dmaengine: Use Pointer xt after NULL check.")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the last test in 'ioat_dma_self_test()' fails, we must release all
the allocated resources and not just part of them.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.
           TCR         TCRB
[SOURCE] -> [DMAC] -> [SINK]
In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, the data doesn't end up in the destination, so it shouldn't
be counted. TCRB is thus the register we should use in this case.
In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write sides are important. What matters from a data consumer's
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. This can also be important to report.
In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory, from the data consumer's point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.
Thus, in all cases we should check TCRB instead of TCR.
Without this patch, sound capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder uses a wrong residue counter which indicates the amount
transferred from the sound device, while in reality the data has not yet
been put into memory; the recorder records it anyway.
However, because the DMAC buffers data until it reaches a transferable
size, TCRB might not be updated.
For example, if the consumer doesn't know how much data can be received,
it requests a large enough size from the DMAC. But in reality, it might
receive very little data. In such a case, the DMAC just buffers it until
the transferable size is reached, and TCRB is not updated.
This buffered data will be transferred once the CHCR::DE bit is cleared,
which is what rcar_dmac_chan_halt() does. In other words, it happens when
the consumer calls dmaengine_terminate_all().
Because of this behavior, the driver needs to flush the buffered data when
it returns the "residue" (= dmaengine_tx_status()).
Otherwise, the consumer might calculate wrong values if it calls
dmaengine_tx_status() and dmaengine_terminate_all() consecutively.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMAC reads data from the source device and buffers it until it reaches
a transferable size for the sink device. Because of this behavior, the DMAC
may be holding buffered data.
The CHCR DE bit controls DMA transfer enable/disable.
If the DE bit is cleared during data transfer, or during buffering,
the buffered data is flushed if the source device is a peripheral device
(the buffered data is discarded if the source device is memory).
Because of this behavior, the driver should ensure that the DE bit is
actually 0 after clearing it.
This patch adds a new rcar_dmac_chcr_de_barrier() and calls it after CHCR
register accesses.
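A hedged sketch of such a barrier, assuming the driver's existing CHCR
register helpers; the polling bound and the error handling are illustrative:

static void rcar_dmac_chcr_de_barrier(struct rcar_dmac_chan *chan)
{
        unsigned int i;

        /*
         * Wait for the hardware to report DE as cleared, so that any
         * buffered data has been flushed before the caller continues.
         */
        for (i = 0; i < 1024; i++) {
                if (!(rcar_dmac_chan_read(chan, RCAR_DMACHCR) &
                      RCAR_DMACHCR_DE))
                        return;
                udelay(1);
        }

        dev_err(chan->chan.device->dev, "CHCR DE check error\n");
}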
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This allows a DMA client to issue a non-flow-controlled TX. In particular
it is needed for the fuse driver, which reads fuse registers using APBDMA
to work around a HW bug that results in a hang when the CPU and DMA perform
simultaneous accesses to the fuse peripheral.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Updates for this cycle include:
- New driver for Spreadtrum dma controller, ST MDMA and DMAMUX controllers
- PM support for IMG MDC drivers
- Updates to bcm-sba-raid driver and improvements to sun6i driver
- Subsystem conversion for:
- timers to use timer_setup()
- remove usage of PCI pool API
- usage of %p format specifier
- Minor updates to bunch of drivers
Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"Updates for this cycle include:
- new driver for Spreadtrum dma controller, ST MDMA and DMAMUX
controllers
- PM support for IMG MDC drivers
- updates to bcm-sba-raid driver and improvements to sun6i driver
- subsystem conversion for:
- timers to use timer_setup()
- remove usage of PCI pool API
- usage of %p format specifier
- minor updates to bunch of drivers"
* tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
dmaengine: dmatest: warn user when dma test times out
dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
dmaengine: stm32_mdma: activate pack/unpack feature
dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
MAINTAINERS: Step down from a co-maintaner of DW DMAC driver
dmaengine: pch_dma: Replace PCI pool old API
dmaengine: Convert timers to use timer_setup()
dmaengine: sprd: Add Spreadtrum DMA driver
dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
dmaengine: sun6i: Retrieve channel count/max request from devicetree
dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
dmaengine: bcm-sba-raid: Use common GPL comment header
dmaengine: bcm-sba-raid: Use only single mailbox channel
dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
dmaengine: pl330: fix descriptor allocation fail
dmaengine: rcar-dmac: use TCRB instead of TCR for residue
dmaengine: sun6i: Add support for Allwinner A64 and compatibles
arm64: allwinner: a64: Add devicetree binding for DMA controller
...
The 0x1f mask used is only valid for the am335x family of SoCs; a different
family using this type of crossbar might have a different number of
selectable events. For the am43xx family, for example, a 0x3f mask should
have been used.
Instead of trying to handle each family's mask, just use the u8 type to
store the mux value, since the event offsets are aligned to byte offsets.
Fixes: 42dbdcc6bf ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations of dmatest can produce
undesirable results (e.g., corrupted spinlocks). Ideally, this would be
cleaned up in the thread handler, but at the very least, the kernel
is left in a very precarious scenario that can lead to some long debug
sessions when the crash comes later.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This reverts commit 847449f23dcb ("dmaengine: rcar-dmac: use TCRB instead
of TCR for residue") as it breaks the serial console.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the source and destination bus widths differ, the MDMA pack/unpack
feature has to be activated to handle the alignment.
This pack/unpack feature requires both the source/destination addresses
and the buffer length to be aligned on the bus width.
Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need 0x prefix so drop that.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need 0x prefix so drop that.
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging was:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier                             # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                        270
GPL-2.0+ WITH Linux-syscall-note                       169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
LGPL-2.1+ WITH Linux-syscall-note                       15
GPL-1.0+ WITH Linux-syscall-note                        14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
LGPL-2.0+ WITH Linux-syscall-note                        4
LGPL-2.1 WITH Linux-syscall-note                         3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The PCI pool API is deprecated. This commit replaces the old PCI pool
API calls with the appropriate DMA pool API functions.
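The conversion is mechanical; an illustrative before/after (the pool name
and element type are placeholders):

        /* before: deprecated PCI pool wrappers */
        pool = pci_pool_create("pd_desc_pool", pdev,
                               sizeof(struct pch_dma_desc), 4, 0);
        desc = pci_pool_alloc(pool, GFP_ATOMIC, &dma_addr);
        pci_pool_free(pool, desc, dma_addr);
        pci_pool_destroy(pool);

        /* after: the equivalent DMA pool API */
        pool = dma_pool_create("pd_desc_pool", &pdev->dev,
                               sizeof(struct pch_dma_desc), 4, 0);
        desc = dma_pool_alloc(pool, GFP_ATOMIC, &dma_addr);
        dma_pool_free(pool, desc, dma_addr);
        dma_pool_destroy(pool);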
Signed-off-by: Romain Perier <romain.perier@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
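The general shape of the conversion (struct and field names are illustrative):

/* before: the callback receives an opaque unsigned long */
static void foo_watchdog(unsigned long data)
{
        struct foo_chan *c = (struct foo_chan *)data;
        /* ... */
}
        setup_timer(&c->watchdog, foo_watchdog, (unsigned long)c);

/* after: the callback receives the timer and recovers its container */
static void foo_watchdog(struct timer_list *t)
{
        struct foo_chan *c = from_timer(c, t, watchdog);
        /* ... */
}
        timer_setup(&c->watchdog, foo_watchdog, 0);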
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds the DMA controller driver for the Spreadtrum SC9860 platform.
Signed-off-by: Baolin Wang <baolin.wang@spreadtrum.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid introduction of a new compatible for each small SoC/DMA controller
variation, move the definition of the channel count to the devicetree.
The number of vchans is no longer explicit, but limited by the highest
port/DMA request number. The result is a slight overallocation for SoCs
with a sparse port mapping.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
By default, we build the Broadcom SBA RAID driver as a loadable module for
iProc SoCs so that the kernel image is a little smaller and we load the SBA
RAID driver only when required.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch makes the comment header of Broadcom SBA RAID driver
similar to the GPL comment header used across Broadcom driver
sources.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Each mailbox channel used by Broadcom SBA RAID driver is
a separate HW ring.
Currently, the Broadcom SBA RAID driver creates one DMA channel
using one or more mailbox channels. When we are using more than one
mailbox channel for a DMA channel, the sba_requests are distributed
evenly among the mailbox channels, which results in sba_requests being
completed out of order.
The above described out-of-order completion of sba_requests breaks the
dma_async_is_complete() API because it assumes that DMA cookies are
completed in order.
To ensure correct behaviour of the dma_async_is_complete() API, this
patch updates the Broadcom SBA RAID driver to use only a single mailbox
channel. If additional mailbox channels are specified in DT then those
will be ignored.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As per the documentation in drivers/dma/dmaengine.h, the
dma_cookie_complete() API should be called with the lock
held.
This patch ensures that Broadcom SBA RAID driver calls
the dma_cookie_complete() API with reqs_lock held.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit edf10919 ("dmaengine: altera: fix spinlock usage") missed changing
2 occurrences of spin_unlock_bh() to spin_unlock_irqrestore().
This patch fixes this by moving to the IRQ-safe call in the error
paths as well.
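The shape of the fix in one of the error paths (condition simplified):

        spin_lock_irqsave(&dmadev->lock, flags);

        if (!desc) {                    /* error path */
                /* was spin_unlock_bh(), which does not restore IRQ state */
                spin_unlock_irqrestore(&dmadev->lock, flags);
                return NULL;
        }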
Fixes: edf10919 ("dmaengine: altera: fix spinlock usage")
Signed-off-by: Stefan Roese <sr@denx.de>
Reviewed-by: Sylvain Lesne <lesne@alse-fr.com>
[add fixes tag and fix typo in log]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If two concurrent threads call pl330_get_desc() when the DMAC descriptor
pool is empty, it is possible that the allocation for one of the threads
will fail with the message:
kernel: dma-pl330 20078000.dma-controller: pl330_get_desc:2469 ALERT!
Here is how that can happen. Thread A calls pl330_get_desc() to get a
descriptor. If the DMAC descriptor pool is empty, pl330_get_desc() allocates
a new descriptor on the shared pool using add_desc() and then gets the newly
allocated descriptor using pluck_desc(). At the same time thread B calls
pluck_desc() and takes the newly allocated descriptor. In that case the
descriptor allocation for thread A will fail.
Using an on-stack pool for the new descriptor avoids the issue described.
The patch modifies pl330_get_desc() to use an on-stack pool for allocating
new descriptors.
Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.
           TCR         TCRB
[SOURCE] -> [DMAC] -> [SINK]
In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, the data doesn't end up in the destination, so it shouldn't
be counted. TCRB is thus the register we should use in this case.
In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write sides are important. What matters from a data consumer's
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. This can also be important to report.
In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory, from the data consumer's point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.
Thus, in all cases we should check TCRB instead of TCR.
Without this patch, sound capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder uses a wrong residue counter which indicates the amount
transferred from the sound device, while in reality the data has not yet
been put into memory; the recorder records it anyway.
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
[Kuninori: added detail information in log]
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The A64 SoC has the same DMA engine as the H3 (sun8i), with a
reduced number of physical channels. To allow future reuse of the
compatible, leave the channel count etc. in the config data blank
and retrieve it from the devicetree.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Preparatory patch: If the same compatible is used for different SoCs which
have a common register layout, but a different number of channels, the
channel count can no longer be stored in the config. Store it in the
device structure instead.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The H3 supports burst lengths of 1, 4, 8 and 16 transfers, each with
a width of 1, 2, 4 or 8 bytes.
The register value for the width is log2-encoded; change the
conversion function to provide the correct value for width == 8.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current code mixes three distinct operations when transforming
the slave config to register settings:
1. special handling of DMA_SLAVE_BUSWIDTH_UNDEFINED, maxburst == 0
2. range checking
3. conversion of raw to register values
As the range checks depend on the specific SoC, move these out of the
conversion to distinct operations.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For the H3, the burst length field offsets in the channel configuration
register differ from earlier SoC generations.
Using the A31 register macros actually configured the H3 controller
to always do bursts of length 1, which although working leads to higher
bus utilisation.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The A83T uses a compatible string different from the A23, but requires
the same clock autogating register setting.
The H3 also requires setting the clock autogating register, but has
the register at a different offset.
Add three suitable callbacks for the existing controller generations
and set them in the controller config structure.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add runtime PM support to disable the clock when the hardware is not in use.
The existing clk_prepare_enable() is removed from probe() as the clock
is no longer permanently enabled.
Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add suspend / resume handling using suspend_late and resume_early, and
check that all channels are idle before suspending.
DMA drivers should use suspend_late / resume_early to ensure that all
DMA client devices are suspended before the DMA device itself, and that
client devices are resumed after the DMA device. This avoids suspending
the DMA device while transactions are still active.
It is the responsibility of client drivers to terminate all DMA
transactions in their suspend handlers, so there should be no active
transactions by the time suspend_late is called.
There's no need to save and restore registers for MDC during suspend /
resume, as all transactions will be terminated as a result of the
suspend, and all required registers are programmed anyway at the start
of any new transactions following resume.
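A sketch of the resulting PM wiring, assuming driver-local callbacks with
illustrative names; the idle check and the use of the runtime-PM helpers are
an assumption, not a description of the final code:

static int img_mdc_suspend_late(struct device *dev)
{
        struct mdc_dma *mdma = dev_get_drvdata(dev);

        /* Refuse to suspend while any channel is still busy. */
        if (!img_mdc_channels_idle(mdma))       /* hypothetical helper */
                return -EBUSY;

        return pm_runtime_force_suspend(dev);
}

static const struct dev_pm_ops img_mdc_pm_ops = {
        SET_RUNTIME_PM_OPS(img_mdc_runtime_suspend,
                           img_mdc_runtime_resume, NULL)
        SET_LATE_SYSTEM_SLEEP_PM_OPS(img_mdc_suspend_late,
                                     img_mdc_resume_early)
};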
Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the of_device_get_match_data() helper instead of open coding.
Note that when used with DT, there's always a valid match.
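The general shape of the conversion (the match-table name is illustrative):

        /* before */
        const struct of_device_id *match;

        match = of_match_device(foo_of_match, &pdev->dev);
        data = match->data;     /* a NULL match would crash here */

        /* after */
        data = of_device_get_match_data(&pdev->dev);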
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Set the device's max_burst to 16777215 (EN is a 24-bit unsigned value) so
clients can take this into consideration when setting up the transfer.
During slave transfer preparation, check if the requested maxburst is valid.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Set the device's max_burst to 32767 (CIDX is a 16-bit signed value) so
clients can take this into consideration when setting up the transfer.
During slave transfer preparation, check if the requested maxburst is valid.
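A hedged sketch of the two halves, using the generic dmaengine structures
(the exact placement in the driver differs):

        /* advertise the limit when setting up the dma_device */
        dma_dev->max_burst = SZ_32K - 1;   /* CIDX is a 16-bit signed value */

        /* reject out-of-range requests when preparing a slave transfer */
        if (config->src_maxburst > dma_dev->max_burst)
                return NULL;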
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
hwdesc is being initialized to desc->hwdesc but this is never read
as hwdesc is overwritten in a for-loop. Remove the redundant
initialization and move the declaration of hwdesc into the for-loop.
Cleans up clang warning:
Value stored to 'hwdesc' during its initialization is never read
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Without CONFIG_OF we get a build warning:
warning: (STM32_MDMA) selects DMA_OF which has unmet direct dependencies (DMADEVICES && OF)
This adds a dependency on CONFIG_OF. Since this means
we no longer need to select 'DMA_OF', I'm dropping that line
as well.
Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pointer print was using an explicit cast and printing with %x, which
causes the warning below on some architectures, so print using the %p format
specifier.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds the driver for the STM32 MDMA controller.
Master Direct memory access (MDMA) is used in order to provide high-speed
data transfer between memory and memory or between peripherals and memory.
MDMA controller provides a master AXI interface for main memory and
peripheral registers access (system access port) and a master AHB
interface only for Cortex-M7 TCM memory access (TCM access port).
MDMA works in conjunction with the standard DMA controllers (DMA1 or DMA2).
It offers up to 64 channels, each dedicated to managing memory access
requests from one of the DMA stream memory buffers or other peripherals
(with an integrated FIFO).
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since this lock is acquired in both process and IRQ context, failing
to disable IRQs when trying to acquire the lock in process context can
lead to deadlocks.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 6084fc2ec4 ("dmaengine: altera: Use macros instead of structs
to describe the registers") introduced a minus sign before a register
offset.
This leads to soft-locks of the DMA controller, since reading the last
status byte is required to pop the response from the FIFO. Failing to
do so will lead to a full FIFO, which means that the DMA controller
will stop processing descriptors.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add DMA filters for the sa11x0 DMA channels. This will allow us to
migrate away from directly using the DMA filter function in drivers.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch implements the STM32 DMAMUX driver.
The DMAMUX request multiplexer allows routing a DMA request line between
the peripherals and the DMA controllers of the product. The routing
function is ensured by a programmable multi-channel DMA request line
multiplexer. Each channel selects a unique DMA request line,
unconditionally or synchronously with events from its DMAMUX
synchronization inputs. The DMAMUX may also be used as a DMA request
generator from programmable events on its input trigger signals.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The BAM dmaengine has a circular FIFO to which we add HW descriptors
that describe the transaction. The FIFO has space for about 4096 HW
descriptors.
Currently we add one descriptor, wait for it to complete with an
interrupt, and then add the next pending descriptor. In this way the
FIFO is underutilized, since only one descriptor is processed at a time,
although there is space in the FIFO for the BAM to process more.
Instead, keep adding descriptors to the FIFO until it is full; that
allows the BAM to continue working on the next descriptor immediately
after signalling the completion interrupt for the previous descriptor.
Also, when the client has not set DMA_PREP_INTERRUPT for a descriptor,
do not configure the BAM to trigger an interrupt upon completion of that
descriptor. This way we get an interrupt only for descriptors for which
DMA_PREP_INTERRUPT was requested, and there we signal completion of all
the previously completed descriptors. So we still do callbacks for all
requested descriptors, but the number of interrupts is reduced.
CURRENT:
             ------      -------      ---------------
            |DES 0 |    |DESC 1|     |DESC 2 + INT  |
             ------      -------      ---------------
                |           |               |
                |           |               |
 INTERRUPT:   (INT)       (INT)           (INT)
 CALLBACK:     (CB)        (CB)            (CB)
MTD_SPEEDTEST READ PAGE: 3560 KiB/s
MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
IOZONE READ: 2456 KB/s
IOZONE WRITE: 1230 KB/s
bam dma interrupts (after tests): 96508
CHANGE:
             ------      -------      ---------------
            |DES 0 |    |DESC 1|     |DESC 2 + INT  |
             ------      -------      ---------------
                                            |
                                            |
                                          (INT)
                                    (CB for 0, 1, 2)
MTD_SPEEDTEST READ PAGE: 3860 KiB/s
MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
IOZONE READ: 2677 KB/s
IOZONE WRITE: 1308 KB/s
bam dma interrupts (after tests): 58806
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When looking for an unused xbar_out lane we should also protect the set_bit()
call with the same mutex, to protect against concurrent threads picking the
same ID.
Fixes: ec9bfa1e1a ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Using of_device_get_match_data() reduces the code size a bit.
Furthermore, it prevents an improbable NULL dereference when
of_match_device() returns NULL.
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Memory-to-memory transfers do not have any special alignment needs
regarding the ACNT array size, but if one of the areas is in a memory-mapped
region (like PCIe memory), we need to make sure that the ACNT array size
is aligned with the memcpy parameters.
Before the "dmaengine: edma: Optimize memcpy operation" change, the memcpy
was set up in a different way: acnt == number of bytes in a word based on
__ffs(src | dest | len), with bcnt and ccnt looping over the necessary number
of words to complete the transfer.
Instead of reverting the commit we can fix it to make sure that the ACNT size
is aligned to the transfer.
Fixes: df6694f803 ("dmaengine: edma: Optimize memcpy operation")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver already supports DMA_DEV_TO_DEV in sdma_config(),
DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in
sdma_prep_slave_sg(). So this patch adds them to the lists.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The enum xdma_ip_type is only used inside the Xilinx DMA driver and not
exported to any consumers (nor should it be). So move it from the global
header to the driver file itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When running in software cyclic mode the driver currently does not go back
to the first segment once the last segment has been reached, effectively
making the transfer non-cyclic.
Fix this by going back to the first segment once the last segment has been
reached for cyclic transfers.
Special care needs to be taken to prevent a segment from being submitted
multiple times concurrently, which could happen for transfers with a number
of segments that is smaller than the DMA controller's internal queue depth.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In hardware cyclic mode the submitted segment is repeated. This means
hardware cyclic mode can only be used if the transfer has a single segment.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
- Removal of DMA_SG support as we have no users for this feature
- New driver for Altera / Intel mSGDMA IP core
- Support for memset in dmatest and qcom_hidma driver
- Update for non cyclic mode in k3dma, bunch of update in bam_dma, bcm sba-raid
- Constify device ids across drivers
Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This one features the usual updates to the drivers and one good part
of removing DA_SG from core as it has no users.
Summary:
- Remove DMA_SG support as we have no users for this feature
- New driver for Altera / Intel mSGDMA IP core
- Support for memset in dmatest and qcom_hidma driver
- Update for non cyclic mode in k3dma, bunch of update in bam_dma,
bcm sba-raid
- Constify device ids across drivers"
* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
dmaengine: sun6i: support V3s SoC variant
dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
dmaengine: rcar-dmac: document R8A77970 bindings
dmaengine: xilinx_dma: Fix error code format specifier
dmaengine: altera: Use macros instead of structs to describe the registers
dmaengine: ti-dma-crossbar: Fix dra7 reserve function
dmaengine: pl330: constify amba_id
dmaengine: pl08x: constify amba_id
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
dmaengine: bcm-sba-raid: Add debugfs support
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
dmaengine: bcm-sba-raid: Increase number of free sba_request
dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
...
Allwinner V3s has a DMA engine similar to the ones from A31, but with
fewer channels and DRQs.
Add support for it.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Originally we enabled a special gate bit when the compatible indicated
A23/33.
But according to BSP sources and user manuals, more SoCs will need this
gate bit.
So make it a common quirk configured in the config struct.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
'err' is a signed int and error codes are typically negative numbers, so
use '%d' instead of '%u' to format the error code in the error message.
Fixes: ba16db36b5 ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch moves from a struct declaration for the DMA controller
registers to macros with offsets from the base address. This is mainly
done to remove the sparse warnings, since the function parameter of
ioread32/iowrite32 is "void __iomem *" instead of a pointer to struct
members. With this patch applied, no sparse warning is seen anymore.
Please note that the struct for the descriptors is still kept in place,
as the code largely accesses the struct members as internal variables
before the complete struct is copied into the descriptor FIFO of the
DMA controller.
Additionally this patch also removes two warnings "variable xxx set but
not used" seen when compiling with "W=1". The registers need to be read
to flush the response FIFO, but nothing needs to be done with them. So
the code is correct here and the warning is a false one.
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMA crossbar uses the 'xbar->dma_inuse' variable to manage allocated
routes. Each bit represents the respective DMA channel: if the channel is
free, the bit is set to '0'; if the channel is allocated, the bit should be
set to '1'.
In the reserve function, the bits for the requested DMA channels are cleared,
so they are not really reserved, but freed and become ready for allocation.
Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
struct amba_id is not supposed to change at runtime. All functions
working with amba_id take const pointers, so mark the non-const structs
as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
struct amba_id is not supposed to change at runtime. All functions
working with amba_id take const pointers, so mark the non-const structs
as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The SBA_REQUEST_STATE_COMPLETED state was added to keep track
of sba_requests which got completed but cannot be freed because the
underlying Async Tx descriptor was not ACKed by the DMA client.
Instead of the above, we can free an sba_request with a non-ACKed
Async Tx descriptor, and sba_alloc_request() will ensure that
it always allocates sba_requests with ACKed Async Tx descriptors.
This alternate approach makes SBA_REQUEST_STATE_COMPLETED state
redundant hence this patch removes it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We should explicitly ACK the mailbox message because after
sending the message we can learn the send status via the error
attribute of brcm_message.
This will also help SBA-RAID to use "txdone_ack" method
whenever mailbox controller supports it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds debugfs support to report stats, which in turn
will help debug hang or error situations.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The SBA_REQUEST_STATE_RECEIVED state is now redundant because
received sba_requests are immediately freed or moved to the completed
list in sba_process_received_request().
This patch removes redundant SBA_REQUEST_STATE_RECEIVED state.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, sba_process_deferred_requests() handles both pending
and completed sba_request which is unnecessary overhead for
sba_issue_pending() because completed sba_request handling is
not required in sba_issue_pending().
This patch breaks sba_process_deferred_requests() into two parts
sba_process_received_request() and _sba_process_pending_requests().
The sba_issue_pending() will only process pending sba_request
by calling _sba_process_pending_requests(). This will improve
sba_issue_pending().
The sba_receive_message() will only process received sba_request
by calling sba_process_received_request() for each received
sba_request. The sba_process_received_request() will also call
_sba_process_pending_requests() after handling received sba_request
because we might have pending sba_request not submitted by previous
call to sba_issue_pending().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We should pre-ack the async tx descriptor at the time of
allocating an sba_request (just like other RAID drivers).
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When setting up a RAID array on several NVMe disks we observed that
sba_alloc_request() starts failing (due to no free requests left)
and RAID array setup becomes very slow.
To improve performance, we do an mbox channel peek when we have
no free requests. This improves performance of RAID array setup
because mbox requests that were completed but not yet processed by
the mbox completion worker will be processed immediately by the mbox
channel peek.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We should allocate DMA channel resources before registering the
DMA device in sba_probe() because we can get a DMA request soon
after registering the DMA device. If DMA channel resources are
not allocated before the first DMA request then the SBA-RAID driver
will crash.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pending sba_request list can become very long in real-life usage
(e.g. setting up a RAID array), which can cause sba_issue_pending() to
run for a long duration.
This patch adds a common sba_process_deferred_requests() to process a
few completed and pending requests so that it finishes in a short
duration. We use this common sba_process_deferred_requests() in
both sba_issue_pending() and sba_receive_message().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, we have only 1024 free sba_requests created
by sba_prealloc_channel_resources(). This is too low,
and the prep_xxx() callbacks start failing more often
at the time of RAID array setup over NVMe disks.
This patch sets the number of free sba_requests created by
sba_prealloc_channel_resources() to be:
<number_of_mailbox_channels> x 8192
Due to the above, we will have a sufficient number of free
sba_requests, and prep_xxx() callback failures become very
unlikely.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, we cannot have an arbitrary number of free sba_requests
because sba_prealloc_channel_resources() allocates an array of
sba_request using devm_kcalloc(), and kcalloc() cannot provide
memory beyond a certain size.
This patch removes "reqs" (the sba_request array) from sba_device
and makes "cmds" a variable-length array (instead of a pointer) in
sba_request. This helps sba_prealloc_channel_resources() to
allocate an sba_request and the associated SBA commands in one
allocation, which in turn allows an arbitrary number of free
sba_requests.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The reqs_free_count member of sba_device is not used anywhere,
hence there is no point in tracking the number of free sba_requests.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Both resp and resp_dma are redundant in sba_request because
resp is unused and resp_dma carries the same information present
in tx.phys of sba_request. This patch removes both resp and
resp_dma from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The next_count in sba_request is redundant because the same information
is captured by next_pending_count. This patch removes next_count
from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch merges the sba_request state and fence into common
sba_request flags. The sba_request flags not only save
memory but can also be extended in the future without adding
new members.
We also make each sba_request state a separate bit in the
sba_request flags to help debug situations where an
sba_request is accidentally in two states.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We don't need to hold "sba->reqs_lock" for a long time
in sba_alloc_request() because lock protection is not
required when initializing the members of "struct sba_request".
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch makes the following improvements to comments:
1. Make section comments consistent across the driver by
avoiding " SBA " in some of the comments
2. Add/update a few more section comments
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the DMA_PREP_CMD flag is passed to prep_slave_sg() then the peripheral
driver has passed data in the BAM command descriptor format,
and the BAM driver should set the CMD bit for each of the HW descriptors.
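From the client side the contract looks like this (sketch; the peripheral
driver has already laid out its scatterlist as BAM command descriptors):

        tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
                                     DMA_PREP_CMD | DMA_PREP_INTERRUPT);
        if (!tx)
                return -EIO;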
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMAengine core checks for mandatory operations and capability-based ones
using BUG_ON(); remove these and switch to error returns, logging the errors
gracefully.
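A hedged sketch of the pattern (not the exact core hunk): instead of a
BUG_ON() on a missing mandatory callback, log the problem and fail device
registration.

  /* Sketch: fail dma_async_device_register() gracefully. */
  if (dma_has_cap(DMA_MEMCPY, device->cap_mask) &&
      !device->device_prep_dma_memcpy) {
          dev_err(device->dev,
                  "Device claims capability DMA_MEMCPY, but op is not defined\n");
          return -EIO;
  }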
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Anton Volkov noticed that engine->dev is NULL before
of_dma_controller_register() in probe.
Thus there might be a NULL pointer dereference in
rcar_dmac_chan_start_xfer while accessing chan->chan.device->dev, which
is equal to (&dmac->engine)->dev.
For the same reason, similar problems can happen if we don't
initialize all necessary data before registering the IRQ handler.
To make the code safer, this patch initializes all necessary data
before registering the IRQ handler.
Reported-by: Anton Volkov <avolkov@ispras.ru>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 36387a2b1f ("k3dma: Fix
memory handling in preparation for cyclic mode") adds a few
calls to WARN_ON_ONCE() to track the use of ds_run/ds_done.
After the two fixes:
- dmaengine: k3dma: fix non-cyclic mode
- dmaengine: k3dma: fix re-free of the same descriptor
the behaviour of ds_run/ds_done is properly fixed.
The remaining WARN_ON_ONCE() calls are never triggered and can be
removed.
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 36387a2b1f ("k3dma: Fix
memory handling in preparation for cyclic mode") adds code
to free the descriptor in ds_done.
In cyclic mode, ds_done is never used and is always NULL,
so the added code is not executed.
In non-cyclic mode, ds_done is used as a flag: when not NULL
it signals that the descriptor has been consumed. There is no need
to free it because it will be freed by vchan_complete().
The fix reverts the code changed by the commit above:
- remove the free of the descriptor;
- initialize ds_done to NULL for the next run.
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 36387a2b1f ("k3dma: Fix
memory handling in preparation for cyclic mode") broke the
logic around ds_run/ds_done in case of non-cyclic DMA.
This went unnoticed as the only user of k3dma was the i2s
audio driver, but with a patch set to enable DMA on SPI the
issue showed up.
The fix re-applies the initialization of ds_run/ds_done in
k3_dma_start_txd() that was removed by the commit above.
Also, one of the calls to k3_dma_start_txd() is triggered by
(ds_done != NULL), so remove the noisy and useless call to
WARN_ON_ONCE().
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We observed a performance increase with DMA copies from memory
to MMIO by changing the interrupt coalescing value to 0.
The previously set value was projected on the C5xxx Xeon
platform and no longer holds true. Remove the hard-coded
value and provide a tunable in sysfs in order to allow the
user to tune this on a per-channel basis. By default this
value will be set to 0.
Example of setting the interrupt coalescing value from the
command line via sysfs:
echo 5 > /sys/devices/pci0000:00/0000:00:04.0/dma/dma0chan0/
quickdata/intr_coalesce
Reported-by: Nithin Sujir <nsujir@tintri.com>
Signed-off-by: Ujjal Singh <ujjal.singh@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit c678fa66341c: ("dmaengine: remove DMA_SG as it is dead code in
kernel") removes DMA_SG from the dmaengine subsystem but missed removing the
unused xgene_dma_invalidate_buffer() function, so remove it:
drivers/dma/xgene-dma.c:394:13: warning: ‘xgene_dma_invalidate_buffer’ defined but not used [-Wunused-function]
static void xgene_dma_invalidate_buffer(__le64 *ext8)
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit c678fa66341c: ("dmaengine: remove DMA_SG as it is dead code in
kernel") removes DMA_SG from dmaengine subsystem but missed the newly added
driver, so remove it from here as well
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are no in-kernel consumers of the DMA_SG op. Remove the operation,
dead code, and test code in dmatest.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Cc: Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: Li Yang <leoyang.li@nxp.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
clk_prepare_enable() can fail here and we must check its return value.
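The usual pattern (a sketch with generic names, not the exact Atmel hunk) is
simply to check and propagate the error:

  /* Sketch: do not ignore clk_prepare_enable() failures. */
  ret = clk_prepare_enable(clk);
  if (ret) {
          dev_err(dev, "unable to prepare/enable clock\n");
          return ret;
  }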
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Replace '%d' by '%zu' to fix the compilation warning:
"format ‘%d’ expects argument of type ‘int’, but argument has type ‘size_t’ [-Wformat=]"
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Make these const as they are only used during a copy operation.
Done using Coccinelle.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If device_node np doesn't contain a child, or the first child doesn't have
the property "reg", then hidma_mgmt_of_populate_channels() performs
deallocation on the uninitialized local variable res.
The patch initializes res to NULL.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Anton Vasilyev <vasilyev@ispras.ru>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
of_irq_get() may return 0 as well as a negative error number on failure,
while the driver only checks for negative values. The driver would then
call request_irq(0, ...) in tegra_adma_alloc_chan_resources() and never get a
valid channel interrupt.
Check for 'tdc->irq <= 0' instead, and return -ENXIO from the driver's probe
iff of_irq_get() returned 0.
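A sketch of the corrected check, matching the description above:

  /* Sketch: treat both 0 and negative values from of_irq_get() as failure. */
  tdc->irq = of_irq_get(pdev->dev.of_node, i);
  if (tdc->irq <= 0)
          return tdc->irq ?: -ENXIO;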
Fixes: f46b195799 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It's better to be explicit and use the DRIVER_ATTR_RW() and
DRIVER_ATTR_RO() macros when defining a driver's sysfs file.
Bonus is this fixes up a checkpatch.pl warning.
This is part of a series to drop DRIVER_ATTR() from the tree entirely.
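For reference, a minimal sketch of what the macro-based definition looks like
(the attribute name and backing variable are hypothetical):

  static int foo_value;   /* hypothetical driver tunable */

  static ssize_t foo_show(struct device_driver *drv, char *buf)
  {
          return scnprintf(buf, PAGE_SIZE, "%d\n", foo_value);
  }

  static ssize_t foo_store(struct device_driver *drv, const char *buf,
                           size_t count)
  {
          if (kstrtoint(buf, 0, &foo_value))
                  return -EINVAL;
          return count;
  }
  static DRIVER_ATTR_RW(foo);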
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This driver builds with warnings which can be fixed by making these
functions static.
CC [M] drivers/dma/bcm-sba-raid.o
drivers/dma/bcm-sba-raid.c:786:1: warning: no previous prototype for ‘sba_prep_dma_xor_req’ [-Wmissing-prototypes]
sba_prep_dma_xor_req(struct sba_device *sba,
^
drivers/dma/bcm-sba-raid.c:995:1: warning: no previous prototype for ‘sba_prep_dma_pq_req’ [-Wmissing-prototypes]
sba_prep_dma_pq_req(struct sba_device *sba, dma_addr_t off,
^
drivers/dma/bcm-sba-raid.c:1247:1: warning: no previous prototype for ‘sba_prep_dma_pq_single_req’ [-Wmissing-prototypes]
sba_prep_dma_pq_single_req(struct sba_device *sba, dma_addr_t off,
^
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A regression was found while testing QOS with different channels.
The QOS register offset is 0x700 rather than 0x300.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
False overriding information is presented during boot
in this scenario:
1. The first object checks the kernel command line value against zero.
2. It doesn't find it, so it sets the command line variable to the
value coming from ACPI/DT.
3. The second object is probed.
4. The second object sees that the kernel command line
override is non-zero and prints an overriding message even though
the value matches the ACPI/DT value.
hidma-mgmt QCOM8060:03: overriding max-write-burst-bytes: 1024
Add an additional check to verify that the kernel command line value
is different from the ACPI/DT value.
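In pseudocode terms the added check looks roughly like this (illustrative
names):

  /* Sketch: only report an override when the command line value actually
   * differs from the ACPI/DT value. */
  if (cmdline_val && cmdline_val != dt_val) {
          dev_info(dev, "overriding max-write-burst-bytes: %u\n", cmdline_val);
          dt_val = cmdline_val;
  }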
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The HIDMA HW supports a memset operation in addition to memcpy.
Since the memset API is now present in the kernel, bring the
memset feature to life.
The descriptor format is the same for both memcpy and memset.
The descriptor type is 4 when memset is requested.
The lowest 8 bits of the source DMA argument are used as the
fill pattern.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that we have a custom printf format specifier, convert users of
full_name to use %pOF instead. This is preparation to remove storing
of the full path string for each node.
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This driver adds support for the Altera / Intel modular Scatter-Gather
Direct Memory Access (mSGDMA) intellectual property (IP) to the Linux
DMAengine subsystem. Currently it supports the following op modes:
- DMA_MEMCPY
- DMA_SG
- DMA_SLAVE
This implementation has been tested on an Altera Cyclone FPGA connected
via PCIe, both on an ARM and an x86 platform.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Introduce a memset test into dmatest. This change allows us to test
memset-capable HW using the dmatest test procedure. The new dmatest
value for memset is 2 and it is not the default value.
The memset support shares the same code path as the other dmatest
code to reuse as much as we can.
The first value inside the source buffer is used as a pattern
to fill in the destination buffer space.
Prior to running the test, source/destination buffers are initialized
in the current code.
"The remaining bits are the inverse of a counter which increments by
one for each byte address."
The memset test fills in the upper bits of the pattern with the inverse of
the fixed counter value 1, as opposed to an incrementing value in a loop.
An example run is as follows:
echo dma0chan0 > /sys/module/dmatest/parameters/channel
echo 2 > /sys/module/dmatest/parameters/dmatest
echo 2000 > /sys/module/dmatest/parameters/timeout
echo 10 > /sys/module/dmatest/parameters/iterations
echo 1 > /sys/module/dmatest/parameters/run
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
pci_device_id are not supposed to change at runtime. All functions
working with pci_device_id provided by <linux/pci.h> work with
const pci_device_id. So mark the non-const structs as const.
File size before:
text data bss dec hex filename
12582 3056 16 15654 3d26 drivers/dma/ioat/init.o
File size After adding 'const':
text data bss dec hex filename
14773 865 16 15654 3d26 drivers/dma/ioat/init.o
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'dmaengine-4.13-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- removal of AVR32 support in dw driver as AVR32 is gone
- new driver for Broadcom stream buffer accelerator (SBA) RAID driver
- add support for Faraday Technology FTDMAC020 in amba-pl08x driver
- IOMMU support in pl330 driver
- updates to bunch of drivers
* tag 'dmaengine-4.13-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (36 commits)
dmaengine: qcom_hidma: correct API violation for submit
dmaengine: zynqmp_dma: Remove max len check in zynqmp_dma_prep_memcpy
dmaengine: tegra-apb: Really fix runtime-pm usage
dmaengine: fsl_raid: make of_device_ids const.
dmaengine: qcom_hidma: allow ACPI/DT parameters to be overridden
dmaengine: fsldma: set BWC, DAHTS and SAHTS values correctly
dmaengine: Kconfig: Simplify the help text for MXS_DMA
dmaengine: pl330: Delete unused functions
dmaengine: Replace WARN_TAINT_ONCE() with pr_warn_once()
dmaengine: Kconfig: Extend the dependency for MXS_DMA
dmaengine: mxs: Use %zu for printing a size_t variable
dmaengine: ste_dma40: Cleanup scatterlist layering violations
dmaengine: imx-dma: cleanup scatterlist layering violations
dmaengine: use proper name for the R-Car SoC
dmaengine: imx-sdma: Fix compilation warning.
dmaengine: imx-sdma: Handle return value of clk_prepare_enable
dmaengine: pl330: Add IOMMU support to slave tranfers
dmaengine: DW DMAC: Handle return value of clk_prepare_enable
dmaengine: pl08x: use GENMASK() to create bitmasks
dmaengine: pl08x: Add support for Faraday Technology FTDMAC020
...
Merge tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping infrastructure from Christoph Hellwig:
"This is the first pull request for the new dma-mapping subsystem
In this new subsystem we'll try to properly maintain all the generic
code related to dma-mapping, and will further consolidate arch code
into common helpers.
This pull request contains:
- removal of the DMA_ERROR_CODE macro, replacing it with calls to
->mapping_error so that the dma_map_ops instances are more self
contained and can be shared across architectures (me)
- removal of the ->set_dma_mask method, which duplicates the
->dma_capable one in terms of functionality, but requires more
duplicate code.
- various updates for the coherent dma pool and related arm code
(Vladimir)
- various smaller cleanups (me)"
* tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping: (56 commits)
ARM: dma-mapping: Remove traces of NOMMU code
ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
ARM: NOMMU: Introduce dma operations for noMMU
drivers: dma-mapping: allow dma_common_mmap() for NOMMU
drivers: dma-coherent: Introduce default DMA pool
drivers: dma-coherent: Account dma_pfn_offset when used with device tree
dma: Take into account dma_pfn_offset
dma-mapping: replace dmam_alloc_noncoherent with dmam_alloc_attrs
dma-mapping: remove dmam_free_noncoherent
crypto: qat - avoid an uninitialized variable warning
au1100fb: remove a bogus dma_free_nonconsistent call
MAINTAINERS: add entry for dma mapping helpers
powerpc: merge __dma_set_mask into dma_set_mask
dma-mapping: remove the set_dma_mask method
powerpc/cell: use the dma_supported method for ops switching
powerpc/cell: clean up fixed mapping dma_ops initialization
tile: remove dma_supported and mapping_error methods
xen-swiotlb: remove xen_swiotlb_set_dma_mask
arm: implement ->dma_supported instead of ->set_dma_mask
mips/loongson64: implement ->dma_supported instead of ->set_dma_mask
...
Merge tag 'sound-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound updates from Takashi Iwai:
"This development cycle resulted in a fair amount of changes in both
core and driver sides. The most significant change in ALSA core is
about PCM. Also the support of of-graph card and the new DAPM widget
for DSP are noteworthy changes in ASoC core. And there're lots of
small changes splat over the tree, as you can see in diffstat.
Below are a few highlights:
ALSA core:
- Removal of set_fs() hackery from PCM core stuff, and the code
reorganization / optimization thereafter
- Improved support of PCM ack ops, and a new ABI for improved
control/status mmap handling
- Lots of constifications in various codes
ASoC core:
- The support of of-graph card, which may work as a better generic
device for a replacement of simple-card
- New widget types intended mainly for use with DSPs
ASoC drivers:
- New drivers for Allwinner V3s SoCs
- Ensonic ES8316 codec support
- More Intel SKL and KBL works
- More device support for Intel SST Atom (mostly for cheap tablets
and 2-in-1 devices)
- Support for Rockchip PDM controllers
- Support for STM32 I2S and S/PDIF controllers
- Support for ZTE AUD96P22 codecs
HD-audio:
- Support of new Realtek codecs (ALC215/ALC285/ALC289), more quirks
for HP and Dell machines
- A few more fixes for i915 component binding"
* tag 'sound-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (418 commits)
ALSA: hda - Fix unbalance of i915 module refcount
ASoC: Intel: Skylake: Remove driver debugfs exit
ASoC: Intel: Skylake: explicitly add the headers sst-dsp.h
ALSA: hda/realtek - Remove GPIO_MASK
ALSA: hda/realtek - Fix typo of pincfg for Dell quirk
ALSA: pcm: add a documentation for tracepoints
ALSA: atmel: ac97c: fix error return code in atmel_ac97c_probe()
ALSA: x86: fix error return code in hdmi_lpe_audio_probe()
ASoC: Intel: Skylake: Add support to read firmware registers
ASoC: Intel: Skylake: Add sram address to sst_addr structure
ASoC: Intel: Skylake: Debugfs facility to dump module config
ASoC: Intel: Skylake: Add debugfs support
ASoC: fix semicolon.cocci warnings
ASoC: rt5645: Add quirk override by module option
ASoC: rsnd: make arrays path and cmd_case static const
ASoC: audio-graph-card: add widgets and routing for external amplifier support
ASoC: audio-graph-card: update bindings for amplifier support
ASoC: rt5665: calibration should be done before jack detection
ASoC: rsnd: constify dev_pm_ops structures.
ASoC: nau8825: change crosstalk-bypass property to bool type
...
Merge tag 'asoc-v4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Updates for v4.13
The big news with this release is the of-graph card, this provides a
replacement for simple-card that is much more flexible and scalable,
allowing many more systems to use a generic sound card than was possible
before:
- The of-graph card, finally merged after a long and dedicated effort
by Morimoto-san.
- New widget types intended mainly for use with DSPs.
- New drivers for Allwinner V3s SoCs, Ensonic ES8316, several classes
of x86 machine, Rockchip PDM controllers, STM32 I2S and S/PDIF
controllers and ZTE AUD96P22 CODECs.
Current code is violating the DMA Engine API by putting the submitted
requests directly into the HW queue. This causes queued transactions
to be started by another thread as soon as the first one finishes.
The DMA Engine documentation clearly states this:
"dmaengine_submit() will not start the DMA operation".
Move HW queuing of the requests into the issue_pending() routine
to comply with the API requirements, and create a new queued state for
temporarily holding the requests.
A descriptor now goes through these transitions:
free->prepared->queued->active->completed->free
as opposed to
free->prepared->active->completed->free
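A hedged sketch of the reworked flow (illustrative helper and field names,
not the literal hidma code): submit only queues the descriptor in software,
and issue_pending() hands the queued work to the hardware.

  /* Sketch: dmaengine_submit() -> software "queued" list only. */
  static dma_cookie_t foo_tx_submit(struct dma_async_tx_descriptor *txd)
  {
          struct foo_chan *fchan = to_foo_chan(txd->chan);
          dma_cookie_t cookie = dma_cookie_assign(txd);

          list_move_tail(&to_foo_desc(txd)->node, &fchan->queued);
          return cookie;
  }

  /* Sketch: issue_pending() moves queued -> active and kicks the engine. */
  static void foo_issue_pending(struct dma_chan *dchan)
  {
          struct foo_chan *fchan = to_foo_chan(dchan);

          list_splice_tail_init(&fchan->queued, &fchan->active);
          foo_start_hw(fchan);
  }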
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove the check for "len > ZYNQMP_DMA_MAX_TRANS_LEN" as it's not needed.
If the length is larger, the transfer is split up into multiple parts
with the max descriptor length already.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Kedareswara rao Appana <appanad@xilinx.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit edd3bdbe9d ("dmaengine: tegra-apb: Correct runtime-pm usage")
added pm_runtime_get/put() calls to the tegra-apb DMA system suspend
callbacks. Runtime PM is disabled during system suspend and so these
APIs cannot be used. Fix the suspend handling for the tegra-apb DMA by
moving the save and restore of the DMA register context into the
runtime PM suspend and resume callbacks, and then use the
pm_runtime_force_suspend/resume() APIs to invoke the runtime PM
callbacks during system suspend.
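The resulting PM ops table typically looks like this (a sketch assuming the
usual callback names):

  /* Sketch: register save/restore lives in the runtime PM callbacks;
   * system sleep simply forces runtime suspend/resume. */
  static const struct dev_pm_ops tegra_dma_dev_pm_ops = {
          SET_RUNTIME_PM_OPS(tegra_dma_runtime_suspend,
                             tegra_dma_runtime_resume, NULL)
          SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                  pm_runtime_force_resume)
  };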
Fixes: edd3bdbe9d ("dmaengine: tegra-apb: Correct runtime-pm usage")
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
of_device_ids are not supposed to change at runtime. All functions
working with of_device_ids provided by <linux/of.h> work with const
of_device_ids. So mark the non-const structs as const.
File size before:
text data bss dec hex filename
3981 608 0 4589 11ed drivers/dma/fsl_raid.o
File size after constify:
text data bss dec hex filename
4381 192 0 4573 11dd drivers/dma/fsl_raid.o
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Parameters like maximum read/write request size and the maximum
number of active transactions are currently configured in DT/ACPI.
This patch allows a user to override these to fine tune performance
for their application.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bits of BWC, DAHTS and SAHTS in the DMA mode register must be cleared
before a new value can be or-ed in.
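In other words, a read-modify-write that clears the fields first (sketch with
illustrative mask and register names):

  /* Sketch: clear the BWC/DAHTS/SAHTS fields before or-ing in new values. */
  u32 mode = readl(mr_reg);
  mode &= ~(BWC_MASK | DAHTS_MASK | SAHTS_MASK);
  mode |= bwc_val | dahts_val | sahts_val;
  writel(mode, mr_reg);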
Signed-off-by: Thomas Breitung <thomas.breitung@izt-labs.de>
Signed-off-by: Wolfgang Ocker <weo@reccoware.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
DMA_ERROR_CODE is not a public API and will go away. Instead properly
unwind based on the loop counter.
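The unwind pattern being referred to, roughly (illustrative names, not the
exact driver hunk):

  /* Sketch: on a mapping failure, unwind only what was mapped so far. */
  for (i = 0; i < count; i++) {
          addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE, DMA_TO_DEVICE);
          if (dma_mapping_error(dev, addrs[i]))
                  goto unmap;
  }
  return 0;

  unmap:
          while (i--)
                  dma_unmap_page(dev, addrs[i], PAGE_SIZE, DMA_TO_DEVICE);
          return -ENOMEM;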
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Acked-By: Vinod Koul <vinod.koul@intel.com>
When the port_window support was verified, it was done on a setup where only
the MEM_TO_DEV direction was enabled. This went unnoticed and thus only
this direction worked.
Now that I have managed to get a setup to verify both directions, it turned
out that the setup was incorrect:
omap_desc members are settings for the slave port while the omap_sg members
apply to the memory side of the sDMA setup.
Fixes: 527a275913 ("dmaengine: omap-dma: Fix the port_window support")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: dmaengine@vger.kernel.org
Cc: dan.j.williams@intel.com
Cc: vinod.koul@intel.com
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently the help text for the MXS_DMA option is incomplete as it does
not mention MX6SX, MX6ULL and MX7D, for example.
Instead of extending this list every time a new SoC comes out, let's
keep the text more generic.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The WARN_TAINT_ONCE() prints out a loud stack trace on broken BIOSes.
The systems that have this problem are several years out of support and
no longer have BIOS updates available. The stack trace isn't necessary
and a pr_warn_once() will do.
Change WARN_TAINT_ONCE() to pr_warn_once() and taint.
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Duyck, Alexander H <alexander.h.duyck@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently it is not possible to select the mxs dma driver when only
mx6sx or mx7 are selected.
Extend the dependency to allow the mxs dma driver to be built whenever
ARCH_MXS or ARCH_MXC is selected.
This has the benefit of avoiding new entries in the MXS_DMA Kconfig
every time a new i.MX SoC shows up, and it also makes it consistent
with the other i.MX DMA engines, such as IMX_DMA and IMX_SDMA.
While at it, also allow COMPILE_TEST to increase the build coverage.
Acked-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use %zu for printing a size_t variable in order to fix the following
build warning:
drivers/dma/mxs-dma.c: In function 'mxs_dma_prep_dma_cyclic':
drivers/dma/mxs-dma.c:621:5: warning: format '%d' expects argument of type 'int', but argument 3 has type 'size_t' [-Wformat]
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When removing a device with less than 9 IRQs (AMBA_NR_IRQS), we'll get a
big WARN_ON from devres.c because pl330_remove calls devm_free_irqs for
unallocated irqs. Similarly to pl330_probe, check that IRQ number is
present before calling devm_free_irq.
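A sketch of the guarded cleanup, mirroring the probe-side check described
above:

  /* Sketch: only free IRQs that were actually requested in probe. */
  for (i = 0; i < AMBA_NR_IRQS; i++) {
          irq = adev->irq[i];
          if (irq)
                  devm_free_irq(&adev->dev, irq, pl330);
  }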
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This dma engine driver directly accesses page_link assuming knowledge
that should be contained only in scatterlist.h.
We replace this access with a call to sg_chain which is equivalent.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Stephen Bates <sbates@raithlin.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Per Förlin <per.forlin@axis.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This dma engine driver directly accesses page_link assuming knowledge
that should be contained only in scatterlist.h.
We replace these with calls to sg_chain and sg_assign_page.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Stephen Bates <sbates@raithlin.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Per Förlin <per.forlin@axis.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Replace '%d' by '%zu' to fix the following compilation warning:-
drivers/dma/imx-sdma.c: In function ‘sdma_prep_dma_cyclic’:
drivers/dma/imx-sdma.c:1327:5: warning: format ‘%d’ expects argument of type ‘int’, but argument 4 has type ‘size_t’ [-Wformat=]
channel, period_len, 0xffff);
^
drivers/dma/imx-sdma.c:1350:3: warning: format ‘%d’ expects argument of type ‘int’, but argument 5 has type ‘size_t’ [-Wformat=]
dev_dbg(sdma->dev, "entry %d: count: %d dma: %#llx %s%s\n",
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
clk_prepare_enable() can fail here and we must check its return value.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In descriptor mode, the descriptor running pointer is not maintained
by the interrupt handler; thus, the driver finds the running descriptor
from the descriptor pointer field in the CHCRB register.
But CHCRB::DPTR indicates the *next* descriptor pointer, not the current
one. Thus, the residue calculation will be wrong. This patch fixes it.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Wire up dma_map_resource() for slave transfers, so that we can let the
PL330 use IOMMU-backed DMA mapping ops on systems with an appropriate
IOMMU and RAM above 4GB, to avoid CPU bounce buffering.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
clk_prepare_enable() can fail here and we must check its return value.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current buffer is being reset to zero on device_free_chan_resources()
but not on device_terminate_all(). It could happen that HW is restarted and
expects BASE0 to be used, but the driver is not synchronized and will start
from BASE1. One solution is to reset the buffer explicitly in
m2p_hw_setup().
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After reading the specs for the Faraday Technology FTDMAC020 found
in the Gemini platform, it becomes pretty evident that this is just
another PL08x derivative, and should be handled like such by simply
extending the existing PL08x driver to handle the quirks in this
hardware.
This patch makes memcpy work and has been tested on the Gemini and
also regression-tested on the Nomadik NHK15 using dmatest with
10 threads per channel without a hitch for hours.
I have not implemented slave DMA in those codepaths, because this
device (Gemini) does not use slave DMA, and it seems like devices
using FTDMAC020 for device DMA have a slightly different register
layout so some real hardware is needed to proceed with this. I
left some FIXME etc in the code for this.
I had to do some refactorings of some helper functions, but I have
not split those into separate patches because these refactorings
do not make much sense without the increased complexity of handling
the FTDMAC020.
The DMA test would hang the platform on me on the Gemini after a
few thousand iterations, however after turning off the caches the
problem immediately disappeared and I could run the DMA engine
with 10 threads per physical channel for days in a row without
a crash. I think there is no problem with the DMA driver: instead
it is something fishy in the FA526 cache handling code that gets
pretty heavily exercised by the DMA engine, and we need to go and
fix that instead.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the vendor data does not specify any signals, we do not
have to support slave DMA. Make the registration of the slave
DMA engine optional, so we can use this for the FTDMAC020
in the Gemini that only has memcpy support.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We cannot use bits from configuration registers as API between
platforms and driver like this, abstract it out to two enums
and mimic the stuff passed as device tree data.
This is done to make it possible for the driver to generate the
ccfg word on-the-fly so we can support more PL08x derivatives.
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This fixes a race condition where the channel resources could be freed
before the ISR had finished running resulting in a NULL pointer
reference from the ISR.
[ 167.148934] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[ 167.157051] pgd = ffff80003c641000
[ 167.160449] [00000000] *pgd=000000007c507003, *pud=000000007c4ff003, *pmd=0000000000000000
[ 167.168719] Internal error: Oops: 96000046 [#1] PREEMPT SMP
[ 167.174289] Modules linked in:
[ 167.177348] CPU: 3 PID: 10547 Comm: dma_ioctl Not tainted 4.11.0-rc1-00001-g8d92afddc2f6633a #73
[ 167.186131] Hardware name: Renesas Salvator-X board based on r8a7795 (DT)
[ 167.192917] task: ffff80003a411a00 task.stack: ffff80003bcd4000
[ 167.198850] PC is at rcar_dmac_chan_prep_sg+0xe0/0x400
[ 167.203985] LR is at rcar_dmac_chan_prep_sg+0x48/0x400
Based on previous work by:
Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Implement the device_synchronize() callback, which waits until a DMA
channel is stopped, to provide a synchronization point.
This protects the driver from multiple race conditions when terminating
and freeing resources. E.g. the completion callback still running after
device_terminate_all() has completed.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IRQ number is needed after probe to be able to add synchronisation
points in other places in the driver when freeing resources and to
implement a device_synchronize() callback. Store the IRQ number in the
struct rcar_dmac_chan so that it can be used later.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Req is never null at the point of the null check, so
remove this redundant check and just return &req->tx.
Detected by CoverityScan, CID#1436147 ("Logically dead code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The new driver requires both mailbox and raid support for compile
testing:
drivers/dma/built-in.o: In function `sba_remove':
edma.c:(.text+0x4414): undefined reference to `mbox_free_channel'
drivers/dma/built-in.o: In function `sba_issue_pending':
edma.c:(.text+0x46cc): undefined reference to `mbox_send_message'
drivers/dma/built-in.o: In function `sba_probe':
edma.c:(.text+0x4e60): undefined reference to `mbox_request_channel'
edma.c:(.text+0x5038): undefined reference to `mbox_free_channel'
drivers/dma/built-in.o: In function `sba_tx_status':
edma.c:(.text+0x5210): undefined reference to `mbox_client_peek_data'
drivers/dma/built-in.o: In function `sba_prep_dma_pq_req':
edma.c:(.text+0x5784): undefined reference to `raid6_gflog'
edma.c:(.text+0x5798): undefined reference to `raid6_gflog'
This rearranges the Kconfig dependencies accordingly.
Fixes: 743e1c8ffe ("dmaengine: Add Broadcom SBA RAID driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Broadcom stream buffer accelerator (SBA) provides offloading
capabilities for RAID operations. This SBA offload engine is
accessible via Broadcom SoC specific ring manager.
This patch adds Broadcom SBA RAID driver which provides one
DMA device with RAID capabilities using one or more Broadcom
SoC specific ring manager channels. The SBA RAID driver in its
current shape implements memcpy, xor, and pq operations.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
AVR32 is gone. Now it's time to clean up the driver by removing
leftovers that were used by the AVR32-related code.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
This commit adds support for suspend/resume in the mv_xor_v2
driver. The suspend function disables the XOR engine after the
DMA stack has handled all pending descriptors in the queue. The resume
function re-configures the XOR engine and re-enables the engine.
Signed-off-by: Hanna Hawa <hannah@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove unnecessary write to DESQ_STOP register, this register is used to
enable or disable the XOR engine, and not to issue all pending
descriptors in the queue. mv_xor_v2 driver already writes to this
register and enable XOR engine in the mv_xor_v2_descq_init() function,
called during initialization.
Signed-off-by: Hanna Hawa <hannah@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Until now, the driver was not using interrupt coalescing: one interrupt
was generated for each descriptor processed by the XOR engine. This
commit changes that by using the interrupt coalescing features of the
hardware, by setting both a number of descriptors processed before an
interrupt is generated and a timeout before an interrupt is generated.
Signed-off-by: Hanna Hawa <hannah@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The XORv2 engine on Armada 7K/8K can only access the first 40 bits of
the physical address space, so the DMA mask must be set accordingly.
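Setting the mask accordingly is essentially a one-liner in probe (sketch):

  /* Sketch: limit DMA addressing to the 40 bits the XORv2 engine can reach. */
  ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
  if (ret)
          return ret;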
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current implementation of interrupt coalescing doesn't work, because
it doesn't configure the coalescing timer, which is needed to make sure
we get an interrupt at some point.
As a fix for stable, we simply remove the interrupt coalescing
functionality. It will be re-introduced properly in a future commit.
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The mv_xor_v2_tx_submit() gets the next available HW descriptor by
calling mv_xor_v2_get_desq_write_ptr(), which reads a HW register
telling the next available HW descriptor. This was working fine when HW
descriptors were issued for processing directly in tx_submit().
However, as part of the review process of the driver, a change was
requested to move the actual kick-off of HW descriptors processing to
->issue_pending(). Due to this, reading the HW register to know the next
available HW descriptor no longer works.
So instead of using this HW register, we implemented a software index
pointing to the next available HW descriptor.
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The engine was enabled prior to its configuration, which isn't
correct. This patch relocates the activation of the XOR engine, to be
after the configuration of the XOR engine.
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hanna Hawa <hannah@marvell.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Descriptors that have not been acknowledged by the async_tx layer
should not be re-used, so this commit adjusts the implementation of
mv_xor_v2_prep_sw_desc() to skip descriptors for which
async_tx_test_ack() is false.
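Conceptually, the descriptor allocation loop gains a check along these lines
(sketch, illustrative field names):

  /* Sketch: skip software descriptors that the async_tx layer has not
   * acknowledged yet; they may still be inspected by the client. */
  list_for_each_entry(sw_desc, &xor_dev->free_sw_desc, free_list) {
          if (async_tx_test_ack(&sw_desc->async_tx)) {
                  found = sw_desc;
                  break;
          }
  }
  if (!found)
          return NULL;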
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
mv_xor_v2_tasklet() loops over completed HW descriptors. Before the
loop, it initializes 'next_pending_hw_desc' to the first HW descriptor
to handle, and then the loop simply increments this pointer, without
taking care of wrapping when we reach the last HW descriptor. The
'pending_ptr' index was being wrapped back to 0 at the end, but it
wasn't used in each iteration of the loop to calculate
next_pending_hw_desc.
This commit fixes that, and makes next_pending_hw_desc a variable local
to the loop itself.
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The mv_xor_v2_prep_sw_desc() is called from a few different places in
the driver, but we never take into account the fact that it might
return NULL. This commit fixes that, ensuring that we don't panic if
there are no more descriptors available.
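The callers now need to handle the NULL case along these lines (sketch):

  /* Sketch: bail out instead of dereferencing a NULL descriptor. */
  sw_desc = mv_xor_v2_prep_sw_desc(xor_dev);
  if (!sw_desc)
          return NULL;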
Fixes: 19a340b1a8 ("dmaengine: mv_xor_v2: new driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'dmaengine-4.12-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time again a smaller update consisting of:
- support for TI DA8xx dma controller and updates to the cppi driver
- updates on bunch of drivers like xilinx, pl08x, stm32-dma, mv_xor,
ioat, dmatest"
* tag 'dmaengine-4.12-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (35 commits)
dmaengine: pl08x: remove lock documentation
dmaengine: pl08x: fix pl08x_dma_chan_state documentation
dmaengine: pl08x: Use the BIT() macro consistently
dmaengine: pl080: Fix some missing kerneldoc
dmaengine: pl080: Cut some unused defines
dmaengine: dmatest: Add check for supported buffer count (sg_buffers)
dmaengine: dmatest: Select DMA_ENGINE_RAID as its needed for the slave_sg test
dmaengine: virt-dma: Convert to use list_for_each_entry_safe()
dma-debug: use offset_in_page() macro
dmaengine: mv_xor: use offset_in_page() macro
dmaengine: dmatest: use offset_in_page() macro
dmaengine: sun4i: fix invalid argument
dmaengine: ioat: use setup_timer
dmaengine: cppi41: Fix an Oops happening in cppi41_dma_probe()
dmaengine: pl330: remove pdata based initialization
dmaengine: cppi: fix build error due to bad variable
dmaengine: imx-sdma: add 1ms delay to ensure SDMA channel is stopped
dmaengine: cppi41: use managed functions devm_*()
dmaengine: cppi41: fix cppi41_dma_tx_status() logic
dmaengine: qcom_hidma: pause the channel on shutdown
...
This makes the driver shift bits with BIT() which is used on other
places in the driver.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Two elements of the physical channel description were missing.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When using dmatest with sg_buffers=128 I stumbled upon the problem that
the "map_cnt" variable of "struct dmaengine_unmap_data" was set to 0.
"map_cnt" is a "u8" variable, resulting in an overflow when its
value is set to src_cnt + dst_cnt, i.e. twice the sg_buffers value.
This patch adds a small check to dmatest, so that this confusing error
is detected and the test is aborted.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Kedareswara rao Appana <appanad@xilinx.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To enable usage of multiple SG buffers via the sg_buffers= module
parameter, let's select DMA_ENGINE_RAID via Kconfig when DMATEST is
configured. Otherwise dmatest will "BUG" when more than 1
buffer (a total of 2 for src + dst) is configured via sg_buffers.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Kedareswara rao Appana <appanad@xilinx.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use list_for_each_entry_safe() instead of open coding variants.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The "pchans_used" field is an unsigned long array.
for_each_clear_bit_from() expects an unsigned long pointer,
not an array address.
$ make C=2 drivers/dma/sun4i-dma.o
CHECK drivers/dma/sun4i-dma.c
drivers/dma/sun4i-dma.c:241:9: warning: incorrect type in argument 1 (different base types)
drivers/dma/sun4i-dma.c:241:9: expected unsigned long const *p
drivers/dma/sun4i-dma.c:241:9: got unsigned long ( *<noident> )[1]
Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use setup_timer() instead of init_timer() to simplify the code.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This fixes an Oops happening on all platforms using the old DT bindings
(all platforms but da8xx).
This updates cppi41_dma_probe() to use the index variable, which is
required to keep compatibility between old and new DT bindings.
Fixes: 8e3ba95f41 ("dmaengine: cppi41: use managed functions devm_*()")
Reported-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This driver is now used only on platforms which support device tree, so
it is safe to remove legacy platform data based initialization code.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
For plat-samsung:
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 8e3ba95f41 ("dmaengine: cppi41: use managed functions devm_*()")
moved the code to devm_* but erroneously changed a variable name, so fix it.
drivers/dma/cppi41.c:1052:5: error: 'struct cppi41_dd' has no member named 'qmrg_mem'
cdd->qmrg_mem = devm_ioremap_resource(dev, mem);
^
drivers/dma/cppi41.c:1053:16: error: 'struct cppi41_dd' has no member named 'qmrg_mem'
if (IS_ERR(cdd->qmrg_mem))
^
drivers/dma/cppi41.c:1054:21: error: 'struct cppi41_dd' has no member named 'qmrg_mem'
return PTR_ERR(cdd->qmrg_mem);
^
Fixes: 8e3ba95f41 ("dmaengine: cppi41: use managed functions devm_*()")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sdma_disable_channel() cannot ensure that the DMA has stopped accessing
the module's FIFOs. There is a chance the SDMA core is still running and
accessing a BD when the corresponding channel is disabled, so even after
.sdma_disable_channel() is called, the SDMA core may still be running
and accessing the module's FIFOs.
According to the NXP R&D team, a delay of one BD SDMA processing time
(at most 1ms) should be added after disabling the channel bit, to ensure
the SDMA core has really stopped after SDMA clients call
.device_terminate_all.
This patch adds a new function, sdma_disable_channel_with_delay(),
which simply adds a 1ms delay after calling sdma_disable_channel(),
and sets it as .device_terminate_all.
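As described, the wrapper is essentially the following (a sketch based on
the description above):

  /* Sketch: give the SDMA core up to one BD time (~1ms) to really stop. */
  static int sdma_disable_channel_with_delay(struct dma_chan *chan)
  {
          sdma_disable_channel(chan);
          mdelay(1);
          return 0;
  }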
Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This makes the error handling much simpler than open-coding
everything and in addition makes the probe function smaller and tidier.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It makes sense to set the residue when the channel is in progress. Otherwise
it should be 0 since the transfer is completed. Meanwhile this patch doesn't
prevent setting the residue value anyway.
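In practice the tx_status path ends up looking roughly like this (sketch,
with an illustrative residue helper):

  /* Sketch: report a residue only while the transfer is still in flight. */
  ret = dma_cookie_status(chan, cookie, txstate);
  if (ret == DMA_IN_PROGRESS)
          dma_set_residue(txstate, foo_chan_get_residue(chan, cookie));
  else
          dma_set_residue(txstate, 0);
  return ret;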
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We need to ensure that all DMAs and interrupts are cleared during the
shutdown operation in order for kexec to start the next kernel cleanly.
Otherwise, the HW could be performing a DMA into random addresses in the
middle of the second kernel's start.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Once the channels are stopped, disable interrupts to make sure no new
HW interaction can happen.
Similarly, re-enable the interrupts only if we know that channel is
operational again.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
SYS-DMAC can use 40-bit address transfers, and it supports Descriptor
Mode too. The current SYS-DMAC driver disables Descriptor Mode if a
40-bit address is used. But it can use Descriptor Mode with 40-bit
addresses if the transfer source/destination addresses are located in
the same 4GiB region of the 40-bit address space.
This patch enables it if all conditions are met.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The device_prep_dma_memcpy() callback for this driver allocates a new
xilinx_dma_tx_descriptor whose TX segments list is initialized as empty,
but then gets an invalid TX segment pointer via list_last_entry() on the
empty TX segments list, and memory corruption happens when it attempts to
update the next descriptor through the invalid TX segment pointer.
This removes the unnecessary memory access to the nonexistent tail TX
segment which causes the memory corruption.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The device_terminate_all() callback for this driver stops current DMA
operations by clearing RUNSTOP bit in the control register and waiting
HALTED bit set in the status register.
But AXI CDMA, which is one of the DMA engines supported by this driver,
does not provide the run / stop controls, and those bits in the control
and status registers are reserved. So when device_terminate_all() is
called, the error message is printed and the channel is marked as having
errors in xilinx_dma_halt().
This change adds a stop_transfer() callback which differentiates CDMA from
the other DMA engines. The CDMA variant avoids the unsupported operations
and instead polls the status register to check if the DMA operations are in
progress for AXI CDMA.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This fixes the following warning when building with clang and
CONFIG_DMA_ENGINE_RAID=n :
drivers/dma/dmaengine.c:1102:11: error: array index 2 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
return &unmap_pool[2];
^ ~
drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
static struct dmaengine_unmap_pool unmap_pool[] = {
^
drivers/dma/dmaengine.c:1104:11: error: array index 3 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds]
return &unmap_pool[3];
^ ~
drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here
static struct dmaengine_unmap_pool unmap_pool[] = {
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The SDMA hardware/driver does not actually report the transfer residue at
burst size granularity, but in fact is only able to report residue after
each finished segment.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The check to see if cd is null is redundant, pdata->channels is
never null at this point, and hence &pdata->channels[i] cannot
be null, so remove the null check.
Detected by CoverityScan, CID#1357194 ("Logically Dead Code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
During the teardown of an RX channel, because there is only one
completion queue available for RX channels, the descriptor of another
channel may be popped, which causes two warnings:
- the first because we popped a wrong descriptor
(neither the channel's descriptor, nor the teardown descriptor);
- the second during the teardown of another channel,
because we can't find that channel's descriptor
(that is, the one that caused the first warning).
To avoid that, use a free queue instead of a transmit completion queue.
Note that this doesn't fix all the teardown warnings:
I still get some in certain corner cases.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DA8xx has a CPPI 4.1 DMA controller.
This adds the glue layer required to make it work on the DA8xx.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
All the platform code to manage the IRQ has been moved to MUSB,
and the interrupt handler is now completely generic.
Remove the isr callback, which is no longer needed.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Although the driver already uses DT to get the number of channels,
init_sched() still uses a hardcoded value.
Use the DT value for the number of channels instead.
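A sketch of the idea, with an assumed DT property name and an
illustrative init_sched() signature:
u32 n_channels;

if (of_property_read_u32(dev->of_node, "#dma-channels", &n_channels))
	return -ENODEV;

/* pass the DT-provided count instead of a hardcoded constant */
init_sched(cdd, n_channels);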
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some constants defined and used by the driver are actually specific to
the AM335x.
Add new variables to the glue layer, initialize them with these constants,
and use them in the driver.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, only the AM335x is supported by the driver.
Though the driver has a glue layer to support different platforms,
some platform variable names are not prefixed with the platform name.
To facilitate the addition of a new platform,
rename some variables owned by the AM335x glue.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In order to make the CPPI 4.1 DMA driver more generic, accesses to USBSS
have been removed, so it is no longer required to map the "glue"
registers.
Remove usbss_mem.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Trivial fix to a spelling mistake; also make "channel" plural.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code responsible for splitting periods into chunks that
can be handled by the DMA controller failed to update total_len,
the number of bytes processed in the current period, when more
chunks follow.
Therefore total_len was stuck at 0 and the code didn't work at all.
This resulted in a wrong control block layout and in audio issues,
because the cyclic DMA callback wasn't executing on period boundaries.
Fix this by adding the missing total_len update.
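An illustrative sketch of the splitting loop (names are not the driver's
actual ones); the point is that total_len has to advance for every chunk,
not just the last one:
size_t total_len = 0;

while (total_len < period_len) {
	size_t chunk = min_t(size_t, period_len - total_len, max_chunk);

	/* request the completion interrupt only at the period boundary */
	fill_control_block(chan, dma_addr + total_len, chunk,
			   total_len + chunk == period_len);

	total_len += chunk;	/* the update that was missing */
}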
Signed-off-by: Matthias Reichl <hias@horus.com>
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Tested-by: Clive Messer <clive.messer@digitaldreamtime.co.uk>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
But first update usage sites with the new header dependency.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Here is the big tty/serial driver patchset for 4.11-rc1.
Not much here, but a lot of little fixes and individual serial driver
updates all over the subsystem. Majority are for the sh-sci driver and
platform (the arch-specific changes have acks from the maintainer).
The start of the "serial bus" code is here as well, but nothing is
converted to use it yet. That work is still ongoing, hopefully will
start to show up across different subsystems for 4.12 (bluetooth is one
major place that will be used.)
All of these have been in linux-next for a while with no reported
issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'tty-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial driver updates from Greg KH:
"Here is the big tty/serial driver patchset for 4.11-rc1.
Not much here, but a lot of little fixes and individual serial driver
updates all over the subsystem. Majority are for the sh-sci driver and
platform (the arch-specific changes have acks from the maintainer).
The start of the "serial bus" code is here as well, but nothing is
converted to use it yet. That work is still ongoing, hopefully will
start to show up across different subsystems for 4.12 (bluetooth is
one major place that will be used.)
All of these have been in linux-next for a while with no reported
issues"
* tag 'tty-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (109 commits)
tty: pl011: Work around QDF2400 E44 stuck BUSY bit
atmel_serial: Use the fractional divider when possible
tty: Remove extra include in HVC console tty framework
serial: exar: Enable MSI support
serial: exar: Move register defines from uapi header to consumer site
serial: pci: Remove unused pci_boards entries
serial: exar: Move Commtech adapters to 8250_exar as well
serial: exar: Fix feature control register constants
serial: exar: Fix initialization of EXAR registers for ports > 0
serial: exar: Fix mapping of port I/O resources
serial: sh-sci: fix hardware RX trigger level setting
tty/serial: atmel: ensure state is restored after suspending
serial: 8250_dw: Avoid "too much work" from bogus rx timeout interrupt
serdev: ttyport: check whether tty_init_dev() fails
serial: 8250_pci: make pciserial_detach_ports() static
ARM: dts: STiH410-b2260: Enable HW flow-control
ARM: dts: STiH407-family: Use new Pinctrl groups
ARM: dts: STiH407-pinctrl: Add Pinctrl group for HW flow-control
ARM: dts: STiH410-b2260: Identify the UART RTS line
dt-bindings: serial: Update 'uart-has-rtscts' description
...
Here is the big USB and PHY driver updates for 4.11-rc1.
Nothing major, just the normal amount of churn in the usb gadget and dwc
and xhci controllers, new device ids, new phy drivers, a new usb-serial
driver, and a few other minor changes in different USB drivers.
All have been in linux-next for a long time with no reported issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'usb-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB/PHY updates from Greg KH:
"Here is the big USB and PHY driver updates for 4.11-rc1.
Nothing major, just the normal amount of churn in the usb gadget and
dwc and xhci controllers, new device ids, new phy drivers, a new
usb-serial driver, and a few other minor changes in different USB
drivers.
All have been in linux-next for a long time with no reported issues"
* tag 'usb-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (265 commits)
usb: cdc-wdm: remove logically dead code
USB: serial: keyspan: drop header file
USB: serial: io_edgeport: drop io-tables header file
usb: musb: add code comment for clarification
usb: misc: add USB251xB/xBi Hi-Speed Hub Controller Driver
usb: misc: usbtest: remove redundant check on retval < 0
USB: serial: upd78f0730: sort device ids
USB: serial: upd78f0730: add ID for EVAL-ADXL362Z
ohci-hub: fix typo in dbg_port macro
usb: musb: dsps: Manage CPPI 4.1 DMA interrupt in DSPS
usb: musb: tusb6010: Clean up tusb_omap_dma structure
usb: musb: cppi_dma: Clean up cppi41_dma_controller structure
usb: musb: cppi_dma: Clean up cppi structure
usb: musb: cppi41: Detect aborted transfers in cppi41_dma_callback()
usb: musb: dma: Add a DMA completion platform callback
drivers: usb: usbip: Add missing break statement to switch
usb: mtu3: remove redundant dev_err call in get_ssusb_rscs()
USB: serial: mos7840: fix another NULL-deref at open
USB: serial: console: clean up sanity checks
USB: serial: console: fix uninitialised spinlock
...
This time we have a fairly boring and a bit small update.
- Support for Intel iDMA 32-bit hardware
- deprecate broken support for channel switching in async_tx
- bunch of updates on stm32-dma
- Cyclic support for zx dma and making it a generic zx dma driver
- Small updates to bunch of other drivers
Merge tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we fairly boring and bit small update.
- Support for Intel iDMA 32-bit hardware
- deprecate broken support for channel switching in async_tx
- bunch of updates on stm32-dma
- Cyclic support for zx dma and making it a generic zx dma driver
- Small updates to bunch of other drivers"
* tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
async_tx: deprecate broken support for channel switching
dmaengine: rcar-dmac: Widen DMA mask to 40 bits
dmaengine: sun6i: allow build on ARM64 platforms (sun50i)
dmaengine: Provide a wrapper for memcpy operations
dmaengine: zx: fix build warning
dmaengine: dw: we do support Merrifield SoC in PCI mode
dmaengine: dw: add support of iDMA 32-bit hardware
dmaengine: dw: introduce register mappings for iDMA 32-bit
dmaengine: dw: introduce block2bytes() and bytes2block()
dmaengine: dw: extract dwc_chan_pause() for future use
dmaengine: dw: replace convert_burst() with one liner
dmaengine: dw: register IRQ and DMA pool with instance ID
dmaengine: dw: Fix data corruption in large device to memory transfers
dmaengine: ste_dma40: indicate granularity on channels
dmaengine: ste_dma40: indicate directions on channels
dmaengine: stm32-dma: Add error messages if xlate fails
dmaengine: dw: pci: remove LPE Audio DMA ID
dmaengine: stm32-dma: Add max_burst support
dmaengine: stm32-dma: Add synchronization support
dmaengine: stm32-dma: Fix residue computation issue in cyclic mode
...
By default, the DMA mask covers only the low 32-bit address space, which
causes SWIOTLB on arm64 to fall back to a bounce buffer for DMA
transfers involving memory outside the 32-bit address space.
The R-Car DMA controller hardware supports a 40-bit address space, hence
widen the DMA mask to 40 bits to actually make use of this feature.
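At probe time this boils down to something like the following (a sketch;
the exact call site in the driver may differ):
/* let the DMA mapping layer know the controller addresses 40 bits */
ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
if (ret)
	return ret;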
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Although CPPI 4.1 is a generic DMA controller, it is tied to USB.
On the DSPS, the CPPI 4.1 interrupt registers are in USBSS (the MUSB glue).
Currently, to enable, disable and clear interrupts, the CPPI 4.1 driver
maps and accesses USBSS's registers, which makes the CPPI 4.1 driver not
really generic.
Move the interrupt management to the DSPS driver.
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
As the 64-bit Allwinner H5 SoC has the same DMA engine as the H3, the DMA
driver should be allowed to be built for ARM64 in order to make it work on the H5.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>