When clients ask for maxburst = 0 it is basically the same case as asking
for maxburst = 1, since in both cases ASYNC mode needs to be used and
the eDMA is expected to write/read one word per DMA request.
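A minimal sketch of the normalization this implies in the burst setup
(variable name illustrative, not necessarily the driver's):

/* maxburst == 0 carries the same meaning as maxburst == 1: use
 * A-synchronized transfers, one word per DMA request. */
if (burst == 0)
	burst = 1;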
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code to handle SG lists of any length calls edma_resume()
even before edma_start() is called. This is incorrect
because edma_resume() enables eDMA events on the channel,
after which the CPU (in edma_start()) cannot clear posted
events by writing to ECR (per the EDMA user's guide).
Because of this, EDMA transfers fail to start if, for some
reason, there is a pending EDMA event registered even
before EDMA transfers are started. This can happen if
an EDMA event is a byproduct of device initialization.
Fix this by calling edma_resume() only if it is not the
first batch of MAX_NR_SG elements.
Without this patch, MMC/SD fails to function on DA850 EVM
with DMA. The behaviour is triggered by a specific IP and
this may explain why the issue was not reported before
(for example, with MMC/SD on AM335x).
Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.
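A sketch of the resulting logic in the submit path (batch bookkeeping
fields illustrative; exact names may differ from the driver):

/* First batch of MAX_NR_SG: use edma_start() so stale events can
 * still be cleared via ECR. Later batches: use edma_resume() so
 * pending events are preserved, not cleared. */
if (edesc->processed <= MAX_NR_SG)
	edma_start(echan->ch_num);
else
	edma_resume(echan->ch_num);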
Cc: stable@vger.kernel.org # v3.12.x+
Cc: Joel Fernandes <joelf@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Tested-by: Jon Ringle <jringle@gridpoint.com>
Tested-by: Alexander Holler <holler@ahsoftware.de>
Reported-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix a memory leak in the edma_prep_dma_cyclic() error handling path.
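A sketch of the error path shape, assuming slot allocation via the
private EDMA API (details illustrative):

for (i = 0; i < nslots; i++) {
	if (echan->slot[i] < 0) {
		echan->slot[i] = edma_alloc_slot(EDMA_CTLR(echan->ch_num),
						 EDMA_SLOT_ANY);
		if (echan->slot[i] < 0) {
			kfree(edesc);	/* previously leaked */
			return NULL;
		}
	}
}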
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The channel allocated/released messages are very spammy and not really
interesting to users. Change them to "debug" level.
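For illustration, the kind of change involved (message text approximate):

/* Was dev_info(); only interesting when debugging, so demote. */
dev_dbg(dev, "allocated channel for %u:%u\n",
	EDMA_CTLR(echan->ch_num), EDMA_CHAN_SLOT(echan->ch_num));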
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Acked-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pull slave-dmaengine changes from Vinod Koul:
"This brings for slave dmaengine:
- Change dma notification flag to DMA_COMPLETE from DMA_SUCCESS as
dmaengine can only transfer and not verify the validity of dma
transfers
- Bunch of fixes across drivers:
- cppi41 driver fixes from Daniel
- 8 channel freescale dma engine support and updated bindings from
Hongbo
- mxs-dma fixes and cleanup by Markus
- DMAengine updates from Dan:
- Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
- In the course of testing 1/, a collection of enhancements to
dmatest fell out: notably, basic performance statistics and
fixed/enhanced test control through the new module parameters
'run', 'wait', 'noverify', and 'verbose'. Thanks to Andriy and
Linus [Walleij] for their review.
- Testing the raid related corner cases of 1/ triggered bugs in
the recently added 16-source operation support in the ioatdma
driver.
- Some minor fixes / cleanups to mv_xor and ioatdma"
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (99 commits)
dma: mv_xor: Fix mis-usage of mmio 'base' and 'high_base' registers
dma: mv_xor: Remove unneeded NULL address check
ioat: fix ioat3_irq_reinit
ioat: kill msix_single_vector support
raid6test: add new corner case for ioatdma driver
ioatdma: clean up sed pool kmem_cache
ioatdma: fix selection of 16 vs 8 source path
ioatdma: fix sed pool selection
ioatdma: Fix bug in selftest after removal of DMA_MEMSET.
dmatest: verbose mode
dmatest: convert to dmaengine_unmap_data
dmatest: add a 'wait' parameter
dmatest: add basic performance metrics
dmatest: add support for skipping verification and random data setup
dmatest: use pseudo random numbers
dmatest: support xor-only, or pq-only channels in tests
dmatest: restore ability to start test at module load and init
dmatest: cleanup redundant "dmatest: " prefixes
dmatest: replace stored results mechanism, with uniform messages
Revert "dmatest: append verify result to results"
...
Pull DMA mask updates from Russell King:
"This series cleans up the handling of DMA masks in a lot of drivers,
fixing some bugs as we go.
Some of the more serious errors include:
- drivers which only set their coherent DMA mask if the attempt to
set the streaming mask fails.
- drivers which test for a NULL dma mask pointer, and then set the
dma mask pointer to a location in their module .data section -
which will cause problems if the module is reloaded.
To counter these, I have introduced two helper functions:
- dma_set_mask_and_coherent() takes care of setting both the
streaming and coherent masks at the same time, with the correct
error handling as specified by the API.
- dma_coerce_mask_and_coherent() which resolves the problem of
drivers forcefully setting DMA masks. This is more a marker for
future work to further clean these locations up - the code which
creates the devices really should be initialising these, but to fix
that in one go along with this change could potentially be very
disruptive.
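For illustration, typical probe-time usage of the two helpers (a driver
would use one or the other):

/* Set streaming and coherent masks together, with API-conformant
 * error handling. */
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (err)
	return err;

/* For devices whose dma_mask pointer was never wired up by the bus
 * code: point it at the coherent mask storage, then set both. */
err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (err)
	return err;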
The last thing this series does is prise away some of Linux's addiction
to "DMA addresses are physical addresses and RAM always starts at
zero". We have ARM LPAE systems where all system memory is above 4GB
physical, hence having DMA masks interpreted by (eg) the block layers
as describing physical addresses in the range 0..DMAMASK fails on
these platforms. Santosh Shilimkar addresses this in this series; the
patches were copied to the appropriate people multiple times but were
ignored.
Fixing this also gets rid of some ARM weirdness in the setup of the
max*pfn variables, and brings ARM into line with every other Linux
architecture as far as those go"
* 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm: (52 commits)
ARM: 7805/1: mm: change max*pfn to include the physical offset of memory
ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function
ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()
ARM: DMA-API: better handing of DMA masks for coherent allocations
ARM: 7857/1: dma: imx-sdma: setup dma mask
DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks
DMA-API: dcdbas: update DMA mask handing
DMA-API: dma: edma.c: no need to explicitly initialize DMA masks
DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks
DMA-API: crypto: remove last references to 'static struct device *dev'
DMA-API: crypto: fix ixp4xx crypto platform device support
DMA-API: others: use dma_set_coherent_mask()
DMA-API: staging: use dma_set_coherent_mask()
DMA-API: usb: use new dma_coerce_mask_and_coherent()
DMA-API: usb: use dma_set_coherent_mask()
DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()
DMA-API: net: octeon: use dma_coerce_mask_and_coherent()
DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()
...
The fix for freeing descriptor memory was applied twice, so remove the
duplicate.
Reported-by: Wing-Keung Wang <wingkeung.wang@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Using the PaRAM configuration function that we split out for reuse by the
different DMA types, we implement cyclic DMA support.
For the cyclic case, we pass different configuration parameters to this
function and handle all the cyclic-specific functionality separately.
Callbacks to the DMA users are handled using vchan_cyclic_callback in
the virt-dma layer. Linking is handled the same way as in the slave SG
case, except for the last slot, where we link back to the first one in a
cyclic fashion.
As a sanity check, we error out when the number of periods is greater
than the maximum number of slots the driver can allocate for a particular
descriptor.
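A sketch of the cyclic linking, using the private edma_link() helper
(loop shape illustrative):

/* Link each period's slot to the next; wrap the last one back to
 * the first so the transfer cycles indefinitely. */
for (i = 0; i < nslots; i++)
	edma_link(echan->slot[i], echan->slot[(i + 1) % nslots]);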
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
platform_device_register_full() can set up the DMA mask provided the
appropriate member is set in struct platform_device_info. So let's
make that be the case. This avoids a direct reference to the DMA
masks by this driver.
While here, add the dma_set_mask_and_coherent() call which the DMA API
requires DMA-using drivers to call.
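A sketch of the pattern (field values illustrative):

struct platform_device_info pdevinfo = {
	.parent		= dev,
	.name		= "musb-hdrc",
	.id		= PLATFORM_DEVID_AUTO,
	.dma_mask	= DMA_BIT_MASK(32),	/* core sets up the mask */
};
struct platform_device *musb = platform_device_register_full(&pdevinfo);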
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The edma header defines DMA_COMPLETE; this causes issues as commit
adfedd9a32 moved DMA_SUCCESS to DMA_COMPLETE. edma should properly
namespace its defines; a future fix is needed.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When slot allocation fails, edesc should be freed before returning.
Signed-off-by: Valentin Ilie <valentin.ilie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
davinci-pcm uses 16 as the number of periods. With this, in EDMA we have
to allocate at least 17 slots: 1 slot for the channel and 16 slots for
the periods. Due to this, the MAX_NR_SG limitation causes problems; set
it to 20 to make cyclic DMA work when davinci-pcm is converted to use the
DMA engine. Also add a comment clarifying this.
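A sketch of the define and the clarifying comment:

/*
 * Max of 20 segments per channel to conserve PaRAM slots. Note that
 * MAX_NR_SG must cover at least the 16 periods davinci-pcm uses plus
 * one slot for the channel itself (17), so 20 leaves some headroom.
 */
#define MAX_NR_SG	20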
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PaRAM set calculation is abstracted into its own function to
enable better reuse for other DMA cases such as cyclic. We adapt
the slave SG case to use the new function.
This provides a much cleaner abstraction of the internals of the
PaRAM set. However, any PaRAM attributes that are not common to
all DMA types, such as interrupt enables, must be set separately.
This function takes care of the most common attributes.
Also add comments clarifying the A-sync case calculations.
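A simplified sketch of the A-sync vs AB-sync counter calculation inside
the helper (remainder handling omitted; treat as illustrative):

acnt = dev_width;			/* bytes per array (A) */
if (burst <= 1) {			/* A-synchronized */
	absync = false;
	bcnt = dma_length / acnt;	/* one word per event */
	ccnt = 1;
} else {				/* AB-synchronized */
	absync = true;
	bcnt = burst;			/* one burst per event */
	ccnt = dma_length / (acnt * bcnt);
}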
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Free the memory allocated for edma_desc when slot allocation fails.
Signed-off-by: Geyslan G. Bem <geyslan@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Matt's @ti.com address bounces. Update the MODULE_AUTHOR information in
edma.c to his Linaro address.
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Acked-by: Matt Porter <matt.porter@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With this series, this check is no longer required and
we shouldn't need to reject drivers DMA'ing more than the
MAX number of slots.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dummy slot has been used as a way for missed events not to be
reported as missing. This has been particularly troublesome for cases
where we might want to temporarily pause all incoming events.
For the EDMA DMAC, there is no way to do any such pausing of events, as
the occurrence of the "next" event is not software controlled.
Using "edma_pause" in IRQ handlers doesn't help, as by then the event
in question from the slave has already been missed.
Linking a dummy slot is seen to absorb these events which we didn't
want to miss. So we don't link to the dummy slot, but instead leave the
channel linked to the NULL set, allow an error condition, and detect the
channel that missed the event.
Consider the case where we have a scatter-list like:
SG1->SG2->SG3->SG4->SG5->SG6->Null
For example, with a MAX_NR_SG of 2, earlier we split this as:
SG1->SG2->Null
SG3->SG4->Null
SG5->SG6->Null
Now we split it as:
SG1->SG2->Null
SG3->SG4->Null
SG5->SG6->Dummy
This approach results in fewer unwanted interrupts for the last list
split. The dummy slot has the property of not raising an error condition
if events are missed, unlike the NULL slot. We are OK with this, as we're
done with processing the whole list once we reach the dummy slot.
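A sketch of the final-split handling (dummy_slot as provided by the
private EDMA layer):

/* Intermediate splits stay linked to the NULL set so missed events
 * raise a catchable error; only the last split links to the dummy
 * slot, which silently absorbs trailing events. */
if (edesc->processed == edesc->pset_nr)
	edma_link(echan->slot[nslots - 1], echan->ecc->dummy_slot);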
Signed-off-by: Joel Fernandes <joelf@ti.com>
[modifed duplicate s-o-b & patch title]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In an effort to move to using scatter-gather lists of any size with
EDMA, as discussed at [1], instead of placing limitations on the driver,
we work through the limitations of the EDMAC hardware to find missed
events and issue them.
The sequence of events that require this are:
For the scenario where MAX slots for an EDMA channel is 3:
SG1 -> SG2 -> SG3 -> SG4 -> SG5 -> SG6 -> Null
The above SG list will have to be DMA'd in 2 sets:
(1) SG1 -> SG2 -> SG3 -> Null
(2) SG4 -> SG5 -> SG6 -> Null
After (1) is successfully transferred, the events from the MMC controller
do not stop coming and are missed by the time we have set up the transfer
for (2). So here, we catch the missed events as an error condition and
issue them manually.
In the second part of the patch, we handle the NULL slot cases:
for the crypto IP, we continue to receive events continuously in the
NULL slot, and the setup of the next set of SG elements happens after
the error handler executes. This results in some recursion problems:
we continuously receive error interrupts when we manually trigger an
event from the error handler.
We fix this by first detecting whether the channel is currently
transferring from a NULL slot or not; that's where the edma_read_slot()
in the error callback from the interrupt handler comes in. With this we
can determine whether the setup of the next SG list has completed, and
we manually trigger only in this case. If the setup has _not_ completed,
we are still in the NULL slot, so we just set a missed flag and allow
the manual triggering to happen in edma_execute(), which will eventually
be called. This fixes the above-mentioned race conditions seen with the
crypto drivers.
[1] http://marc.info/?l=linux-omap&m=137416733628831&w=2
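A sketch of the NULL slot detection in the error callback (the 'missed'
flag name is illustrative):

struct edmacc_param p;

edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);
/* An all-zero PaRAM set means we are still in the NULL slot: the
 * next batch isn't programmed yet, so just flag the miss and let
 * edma_execute() retrigger later. Otherwise trigger manually. */
if (p.a_b_cnt == 0 && p.ccnt == 0)
	echan->missed = 1;
else
	edma_start(echan->ch_num);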
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Process SG elements in batches of MAX_NR_SG if there are more
than MAX_NR_SG of them. Due to this, at any given time only that many
slots will be used in the given channel, no matter how long the
scatter list is. We keep track of how much has been written
in order to process the next batch of elements in the scatter-list
and detect completion.
For such intermediate transfer completions (one batch of MAX_NR_SG),
make use of the pause and resume functions instead of start and stop
while such an intermediate transfer is in progress or completed, as we
do not want to clear any pending events.
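A sketch of the completion handling in the interrupt path (bookkeeping
fields illustrative):

if (edesc->processed == edesc->pset_nr) {
	/* Whole list done: safe to stop the channel. */
	edma_stop(echan->ch_num);
	vchan_cookie_complete(&edesc->vdesc);
} else {
	/* Intermediate batch done: pause so pending events survive,
	 * then let the next batch be programmed and resumed. */
	edma_pause(echan->ch_num);
}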
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Changes are made here for configuring existing parameters to support
DMA'ing them out in batches as needed.
Also allocate as many slots as needed by the SG list, but not more
than MAX_NR_SG. These slots will then be reused accordingly.
For example, if MAX_NR_SG=10 and the number of SG entries is 40, still
only 10 slots will be allocated to DMA the entire SG list of size 40.
Also enable TC interrupts for slots that are the last in a current
iteration, or that fall on a MAX_NR_SG boundary.
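For illustration, the slot sizing and interrupt enabling (the index
variable is the position while building the PaRAM sets):

/* Use at most MAX_NR_SG slots; longer lists reuse them per batch. */
nslots = min_t(unsigned, MAX_NR_SG, sg_len);

/* Raise a TC interrupt on each MAX_NR_SG boundary and on the final
 * element, so intermediate batches can be chained. */
if (i == sg_len - 1 || (i + 1) % MAX_NR_SG == 0)
	edesc->pset[i].opt |= TCINTEN;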
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The residue value is already assigned 0 by dma_cookie_status(), so the
explicit assignment can be dropped.
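The now-redundant pattern, for illustration:

ret = dma_cookie_status(chan, cookie, txstate);
/* dma_cookie_status() already sets txstate->residue to 0, so an
 * explicit "txstate->residue = 0" after this call is redundant. */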
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Move mach-davinci/dma.c to common/edma.c so it can be used
by OMAP (specifically AM33xx) as well.
Signed-off-by: Matt Porter <mporter@ti.com>
Acked-by: Chris Ball <cjb@laptop.org> # davinci_mmc.c
Acked-by: Mark Brown <broonie@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
[nsekhar@ti.com: dropped davinci sffsdr changes]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Pull slave-dmaengine updates from Vinod Koul:
"This is fairly big pull by my standards as I had missed last merge
window. So we have the support for device tree for slave-dmaengine,
large updates to dw_dmac driver from Andy for reusing on different
architectures. Along with this we have fixes on bunch of the drivers"
Fix up trivial conflicts, usually due to #include line movement next to
each other.
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
ARM: dts: pl330: Add #dma-cells for generic dma binding support
DMA: PL330: Register the DMA controller with the generic DMA helpers
DMA: PL330: Add xlate function
DMA: PL330: Add new pl330 filter for DT case.
dma: tegra20-apb-dma: remove unnecessary assignment
edma: do not waste memory for dma_mask
dma: coh901318: set residue only if dma is in progress
dma: coh901318: avoid unbalanced locking
dmaengine.h: remove redundant else keyword
dma: of-dma: protect list write operation by spin_lock
dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
dma: of-dma.c: fix memory leakage
dw_dmac: apply default dma_mask if needed
dmaengine: ioat - fix spare sparse complain
dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
dw_dmac: add support for Lynxpoint DMA controllers
dw_dmac: return proper residue value
dw_dmac: fill individual length of descriptor
...
According to the commentary in platform_device_register_full(), the
memory allocated for dma_mask is not going to be freed. That's why it is
better to assign dma_mask afterwards.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The edma_slave_config() implementation depends on the
direction field, such that it will not properly configure
a slave channel when called without the direction set.
This fixes the implementation so that the slave config
is copied as-is and prep_slave_sg() handles the
direction-dependent configuration. spi-omap2-mcspi and
omap_hsmmc both expose this bug as they configure the
slave channel config from a common path with an unconfigured
direction field.
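A sketch of the fixed implementation (shape as described above; treat as
illustrative):

static int edma_slave_config(struct edma_chan *echan,
			     struct dma_slave_config *cfg)
{
	if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
		return -EINVAL;

	/* Copy the config as-is; direction-dependent setup is
	 * deferred to prep_slave_sg(). */
	memcpy(&echan->cfg, cfg, sizeof(echan->cfg));

	return 0;
}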
Signed-off-by: Matt Porter <mporter@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
markings need to be removed.
This change removes the use of __devinit, __devexit_p, __devinitconst,
and __devexit from these drivers.
Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Viresh Kumar <viresh.linux@gmail.com>
Cc: Dan Williams <djbw@fb.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <baohua.song@csr.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CONFIG_HOTPLUG is going away as an option so __devinit is no longer
needed.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer
needed.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Acked-by: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add a DMA engine driver for the TI EDMA controller. This driver
is implemented as a wrapper around the existing DaVinci private
DMA implementation. This approach allows for incremental conversion
of each peripheral driver to the DMA engine API. The EDMA driver
supports slave transfers but does not yet support cyclic transfers.
Signed-off-by: Matt Porter <mporter@ti.com>
Tested-by: Tom Rini <trini@ti.com>
Tested-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>