There are more fields than just SWIDTH in the CH_CONTROL register, so the
value read from the register must be masked in addition to being shifted.
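For illustration only, a minimal sketch of the mask-and-shift pattern this implies; the shift and mask values below are assumptions, not the real PL080 register layout:

    #include <stdint.h>

    /* Hypothetical field position: CH_CONTROL carries several fields,
     * so extracting SWIDTH needs a mask as well as a shift. */
    #define SWIDTH_SHIFT 18
    #define SWIDTH_MASK  0x7u

    static inline uint32_t get_swidth(uint32_t ch_control)
    {
        return (ch_control >> SWIDTH_SHIFT) & SWIDTH_MASK;
    }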
Signed-off-by: Alban Bedel <alban.bedel@avionic-design.de>
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PL080S has a separate register to store the transfer size in, allowing a
single transfer to be much larger than on a standard PL080.
This patch makes the amba-pl08x driver aware of this and stops writing the
transfer size to reserved bits of the CH_CONTROL register on PL080S, which
was not a problem with transfer sizes fitting the original bitfield
of PL080, but would now overwrite other fields.
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PL080S is a modified version of PL080 that can be found on Samsung SoCs,
such as S3C6400 and S3C6410.
It has a different offset for the CONFIG register, a separate CONTROL1
register that holds the transfer size, and a larger maximum transfer size.
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch refactors the debugging code that dumps LLI entries by moving it
into a separate function, which is stubbed out when VERBOSE_DEBUG is not
selected. This allows us to get rid of the ugly ifdef from the body of
pl08x_fill_llis_for_desc().
No functional change is introduced by this patch.
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the memory allocated for LLIs is cast to an array of structs,
which is fragile and also limits the driver to a single, predefined LLI
layout, while some variants of PL08x have more fields in the LLI (namely
PL080S with its extra CCTL2).
This patch makes LLIs a sequence of 32-bit words, which is simply filled
with the appropriate values in the appropriate order and padded with the
required number of dummy words (currently zero, but PL080S will make better
use of this).
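A minimal sketch of the idea, with invented word indices and buffer size (not the driver's actual layout):

    #include <stdint.h>
    #include <string.h>

    enum { LLI_SRC = 0, LLI_DST, LLI_NEXT, LLI_CCTL, LLI_WORDS = 8 };

    /* Fill one LLI as a plain sequence of 32-bit words; variants that
     * need extra words (e.g. a second control word) just use more of
     * the same buffer, and the rest stays zero-padded. */
    static void fill_lli(uint32_t *lli, uint32_t src, uint32_t dst,
                         uint32_t next, uint32_t cctl)
    {
        memset(lli, 0, LLI_WORDS * sizeof(*lli));
        lli[LLI_SRC]  = src;
        lli[LLI_DST]  = dst;
        lli[LLI_NEXT] = next;
        lli[LLI_CCTL] = cctl;
    }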
Suggested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some variants of PL08x (namely PL080S, found in Samsung S3C64xx SoCs)
have the CONFIG register at a different offset. This patch makes the driver
use the offset from the vendor data struct.
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A further patch will introduce support for PL080S, which requires some
things to be done conditionally, increasing the indentation level of
some functions even more.
This patch reduces the indentation level of the pl08x_getbytes_chan()
function by inverting several conditions and returning from the function
wherever possible.
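A generic sketch of the transformation (not the actual driver code):

    /* before: the whole body sits one level deep */
    static int getbytes_nested(int busy)
    {
        int bytes = 0;

        if (busy) {
            /* ... long body ... */
            bytes = 42;
        }
        return bytes;
    }

    /* after: invert the condition, return early, body moves up a level */
    static int getbytes_flat(int busy)
    {
        if (!busy)
            return 0;

        /* ... long body ... */
        return 42;
    }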
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The test here should be ">=" instead of ">". The cdd->chan_busy[] array
has "ALLOC_DECS_NUM" elements.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The pl330 DMA driver is broken in regard to handling a terminate all request
while it is processing the list of completed descriptors. This is most visible
when calling dmaengine_terminate_all() from within the descriptors callback for
cyclic transfers. In this case the TERMINATE_ALL transfer will clear the
work_list and stop the transfer. But after all callbacks for all completed
descriptors have been handled the descriptors will be re-enqueued into the (now
empty) work_list. So the next time dma_async_issue_pending() is called for the
channel these descriptors will be transferred again which will cause data
corruption. Similar issues can occur if dmaengine_terminate_all() is not called
from within the descriptor callback but runs on a different CPU at the same time
as the completed descriptor list is processed.
This patch introduces a new per-channel list which will hold the completed
descriptors. While processing the list the channel's lock will be held to avoid
racing against dmaengine_terminate_all(). The lock will be released when calling
the descriptor's callback, though. Since the list of completed descriptors might
be modified (e.g. by calling dmaengine_terminate_all() from the callback) we
cannot use the normal list iterator macros. Instead we need to check again on
each loop iteration whether there are still items in the list. The driver's
TERMINATE_ALL implementation is updated to move descriptors from both the
work_list as well as the new completed_list back to the descriptor pool. This
makes sure that none of the descriptors finds its way back into the work list
and also that we do not call any further complete callbacks after
dmaengine_terminate_all() has been called.
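The described loop roughly follows this pattern (a sketch with invented names, not the actual pl330 code):

    struct my_desc *desc;   /* hypothetical descriptor with node/callback/param */
    unsigned long flags;

    spin_lock_irqsave(&pch->lock, flags);
    while (!list_empty(&pch->completed_list)) {
        desc = list_first_entry(&pch->completed_list, struct my_desc, node);
        list_del(&desc->node);

        /* Drop the lock around the callback: it may call
         * dmaengine_terminate_all() and empty the list. */
        spin_unlock_irqrestore(&pch->lock, flags);
        if (desc->callback)
            desc->callback(desc->param);
        spin_lock_irqsave(&pch->lock, flags);
    }
    spin_unlock_irqrestore(&pch->lock, flags);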
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With pm_runtime enabled in the kernel the device won't work because it
is not "on" during the probe function. This patch enables the device via
pm_runtime on probe so it remains activated.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Before Randy figures out that this does not compile with CONFIG_BUG=n,
here is a fix for it.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
A bad merge resulted in a left-over free_irq() call. This patch removes it.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
device driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This configuration data will be used when DMAC DT support is added to
r8a73a4.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
All shdma DMACs on ARM SoCs share certain register layout patterns, which
are currently defined in arch/arm/mach-shmobile/include/mach/dma-register.h.
That header is included by SoC-specific setup-*.c files to be used in DMAC
platform data. That header, however, cannot be directly used by the driver.
This patch copies those defines into a driver-local header to be used by
Device Tree configurations.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Macros named like TEND or SAR lack a namespace and are too broadly named
for a global header. Besides, they aren't needed globally. Move them to
where they belong - into the driver. Some other macros aren't used at all;
remove them.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This facilitates DMAC DT support by eliminating the need for AUXDATA and
by avoiding the creation of complex DT data. This also fits well with DMAC
devices, of which SoCs often have multiple identical copies, and it is
perfectly valid to use a single configuration data set for all of them.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Platform data shouldn't be changed at run-time, so, pointers to it should
be const.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Switch shdma to using devm_* managed functions for allocation of memory,
requesting IRQs, mapping IO resources etc.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On newer R-Car SoCs the CHCLR register only contains one bit per channel,
to which a 1 has to be written to reset the channel. Older SoC versions had
one CHCLR register per channel, to which a 0 must be written to reset the
channel and clear its buffers. This patch adds support for the newer
layout.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This adds the ROM script addresses for i.MX25, i.MX5x and i.MX6 to the
SDMA driver needed for the driver to work without additional firmware.
The ROM script addresses are SoC specific and in some cases even tapeout
specific. This patch adds the ROM script addresses only for SoCs which
do not have a tapeout specific SDMA ROM, because currently it's unclear
how this case should be handled.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use a struct type instead of an enum type for distinguishing between
different versions. This makes it simpler to handle multiple differences
without cluttering the code with comparisons for certain devtypes.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As the driver now has its own xlate function and makes use of
dma_get_slave_channel(), we need to manually set the DMA_PRIVATE flag.
Drivers which rely on of_dma_simple_xlate() implicitly do the same by
going through __dma_request_channel().
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Provide a callback to prepare cyclic DMA transfers.
This is for instance needed for audio channel transport.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In order to fully support multiple transactions per channel, we need to
ensure we get an interrupt for each completed transaction. That flags
bit is also our only way to tell at which descriptor a transaction ends.
So, remove the manual clearing of that bit, and then inline the only
remaining command that is left in append_pending_queue() for better
readability.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, when an interrupt has occurred for a channel, the tasklet
worker code will only look at the very last entry in the running list
and complete its cookie, and then dispose of the entire running chain.
Hence, the first transaction's cookie will never complete.
In fact, the interrupt we should handle will be the one related to the
first descriptor in the chain with the ENDIRQEN bit set, yet we complete
the second transaction, which is in fact still running.
As a result, the driver can't currently handle multiple transactions on
one channel, and it's likely that no drivers exist that rely on this
feature.
Fix this by walking the running_chain and looking for the first
descriptor that has the interrupt-enable bit set. Only queue
descriptors up to that point for completion handling, while leaving
the rest intact. Also, only make the channel idle if the list is
completely empty after such a cycle.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the case of a big-endian CPU we have to either convert all fields in the
structure or leave this job to ACPICA. The second choice seems best.
So, let's remove the ugly conversion that is not fully comprehensive anyway.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If "num_disabled" is equal to STEDMA40_MAX_PHYS (32) then we would write
one space beyond the end of the pdata->disable_channels[] array.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When CONFIG_ARM_LPAE=y the following build warnings are generated:
drivers/dma/ste_dma40.c:3228:2: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3593:5: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]
According to Documentation/printk-formats.txt '%pa' can be used to properly
print 'resource_size_t'.
Also, for printing memory region the '%pr' is more convenient.
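For example (a generic snippet, not the exact ste_dma40 lines):

    /* %pa takes a pointer to a resource_size_t/phys_addr_t and prints
     * it correctly for both 32-bit and 64-bit configurations; %pr
     * prints a whole struct resource. */
    dev_info(dev, "registers at %pa (region %pr)\n", &res->start, res);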
Reported-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix:
arch/arm/common/built-in.o: undefined reference to `edma_filter_fn'
seen with "make ARCH=arm allmodconfig"
Commit 6cba4355 (ARM: edma: Add DT and runtime PM support to the private EDMA
API) adds a dependency on edma_filter_fn() into arch/arm/common/edma.c. Since
this file is always built into the kernel, edma_filter_fn() must be built into
the kernel as well.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
device driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the shdma driver __iomem pointers are used to point to hardware
registers. Using typed pointers like "u32 __iomem *" in this case is
inconvenient, because offsets added to such pointers have to be
divided by sizeof(u32) or similar. Switch the driver to use void
pointers, which avoids this clumsiness.
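A sketch of the difference (register access abstracted, offsets in bytes; treat the variables as placeholders):

    void __iomem *regs;
    u32 __iomem *regs_u32;
    unsigned int offset = 0x24;   /* byte offset of some register */
    u32 val;

    /* typed pointer: the byte offset must be scaled down first */
    val = __raw_readl(regs_u32 + offset / sizeof(u32));
    /* void pointer: the byte offset can be added directly */
    val = __raw_readl(regs + offset);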
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
struct sh_dmae_device::chan_reg is a pointer to u32, therefore when adding
offsets to it care should be taken to add offsets in sizeof(u32) units, not
in bytes. This patch corrects such a bug. While at it we also remove the
redundant parameter of the affected function.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Building dma_v3.o triggers a GCC warning:
drivers/dma/ioat/dma_v3.c: In function ‘__ioat3_prep_pq16_lock’:
drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]
drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]
This warning is caused by pq16_set_src(). It uses "int idx" as an index
to an eight element array. Changing "idx" to "unsigned" silences this
warning. Apparently GCC can then determine that "idx" will never be
negative.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <djbw@fb.com>
dma_channel_rebalance() currently distributes channels by processor ID.
These IDs often change with the BIOS, and the order isn't related to
the DMA channel list (related to PCI bus ids).
* On my SuperMicro dual E5 machine, first socket has processor IDs [0-7]
(and [16-23] for hyperthreads), second socket has [8-15]+[24-31]
=> channels are properly allocated to local CPUs.
* On a Dell R720 with the same processors, the first socket has even
processor IDs, the second socket has odd numbers
=> half the processors get channels on the remote socket, causing
cross-NUMA traffic and lower DMA performance.
Change nth_chan() to return the channel with min table_count and in the
NUMA node of the given CPU, if any. If none, the (non-local) channel with
min table_count is returned. nth_chan() is therefore renamed into min_chan()
since we don't iterate until the nth channel anymore. In practice, the
behavior is the same because first channels are taken first and are then
ignored because they got an additional reference.
The new code has a slightly higher complexity since we always scan the
entire list of channels for finding the minimal table_count (instead
of stopping after N chans), and because we check whether the CPU is in the
DMA device locality mask. Overall we still have time complexity =
number of chans x number of processors. This rebalance is rarely used,
so this won't hurt.
On the above SuperMicro machine, channels are still allocated the same.
On the Dell machines, there are no locality issues anymore (MEMCPY channel X
goes to processor X and to its hyperthread sibling).
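A self-contained sketch of the selection policy (data structures invented for the example, not the dmaengine core types):

    struct chan_info { int node; unsigned int table_count; };

    /* Prefer the least-used channel on the CPU's NUMA node; if there
     * is none, fall back to the globally least-used channel. */
    static int min_chan_idx(const struct chan_info *ch, int n, int cpu_node)
    {
        int i, local = -1, any = -1;

        for (i = 0; i < n; i++) {
            if (any < 0 || ch[i].table_count < ch[any].table_count)
                any = i;
            if (ch[i].node == cpu_node &&
                (local < 0 || ch[i].table_count < ch[local].table_count))
                local = i;
        }
        return local >= 0 ? local : any;
    }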
Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Signed-off-by: Dan Williams <djbw@fb.com>
Disable RAID on non-Atom platforms and remove related fixups such as the
64-byte alignment restriction on legacy DMA operations (introduced in
commit f26df1a1 as a workaround for silicon errata).
Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Jon Mason <jon.mason@intel.com>
Signed-off-by: Dan Williams <djbw@fb.com>
The mv_xor driver had never been used in a big-endian context, and
therefore was not using the hardware features to support such an
execution environment. The hardware provides a "descriptor swap" bit
that automatically swaps the bytes of the DMA descriptors, within
blocks of 8 bytes. This requires a different DMA descriptor layout on
big-endian systems, as well as enabling this "descriptor swap" bit.
This mechanism is exactly identical to the one already used in the
mv643xx_eth network driver and the mvneta network driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <djbw@fb.com>
In order to support big-endian execution, the mv_xor driver is changed
to use the readl_relaxed() and writel_relaxed() accessors that
properly convert from the CPU endianness to the device endianness (which
in the case of Marvell XOR hardware is always little-endian).
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <djbw@fb.com>
Let's move the behaviour of printing no error message back to pre-v3.10
times. That means we will use the debug level in the described case, and the
warning level otherwise.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <djbw@fb.com>
There is very little chance that we are able to create a directory but not
able to create nodes under it. So, this patch just removes those checks.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <djbw@fb.com>
The debugfs interface brought a copy of the test case parameters. This creates
two different sets of values, one under /sys/module/dmatest/parameters/ and one
under /sys/kernel/debug/dmatest/, and the user might be confused by the
divergence of values.
The solution proposed in this patch is to make the module parameters writable
and remove them from debugfs, though we still use debugfs to control the test
runner and to get results.
The documentation is updated accordingly.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Suggested-by: Dan Williams <djbw@fb.com>
Signed-off-by: Dan Williams <djbw@fb.com>
In Rob's recent pull request the patch
ARM: highbank: select ARCH_DMA_ADDR_T_64BIT for LPAE
promotes dma_addr_t to 64bit, so printk generates a warning about
an incorrect type. Fix this by casting it to u64 and using %llx.
Fixing long lines on the way.
Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
When dma_addr_t is 64 bits long, compilation of the AMBA PL08x DMA
driver breaks due to a missing 64bit%8bit modulo operation.
Looking more closely, the divisor in these operations can only be
1, 2 or 4, so the full-featured '%' modulo operation is overkill and
can be replaced with simple bit masking.
Change from v1:
Replace open-coded function with existing IS_ALIGNED macro and use a
macro around that to avoid a line becoming too long.
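A sketch of the equivalence the change relies on (macro name invented; the kernel's IS_ALIGNED() macro does the same job):

    #include <stdint.h>

    /* For a power-of-two divisor (1, 2 or 4 here), "addr % width == 0"
     * is the same as a mask test, and the mask test works on a 64-bit
     * dma_addr_t without needing a 64-by-N modulo helper. */
    #define MY_IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)

    static int bus_aligned(uint64_t dma_addr, unsigned int width)
    {
        return MY_IS_ALIGNED(dma_addr, (uint64_t)width);
    }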
Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
With all mxs-dma clients moved to use the generic DMA helper, the code
left over from the generic DMA binding conversion can be removed now.
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After the patch: "2ccaef0 dma: imx-sdma: make channel0 operations atomic",
the "done" completion is not used any more.
Just remove it.
Signed-off-by: Huang Shijie <b32955@freescale.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove unneeded error handling on the result of a call to
platform_get_resource when the value is passed to devm_ioremap_resource.
A simplified version of the semantic patch that makes this change is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@
- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
e = devm_ioremap_resource(e1, res);
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The PXA DMA controller has a DALGN register which allows for
byte-aligned DMA transfers. Use it in case any of the transfer
descriptors is not aligned to a mask of ~0x7.
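Roughly, the check looks like this (register and bit layout simplified; a sketch, not the exact PXA programming sequence):

    #define ALIGN_MASK 0x7

    /* If any address or length in the transfer is not 8-byte aligned,
     * set this channel's bit in DALGN so the controller performs
     * byte-aligned accesses; otherwise clear it. */
    if ((src_addr | dst_addr | len) & ALIGN_MASK)
        dalgn |= BIT(chan_nr);
    else
        dalgn &= ~BIT(chan_nr);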
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMA_SLAVE capability flag is currently set twice.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch makes the mmp_pdma controller able to provide DMA resources
in DT environments by providing a DMA xlate function.
of_dma_simple_xlate() isn't used here, because it fails to handle
multiple different DMA engines or several instances of the same
controller. Instead, a private implementation is provided that makes use
of the newly introduced dma_get_slave_channel() call.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PXA peripherals need to obtain specific DMA request ids which will
eventually be stored in the DRCMR register.
Currently, clients are expected to store that number inside the slave
config block as slave_id, which is unfortunately incompatible with the
way DMA resources are handled in DT environments.
This patch adds a filter function which stores the filter parameter
passed in by of-dma.c into the channel's drcmr register.
For backward compatibility, cfg->slave_id is still used if set to
a non-zero value.
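A rough sketch of such a filter (helper and field names invented, not the driver's actual code):

    static bool mmp_pdma_dt_filter(struct dma_chan *chan, void *param)
    {
        struct my_pdma_chan *pchan = to_my_pdma_chan(chan);  /* hypothetical */

        /* The DT dma-specifier cell becomes the channel's DRCMR
         * request number. */
        pchan->drcmr = *(unsigned int *)param;
        return true;
    }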
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There's no reason to limit the maximum transfer length to 0x1000.
Take the actual bit mask instead; the PDMA is able to transfer chunks of
up to SZ_8K - 1.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As suggested by Ezequiel García, release the spinlock at the end of the
function only, and use a goto for the control flow.
Just a minor cleanup.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The exact same calculation is done twice, so let's factor it out to a
macro.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
All patches here have been pending on linux-usb
and sitting in linux-next for a while now.
The biggest things in this tag are:
DWC3 learned proper usage of threaded IRQ
handlers and now we spend very little time
in hardirq context.
MUSB now has proper support for BeagleBone and
Beaglebone Black.
Tegra's USB support also got quite a bit of love
and is learning to use PHY layer and generic DT
attributes.
Other than that, the usual pack of cleanups and
non-critical fixes follow.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Merge tag 'usb-for-v3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next
Felipe writes:
usb: patches for v3.12 merge window
All patches here have been pending on linux-usb
and sitting in linux-next for a while now.
The biggest things in this tag are:
DWC3 learned proper usage of threaded IRQ
handlers and now we spend very little time
in hardirq context.
MUSB now has proper support for BeagleBone and
Beaglebone Black.
Tegra's USB support also got quite a bit of love
and is learning to use PHY layer and generic DT
attributes.
Other than that, the usual pack of cleanups and
non-critical fixes follow.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Conflicts:
drivers/usb/gadget/udc-core.c
drivers/usb/host/ehci-tegra.c
drivers/usb/musb/omap2430.c
drivers/usb/musb/tusb6010.c
This patch adds __pl330_giveback_descs(), which gives back descriptors when
allocating descriptors fails. It is needed to eliminate duplication for
pl330_prep_dma_sg(), which will be added later.
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds PM ops entries to the sirf-dma driver, so that this
driver can support suspend/resume, hibernation and runtime PM.
While suspending, sirf-dma will lose all registers, so we save
them at suspend and restore them at resume for active channels.
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Rongjun Ying <Rongjun.Ying@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the wrapper function for retrieving the platform data instead of
accessing dev->platform_data directly.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sirfsoc_dma_prep_cyclic() returns pointer, thus NULL should be
used instead of 0 in order to fix the following sparse warning:
drivers/dma/sirf-dma.c:598:24: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
%p is used, thus NULL should be used instead of 0
in order to fix the following sparse warning:
drivers/dma/mv_xor.c:648:9: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
mmp_pdma_alloc_descriptor() is used only in this file.
Fix the following sparse warning:
drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This driver is currently used by musb's cppi41 counterpart. I may merge
both dma engine users of musb at some point, but not just yet.
The driver seems to work in RX/TX mode in host mode, tested on mass
storage. I increased the size of the TX / RX transfers and waited for the
core code to cancel a transfer, and it seems to recover.
v2..v3:
- use small transfers on the RX side and check the data toggle.
- use rndis mode on the tx side so we have one interrupt for 4096 transfers.
- remove the custom "transferred" hack and use dmaengine_tx_status() to
compute the total amount of data that has been transferred.
- cancel transfers and reclaim descriptors
v1..v2:
- RX path added
- dma mode 0 & 1 is working
- device tree nodes re-created.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <djbw@fb.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
In mmp pdma, phy channels are allocated/freed dynamically.
The mapping from a DMA request to a DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise
conflicts will happen when:
1. A is using channel 2 and frees it when finished, but A
still maps to channel 2 in its DRCMR.
2. Now another user B gets channel 2, so B maps to channel 2
too in its DRCMR.
In the datasheet, it is stated that "Do not map two active
requests to the same channel since it produces unpredictable
results", and we can observe that during testing.
Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In mmp pdma, phy channels are allocated/freed dynamically
and frequently, but no proper protection is added.
Conflicts will happen when multiple users request phy
channels at the same time. Use a spinlock to protect the allocation.
Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To follow the usual practice, let's return the DMA_PAUSED status only if
dma_cookie_status() returned DMA_IN_PROGRESS.
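In other words, the status callback ends up shaped roughly like this (a sketch; the paused check is driver-specific and the helper name is invented):

    ret = dma_cookie_status(chan, cookie, txstate);
    if (ret == DMA_IN_PROGRESS && chan_is_paused(chan))  /* hypothetical helper */
        ret = DMA_PAUSED;
    return ret;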
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in going through the rest of the function if the first call
to dma_cookie_status() returned DMA_SUCCESS.
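That is, something along these lines (sketch of the early-return shape, not the exact driver code):

    ret = dma_cookie_status(chan, cookie, txstate);
    if (ret == DMA_SUCCESS)
        return ret;

    /* only now compute and report the residue */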
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the PC world it is quite possible that devices share the same interrupt
line. This patch prepares the dw_dmac driver for such cases.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In general ~0 does not fit into some integer types. Let's add a helper to
compare against that constant properly.
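One possible shape for such a helper (illustrative, not necessarily the one added by the patch):

    /* Compare a value of any unsigned integer type against "all ones"
     * of its own width, instead of against a plain ~0. */
    #define is_all_ones(x)  ((x) == (typeof(x))~0)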
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In rare cases (mostly for testing purposes) the dw_dmac driver might be
compiled as a module, as might the other LPSS device drivers (I2C, SPI,
HSUART). When udev handles the event of the devices appearing, the dw_dmac
module is missing. This patch will fix that.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes sparse warning:
drivers/dma/acpi-dma.c:76:21: sparse: cast to restricted __le32
Since everything in all ACPI tables is little-endian by definition, the types
used in practice are uXX. Thus, we have to enforce __leXX if we want to convert
them to CPU byte order.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in going through the rest of the function if the first call
to dma_cookie_status() returned DMA_SUCCESS.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_set_residue() sets only the residue value, so the user can't rely on the
returned values of the cookies. This patch standardizes the behaviour.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It's better to use the generic dma_cookie_status(), which allows the user to
get the standard possible return codes independently of the DMAC driver in
charge.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: linux-tegra@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The residue value is already set to 0 by dma_cookie_status().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The last_used variable is used only once, so let's substitute it with its value.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The last_used variable is used only once, so let's substitute it with its value.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sh_desc->hw.tcr controls the real data size,
while the TCR register controls the data transfer count,
which is the xmit_shifted value of hw.tcr.
The current sh_dmae_get_partial() calculates in a different unit.
This patch fixes it.
This bug has been present since c014906a87
("dmaengine: shdma: extend .device_terminate_all() to record partial
transfer"), which was added in 2.6.34-rc1.
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Acked-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Allocate a descriptor for each period of a cyclic transfer, not just the first.
Also, since the callback needs to be called for each finished period, make sure
to initialize the callback and callback_param fields of each descriptor in a
cyclic transfer.
Cc: stable@vger.kernel.org
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dev_attrs field of struct class is going away soon; dev_groups
should be used instead. This converts the dma dma_devclass code to use
the correct field.
Cc: Dan Williams <djbw@fb.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fix to return -ENODEV in the "no proper base address found" error handling
case instead of 0, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Implement the device_slave_caps() callback for the pl330 driver. This allows
dmaengine users like the generic ALSA dmaengine PCM driver to query the
capabilities of the driver. The PL330 supports all buswidths and both
mem-to-dev as well as dev-to-mem transfers. In theory there is no limit on the
number of segments that can be transferred (in practice you'll run out of memory
eventually) and the number of bytes per segment is limited by the size of the
PL330 program buffer. Due to the nature of the PL330, the maximum number of bytes
per segment depends on the burst size; the driver sets it to the value for a
1-byte burst size, since it is the smallest.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The recent "drivers/dma: remove unused support for MEMSET operations"
change has fallout from lack of build testing by the author. This
fixes:
drivers/dma/iop-adma.c:1020:13: warning: unused variable 'dma_addr' [-Wunused-variable]
drivers/dma/iop-adma.c:1519:2: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull slave-dmaengine updates from Vinod Koul:
"Once you have some time from extended weekend celebrations please
consider pulling the following to get:
- Various fixes and PCI driver for dw_dmac by Andy
- DT binding for imx-dma by Markus & imx-sdma by Shawn
- DT fixes for dmaengine by Lars
- jz4740 dmac driver by Lars
- and various fixes across the drivers"
What "extended weekend celebrations"? I'm in the merge window, who has
time for extended celebrations..
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
DMA: shdma: add DT support
DMA: shdma: shdma_chan_filter() has to be in shdma-base.h
DMA: shdma: (cosmetic) don't re-calculate a pointer
dmaengine: at_hdmac: prepare clk before calling enable
dmaengine/trivial: at_hdmac: add curly brackets to if/else expressions
dmaengine: at_hdmac: remove unsuded atc_cleanup_descriptors()
dmaengine: at_hdmac: add FIFO configuration parameter to DMA DT binding
ARM: at91: dt: add header to define at_hdmac configuration
MIPS: jz4740: Correct clock gate bit for DMA controller
MIPS: jz4740: Remove custom DMA API
MIPS: jz4740: Register jz4740 DMA device
dma: Add a jz4740 dmaengine driver
MIPS: jz4740: Acquire and enable DMA controller clock
dma: mmp_tdma: disable irq when disabling dma channel
dmaengine: PL08x: Avoid collisions with get_signal() macro
dmaengine: dw: select DW_DMAC_BIG_ENDIAN_IO automagically
dma: dw: add PCI part of the driver
dma: dw: split driver to library part and platform code
dma: move dw_dmac driver to an own directory
dw_dmac: don't check resource with devm_ioremap_resource
...
This patch adds Device Tree support to the shdma driver. No special DT
properties are used, only standard DMA DT bindings are implemented. Since
shdma controllers reside on SoCs, their configuration is SoC-specific and
shall be passed to the driver from the SoC platform data, using the
auxdata procedure.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use an existing pointer instead of retrieving it again.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Replace clk_enable/disable with clk_prepare_enable/disable_unprepare to
avoid common clk framework warnings.
Signed-off-by: Boris BREZILLON <b.brezillon@overkiz.com>
[nicolas.ferre@atmel.com: remove return code checking in at_dma_resume_noirq()]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>