// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright 2015 Robert Jarzmik <robert.jarzmik@free.fr>
 */

#include <linux/err.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/platform_data/mmp_dma.h>
#include <linux/dmapool.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#include <linux/of.h>
#include <linux/wait.h>
#include <linux/dma/pxa-dma.h>

#include "dmaengine.h"
#include "virt-dma.h"

#define DCSR(n)		(0x0000 + ((n) << 2))
#define DALGN(n)	0x00a0
#define DINT		0x00f0
#define DDADR(n)	(0x0200 + ((n) << 4))
#define DSADR(n)	(0x0204 + ((n) << 4))
#define DTADR(n)	(0x0208 + ((n) << 4))
#define DCMD(n)		(0x020c + ((n) << 4))

#define PXA_DCSR_RUN		BIT(31)	/* Run Bit (read / write) */
#define PXA_DCSR_NODESC		BIT(30)	/* No-Descriptor Fetch (read / write) */
#define PXA_DCSR_STOPIRQEN	BIT(29)	/* Stop Interrupt Enable (R/W) */
#define PXA_DCSR_REQPEND	BIT(8)	/* Request Pending (read-only) */
#define PXA_DCSR_STOPSTATE	BIT(3)	/* Stop State (read-only) */
#define PXA_DCSR_ENDINTR	BIT(2)	/* End Interrupt (read / write) */
#define PXA_DCSR_STARTINTR	BIT(1)	/* Start Interrupt (read / write) */
#define PXA_DCSR_BUSERR		BIT(0)	/* Bus Error Interrupt (read / write) */

#define PXA_DCSR_EORIRQEN	BIT(28)	/* End of Receive IRQ Enable (R/W) */
#define PXA_DCSR_EORJMPEN	BIT(27)	/* Jump to next descriptor on EOR */
#define PXA_DCSR_EORSTOPEN	BIT(26)	/* STOP on an EOR */
#define PXA_DCSR_SETCMPST	BIT(25)	/* Set Descriptor Compare Status */
#define PXA_DCSR_CLRCMPST	BIT(24)	/* Clear Descriptor Compare Status */
#define PXA_DCSR_CMPST		BIT(10)	/* The Descriptor Compare Status */
#define PXA_DCSR_EORINTR	BIT(9)	/* The end of Receive */

#define DRCMR_MAPVLD	BIT(7)	/* Map Valid (read / write) */
#define DRCMR_CHLNUM	0x1f	/* mask for Channel Number (read / write) */

#define DDADR_DESCADDR	0xfffffff0	/* Address of next descriptor (mask) */
#define DDADR_STOP	BIT(0)	/* Stop (read / write) */

#define PXA_DCMD_INCSRCADDR	BIT(31)	/* Source Address Increment Setting. */
#define PXA_DCMD_INCTRGADDR	BIT(30)	/* Target Address Increment Setting. */
#define PXA_DCMD_FLOWSRC	BIT(29)	/* Flow Control by the source. */
#define PXA_DCMD_FLOWTRG	BIT(28)	/* Flow Control by the target. */
#define PXA_DCMD_STARTIRQEN	BIT(22)	/* Start Interrupt Enable */
#define PXA_DCMD_ENDIRQEN	BIT(21)	/* End Interrupt Enable */
#define PXA_DCMD_ENDIAN		BIT(18)	/* Device Endian-ness. */
#define PXA_DCMD_BURST8		(1 << 16)	/* 8 byte burst */
#define PXA_DCMD_BURST16	(2 << 16)	/* 16 byte burst */
#define PXA_DCMD_BURST32	(3 << 16)	/* 32 byte burst */
#define PXA_DCMD_WIDTH1		(1 << 14)	/* 1 byte width */
#define PXA_DCMD_WIDTH2		(2 << 14)	/* 2 byte width (HalfWord) */
#define PXA_DCMD_WIDTH4		(3 << 14)	/* 4 byte width (Word) */
#define PXA_DCMD_LENGTH		0x01fff	/* length mask (max = 8K - 1) */

#define PDMA_ALIGNMENT		3
#define PDMA_MAX_DESC_BYTES	(PXA_DCMD_LENGTH & ~((1 << PDMA_ALIGNMENT) - 1))
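
/*
 * Note (illustrative): the per-channel registers are laid out linearly from
 * the controller base. For channel 2, for instance, the macros above resolve
 * to DCSR(2) = 0x0008, DDADR(2) = 0x0220, DSADR(2) = 0x0224,
 * DTADR(2) = 0x0228 and DCMD(2) = 0x022c, while DALGN and DINT are single
 * shared registers carrying one bit per channel. PDMA_MAX_DESC_BYTES
 * evaluates to 0x1ff8 (8184), the largest 8-byte-aligned length a single
 * descriptor can carry.
 */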

struct pxad_desc_hw {
	u32 ddadr;	/* Points to the next descriptor + flags */
	u32 dsadr;	/* DSADR value for the current transfer */
	u32 dtadr;	/* DTADR value for the current transfer */
	u32 dcmd;	/* DCMD value for the current transfer */
} __aligned(16);
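
/*
 * Example (illustrative only, all addresses assumed): a hardware descriptor
 * moving 2000 bytes from memory to a peripheral FIFO could look like:
 *	ddadr = <dma address of the next pxad_desc_hw> (or DDADR_STOP if last)
 *	dsadr = <dma address of the source buffer>
 *	dtadr = <FIFO register dma address>
 *	dcmd  = PXA_DCMD_INCSRCADDR | PXA_DCMD_FLOWTRG | PXA_DCMD_WIDTH4 |
 *		PXA_DCMD_BURST32 | (PXA_DCMD_LENGTH & 2000)
 * The 16-byte alignment is required by the controller's descriptor fetch.
 */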

struct pxad_desc_sw {
	struct virt_dma_desc vd;	/* Virtual descriptor */
	int nb_desc;			/* Number of hw. descriptors */
	size_t len;			/* Number of bytes xfered */
	dma_addr_t first;		/* First descriptor's addr */

	/* At least one descriptor has a src/dst address not a multiple of 8 */
	bool misaligned;
	bool cyclic;
	struct dma_pool *desc_pool;	/* Channel's used allocator */

	struct pxad_desc_hw *hw_desc[];	/* DMA coherent descriptors */
};

struct pxad_phy {
	int			idx;
	void __iomem		*base;
	struct pxad_chan	*vchan;
};

struct pxad_chan {
	struct virt_dma_chan	vc;		/* Virtual channel */
	u32			drcmr;		/* Requestor of the channel */
	enum pxad_chan_prio	prio;		/* Required priority of phy */
	/*
	 * At least one desc_sw in submitted or issued transfers on this
	 * channel has one address such as: addr % 8 != 0. This implies
	 * the DALGN setting on the phy.
	 */
	bool			misaligned;
	struct dma_slave_config	cfg;		/* Runtime config */

	/* protected by vc->lock */
	struct pxad_phy		*phy;
	struct dma_pool		*desc_pool;	/* Descriptors pool */
	dma_cookie_t		bus_error;

	wait_queue_head_t	wq_state;
};

struct pxad_device {
	struct dma_device		slave;
	int				nr_chans;
	int				nr_requestors;
	void __iomem			*base;
	struct pxad_phy			*phys;
	spinlock_t			phy_lock;	/* Phy association */
#ifdef CONFIG_DEBUG_FS
	struct dentry			*dbgfs_root;
	struct dentry			**dbgfs_chan;
#endif
};

#define tx_to_pxad_desc(tx)					\
	container_of(tx, struct pxad_desc_sw, async_tx)
#define to_pxad_chan(dchan)					\
	container_of(dchan, struct pxad_chan, vc.chan)
#define to_pxad_dev(dmadev)					\
	container_of(dmadev, struct pxad_device, slave)
#define to_pxad_sw_desc(_vd)					\
	container_of((_vd), struct pxad_desc_sw, vd)

#define _phy_readl_relaxed(phy, _reg)					\
	readl_relaxed((phy)->base + _reg((phy)->idx))
#define phy_readl_relaxed(phy, _reg)					\
	({								\
		u32 _v;							\
		_v = readl_relaxed((phy)->base + _reg((phy)->idx));	\
		dev_vdbg(&phy->vchan->vc.chan.dev->device,		\
			 "%s(): readl(%s): 0x%08x\n", __func__, #_reg,	\
			 _v);						\
		_v;							\
	})
#define phy_writel(phy, val, _reg)					\
	do {								\
		writel((val), (phy)->base + _reg((phy)->idx));		\
		dev_vdbg(&phy->vchan->vc.chan.dev->device,		\
			 "%s(): writel(0x%08x, %s)\n",			\
			 __func__, (u32)(val), #_reg);			\
	} while (0)
#define phy_writel_relaxed(phy, val, _reg)				\
	do {								\
		writel_relaxed((val), (phy)->base + _reg((phy)->idx));	\
		dev_vdbg(&phy->vchan->vc.chan.dev->device,		\
			 "%s(): writel_relaxed(0x%08x, %s)\n",		\
			 __func__, (u32)(val), #_reg);			\
	} while (0)
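
/*
 * For instance, phy_readl_relaxed(phy, DCSR) expands (dev_vdbg trace apart)
 * to readl_relaxed(phy->base + DCSR(phy->idx)), i.e. a read of this
 * channel's status register; the leading-underscore variant is the same
 * access without the trace.
 */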

static unsigned int pxad_drcmr(unsigned int line)
{
	if (line < 64)
		return 0x100 + line * 4;
	return 0x1000 + line * 4;
}
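
/*
 * Worked example (illustrative): requestor line 25 maps to the DRCMR
 * register at offset 0x100 + 25 * 4 = 0x164, while line 70 falls in the
 * upper bank at 0x1000 + 70 * 4 = 0x1118.
 */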

static bool pxad_filter_fn(struct dma_chan *chan, void *param);

/*
 * Debug fs
 */
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/seq_file.h>

static int requester_chan_show(struct seq_file *s, void *p)
{
	struct pxad_phy *phy = s->private;
	int i;
	u32 drcmr;

	seq_printf(s, "DMA channel %d requester :\n", phy->idx);
	for (i = 0; i < 70; i++) {
		drcmr = readl_relaxed(phy->base + pxad_drcmr(i));
		if ((drcmr & DRCMR_CHLNUM) == phy->idx)
			seq_printf(s, "\tRequester %d (MAPVLD=%d)\n", i,
				   !!(drcmr & DRCMR_MAPVLD));
	}
	return 0;
}

static inline int dbg_burst_from_dcmd(u32 dcmd)
{
	int burst = (dcmd >> 16) & 0x3;

	return burst ? 4 << burst : 0;
}
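
/*
 * E.g. a dcmd with PXA_DCMD_BURST32 set has burst field 3, giving
 * 4 << 3 = 32 bytes; a field of 0 means no burst is programmed.
 */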

static int is_phys_valid(unsigned long addr)
{
	return pfn_valid(__phys_to_pfn(addr));
}

#define PXA_DCSR_STR(flag) (dcsr & PXA_DCSR_##flag ? #flag" " : "")
#define PXA_DCMD_STR(flag) (dcmd & PXA_DCMD_##flag ? #flag" " : "")

static int descriptors_show(struct seq_file *s, void *p)
{
	struct pxad_phy *phy = s->private;
	int i, max_show = 20, burst, width;
	u32 dcmd;
	unsigned long phys_desc, ddadr;
	struct pxad_desc_hw *desc;

	phys_desc = ddadr = _phy_readl_relaxed(phy, DDADR);

	seq_printf(s, "DMA channel %d descriptors :\n", phy->idx);
	seq_printf(s, "[%03d] First descriptor unknown\n", 0);
	for (i = 1; i < max_show && is_phys_valid(phys_desc); i++) {
		desc = phys_to_virt(phys_desc);
		dcmd = desc->dcmd;
		burst = dbg_burst_from_dcmd(dcmd);
		width = (1 << ((dcmd >> 14) & 0x3)) >> 1;

		seq_printf(s, "[%03d] Desc at %08lx(virt %p)\n",
			   i, phys_desc, desc);
		seq_printf(s, "\tDDADR = %08x\n", desc->ddadr);
		seq_printf(s, "\tDSADR = %08x\n", desc->dsadr);
		seq_printf(s, "\tDTADR = %08x\n", desc->dtadr);
		seq_printf(s, "\tDCMD  = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
			   dcmd,
			   PXA_DCMD_STR(INCSRCADDR), PXA_DCMD_STR(INCTRGADDR),
			   PXA_DCMD_STR(FLOWSRC), PXA_DCMD_STR(FLOWTRG),
			   PXA_DCMD_STR(STARTIRQEN), PXA_DCMD_STR(ENDIRQEN),
			   PXA_DCMD_STR(ENDIAN), burst, width,
			   dcmd & PXA_DCMD_LENGTH);
		phys_desc = desc->ddadr;
	}
	if (i == max_show)
		seq_printf(s, "[%03d] Desc at %08lx ... max display reached\n",
			   i, phys_desc);
	else
		seq_printf(s, "[%03d] Desc at %08lx is %s\n",
			   i, phys_desc, phys_desc == DDADR_STOP ?
			   "DDADR_STOP" : "invalid");

	return 0;
}

static int chan_state_show(struct seq_file *s, void *p)
{
	struct pxad_phy *phy = s->private;
	u32 dcsr, dcmd;
	int burst, width;
	static const char * const str_prio[] = {
		"high", "normal", "low", "invalid"
	};

	dcsr = _phy_readl_relaxed(phy, DCSR);
	dcmd = _phy_readl_relaxed(phy, DCMD);
	burst = dbg_burst_from_dcmd(dcmd);
	width = (1 << ((dcmd >> 14) & 0x3)) >> 1;

	seq_printf(s, "DMA channel %d\n", phy->idx);
	seq_printf(s, "\tPriority : %s\n",
		   str_prio[(phy->idx & 0xf) / 4]);
	seq_printf(s, "\tUnaligned transfer bit: %s\n",
		   _phy_readl_relaxed(phy, DALGN) & BIT(phy->idx) ?
		   "yes" : "no");
	seq_printf(s, "\tDCSR  = %08x (%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
		   dcsr, PXA_DCSR_STR(RUN), PXA_DCSR_STR(NODESC),
		   PXA_DCSR_STR(STOPIRQEN), PXA_DCSR_STR(EORIRQEN),
		   PXA_DCSR_STR(EORJMPEN), PXA_DCSR_STR(EORSTOPEN),
		   PXA_DCSR_STR(SETCMPST), PXA_DCSR_STR(CLRCMPST),
		   PXA_DCSR_STR(CMPST), PXA_DCSR_STR(EORINTR),
		   PXA_DCSR_STR(REQPEND), PXA_DCSR_STR(STOPSTATE),
		   PXA_DCSR_STR(ENDINTR), PXA_DCSR_STR(STARTINTR),
		   PXA_DCSR_STR(BUSERR));

	seq_printf(s, "\tDCMD  = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
		   dcmd,
		   PXA_DCMD_STR(INCSRCADDR), PXA_DCMD_STR(INCTRGADDR),
		   PXA_DCMD_STR(FLOWSRC), PXA_DCMD_STR(FLOWTRG),
		   PXA_DCMD_STR(STARTIRQEN), PXA_DCMD_STR(ENDIRQEN),
		   PXA_DCMD_STR(ENDIAN), burst, width, dcmd & PXA_DCMD_LENGTH);
	seq_printf(s, "\tDSADR = %08x\n", _phy_readl_relaxed(phy, DSADR));
	seq_printf(s, "\tDTADR = %08x\n", _phy_readl_relaxed(phy, DTADR));
	seq_printf(s, "\tDDADR = %08x\n", _phy_readl_relaxed(phy, DDADR));

	return 0;
}

static int state_show(struct seq_file *s, void *p)
{
	struct pxad_device *pdev = s->private;

	/* basic device status */
	seq_puts(s, "DMA engine status\n");
	seq_printf(s, "\tChannel number: %d\n", pdev->nr_chans);

	return 0;
}

DEFINE_SHOW_ATTRIBUTE(state);
DEFINE_SHOW_ATTRIBUTE(chan_state);
DEFINE_SHOW_ATTRIBUTE(descriptors);
DEFINE_SHOW_ATTRIBUTE(requester_chan);

static struct dentry *pxad_dbg_alloc_chan(struct pxad_device *pdev,
					  int ch, struct dentry *chandir)
{
	char chan_name[11];
	struct dentry *chan;
	void *dt;

	scnprintf(chan_name, sizeof(chan_name), "%d", ch);
	chan = debugfs_create_dir(chan_name, chandir);
	dt = (void *)&pdev->phys[ch];

	debugfs_create_file("state", 0400, chan, dt, &chan_state_fops);
	debugfs_create_file("descriptors", 0400, chan, dt, &descriptors_fops);
	debugfs_create_file("requesters", 0400, chan, dt, &requester_chan_fops);

	return chan;
}

static void pxad_init_debugfs(struct pxad_device *pdev)
{
	int i;
	struct dentry *chandir;

	pdev->dbgfs_chan =
		kmalloc_array(pdev->nr_chans, sizeof(struct dentry *),
			      GFP_KERNEL);
	if (!pdev->dbgfs_chan)
		return;

	pdev->dbgfs_root = debugfs_create_dir(dev_name(pdev->slave.dev), NULL);

	debugfs_create_file("state", 0400, pdev->dbgfs_root, pdev, &state_fops);

	chandir = debugfs_create_dir("channels", pdev->dbgfs_root);

	for (i = 0; i < pdev->nr_chans; i++)
		pdev->dbgfs_chan[i] = pxad_dbg_alloc_chan(pdev, i, chandir);
}

static void pxad_cleanup_debugfs(struct pxad_device *pdev)
{
	debugfs_remove_recursive(pdev->dbgfs_root);
}
#else
static inline void pxad_init_debugfs(struct pxad_device *pdev) {}
static inline void pxad_cleanup_debugfs(struct pxad_device *pdev) {}
#endif

static struct pxad_phy *lookup_phy(struct pxad_chan *pchan)
{
	int prio, i;
	struct pxad_device *pdev = to_pxad_dev(pchan->vc.chan.device);
	struct pxad_phy *phy, *found = NULL;
	unsigned long flags;

	/*
	 * dma channel priorities
	 * ch 0 - 3,  16 - 19  <--> (0)
	 * ch 4 - 7,  20 - 23  <--> (1)
	 * ch 8 - 11, 24 - 27  <--> (2)
	 * ch 12 - 15, 28 - 31 <--> (3)
	 */

	spin_lock_irqsave(&pdev->phy_lock, flags);
	for (prio = pchan->prio; prio >= PXAD_PRIO_HIGHEST; prio--) {
		for (i = 0; i < pdev->nr_chans; i++) {
			if (prio != (i & 0xf) >> 2)
				continue;
			phy = &pdev->phys[i];
			if (!phy->vchan) {
				phy->vchan = pchan;
				found = phy;
				goto out_unlock;
			}
		}
	}

out_unlock:
	spin_unlock_irqrestore(&pdev->phy_lock, flags);
	dev_dbg(&pchan->vc.chan.dev->device,
		"%s(): phy=%p(%d)\n", __func__, found,
		found ? found->idx : -1);

	return found;
}
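
/*
 * As an example of the mapping above: priority 2 corresponds to physical
 * channels 8..11 and 24..27 ((i & 0xf) >> 2 == 2). The loop starts at the
 * channel's requested priority and, if no phy there is free, falls back to
 * more privileged banks (lower prio values) until one is found or none
 * remains.
 */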

static void pxad_free_phy(struct pxad_chan *chan)
{
	struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);
	unsigned long flags;
	u32 reg;

	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): freeing\n", __func__);
	if (!chan->phy)
		return;

	/* clear the channel mapping in DRCMR */
	if (chan->drcmr <= pdev->nr_requestors) {
		reg = pxad_drcmr(chan->drcmr);
		writel_relaxed(0, chan->phy->base + reg);
	}

	spin_lock_irqsave(&pdev->phy_lock, flags);
	chan->phy->vchan = NULL;
	chan->phy = NULL;
	spin_unlock_irqrestore(&pdev->phy_lock, flags);
}

static bool is_chan_running(struct pxad_chan *chan)
{
	u32 dcsr;
	struct pxad_phy *phy = chan->phy;

	if (!phy)
		return false;
	dcsr = phy_readl_relaxed(phy, DCSR);
	return dcsr & PXA_DCSR_RUN;
}

static bool is_running_chan_misaligned(struct pxad_chan *chan)
{
	u32 dalgn;

	BUG_ON(!chan->phy);
	dalgn = phy_readl_relaxed(chan->phy, DALGN);
	return dalgn & (BIT(chan->phy->idx));
}

static void phy_enable(struct pxad_phy *phy, bool misaligned)
{
	struct pxad_device *pdev;
	u32 reg, dalgn;

	if (!phy->vchan)
		return;

	dev_dbg(&phy->vchan->vc.chan.dev->device,
		"%s(); phy=%p(%d) misaligned=%d\n", __func__,
		phy, phy->idx, misaligned);

	pdev = to_pxad_dev(phy->vchan->vc.chan.device);
	if (phy->vchan->drcmr <= pdev->nr_requestors) {
		reg = pxad_drcmr(phy->vchan->drcmr);
		writel_relaxed(DRCMR_MAPVLD | phy->idx, phy->base + reg);
	}

	dalgn = phy_readl_relaxed(phy, DALGN);
	if (misaligned)
		dalgn |= BIT(phy->idx);
	else
		dalgn &= ~BIT(phy->idx);
	phy_writel_relaxed(phy, dalgn, DALGN);

	phy_writel(phy, PXA_DCSR_STOPIRQEN | PXA_DCSR_ENDINTR |
		   PXA_DCSR_BUSERR | PXA_DCSR_RUN, DCSR);
}

static void phy_disable(struct pxad_phy *phy)
{
	u32 dcsr;

	if (!phy)
		return;

	dcsr = phy_readl_relaxed(phy, DCSR);
	dev_dbg(&phy->vchan->vc.chan.dev->device,
		"%s(): phy=%p(%d)\n", __func__, phy, phy->idx);
	phy_writel(phy, dcsr & ~PXA_DCSR_RUN & ~PXA_DCSR_STOPIRQEN, DCSR);
}

static void pxad_launch_chan(struct pxad_chan *chan,
			     struct pxad_desc_sw *desc)
{
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): desc=%p\n", __func__, desc);
	if (!chan->phy) {
		chan->phy = lookup_phy(chan);
		if (!chan->phy) {
			dev_dbg(&chan->vc.chan.dev->device,
				"%s(): no free dma channel\n", __func__);
			return;
		}
	}
	chan->bus_error = 0;

	/*
	 * Program the descriptor's address into the DMA controller,
	 * then start the DMA transaction
	 */
	phy_writel(chan->phy, desc->first, DDADR);
	phy_enable(chan->phy, chan->misaligned);
	wake_up(&chan->wq_state);
}

static void set_updater_desc(struct pxad_desc_sw *sw_desc,
			     unsigned long flags)
{
	struct pxad_desc_hw *updater =
		sw_desc->hw_desc[sw_desc->nb_desc - 1];
	dma_addr_t dma = sw_desc->hw_desc[sw_desc->nb_desc - 2]->ddadr;

	updater->ddadr = DDADR_STOP;
	updater->dsadr = dma;
	updater->dtadr = dma + 8;
	updater->dcmd = PXA_DCMD_WIDTH4 | PXA_DCMD_BURST32 |
		(PXA_DCMD_LENGTH & sizeof(u32));
	if (flags & DMA_PREP_INTERRUPT)
		updater->dcmd |= PXA_DCMD_ENDIRQEN;
	if (sw_desc->cyclic)
		sw_desc->hw_desc[sw_desc->nb_desc - 2]->ddadr = sw_desc->first;
}
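
/*
 * Completion detection rationale (as set up in set_updater_desc()): the
 * trailing "updater" descriptor copies one u32 from its own dma address,
 * i.e. its ddadr field holding DDADR_STOP, into its dtadr field at
 * dsadr + 8. Before the transfer runs, dtadr == dsadr + 8; once the
 * updater has executed, dtadr holds DDADR_STOP instead. is_desc_completed()
 * below exploits this to test completion without touching any register or
 * taking a lock.
 */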

static bool is_desc_completed(struct virt_dma_desc *vd)
{
	struct pxad_desc_sw *sw_desc = to_pxad_sw_desc(vd);
	struct pxad_desc_hw *updater =
		sw_desc->hw_desc[sw_desc->nb_desc - 1];

	return updater->dtadr != (updater->dsadr + 8);
}

static void pxad_desc_chain(struct virt_dma_desc *vd1,
			    struct virt_dma_desc *vd2)
{
	struct pxad_desc_sw *desc1 = to_pxad_sw_desc(vd1);
	struct pxad_desc_sw *desc2 = to_pxad_sw_desc(vd2);
	dma_addr_t dma_to_chain;

	dma_to_chain = desc2->first;
	desc1->hw_desc[desc1->nb_desc - 1]->ddadr = dma_to_chain;
}
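
/*
 * Chaining is a single pointer update: the last hw descriptor of desc1
 * (its updater) gets desc2's first dma address written into its ddadr,
 * replacing DDADR_STOP, so the controller fetches desc2 right after desc1
 * instead of stopping.
 */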

static bool pxad_try_hotchain(struct virt_dma_chan *vc,
			      struct virt_dma_desc *vd)
{
	struct virt_dma_desc *vd_last_issued = NULL;
	struct pxad_chan *chan = to_pxad_chan(&vc->chan);

	/*
	 * Attempt to hot chain the tx if the phy is still running. This is
	 * considered successful only if either the channel is still running
	 * after the chaining, or if the chained transfer is completed after
	 * having been hot chained.
	 * A change of alignment is not allowed, and forbids hotchaining.
	 */
	if (is_chan_running(chan)) {
		BUG_ON(list_empty(&vc->desc_issued));

		if (!is_running_chan_misaligned(chan) &&
		    to_pxad_sw_desc(vd)->misaligned)
			return false;

		vd_last_issued = list_entry(vc->desc_issued.prev,
					    struct virt_dma_desc, node);
		pxad_desc_chain(vd_last_issued, vd);
		if (is_chan_running(chan) || is_desc_completed(vd))
			return true;
	}

	return false;
}
|
|
|
|
|
|
|
|
static unsigned int clear_chan_irq(struct pxad_phy *phy)
{
	u32 dcsr;
	u32 dint = readl(phy->base + DINT);

	if (!(dint & BIT(phy->idx)))
		return PXA_DCSR_RUN;

	/* clear irq */
	dcsr = phy_readl_relaxed(phy, DCSR);
	phy_writel(phy, dcsr, DCSR);
	if ((dcsr & PXA_DCSR_BUSERR) && (phy->vchan))
		dev_warn(&phy->vchan->vc.chan.dev->device,
			 "%s(chan=%p): PXA_DCSR_BUSERR\n",
			 __func__, phy->vchan);

	return dcsr & ~PXA_DCSR_RUN;
}
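
/*
 * pxad_chan_handler() - per-phy interrupt handling.
 * Walks the issued list in order: a cyclic descriptor triggers its period
 * callback, completed descriptors are retired, and the walk stops at the
 * first descriptor still in flight. A bus error records the failing
 * cookie and disables the phy; otherwise a stopped channel is relaunched
 * with the first descriptor still issued, if any.
 */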
static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
{
	struct pxad_phy *phy = dev_id;
	struct pxad_chan *chan = phy->vchan;
	struct virt_dma_desc *vd, *tmp;
	unsigned int dcsr;
	bool vd_completed;
	dma_cookie_t last_started = 0;

	BUG_ON(!chan);

	dcsr = clear_chan_irq(phy);
	if (dcsr & PXA_DCSR_RUN)
		return IRQ_NONE;

	spin_lock(&chan->vc.lock);
	list_for_each_entry_safe(vd, tmp, &chan->vc.desc_issued, node) {
		vd_completed = is_desc_completed(vd);
		dev_dbg(&chan->vc.chan.dev->device,
			"%s(): checking txd %p[%x]: completed=%d dcsr=0x%x\n",
			__func__, vd, vd->tx.cookie, vd_completed,
			dcsr);
		last_started = vd->tx.cookie;
		if (to_pxad_sw_desc(vd)->cyclic) {
			vchan_cyclic_callback(vd);
			break;
		}
		if (vd_completed) {
			list_del(&vd->node);
			vchan_cookie_complete(vd);
		} else {
			break;
		}
	}

	if (dcsr & PXA_DCSR_BUSERR) {
		chan->bus_error = last_started;
		phy_disable(phy);
	}

	if (!chan->bus_error && dcsr & PXA_DCSR_STOPSTATE) {
		dev_dbg(&chan->vc.chan.dev->device,
			"%s(): channel stopped, submitted_empty=%d issued_empty=%d",
			__func__,
			list_empty(&chan->vc.desc_submitted),
			list_empty(&chan->vc.desc_issued));
		phy_writel_relaxed(phy, dcsr & ~PXA_DCSR_STOPIRQEN, DCSR);

		if (list_empty(&chan->vc.desc_issued)) {
			chan->misaligned =
				!list_empty(&chan->vc.desc_submitted);
		} else {
			vd = list_first_entry(&chan->vc.desc_issued,
					      struct virt_dma_desc, node);
			pxad_launch_chan(chan, to_pxad_sw_desc(vd));
		}
	}
	spin_unlock(&chan->vc.lock);
	wake_up(&chan->wq_state);

	return IRQ_HANDLED;
}
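
/*
 * pxad_int_handler() - controller-level interrupt demultiplexer.
 * DINT carries one pending bit per phy; each iteration clears the lowest
 * set bit of the local copy (dint &= (dint - 1)) and dispatches it to
 * pxad_chan_handler().
 */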
static irqreturn_t pxad_int_handler(int irq, void *dev_id)
{
	struct pxad_device *pdev = dev_id;
	struct pxad_phy *phy;
	u32 dint = readl(pdev->base + DINT);
	int i, ret = IRQ_NONE;

	while (dint) {
		i = __ffs(dint);
		dint &= (dint - 1);
		phy = &pdev->phys[i];
		if (pxad_chan_handler(irq, phy) == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	}

	return ret;
}
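
/*
 * pxad_alloc_chan_resources() - create the per-channel dma_pool the
 * hardware descriptors (4 u32 each) are carved from. Returns the number
 * of descriptors allocated, here always 1.
 */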
static int pxad_alloc_chan_resources(struct dma_chan *dchan)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);

	if (chan->desc_pool)
		return 1;

	chan->desc_pool = dma_pool_create(dma_chan_name(dchan),
					  pdev->slave.dev,
					  sizeof(struct pxad_desc_hw),
					  __alignof__(struct pxad_desc_hw),
					  0);
	if (!chan->desc_pool) {
		dev_err(&chan->vc.chan.dev->device,
			"%s(): unable to allocate descriptor pool\n",
			__func__);
		return -ENOMEM;
	}

	return 1;
}

static void pxad_free_chan_resources(struct dma_chan *dchan)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);

	vchan_free_chan_resources(&chan->vc);
	dma_pool_destroy(chan->desc_pool);
	chan->desc_pool = NULL;

	chan->drcmr = U32_MAX;
	chan->prio = PXAD_PRIO_LOWEST;
}
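
/*
 * pxad_free_desc() - release every hardware descriptor of a sw
 * descriptor back to the dma_pool. The dma address of hw_desc[i] is not
 * stored separately: it is recovered from the ddadr link of
 * hw_desc[i - 1], or from sw_desc->first for the head descriptor.
 */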
static void pxad_free_desc(struct virt_dma_desc *vd)
{
	int i;
	dma_addr_t dma;
	struct pxad_desc_sw *sw_desc = to_pxad_sw_desc(vd);

	BUG_ON(sw_desc->nb_desc == 0);
	for (i = sw_desc->nb_desc - 1; i >= 0; i--) {
		if (i > 0)
			dma = sw_desc->hw_desc[i - 1]->ddadr;
		else
			dma = sw_desc->first;
		dma_pool_free(sw_desc->desc_pool,
			      sw_desc->hw_desc[i], dma);
	}
	sw_desc->nb_desc = 0;
	kfree(sw_desc);
}
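
/*
 * pxad_alloc_desc() - allocate a sw descriptor carrying nb_hw_desc
 * hardware descriptors from the channel's pool, chaining each one to its
 * predecessor through the ddadr field as it is allocated.
 */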
static struct pxad_desc_sw *
pxad_alloc_desc(struct pxad_chan *chan, unsigned int nb_hw_desc)
{
	struct pxad_desc_sw *sw_desc;
	dma_addr_t dma;
	int i;

	sw_desc = kzalloc(struct_size(sw_desc, hw_desc, nb_hw_desc),
			  GFP_NOWAIT);
	if (!sw_desc)
		return NULL;
	sw_desc->desc_pool = chan->desc_pool;

	for (i = 0; i < nb_hw_desc; i++) {
		sw_desc->hw_desc[i] = dma_pool_alloc(sw_desc->desc_pool,
						     GFP_NOWAIT, &dma);
		if (!sw_desc->hw_desc[i]) {
			dev_err(&chan->vc.chan.dev->device,
				"%s(): Couldn't allocate the %dth hw_desc from dma_pool %p\n",
				__func__, i, sw_desc->desc_pool);
			goto err;
		}

		if (i == 0)
			sw_desc->first = dma;
		else
			sw_desc->hw_desc[i - 1]->ddadr = dma;
		sw_desc->nb_desc++;
	}

	return sw_desc;
err:
	pxad_free_desc(&sw_desc->vd);
	return NULL;
}
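
/*
 * pxad_tx_submit() - tx_submit hook installed by pxad_tx_prep().
 * First tries to hot chain the descriptor onto a running channel; failing
 * that, it cold chains it to the tail of the submitted list, unless that
 * would mix aligned and misaligned descriptors, in which case the
 * descriptor is left unchained and handled later by the irq handler.
 */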
static dma_cookie_t pxad_tx_submit(struct dma_async_tx_descriptor *tx)
{
	struct virt_dma_chan *vc = to_virt_chan(tx->chan);
	struct pxad_chan *chan = to_pxad_chan(&vc->chan);
	struct virt_dma_desc *vd_chained = NULL,
		*vd = container_of(tx, struct virt_dma_desc, tx);
	dma_cookie_t cookie;
	unsigned long flags;

	set_updater_desc(to_pxad_sw_desc(vd), tx->flags);

	spin_lock_irqsave(&vc->lock, flags);
	cookie = dma_cookie_assign(tx);

	if (list_empty(&vc->desc_submitted) && pxad_try_hotchain(vc, vd)) {
		list_move_tail(&vd->node, &vc->desc_issued);
		dev_dbg(&chan->vc.chan.dev->device,
			"%s(): txd %p[%x]: submitted (hot linked)\n",
			__func__, vd, cookie);
		goto out;
	}

	/*
	 * Fallback to placing the tx in the submitted queue
	 */
	if (!list_empty(&vc->desc_submitted)) {
		vd_chained = list_entry(vc->desc_submitted.prev,
					struct virt_dma_desc, node);
		/*
		 * Only chain the descriptors if no new misalignment is
		 * introduced. If a new misalignment is chained, let the
		 * channel stop, and be relaunched in misalign mode from
		 * the irq handler.
		 */
		if (chan->misaligned || !to_pxad_sw_desc(vd)->misaligned)
			pxad_desc_chain(vd_chained, vd);
		else
			vd_chained = NULL;
	}
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): txd %p[%x]: submitted (%s linked)\n",
		__func__, vd, cookie, vd_chained ? "cold" : "not");
	list_move_tail(&vd->node, &vc->desc_submitted);
	chan->misaligned |= to_pxad_sw_desc(vd)->misaligned;

out:
	spin_unlock_irqrestore(&vc->lock, flags);
	return cookie;
}
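
/*
 * pxad_issue_pending() - move all submitted descriptors to the issued
 * list, then start the channel unless the first descriptor could be hot
 * chained onto an already running phy.
 */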
static void pxad_issue_pending(struct dma_chan *dchan)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct virt_dma_desc *vd_first;
	unsigned long flags;

	spin_lock_irqsave(&chan->vc.lock, flags);
	if (list_empty(&chan->vc.desc_submitted))
		goto out;

	vd_first = list_first_entry(&chan->vc.desc_submitted,
				    struct virt_dma_desc, node);
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): txd %p[%x]", __func__, vd_first, vd_first->tx.cookie);

	vchan_issue_pending(&chan->vc);
	if (!pxad_try_hotchain(&chan->vc, vd_first))
		pxad_launch_chan(chan, to_pxad_sw_desc(vd_first));
out:
	spin_unlock_irqrestore(&chan->vc.lock, flags);
}
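
/*
 * pxad_tx_prep() - common tail of the prep_* callbacks: wrap the virtual
 * descriptor with vchan_tx_prep() and install pxad_tx_submit() as the
 * tx_submit hook, so that submission can hot or cold chain.
 */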
static inline struct dma_async_tx_descriptor *
pxad_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
	     unsigned long tx_flags)
{
	struct dma_async_tx_descriptor *tx;
	struct pxad_chan *chan = container_of(vc, struct pxad_chan, vc);

	INIT_LIST_HEAD(&vd->node);
	tx = vchan_tx_prep(vc, vd, tx_flags);
	tx->tx_submit = pxad_tx_submit;
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): vc=%p txd=%p[%x] flags=0x%lx\n", __func__,
		vc, vd, vd->tx.cookie,
		tx_flags);

	return tx;
}
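
/*
 * pxad_get_config() - derive the DCMD word and the fixed device
 * source/target address for one transfer direction from the channel's
 * dma_slave_config. Flow control is only requested when chan->drcmr maps
 * to a hardware requestor line (ie. is within pdev->nr_requestors).
 */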
static void pxad_get_config(struct pxad_chan *chan,
			    enum dma_transfer_direction dir,
			    u32 *dcmd, u32 *dev_src, u32 *dev_dst)
{
	u32 maxburst = 0, dev_addr = 0;
	enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
	struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);

	*dcmd = 0;
	if (dir == DMA_DEV_TO_MEM) {
		maxburst = chan->cfg.src_maxburst;
		width = chan->cfg.src_addr_width;
		dev_addr = chan->cfg.src_addr;
		*dev_src = dev_addr;
		*dcmd |= PXA_DCMD_INCTRGADDR;
		if (chan->drcmr <= pdev->nr_requestors)
			*dcmd |= PXA_DCMD_FLOWSRC;
	}
	if (dir == DMA_MEM_TO_DEV) {
		maxburst = chan->cfg.dst_maxburst;
		width = chan->cfg.dst_addr_width;
		dev_addr = chan->cfg.dst_addr;
		*dev_dst = dev_addr;
		*dcmd |= PXA_DCMD_INCSRCADDR;
		if (chan->drcmr <= pdev->nr_requestors)
			*dcmd |= PXA_DCMD_FLOWTRG;
	}
	if (dir == DMA_MEM_TO_MEM)
		*dcmd |= PXA_DCMD_BURST32 | PXA_DCMD_INCTRGADDR |
			 PXA_DCMD_INCSRCADDR;

	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): dev_addr=0x%x maxburst=%d width=%d dir=%d\n",
		__func__, dev_addr, maxburst, width, dir);

	if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
		*dcmd |= PXA_DCMD_WIDTH1;
	else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
		*dcmd |= PXA_DCMD_WIDTH2;
	else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
		*dcmd |= PXA_DCMD_WIDTH4;

	if (maxburst == 8)
		*dcmd |= PXA_DCMD_BURST8;
	else if (maxburst == 16)
		*dcmd |= PXA_DCMD_BURST16;
	else if (maxburst == 32)
		*dcmd |= PXA_DCMD_BURST32;
}
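
/*
 * pxad_prep_memcpy() - prepare a memory-to-memory transfer, split into
 * chunks of at most PDMA_MAX_DESC_BYTES, with one extra hardware
 * descriptor reserved for the completion updater set up by
 * set_updater_desc().
 */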
static struct dma_async_tx_descriptor *
pxad_prep_memcpy(struct dma_chan *dchan,
		 dma_addr_t dma_dst, dma_addr_t dma_src,
		 size_t len, unsigned long flags)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct pxad_desc_sw *sw_desc;
	struct pxad_desc_hw *hw_desc;
	u32 dcmd;
	unsigned int i, nb_desc = 0;
	size_t copy;

	if (!dchan || !len)
		return NULL;

	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): dma_dst=0x%lx dma_src=0x%lx len=%zu flags=%lx\n",
		__func__, (unsigned long)dma_dst, (unsigned long)dma_src,
		len, flags);
	pxad_get_config(chan, DMA_MEM_TO_MEM, &dcmd, NULL, NULL);

	nb_desc = DIV_ROUND_UP(len, PDMA_MAX_DESC_BYTES);
	sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
	if (!sw_desc)
		return NULL;
	sw_desc->len = len;

	if (!IS_ALIGNED(dma_src, 1 << PDMA_ALIGNMENT) ||
	    !IS_ALIGNED(dma_dst, 1 << PDMA_ALIGNMENT))
		sw_desc->misaligned = true;

	i = 0;
	do {
		hw_desc = sw_desc->hw_desc[i++];
		copy = min_t(size_t, len, PDMA_MAX_DESC_BYTES);
		hw_desc->dcmd = dcmd | (PXA_DCMD_LENGTH & copy);
		hw_desc->dsadr = dma_src;
		hw_desc->dtadr = dma_dst;
		len -= copy;
		dma_src += copy;
		dma_dst += copy;
	} while (len);
	set_updater_desc(sw_desc, flags);

	return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
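
/*
 * pxad_prep_slave_sg() - prepare a slave scatter-gather transfer. Each sg
 * entry may span several hardware descriptors, and any entry whose
 * address is not a multiple of 8 marks the whole transfer as misaligned.
 */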
static struct dma_async_tx_descriptor *
pxad_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
		   unsigned int sg_len, enum dma_transfer_direction dir,
		   unsigned long flags, void *context)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct pxad_desc_sw *sw_desc;
	size_t len, avail;
	struct scatterlist *sg;
	dma_addr_t dma;
	u32 dcmd, dsadr = 0, dtadr = 0;
	unsigned int nb_desc = 0, i, j = 0;

	if ((sgl == NULL) || (sg_len == 0))
		return NULL;

	pxad_get_config(chan, dir, &dcmd, &dsadr, &dtadr);
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): dir=%d flags=%lx\n", __func__, dir, flags);

	for_each_sg(sgl, sg, sg_len, i)
		nb_desc += DIV_ROUND_UP(sg_dma_len(sg), PDMA_MAX_DESC_BYTES);
	sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
	if (!sw_desc)
		return NULL;

	for_each_sg(sgl, sg, sg_len, i) {
		dma = sg_dma_address(sg);
		avail = sg_dma_len(sg);
		sw_desc->len += avail;

		do {
			len = min_t(size_t, avail, PDMA_MAX_DESC_BYTES);
			if (dma & 0x7)
				sw_desc->misaligned = true;

			sw_desc->hw_desc[j]->dcmd =
				dcmd | (PXA_DCMD_LENGTH & len);
			sw_desc->hw_desc[j]->dsadr = dsadr ? dsadr : dma;
			sw_desc->hw_desc[j++]->dtadr = dtadr ? dtadr : dma;

			dma += len;
			avail -= len;
		} while (avail);
	}
	set_updater_desc(sw_desc, flags);

	return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
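
/*
 * pxad_prep_dma_cyclic() - prepare a cyclic transfer of len bytes in
 * periods of period_len bytes, one hardware descriptor per period, each
 * carrying PXA_DCMD_ENDIRQEN so the period callback fires from the irq
 * handler.
 */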
static struct dma_async_tx_descriptor *
pxad_prep_dma_cyclic(struct dma_chan *dchan,
		     dma_addr_t buf_addr, size_t len, size_t period_len,
		     enum dma_transfer_direction dir, unsigned long flags)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct pxad_desc_sw *sw_desc;
	struct pxad_desc_hw **phw_desc;
	dma_addr_t dma;
	u32 dcmd, dsadr = 0, dtadr = 0;
	unsigned int nb_desc = 0;

	if (!dchan || !len || !period_len)
		return NULL;
	if ((dir != DMA_DEV_TO_MEM) && (dir != DMA_MEM_TO_DEV)) {
		dev_err(&chan->vc.chan.dev->device,
			"Unsupported direction for cyclic DMA\n");
		return NULL;
	}
	/* the buffer length must be a multiple of period_len */
	if (len % period_len != 0 || period_len > PDMA_MAX_DESC_BYTES ||
	    !IS_ALIGNED(period_len, 1 << PDMA_ALIGNMENT))
		return NULL;

	pxad_get_config(chan, dir, &dcmd, &dsadr, &dtadr);
	dcmd |= PXA_DCMD_ENDIRQEN | (PXA_DCMD_LENGTH & period_len);
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): buf_addr=0x%lx len=%zu period=%zu dir=%d flags=%lx\n",
		__func__, (unsigned long)buf_addr, len, period_len, dir, flags);

	nb_desc = DIV_ROUND_UP(period_len, PDMA_MAX_DESC_BYTES);
	nb_desc *= DIV_ROUND_UP(len, period_len);
	sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
	if (!sw_desc)
		return NULL;
	sw_desc->cyclic = true;
	sw_desc->len = len;

	phw_desc = sw_desc->hw_desc;
	dma = buf_addr;
	do {
		phw_desc[0]->dsadr = dsadr ? dsadr : dma;
		phw_desc[0]->dtadr = dtadr ? dtadr : dma;
		phw_desc[0]->dcmd = dcmd;
		phw_desc++;
		dma += period_len;
		len -= period_len;
	} while (len);
	set_updater_desc(sw_desc, flags);

	return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
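
/*
 * Cache the dmaengine slave configuration on the channel; it is consumed
 * by pxad_get_config() when a transfer is prepared.
 */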
static int pxad_config(struct dma_chan *dchan,
		       struct dma_slave_config *cfg)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);

	if (!dchan)
		return -EINVAL;

	chan->cfg = *cfg;
	return 0;
}
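
/*
 * Terminate all pending and issued transactions on the channel: stop and
 * detach the physical channel if one is attached, then free every
 * outstanding virtual descriptor.
 */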
static int pxad_terminate_all(struct dma_chan *dchan)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);
	struct virt_dma_desc *vd = NULL;
	unsigned long flags;
	struct pxad_phy *phy;
	LIST_HEAD(head);

	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): vchan %p: terminate all\n", __func__, &chan->vc);

	spin_lock_irqsave(&chan->vc.lock, flags);
	vchan_get_all_descriptors(&chan->vc, &head);

	list_for_each_entry(vd, &head, node) {
		dev_dbg(&chan->vc.chan.dev->device,
			"%s(): cancelling txd %p[%x] (completed=%d)", __func__,
			vd, vd->tx.cookie, is_desc_completed(vd));
	}

	phy = chan->phy;
	if (phy) {
		phy_disable(chan->phy);
		pxad_free_phy(chan);
		chan->phy = NULL;
		spin_lock(&pdev->phy_lock);
		phy->vchan = NULL;
		spin_unlock(&pdev->phy_lock);
	}
	spin_unlock_irqrestore(&chan->vc.lock, flags);
	vchan_dma_desc_free_list(&chan->vc, &head);

	return 0;
}
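
/*
 * Compute the number of bytes still to be transferred for @cookie, by
 * locating the hardware descriptor the channel is currently working on
 * (from DSADR or DTADR) and summing the lengths of all descriptors that
 * follow it.
 */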
static unsigned int pxad_residue(struct pxad_chan *chan,
				 dma_cookie_t cookie)
{
	struct virt_dma_desc *vd = NULL;
	struct pxad_desc_sw *sw_desc = NULL;
	struct pxad_desc_hw *hw_desc = NULL;
	u32 curr, start, len, end, residue = 0;
	unsigned long flags;
	bool passed = false;
	int i;

	/*
	 * If the channel does not have a phy pointer anymore, it has already
	 * been completed. Therefore, its residue is 0.
	 */
	if (!chan->phy)
		return 0;

	spin_lock_irqsave(&chan->vc.lock, flags);

	vd = vchan_find_desc(&chan->vc, cookie);
	if (!vd)
		goto out;

	sw_desc = to_pxad_sw_desc(vd);
	if (sw_desc->hw_desc[0]->dcmd & PXA_DCMD_INCSRCADDR)
		curr = phy_readl_relaxed(chan->phy, DSADR);
	else
		curr = phy_readl_relaxed(chan->phy, DTADR);

	/*
	 * curr has to be actually read before checking descriptor
	 * completion, so that a curr inside a status updater
	 * descriptor implies the following test returns true; the
	 * barrier prevents reordering of the curr load and the test.
	 */
	rmb();
	if (is_desc_completed(vd))
		goto out;
	for (i = 0; i < sw_desc->nb_desc - 1; i++) {
		hw_desc = sw_desc->hw_desc[i];
		if (sw_desc->hw_desc[0]->dcmd & PXA_DCMD_INCSRCADDR)
			start = hw_desc->dsadr;
		else
			start = hw_desc->dtadr;
		len = hw_desc->dcmd & PXA_DCMD_LENGTH;
		end = start + len;

		/*
		 * 'passed' is latched once we find the descriptor which
		 * lies inside the boundaries of the curr pointer. All
		 * descriptors that occur in the list _after_ that
		 * partially handled descriptor are still to be processed
		 * and are hence added to the residual bytes counter.
		 */
		if (passed) {
			residue += len;
		} else if (curr >= start && curr <= end) {
			residue += end - curr;
			passed = true;
		}
	}
	if (!passed)
		residue = sw_desc->len;

out:
	spin_unlock_irqrestore(&chan->vc.lock, flags);
	dev_dbg(&chan->vc.chan.dev->device,
		"%s(): txd %p[%x] sw_desc=%p: %d\n",
		__func__, vd, cookie, sw_desc, residue);
	return residue;
}
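
/* dmaengine status callback: reports bus errors, completion and residue. */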
static enum dma_status pxad_tx_status(struct dma_chan *dchan,
				      dma_cookie_t cookie,
				      struct dma_tx_state *txstate)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);
	enum dma_status ret;
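
	/*
	 * A transaction that triggered a bus error cannot be flagged
	 * DMA_ERROR through dma_cookie_status(), so the failed cookie
	 * is remembered on the channel and matched here instead.
	 */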
	if (cookie == chan->bus_error)
		return DMA_ERROR;
	ret = dma_cookie_status(dchan, cookie, txstate);
	if (likely(txstate && (ret != DMA_ERROR)))
		dma_set_residue(txstate, pxad_residue(chan, cookie));

	return ret;
}
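
/*
 * Wait until the physical channel has actually stopped, then let
 * virt-dma flush any pending descriptor callbacks.
 */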
static void pxad_synchronize(struct dma_chan *dchan)
{
	struct pxad_chan *chan = to_pxad_chan(dchan);

	wait_event(chan->wq_state, !is_chan_running(chan));
	vchan_synchronize(&chan->vc);
}
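
/* Unlink every channel from the dma_device and kill its virt-dma tasklet. */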
static void pxad_free_channels(struct dma_device *dmadev)
{
	struct pxad_chan *c, *cn;

	list_for_each_entry_safe(c, cn, &dmadev->channels,
				 vc.chan.device_node) {
		list_del(&c->vc.chan.device_node);
		tasklet_kill(&c->vc.task);
	}
}
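
/* Platform device teardown: drop the debugfs entries, then the channels. */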
static int pxad_remove(struct platform_device *op)
{
	struct pxad_device *pdev = platform_get_drvdata(op);

	pxad_cleanup_debugfs(pdev);
	pxad_free_channels(&pdev->slave);
	return 0;
}
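
/*
 * Set up the physical channels and their interrupts. Two IRQ topologies
 * are supported: one IRQ line per physical channel (handled by
 * pxad_chan_handler), or a single line shared by all channels (handled
 * by pxad_int_handler).
 */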
static int pxad_init_phys(struct platform_device *op,
			  struct pxad_device *pdev,
			  unsigned int nb_phy_chans)
{
	int irq0, irq, nr_irq = 0, i, ret = 0;
	struct pxad_phy *phy;

	irq0 = platform_get_irq(op, 0);
	if (irq0 < 0)
		return irq0;

	pdev->phys = devm_kcalloc(&op->dev, nb_phy_chans,
				  sizeof(pdev->phys[0]), GFP_KERNEL);
	if (!pdev->phys)
		return -ENOMEM;

	for (i = 0; i < nb_phy_chans; i++)
		if (platform_get_irq(op, i) > 0)
			nr_irq++;

	for (i = 0; i < nb_phy_chans; i++) {
		phy = &pdev->phys[i];
		phy->base = pdev->base;
		phy->idx = i;
		irq = platform_get_irq(op, i);
		if ((nr_irq > 1) && (irq > 0))
			ret = devm_request_irq(&op->dev, irq,
					       pxad_chan_handler,
					       IRQF_SHARED, "pxa-dma", phy);
		if ((nr_irq == 1) && (i == 0))
			ret = devm_request_irq(&op->dev, irq0,
					       pxad_int_handler,
					       IRQF_SHARED, "pxa-dma", pdev);
		if (ret) {
			dev_err(pdev->slave.dev,
				"%s(): can't request irq %d:%d\n", __func__,
				irq, ret);
			return ret;
		}
	}

	return 0;
}
static const struct of_device_id pxad_dt_ids[] = {
	{ .compatible = "marvell,pdma-1.0", },
	{}
};
MODULE_DEVICE_TABLE(of, pxad_dt_ids);
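
/*
 * Device-tree translation: the dma specifier carries the DRCMR requestor
 * line in args[0] and the channel priority in args[1].
 */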
static struct dma_chan *pxad_dma_xlate(struct of_phandle_args *dma_spec,
				       struct of_dma *ofdma)
{
	struct pxad_device *d = ofdma->of_dma_data;
	struct dma_chan *chan;

	chan = dma_get_any_slave_channel(&d->slave);
	if (!chan)
		return NULL;

	to_pxad_chan(chan)->drcmr = dma_spec->args[0];
	to_pxad_chan(chan)->prio = dma_spec->args[1];

	return chan;
}
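
/*
 * Register the dma_device: install the dmaengine callbacks, probe the
 * physical channels and allocate one virtual channel per physical one.
 */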
static int pxad_init_dmadev(struct platform_device *op,
			    struct pxad_device *pdev,
			    unsigned int nr_phy_chans,
			    unsigned int nr_requestors)
{
	int ret;
	unsigned int i;
	struct pxad_chan *c;

	pdev->nr_chans = nr_phy_chans;
	pdev->nr_requestors = nr_requestors;
	INIT_LIST_HEAD(&pdev->slave.channels);
	pdev->slave.device_alloc_chan_resources = pxad_alloc_chan_resources;
	pdev->slave.device_free_chan_resources = pxad_free_chan_resources;
	pdev->slave.device_tx_status = pxad_tx_status;
	pdev->slave.device_issue_pending = pxad_issue_pending;
	pdev->slave.device_config = pxad_config;
	pdev->slave.device_synchronize = pxad_synchronize;
	pdev->slave.device_terminate_all = pxad_terminate_all;

	if (op->dev.coherent_dma_mask)
		dma_set_mask(&op->dev, op->dev.coherent_dma_mask);
	else
		dma_set_mask(&op->dev, DMA_BIT_MASK(32));

	ret = pxad_init_phys(op, pdev, nr_phy_chans);
	if (ret)
		return ret;

	for (i = 0; i < nr_phy_chans; i++) {
		c = devm_kzalloc(&op->dev, sizeof(*c), GFP_KERNEL);
		if (!c)
			return -ENOMEM;

		c->drcmr = U32_MAX;
		c->prio = PXAD_PRIO_LOWEST;
		c->vc.desc_free = pxad_free_desc;
		vchan_init(&c->vc, &pdev->slave);
		init_waitqueue_head(&c->wq_state);
	}

	return dmaenginem_async_device_register(&pdev->slave);
}
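
/*
 * Probe: map the registers, then take the channel/requestor counts from
 * the device tree if available, from platform data otherwise, with a
 * fallback of 32 channels.
 */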
static int pxad_probe(struct platform_device *op)
{
	struct pxad_device *pdev;
	const struct of_device_id *of_id;
	const struct dma_slave_map *slave_map = NULL;
	struct mmp_dma_platdata *pdata = dev_get_platdata(&op->dev);
	struct resource *iores;
	int ret, dma_channels = 0, nb_requestors = 0, slave_map_cnt = 0;
	const enum dma_slave_buswidth widths =
		DMA_SLAVE_BUSWIDTH_1_BYTE | DMA_SLAVE_BUSWIDTH_2_BYTES |
		DMA_SLAVE_BUSWIDTH_4_BYTES;

	pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL);
	if (!pdev)
		return -ENOMEM;

	spin_lock_init(&pdev->phy_lock);

	iores = platform_get_resource(op, IORESOURCE_MEM, 0);
	pdev->base = devm_ioremap_resource(&op->dev, iores);
	if (IS_ERR(pdev->base))
		return PTR_ERR(pdev->base);

	of_id = of_match_device(pxad_dt_ids, &op->dev);
	if (of_id) {
		/* Parse new and deprecated dma-channels properties */
		if (of_property_read_u32(op->dev.of_node, "dma-channels",
					 &dma_channels))
			of_property_read_u32(op->dev.of_node, "#dma-channels",
					     &dma_channels);
		/* Parse new and deprecated dma-requests properties */
		ret = of_property_read_u32(op->dev.of_node, "dma-requests",
					   &nb_requestors);
		if (ret)
			ret = of_property_read_u32(op->dev.of_node, "#dma-requests",
						   &nb_requestors);
		if (ret) {
			dev_warn(pdev->slave.dev,
				 "#dma-requests set to default 32 as missing in OF: %d",
				 ret);
			nb_requestors = 32;
		}
	} else if (pdata && pdata->dma_channels) {
		dma_channels = pdata->dma_channels;
		nb_requestors = pdata->nb_requestors;
		slave_map = pdata->slave_map;
		slave_map_cnt = pdata->slave_map_cnt;
	} else {
		dma_channels = 32;	/* default to 32 channels */
	}
	dma_cap_set(DMA_SLAVE, pdev->slave.cap_mask);
	dma_cap_set(DMA_MEMCPY, pdev->slave.cap_mask);
	dma_cap_set(DMA_CYCLIC, pdev->slave.cap_mask);
	dma_cap_set(DMA_PRIVATE, pdev->slave.cap_mask);
	pdev->slave.device_prep_dma_memcpy = pxad_prep_memcpy;
	pdev->slave.device_prep_slave_sg = pxad_prep_slave_sg;
	pdev->slave.device_prep_dma_cyclic = pxad_prep_dma_cyclic;
	pdev->slave.filter.map = slave_map;
	pdev->slave.filter.mapcnt = slave_map_cnt;
	pdev->slave.filter.fn = pxad_filter_fn;
	pdev->slave.copy_align = PDMA_ALIGNMENT;
	pdev->slave.src_addr_widths = widths;
	pdev->slave.dst_addr_widths = widths;
	pdev->slave.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
	pdev->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
	/*
	 * A completed descriptor prepared with the proper ack flag
	 * (DMA_PREP_ACK) may be resubmitted as-is; it is only freed once
	 * the channel resources are released.
	 */
	pdev->slave.descriptor_reuse = true;
	pdev->slave.dev = &op->dev;
	ret = pxad_init_dmadev(op, pdev, dma_channels, nb_requestors);
	if (ret) {
		dev_err(pdev->slave.dev, "unable to register\n");
		return ret;
	}

	if (op->dev.of_node) {
		/* Device-tree DMA controller registration */
		ret = of_dma_controller_register(op->dev.of_node,
						 pxad_dma_xlate, pdev);
		if (ret < 0) {
			dev_err(pdev->slave.dev,
				"of_dma_controller_register failed\n");
			return ret;
		}
	}

	platform_set_drvdata(op, pdev);
	pxad_init_debugfs(pdev);
	dev_info(pdev->slave.dev, "initialized %d channels on %d requestors\n",
		 dma_channels, nb_requestors);
	return 0;
}

static const struct platform_device_id pxad_id_table[] = {
	{ "pxa-dma", },
	{ },
};

static struct platform_driver pxad_driver = {
	.driver = {
		.name = "pxa-dma",
		.of_match_table = pxad_dt_ids,
	},
	.id_table = pxad_id_table,
	.probe = pxad_probe,
	.remove = pxad_remove,
};

static bool pxad_filter_fn(struct dma_chan *chan, void *param)
{
	struct pxad_chan *c = to_pxad_chan(chan);
	struct pxad_param *p = param;

	/* Only accept channels that belong to this driver. */
	if (chan->device->dev->driver != &pxad_driver.driver)
		return false;

	/* Bind the requestor line and priority asked for by the client. */
	c->drcmr = p->drcmr;
	c->prio = p->prio;

	return true;
}
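
/*
 * Illustration only, not part of the driver: a minimal sketch of how a
 * client peripheral driver could obtain one of these channels and run a
 * cyclic transfer on it. Everything prefixed "example_" and the "rx"
 * channel name are hypothetical; the dmaengine calls are the standard
 * client API. Kept under #if 0 so the file compiles unchanged.
 */
#if 0
static int example_start_cyclic_rx(struct device *dev, dma_addr_t fifo,
				   dma_addr_t buf, size_t len, size_t period)
{
	struct dma_slave_config cfg = {
		.src_addr	= fifo,
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.src_maxburst	= 32,
	};
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	int ret;

	/* Resolved through the DT xlate or the slave map set up in probe. */
	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		goto err;

	tx = dmaengine_prep_dma_cyclic(chan, buf, len, period,
				       DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!tx) {
		ret = -ENOMEM;
		goto err;
	}

	dmaengine_submit(tx);
	/* Only this call guarantees that pending descriptors are issued. */
	dma_async_issue_pending(chan);
	return 0;

err:
	dma_release_channel(chan);
	return ret;
}
#endif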

module_platform_driver(pxad_driver);

MODULE_DESCRIPTION("Marvell PXA Peripheral DMA Driver");
MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>");
MODULE_LICENSE("GPL v2");
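
/*
 * Also an illustration, not part of the driver: descriptor_reuse is set
 * in probe, so a client may mark a prepared descriptor as reusable and
 * resubmit the very same descriptor after each completion instead of
 * re-preparing it, e.g. for a rolling ring of video buffers. The
 * "example_" name is hypothetical; dmaengine_desc_set_reuse() is the
 * generic dmaengine helper, and dmaengine_desc_free() (or releasing the
 * channel) eventually frees the descriptor.
 */
#if 0
static int example_reuse(struct dma_chan *chan,
			 struct dma_async_tx_descriptor *tx)
{
	int ret;

	/* Once, right after preparation: opt the descriptor into reuse. */
	ret = dmaengine_desc_set_reuse(tx);
	if (ret)
		return ret;

	/* Then, after every completion: resubmit the same descriptor. */
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}
#endif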