Commit Graph

446 Commits

Author SHA1 Message Date
Dan Williams e61dacaeb3 ioat3: enable dca for completion writes
Tag completion writes for direct cache access to reduce the latency of
checking for descriptor completions.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:57 -07:00
Dan Williams 5669e31c5a ioat: add 'ioat' sysfs attributes
Export driver attributes for diagnostic purposes:
'ring_size': total number of descriptors available to the engine
'ring_active': number of descriptors in-flight
'capabilities': supported operation types for this channel
'version': Intel(R) QuickData specification revision

This also allows some chattiness to be removed from the driver startup
as this information is now available via sysfs.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:56 -07:00
Dan Williams bf40a6869c ioat3: split ioat3 support to its own file, add memset
Up until this point the driver for Intel(R) QuickData Technology
engines, specification versions 2 and 3, were mostly identical save for
a few quirks.  Version 3.2 hardware adds many new capabilities (like
raid offload support) requiring some infrastructure that is not relevant
for v2.  For better code organization of the new functionality, move v3
and v3.2 support to its own file dma_v3.c, and export some routines from
the base files (dma.c and dma_v2.c) that can be reused directly.

The first new capability included in this code reorganization is support
for v3.2 memset operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:55 -07:00
Dan Williams 2aec048cdc ioat3: hardware version 3.2 register / descriptor definitions
ioat3.2 adds raid5 and raid6 offload capabilities.

Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:54 -07:00
Dan Williams 128f2d567f ioat2+: add fence support
In preparation for adding more operation types to the ioat3 path the
driver needs to honor the DMA_PREP_FENCE flag.  For example the async_tx api
will hand xor->memcpy->xor chains to the driver with the 'fence' flag set on
the first xor and the memcpy operation.  This flag in turn sets the 'fence'
flag in the descriptor control field telling the hardware that future
descriptors in the chain depend on the result of the current descriptor, so
wait for all writes to complete before starting the next operation.

Note that ioat1 does not prefetch the descriptor chain, so does not
require/support fenced operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:53 -07:00
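
A minimal sketch (not taken from the commit) of how a prep routine can
translate DMA_PREP_FENCE into a descriptor control bit; the struct, field
and bit names below are invented for illustration, only DMA_PREP_FENCE
itself is a real dmaengine flag.

  #include <linux/dmaengine.h>

  struct hw_desc {
          u32 ctl;                         /* illustrative control word */
  };

  #define HW_CTL_FENCE (1 << 4)            /* hypothetical fence bit */

  static void fill_ctl(struct hw_desc *hw, unsigned long flags)
  {
          hw->ctl = 0;
          if (flags & DMA_PREP_FENCE)      /* async_tx asked for ordering */
                  hw->ctl |= HW_CTL_FENCE; /* wait for prior writes first */
  }
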
Dan Williams 83544ae9f3 dmaengine, async_tx: support alignment checks
Some engines have transfer size and address alignment restrictions.  Add
a per-operation alignment property to struct dma_device that the async
routines and dmatest can use to check alignment capabilities.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:53 -07:00
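
The check this enables boils down to mask arithmetic over the offsets and
length; a rough, self-contained sketch (the helper name and shift encoding
are assumptions, not the dmaengine API):

  #include <stdbool.h>
  #include <stddef.h>

  /* An engine advertises alignment as a power-of-two shift
   * (e.g. 2 == 4-byte alignment); callers test offsets and length. */
  static bool buffers_aligned(unsigned int align_shift,
                              size_t src_off, size_t dst_off, size_t len)
  {
          size_t mask = ((size_t)1 << align_shift) - 1;

          return ((src_off | dst_off | len) & mask) == 0;
  }
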
Dan Williams 9308add6ea dmaengine: cleanup unused transaction types
No drivers currently implement these operation types, so they can be
deleted.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:52 -07:00
Dan Williams 138f4c359d dmaengine, async_tx: add a "no channel switch" allocator
Channel switching is problematic for some dmaengine drivers as the
architecture precludes separating the ->prep from ->submit.  In these
cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
the async_tx allocator to only return channels that support all of the
required asynchronous operations.

For example MD_RAID456=y selects support for asynchronous xor, xor
validate, pq, pq validate, and memcpy.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
quickly locate compatible channels with the guarantee that dependency
chains will remain on one channel.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
channels that lead to operation chains that need to cross channel
boundaries using the async_tx channel switch capability.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:51 -07:00
Dan Williams f9dd213437 Merge branch 'md-raid6-accel' into ioat3.2
Conflicts:
	include/linux/dmaengine.h
2009-09-08 17:42:29 -07:00
Dan Williams 4b652f0db3 net_dma: poll for a descriptor after allocation failure
Handle descriptor allocation failures by polling for a descriptor.  The
driver will force forward progress when polled.  In the best case this
polling interval will be the time it takes for one dma memcpy
transaction to complete.  In the worst case, channel hang, we will need
to wait 100ms for the cleanup watchdog to fire (ioatdma driver).

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:38:54 -07:00
Dan Williams a309218ace ioat2,3: dynamically resize descriptor ring
Increment the allocation order of the descriptor ring every time we run
out of descriptors up to a maximum of allocation order specified by the
module parameter 'ioat_max_alloc_order'.  After each idle period
decrement the allocation order to a minimum order of
'ioat_ring_alloc_order' (i.e. the default ring size, tunable as a module
parameter).

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:38:54 -07:00
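
The grow/shrink policy described above might look roughly like this; the
structure and the reshape helper are placeholders, not the driver's code:

  #include <stdbool.h>

  struct my_ring {
          unsigned int alloc_order;   /* ring holds 1 << alloc_order descriptors */
  };

  bool reshape_ring(struct my_ring *ring, unsigned int order);  /* assumed helper */

  /* grow by one order when descriptor allocation fails, up to the cap
   * set by the 'ioat_max_alloc_order' module parameter */
  static bool maybe_grow_ring(struct my_ring *ring, unsigned int max_order)
  {
          if (ring->alloc_order >= max_order)
                  return false;
          return reshape_ring(ring, ring->alloc_order + 1);
  }

  /* after an idle period, decay back toward the default order
   * ('ioat_ring_alloc_order') */
  static void idle_shrink_ring(struct my_ring *ring, unsigned int min_order)
  {
          if (ring->alloc_order > min_order)
                  reshape_ring(ring, ring->alloc_order - 1);
  }
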
Dan Williams 09c8a5b85e ioat: switch watchdog and reset handler from workqueue to timer
In order to support dynamic resizing of the descriptor ring or polling
for a descriptor in the presence of a hung channel the reset handler
needs to make progress while in a non-preemptible context.  The current
workqueue implementation precludes polling channel reset completion
under spin_lock().

This conversion also allows us to return to opportunistic cleanup in the
ioat2 case as the timer implementation guarantees at least one cleanup
after every descriptor is submitted.  This means the worst case
completion latency becomes the timer frequency (for exceptional
circumstances), but with the benefit of avoiding busy waiting when the
lock is contended.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams ad643f54c8 ioat1: trim ioat_dma_desc_sw
Save 4 bytes per software descriptor by transmitting tx_cnt in an unused
portion of the hardware descriptor.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams 345d852391 ioat: ___devinit annotate the initialization paths
Mark all single use initialization routines with __devinit.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams f6ab95b557 ioat: preserve chanctrl bits when re-arming interrupts
The register write in ioat_dma_cleanup_tasklet is unfortunate in two
ways:
1/ It clears the extra 'enable' bits that we set at alloc_chan_resources time
2/ It gives the impression that it disables interrupts when it is in
   fact re-arming interrupts

[ Impact: fix, persist the value of the chanctrl register when re-arming ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams bb32078630 ioat: ignore reserved bits for chancnt and xfercap
Don't trust that the reserved bits are always zero; also sanity check
the returned value.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams 4fb9b9e8d5 ioat: cleanup completion status reads
The cleanup path makes an effort to only perform an atomic read of the
64-bit completion address.  However in the 32-bit case it does not
matter if we read the upper-32 and lower-32 non-atomically because the
upper-32 will always be zero.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:24 -07:00
Dan Williams 6df9183a15 ioat: add some dev_dbg() calls
Provide some output for debugging the driver.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:23 -07:00
Dan Williams 38e12f64a1 ioat1: kill unused unmap parameters
The unified ioat1/ioat2 ioat_dma_unmap() implementation derives the
source and dest addresses from the unmap descriptor.  There is no longer
a need to track this information in struct ioat_desc_sw.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:30:23 -07:00
Dan Williams 5cbafa65b9 ioat2,3: convert to a true ring buffer
Replace the current linked list munged into a ring with a native ring
buffer implementation.  The benefit of this approach is reduced overhead
as many parameters can be derived from ring position with simple pointer
comparisons and descriptor allocation/freeing becomes just a
manipulation of head/tail pointers.

It requires a contiguous allocation for the software descriptor
information.

Since this arrangement is significantly different from the ioat1 chain,
move ioat2,3 support into its own file and header.  Common routines are
exported from driver/dma/ioat/dma.[ch].

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:55 -07:00
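
A small, runnable illustration (not driver code) of why a power-of-two ring
simplifies the bookkeeping: active and free counts fall out of plain
head/tail arithmetic.

  #include <assert.h>

  /* counters increment forever; slot indices are masked into the ring */
  struct ring { unsigned int head, tail, size; };

  static unsigned int ring_active(const struct ring *r)
  {
          return r->head - r->tail;       /* unsigned math wraps correctly */
  }

  static unsigned int ring_space(const struct ring *r)
  {
          return r->size - ring_active(r);
  }

  static unsigned int ring_idx(const struct ring *r, unsigned int n)
  {
          return n & (r->size - 1);       /* descriptor slot for counter n */
  }

  int main(void)
  {
          struct ring r = { .head = 5, .tail = 3, .size = 4 };

          assert(ring_active(&r) == 2);
          assert(ring_space(&r) == 2);
          assert(ring_idx(&r, r.tail) == 3);
          return 0;
  }
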
Dan Williams dcbc853af6 ioat: prepare the code for ioat[12]_dma_chan split
Prepare the code for the conversion of the ioat2 linked-list-ring into a
native ring buffer.  After this conversion ioat2 channels will share
less of the ioat1 infrastructure, but there will still be places where
sharing is possible.  struct ioat_chan_common is created to house the
channel attributes that will remain common between ioat1 and ioat2
channels.

For every routine that accesses both common and hardware specific fields
the old unified 'ioat_chan' pointer is split into an 'ioat' and a 'chan'
pointer, where 'chan' references the common fields and 'ioat' the
hardware/version-specific ones.

[ Impact: pure structure member movement/variable renames, no logic changes ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:55 -07:00
Dan Williams a6a39ca1ba ioat: fix self test interrupts
If a callback is to be attached to a descriptor the channel needs to
know at ->prep time so it can set the interrupt enable bit.  This is in
preparation for moving ioat2 descriptor preparation from ->submit to
->prep.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:55 -07:00
Dan Williams a0587bcf3e ioat1: move descriptor allocation from submit to prep
The async_tx api assumes that after a successful ->prep a subsequent
->submit will not fail due to a lack of resources.

This also fixes a bug in the allocation failure case.  Previously the
descriptors allocated prior to the allocation failure would not be
returned to the free list.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:55 -07:00
Dan Williams c7984f4e4e ioat: define descriptor control bit-field
This cleans up a mess of and'ing and or'ing bit definitions, and allows
simple assignments from the specified dma_ctrl_flags parameter.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:55 -07:00
Dan Williams 77867fff03 ioat: fix type mismatch for ->dmacount
->dmacount tracks the sequence number of active descriptors.  It is
written to the DMACOUNT register to update the channel's view of pending
descriptors in the chain.  The register is 16-bits so ->dmacount should
be unsigned and 16-bit as well.  Also modify ->desccount to maintain
alignment.

This was never a problem in practice because we never compared dmacount
values, but this is a bug waiting to happen.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:54 -07:00
Dan Williams f2427e276f ioat: split ioat_dma_probe into core/version-specific routines
Towards the removal of ioatdma_device.version split the initialization
path into distinct versions.  This conversion:
1/ moves version specific probe code to version specific routines
2/ removes the need for ioat_device
3/ turns off the ioat1 msi quirk if the device is reinitialized for intx

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:54 -07:00
Dan Williams b31b78f1ab ioat: kill function prototype ifdef guards
The only .c files that utilize these protected prototypes depend on
CONFIG_INTEL_IOATDMA=y, so there is no value gained in providing empty
prototypes.

[ Impact: pure cleanup ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:54 -07:00
Dan Williams bc3c702585 ioat: cleanup some long deref chains and 80 column collisions
* reduce device->common. to dma-> in ioat_dma_{probe,remove,selftest}
* ioat_lookup_chan_by_index to ioat_chan_by_index
* multi-line function definitions
* ioat_desc_sw.async_tx to ioat_desc_sw.txd
* desc->txd. to tx-> in cleanup routine

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:54 -07:00
Dan Williams e6c0b69a43 ioat: convert ioat_probe to pcim/devm
The driver currently duplicates much of what these routines offer, so
just use the common code.  For example ->irq_mode tracks what interrupt
mode was initialized, which duplicates the ->msix_enabled and
->msi_enabled handling in pcim_release.

This also adds a check to the return value of dma_async_device_register,
which can fail.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:44 -07:00
Dan Williams 1f27adc2f0 ioat: move definitions to dma.h
Some of these defines may be useful outside of dma.c and the header is
private so there are no namespace pollution concerns.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:29:02 -07:00
Dan Williams a348a7e6fd Merge commit 'v2.6.31-rc1' into dmaengine 2009-09-08 14:32:24 -07:00
Dan Williams f6dbf65161 iop-adma: P+Q self test
Even though the intent is to extend dmatest with P+Q tests there is
still value in having an always-on sanity check to prevent an
unintentionally broken driver from registering.

This depends on raid6_pq.ko for verification, the side effect being that
PQ capable channels will fail to register when raid6 is disabled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:12:40 -07:00
Dan Williams 7bf649aee8 iop-adma: P+Q support for iop13xx adma engines
iop33x support is not included because that engine is a bit more awkward
to handle in that it can either be in xor mode or pq mode.  The
dmaengine/async_tx layers currently only comprehend static capabilities.

Note iop13xx does not support hardware PQ continuation so the driver
must handle the DMA_PREP_CONTINUE flag for operations across > 16
sources. From the comment for dma_maxpq:

/* When an engine does not support native continuation we need 3 extra
 * source slots to reuse P and Q with the following coefficients:
 * 1/ {00} * P : remove P from Q', but use it as a source for P'
 * 2/ {01} * Q : use Q to continue Q' calculation
 * 3/ {00} * Q : subtract Q from P' to cancel (2)
 */

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:12:39 -07:00
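
The slot accounting in that comment reduces to a simple rule; the helper
below is an illustrative sketch, not the dmaengine dma_maxpq() code:

  /* Without native continuation, three of the engine's source slots are
   * consumed by the {00}*P, {01}*Q and {00}*Q reuse entries. */
  static int usable_pq_sources(int hw_max_pq, int native_continuation,
                               int is_continuation)
  {
          if (!is_continuation || native_continuation)
                  return hw_max_pq;
          return hw_max_pq - 3;           /* reserve the three reuse slots */
  }
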
Dan Williams 72be12f0c3 iop-adma: fix lockdep false positive
lockdep correctly identifies a potential recursive locking case for
iop_chan->lock, but in the dependency submission case we expect that the same
class will be acquired for both the parent dependency and the child channel.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:12:39 -07:00
Dan Williams 507fbec4cf iop-adma: cleanup iop_adma_run_tx_complete_actions
Replace 'desc->async_tx.' with 'tx->'

[ Impact: pure cleanup ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:12:39 -07:00
Dan Williams 58691d64c4 dmatest: add pq support
Test raid6 p+q operations with a simple "always multiply by 1" q
calculation to fit into dmatest's current destination verification
scheme.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
Dan Williams b2f46fd8ef async_tx: add support for asynchronous GF multiplication
[ Based on an original patch by Yuri Tikhonov ]

This adds support for doing asynchronous GF multiplication by adding
two additional functions to the async_tx API:

 async_gen_syndrome() does simultaneous XOR and Galois field
    multiplication of sources.

 async_syndrome_val() validates the given source buffers against known P
    and Q values.

When a request is made to run async_pq against more than the hardware
maximum number of supported sources we need to reuse the previous
generated P and Q values as sources into the next operation.  Care must
be taken to remove Q from P' and P from Q'.  For example to perform a 5
source pq op with hardware that only supports 4 sources at a time the
following approach is taken:

p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))

p' = p + q + q + src4 = p + src4
q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4

Note: 4 is the minimum acceptable maxpq otherwise we punt to
synchronous-software path.

The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
sources (in the above manner) and fill the remaining slots up to maxpq
with the new sources/coefficients.

Note1: Some devices have native support for P+Q continuation and can skip
this extra work.  Devices with this capability can advertise it with
dma_set_maxpq.  It is up to each driver how to handle the
DMA_PREP_CONTINUE flag.

Note2: The api supports disabling the generation of P when generating Q,
this is ignored by the synchronous path but is implemented by some dma
devices to save unnecessary writes.  In this case the continuation
algorithm is simplified to only reuse Q as a source.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
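
The continuation algebra quoted above can be checked byte-wise; a
self-contained demo (not kernel code) using the usual RAID-6 field GF(2^8)
with polynomial 0x11d, verifying that the two-pass result equals a direct
five-source P/Q computation:

  #include <assert.h>
  #include <stdint.h>

  /* multiply in GF(2^8) with the RAID-6 polynomial x^8+x^4+x^3+x^2+1 */
  static uint8_t gfmul(uint8_t a, uint8_t b)
  {
          uint8_t p = 0;

          while (b) {
                  if (b & 1)
                          p ^= a;
                  a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
                  b >>= 1;
          }
          return p;
  }

  int main(void)
  {
          uint8_t d[5] = { 0xde, 0xad, 0xbe, 0xef, 0x42 };
          uint8_t p = 0, q = 0, pd = 0, qd = 0;
          uint8_t p2, q2;
          int i;

          /* direct 5-source P and Q, coefficients {01} {02} {04} {08} {10} */
          for (i = 0; i < 5; i++) {
                  pd ^= d[i];
                  qd ^= gfmul(1 << i, d[i]);
          }

          /* pass 1: first four sources only */
          for (i = 0; i < 4; i++) {
                  p ^= d[i];
                  q ^= gfmul(1 << i, d[i]);
          }

          /* pass 2: PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10})) */
          p2 = p ^ q ^ q ^ d[4];          /* p' = p + src4      */
          q2 = q ^ gfmul(0x10, d[4]);     /* q' = q + {10}*src4 */

          assert(p2 == pd && q2 == qd);
          return 0;
  }
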
Dan Williams 95475e5711 async_tx: remove walk of tx->parent chain in dma_wait_for_async_tx
We currently walk the parent chain when waiting for a given tx to
complete however this walk may race with the driver cleanup routine.
The routines in async_raid6_recov.c may fall back to the synchronous
path at any point so we need to be prepared to call async_tx_quiesce()
(which calls  dma_wait_for_async_tx).  To remove the ->parent walk we
guarantee that every time a dependency is attached ->issue_pending() is
invoked, then we can simply poll the initial descriptor until
completion.

This also allows for a lighter weight 'issue pending' implementation as
there is no longer a requirement to iterate through all the channels'
->issue_pending() routines as long as operations have been submitted in
an ordered chain.  async_tx_issue_pending() is added for this case.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
Rafael J. Wysocki c00aafcd49 Merge branch 'master' into for-linus 2009-08-05 23:56:54 +02:00
Linus Torvalds db06816cb9 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  dmaengine: at_hdmac: add DMA slave transfers
  dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
  dmaengine: dmatest: correct thread_count while using multiple thread per channel
  dmaengine: dmatest: add a maximum number of test iterations
  drivers/dma: Remove unnecessary semicolons
  drivers/dma/fsldma.c: Remove unnecessary semicolons
  dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
  fsldma: do not clear bandwidth control bits on the 83xx controller
  fsldma: enable external start for the 83xx controller
  fsldma: use PCI Read Multiple command
2009-07-30 16:46:31 -07:00
Dan Williams 584ec22759 ioat: move to drivers/dma/ioat/
When first created the ioat driver was the only inhabitant of
drivers/dma/.  Now, it is the only multi-file (more than a .c and a .h)
driver in the directory.  Moving it to an ioat/ subdirectory allows the
naming convention to be cleaned up, and allows for future splitting of
the source files by hardware version (v1, v2, and v3).

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-28 14:32:12 -07:00
Nicolas Ferre 808347f6a3 dmaengine: at_hdmac: add DMA slave transfers
This patch for at_hdmac adds the slave transfers capability to the Atmel DMA
controller available on some AT91 SoCs. This allows peripheral-to-memory
and memory-to-peripheral transfers with hardware handshaking.

A slave structure carrying controller-specific information is passed through
the channel private data. This at_dma_slave structure is defined in the
at_hdmac.h header file, and the related hardware definitions are moved to
this file from at_hdmac_regs.h. Doing this allows channel configuration from
the platform definition code.

This work is heavily based on dw_dmac and several slave implementations.

Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 23:15:33 -07:00
Nicolas Ferre dc78baa2b9 dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
This AHB DMA Controller (aka HDMA or DMAC on AT91 systems) is available on
the at91sam9rl chip. It will be used on other products in the future.

This first release covers only the memory-to-memory transfer type, which is
the only transfer type supported by this chip.  On other products it will
also be used for peripheral DMA transfers (slave API support to come).

I used the dmatest client in different configurations to test it, without
problems.

Full documentation for this controller can be found in the SAM9RL datasheet:
http://www.atmel.com/dyn/products/product_card.asp?part_id=4243

Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 22:41:27 -07:00
Nicolas Ferre f1aef8b6e6 dmaengine: dmatest: correct thread_count while using multiple thread per channel
thread_count is not properly calculated in dmatest: the thread count
returned from dmatest_add_threads() is not added to thread_count and is
thus not properly printed.

Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 22:11:28 -07:00
Nicolas Ferre 0a2ff57d6f dmaengine: dmatest: add a maximum number of test iterations
dmatest normally runs until its kthreads are killed.  This patch adds a
parameter that sets a maximum number of test iterations.

This feature is quite interesting for debugging when you set a lot of
traces in your dmaengine controller driver.

Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 22:05:26 -07:00
Joe Perches c019894efc drivers/dma: Remove unnecessary semicolons
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 21:29:16 -07:00
Joe Perches e3d433040e drivers/dma/fsldma.c: Remove unnecessary semicolons
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-22 21:29:16 -07:00
Magnus Damm 4aebac2fb9 DMA: Rework txx9dmac suspend_late()/resume_early()
This patch reworks platform driver power management code
for txx9dmac from legacy late/early callbacks to dev_pm_ops.

The callbacks are converted for CONFIG_SUSPEND like this:
  suspend_late() -> suspend_noirq()
  resume_early() -> resume_noirq()

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2009-07-22 00:28:39 +02:00
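
For reference, the general shape of such a conversion (driver and callback
names here are placeholders, not the txx9dmac code):

  #include <linux/platform_device.h>
  #include <linux/pm.h>

  static int my_dmac_suspend_noirq(struct device *dev) { return 0; }
  static int my_dmac_resume_noirq(struct device *dev)  { return 0; }

  static const struct dev_pm_ops my_dmac_pm_ops = {
          .suspend_noirq = my_dmac_suspend_noirq,
          .resume_noirq  = my_dmac_resume_noirq,
  };

  static struct platform_driver my_dmac_driver = {
          .driver = {
                  .name = "my-dmac",
                  .pm   = &my_dmac_pm_ops, /* replaces the legacy late/early hooks */
          },
  };
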
Magnus Damm 4a256b5fc0 DMA: Rework dw_dmac suspend_late()/resume_early()
This patch reworks platform driver power management code
for dw_dmac from legacy late/early callbacks to dev_pm_ops.

The callbacks are converted for CONFIG_SUSPEND like this:
  suspend_late() -> suspend_noirq()
  resume_early() -> resume_noirq()

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2009-07-22 00:28:38 +02:00
Dan Williams daf4219dbc dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
On HIGHMEM64G systems dma_addr_t is known to be larger than (void *)
which precludes async_xor from performing dma address conversions by
reusing the input parameter address list.  However, other parts of the
dmaengine infrastructure do not suffer this constraint, so the
HIGHMEM64G restriction can be down-levelled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-01 16:12:53 -07:00
Atsushi Nemoto 4ac4aa5cc3 DMA: txx9dmac: use dma_unmap_single if DMA_COMPL_{SRC,DEST}_UNMAP_SINGLE set
This patch does not change actual behaviour since dma_unmap_page is just
an alias of dma_unmap_single on MIPS.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-06-24 18:34:40 +01:00
Atsushi Nemoto ea76f0b375 DMA: TXx9 Soc DMA Controller driver
This patch adds support for the integrated DMAC of the TXx9 family.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-06-17 11:06:25 +01:00
Ira Snyder 43a1a3ed6b fsldma: do not clear bandwidth control bits on the 83xx controller
The 83xx controller does not support the external pause feature. The bit
in the mode register that controls external pause on the 85xx controller
happens to be part of the bandwidth control settings for the 83xx
controller.

This patch fixes the driver so that it only clears the external pause bit
if the hardware is the 85xx controller. When driving the 83xx controller,
the bit is left untouched. This follows the existing convention that mode
registers settings are not touched unless necessary.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-16 11:43:40 -07:00
Ira Snyder be30b226f2 fsldma: enable external start for the 83xx controller
The 83xx controller has external start capability, but lacks external pause
capability. Hook up the external start function pointer for the 83xx
controller.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-16 11:43:00 -07:00
Ira W. Snyder a7aea373b4 fsldma: use PCI Read Multiple command
By default, the Freescale 83xx DMA controller uses the PCI Read Line
command when reading data over the PCI bus. Setting the controller to use
the PCI Read Multiple command instead allows the controller to read much
larger bursts of data, which provides a drastic speed increase.

The slowdown due to using PCI Read Line was only observed when a PCI-to-PCI
bridge was between the devices trying to communicate.

A simple test driver showed an increase from 4MB/sec to 116MB/sec when
performing DMA over the PCI bus. Using DMA to transfer between blocks of
local SDRAM showed no change in performance with this patch. The dmatest
driver was also used to verify the correctness of the transfers, and showed
no errors.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-16 11:29:17 -07:00
Dan Williams 04ce9ab385 async_xor: permit callers to pass in a 'dma/page scribble' region
async_xor() needs space to perform dma and page address conversions.  In
most cases the code can simply reuse the struct page * array because the
size of the native pointer matches the size of a dma/page address.  In
order to support archs where sizeof(dma_addr_t) is larger than
sizeof(struct page *), or to preserve the input parameters, we utilize a
memory region passed in by the caller.

Since the code is now prepared to handle the case where it cannot
perform address conversions on the stack, we no longer need the
!HIGHMEM64G dependency in drivers/dma/Kconfig.

[ Impact: don't clobber input buffers for address conversions ]

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-03 14:22:28 -07:00
Linus Torvalds 3b798a5231 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6:
  ACPI, i915: build fix (v2)
  acpi-cpufreq: fix printk typo and indentation
  ACPI processor: remove spurious newline from warning message
  drm/i915: acpi/video.c fix section mismatch warning
  ACPI: video: DMI workaround broken Acer 5315 BIOS enabling display brightness
  ACPI: video: DMI workaround broken eMachines E510 BIOS enabling display brightness
  ACPI: sanity check _PSS frequency to prevent cpufreq crash
  i7300_idle: allow testing on i5000-series hardware w/o re-compile
  PCI/ACPI: fix wrong ref count handling in acpi_pci_bind()
  cpuidle: fix AMD C1E suspend hang
  cpuidle: makes AMD C1E work in acpi_idle
2009-05-30 07:57:44 -07:00
Len Brown 2f102607ac i7300_idle: allow testing on i5000-series hardware w/o re-compile
Testing the i7300_idle driver on i5000-series hardware required
an edit to i7300_idle.h to "#define SUPPORT_I5000 1" and a re-build
of both i7300_idle and ioat_dma.

Replace that build-time scheme with a load-time module parameter:
"i7300_idle.forceload=1" to make it easier to test the driver
on hardware that, while not officially validated, works fine
and is much more commonly available.

By default (no modparam) the driver will continue to load
only on the i7300.

Note that ioat_dma runs a copy of i7300_idle's probe routine
to know to reserve an IOAT channel for i7300_idle.
This change makes ioat_dma do that always on the i5000,
just like it does on the i7300.

Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Andrew Henroid <andrew.d.henroid@intel.com>
2009-05-28 20:52:40 -04:00
Kumar Gala b787f2e2a3 fsldma: Fix compile warnings
When we build with dma_addr_t as a 64-bit quantity we get:

drivers/dma/fsldma.c: In function 'fsl_chan_xfer_ld_queue':
drivers/dma/fsldma.c:625: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c: In function 'fsl_dma_chan_do_interrupt':
drivers/dma/fsldma.c:737: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c:737: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c: In function 'of_fsl_dma_probe':
drivers/dma/fsldma.c:927: warning: cast to pointer from integer of different

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-05-27 13:40:00 -07:00
Ira Snyder 2e077f8e83 fsldma: fix memory leak on error path in fsl_dma_prep_memcpy()
When preparing a memcpy operation, if the kernel fails to allocate memory
for a link descriptor after the first link descriptor has already been
allocated, then some memory will never be released. Fix the problem by
walking the list of allocated descriptors backwards, and freeing the
allocated descriptors back into the DMA pool.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
2009-05-22 16:54:42 +08:00
Ira Snyder 776c8943f2 fsldma: snooping is not enabled for last entry in descriptor chain
On the 83xx controller, snooping is necessary for the DMA controller to
ensure cache coherence with the CPU when transferring to/from RAM.

The last descriptor in a chain will always have the End-of-Chain interrupt
bit set, so we can set the snoop bit while adding the End-of-Chain
interrupt bit.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
2009-05-22 16:53:56 +08:00
Ira Snyder bcfb7465c0 fsldma: fix infinite loop on multi-descriptor DMA chain completion
When creating a DMA transaction with multiple descriptors, the async_tx
cookie is set to 0 for each descriptor in the chain, excluding the last
descriptor, whose cookie is set to -EBUSY.

When fsl_dma_tx_submit() is run, it only assigns a cookie to the first
descriptor. All of the remaining descriptors keep their original value,
including the last descriptor, which is set to -EBUSY.

After the DMA completes, the driver will update the last completed cookie
to be -EBUSY, which is an error code instead of a valid cookie. This causes
dma_async_is_complete() to always return DMA_IN_PROGRESS.

This causes the fsldma driver to never cleanup the queue of link
descriptors, and the driver will re-run the DMA transaction on the hardware
each time it receives the End-of-Chain interrupt. This causes an infinite
loop.

With this patch, fsl_dma_tx_submit() is changed to assign a cookie to every
descriptor in the chain. The rest of the code then works without problems.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
2009-05-22 16:51:28 +08:00
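
A sketch of the described fix: walk the whole chain at submit time and give
every link descriptor a valid cookie (locking omitted; the types and helper
below are illustrative stand-ins, not the fsldma structures):

  #include <linux/dmaengine.h>
  #include <linux/list.h>

  struct my_desc {
          struct list_head node;
          struct dma_async_tx_descriptor async_tx;
  };

  struct my_chan {
          struct dma_chan common;
  };

  #define to_my_chan(c) container_of(c, struct my_chan, common)

  static dma_cookie_t my_tx_submit(struct dma_async_tx_descriptor *tx)
  {
          struct my_chan *chan = to_my_chan(tx->chan);
          struct my_desc *desc;
          dma_cookie_t cookie = chan->common.cookie;

          /* cookie every descriptor in the chain, not just the first, so
           * the last completed cookie can never be left at -EBUSY */
          list_for_each_entry(desc, &tx->tx_list, node) {
                  if (++cookie < 0)
                          cookie = 1;     /* skip the error range on wrap */
                  desc->async_tx.cookie = cookie;
          }
          chan->common.cookie = cookie;
          return cookie;
  }
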
Ira Snyder 138ef01851 fsldma: fix "DMA halt timeout!" errors
When using the DMA controller from multiple threads at the same time, it is
possible to get lots of "DMA halt timeout!" errors printed to the kernel
log.

This occurs due to a race between fsl_dma_memcpy_issue_pending() and the
interrupt handler, fsl_dma_chan_do_interrupt(). Both call the
fsl_chan_xfer_ld_queue() function, which does not protect against
concurrent accesses to dma_halt() and dma_start().

The existing spinlock is moved to cover the dma_halt() and dma_start()
functions. Testing shows that the "DMA halt timeout!" errors disappear.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
2009-05-22 16:49:17 +08:00
Roel Kluin f47edc6dab fsldma: fix check on potential fdev->chan[] overflow
Fix the check of potential array overflow when using corrupted channel
device tree nodes.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Li Yang <leoli@freescale.com>
2009-05-22 16:46:52 +08:00
Guennadi Liakhovetski ad567ffb32 dma: fix ipu_idmac.c to not discard the last queued buffer
This also fixes the case of a single queued buffer, for example, when taking a
single frame snapshot with the mx3_camera driver.

Reported-by: Agustin Ferrin Pozuelo <gatoguan-os@yahoo.com>
Tested-by: Agustin Ferrin Pozuelo <gatoguan-os@yahoo.com>
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-05-12 14:41:48 -07:00
Maciej Sosnowski 4f005dbe55 ioatdma: fix "ioatdma frees DMA memory with wrong function"
as reported by Alexander Beregalov <a.beregalov@gmail.com>

ioatdma 0000:00:08.0: DMA-API: device driver frees DMA memory with
wrong function [device address=0x000000007f76f800] [size=2000 bytes]
[mapped as single] [unmapped as page]

The ioatdma driver was unmapping all regions (whether mapped as page or
as single) using unmap_page.
This patch lets the dma driver recognize whether unmap_single or unmap_page
should be used.
It introduces two new dma control flags:
DMA_COMPL_SRC_UNMAP_SINGLE and DMA_COMPL_DEST_UNMAP_SINGLE.
They should be set to tell the dma driver to unmap as single (the first
for the source, the latter for the destination).
If the respective flag is not set, the driver assumes page unmapping.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Tested-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-05-12 14:41:47 -07:00
Ben Nizette ca50a51e89 ipu_idmac: Use disable_irq_nosync() from within irq handlers.
disable_irq() should wait for all running handlers to complete
before returning.  As such, if it's used to disable an interrupt
from that interrupt's handler it will deadlock.  This replaces
the dangerous instances with the _nosync() variant which doesn't
have this problem.

Note the 2 handlers in question are only used #ifdef DEBUG so
I imagine these code paths don't get hit often.

Signed-off-by: Ben Nizette <bn@niasdigital.com>
Acked-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-05-05 12:16:56 -07:00
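
The pattern in miniature (the handler and its data are placeholders):
inside a handler for a given line, only the _nosync variant is safe.

  #include <linux/interrupt.h>

  static irqreturn_t my_debug_isr(int irq, void *dev_id)
  {
          /* disable_irq(irq) here would wait for this very handler to
           * return and deadlock; _nosync just masks the line and returns */
          disable_irq_nosync(irq);
          return IRQ_HANDLED;
  }
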
Dan Williams c56c81abe7 dmatest: fix max channels handling
The check for reaching max_channels is short circuited by 'continuing'
after successfully adding a channel.

[ Impact: make the 'max_channels' module parameter actually have an effect ]

Cc: <stable@kernel.org>
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-04-08 15:08:23 -07:00
Dan Williams 099f53cb50 async_tx: rename zero_sum to val
'zero_sum' does not properly describe the operation of generating parity
and checking that it validates against an existing buffer.  Change the
name of the operation to 'val' (for 'validate').  This is in
anticipation of the p+q case where it is a requirement to identify the
target parity buffers separately from the source buffers, because the
target parity buffers will not have corresponding pq coefficients.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-04-08 14:28:37 -07:00
Yang Hongyang 284901a90a dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)
Replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)

Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 08:31:11 -07:00
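
The shape of the replacement, for reference (pdev stands for whatever
struct pci_dev the driver already holds):

  #include <linux/pci.h>
  #include <linux/dma-mapping.h>

  static int set_dma_mask_32(struct pci_dev *pdev)
  {
          /* was: pci_set_dma_mask(pdev, DMA_32BIT_MASK); */
          return pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
  }
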
Yang Hongyang 6a35528a83 dma-mapping: replace all DMA_64BIT_MASK macro with DMA_BIT_MASK(64)
Replace all DMA_64BIT_MASK macro with DMA_BIT_MASK(64)

Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 08:31:10 -07:00
Linus Torvalds 133e2a3164 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  dma: Add SoF and EoF debugging to ipu_idmac.c, minor cleanup
  dw_dmac: add cyclic API to DW DMA driver
  dmaengine: Add privatecnt to revert DMA_PRIVATE property
  dmatest: add dma interrupts and callbacks
  dmatest: add xor test
  dmaengine: allow dma support for async_tx to be toggled
  async_tx: provide __async_inline for HAS_DMA=n archs
  dmaengine: kill some unused headers
  dmaengine: initialize tx_list in dma_async_tx_descriptor_init
  dma: i.MX31 IPU DMA robustness improvements
  dma: improve section assignment in i.MX31 IPU DMA driver
  dma: ipu_idmac driver cosmetic clean-up
  dmaengine: fail device registration if channel registration fails
2009-04-03 12:13:45 -07:00
Guennadi Liakhovetski 8c6db1bbf8 dma: Add SoF and EoF debugging to ipu_idmac.c, minor cleanup
Add Start-of-Frame and End-of-Frame debugging to ipu_idmac.c, in the
future it might also be needed for the actual video processing in
mx3-camera, at which point, the ISRs will have to be transferred to
mx3_camera.c, for which ipu_irq_map() and ipu_irq_unmap() functions will
have to be exported.

Also simplify a couple of pointer-dereferences.

Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-04-02 16:59:10 -07:00
Hans-Christian Egtvedt d9de451989 dw_dmac: add cyclic API to DW DMA driver
This patch adds a cyclic DMA interface to the DW DMA driver. This is
very useful if you want to use the DMA controller in combination with a
sound device which uses cyclic buffers.

Using a DMA channel for cyclic DMA will disable the possibility to use
it as a normal DMA engine until the user calls the cyclic free function
on the DMA channel. Also a cyclic DMA list can not be prepared if the
channel is already active.

Signed-off-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-04-01 15:42:34 -07:00
Russell King ed40d0c472 Merge branch 'origin' into devel
Conflicts:
	sound/soc/pxa/pxa2xx-i2s.c
2009-03-28 20:29:51 +00:00
Atsushi Nemoto 0f571515c3 dmaengine: Add privatecnt to revert DMA_PRIVATE property
Currently dma_request_channel() set DMA_PRIVATE capability but never
clear it.  So if a public channel was once grabbed by
dma_request_channel(), the device stay PRIVATE forever.  Add
privatecnt member to dma_device to correctly revert it.

[lg@denx.de: fix bad usage of 'chan' in dma_async_device_register]
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-26 09:48:09 -07:00
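
The reference-count idea, simplified (error handling and locking omitted;
only the privatecnt field and DMA_PRIVATE capability come from the commit):

  #include <linux/dmaengine.h>

  /* first private user sets DMA_PRIVATE, last one clears it again */
  static void private_get(struct dma_device *device)
  {
          if (device->privatecnt++ == 0)
                  dma_cap_set(DMA_PRIVATE, device->cap_mask);
  }

  static void private_put(struct dma_device *device)
  {
          if (--device->privatecnt == 0)
                  dma_cap_clear(DMA_PRIVATE, device->cap_mask);
  }
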
Dan Williams e44e0aa3cf dmatest: add dma interrupts and callbacks
Use the callback infrastructure to report driver/hardware hangs or
missed interrupts.  Since this makes the test threads much more
aggressive (from: explicit 1ms sleep to: wait_for_completion) we set the
nice value to 10 so as to not swamp legitimate tasks.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:25 -07:00
Dan Williams b54d5cb915 dmatest: add xor test
Extend dmatest to launch a thread per supported operation type and add
an xor test.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:25 -07:00
Dan Williams 729b5d1b8e dmaengine: allow dma support for async_tx to be toggled
Provide a config option for blocking the allocation of dma channels to
the async_tx api.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:25 -07:00
Dan Williams ccccce229c dmaengine: initialize tx_list in dma_async_tx_descriptor_init
Centralize this common initialization (and one case where ipu_idmac is
duplicating ->chan initialization).

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:24 -07:00
Guennadi Liakhovetski 8d47bae004 dma: i.MX31 IPU DMA robustness improvements
Add DMA error handling to the ISR, move common code fragments to functions, fix
scatter-gather element queuing in the ISR, survive channel freeing and
re-allocation in a quick succession.

Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:24 -07:00
Guennadi Liakhovetski 234f2df56f dma: improve section assignment in i.MX31 IPU DMA driver
The i.MX31 IPU DMA driver is a platform driver, but doesn't need hotplug, so we
can use __init and __exit function attributes.

Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:24 -07:00
Guennadi Liakhovetski 0149f7d5dc dma: ipu_idmac driver cosmetic clean-up
Remove superfluous semicolons, update comments.

Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:23 -07:00
Dan Williams 257b17ca03 dmaengine: fail device registration if channel registration fails
Atsushi points out:
"If alloc_percpu or kzalloc failed, chan_id does not match with its
position in device->channels list.

And above "continue" looks buggy anyway.  Keeping incomplete channels
in device->channels list looks very dangerous..."

Also, fix up leakage of idr_ref in the idr_pre_get() and channel init
fail cases.

Reported-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:23 -07:00
Kay Sievers dfbc90196d dma: struct device - replace bus_id with dev_name(), dev_set_name()
Cc: Dan Williams <dan.j.williams@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
2009-03-24 16:38:22 -07:00
Sascha Hauer 9eb2eb8c40 MX31 clkdev support
This patch adds clkdev support for i.MX31. This is done in a
similar way to what was done previously for i.MX27.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
2009-03-13 10:34:32 +01:00
Linus Torvalds 5dc18f51a2 Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  dmatest: fix use after free in dmatest_exit
  ipu_idmac: fix spinlock type
  iop-adma, mv_xor: fix mem leak on self-test setup failure
  fsldma: fix off by one in dma_halt
  I/OAT: fail self-test if callback test reaches timeout
  I/OAT: update driver version and copyright dates
  I/OAT: list usage cleanup
  I/OAT: set tcp_dma_copybreak to 256k for I/OAT ver.3
  I/OAT: cancel watchdog before dma remove
  I/OAT: fail initialization on zero channels detection
  I/OAT: do not set DCACTRL_CMPL_WRITE_ENABLE for I/OAT ver.3
  I/OAT: add verification for proper APICID_TAG_MAP setting by BIOS
  dmaengine: update kerneldoc
2009-03-08 10:23:05 -07:00
Dan Williams 7cbd4877e5 dmatest: fix use after free in dmatest_exit
dmatest_cleanup_channel() will free dtc, so grab ->chan before it goes away
and use it to do the release.

Reported-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:06:03 -07:00
Luotao Fu c74ef1f867 ipu_idmac: fix spinlock type
Fix a probably accidentally dropped reference operator in a call to
spin_unlock_irqrestore() on an ipu lock.

Signed-off-by: Luotao Fu <l.fu@pengutronix.de>
Cc: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:41 -07:00
Roel Kluin a09b09ae51 iop-adma, mv_xor: fix mem leak on self-test setup failure
iop_adma_zero_sum_self_test has the brackets in the wrong place for the
setup failure deallocation path.  This error was duplicated in
mv_xor_xor_self_test.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Dan Williams 900325a6ce fsldma: fix off by one in dma_halt
Prevent dev_err from firing when we have successfully detected 'dma-idle'
before the full 1ms timeout has elapsed.

Acked-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Dan Williams 0c33e1ca3d I/OAT: fail self-test if callback test reaches timeout
If we miss interrupts in the self test then fail registration of this
channel as it is unsuitable for use as a public channel.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Maciej Sosnowski 211a22ce08 I/OAT: update driver version and copyright dates
Together with the new fixes, update the driver version
and extend the copyright date ranges.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Eric Sesterhenn aa2d0b8b97 I/OAT: list usage cleanup
Trivial cleanup, list_del(); list_add_tail() is equivalent
to list_move_tail(). Semantic patch for coccinelle can be
found at www.cccmz.de/~snakebyte/list_move_tail.spatch

Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
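
The equivalence behind the cleanup, as a one-liner:

  #include <linux/list.h>

  static void requeue_tail(struct list_head *entry, struct list_head *head)
  {
          /* was: list_del(entry); list_add_tail(entry, head); */
          list_move_tail(entry, head);    /* identical effect, one call */
  }
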
Maciej Sosnowski 5de22343b2 I/OAT: set tcp_dma_copybreak to 256k for I/OAT ver.3
Upcoming Intel server platforms based on Nehalem have significantly
improved CPU-based copy performance.  However, the DMA engine can still be
effective at higher I/O sizes for TCP traffic, so at this time copybreak
should be set to 256k for TCP traffic only.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Maciej Sosnowski 2b8a6bf896 I/OAT: cancel watchdog before dma remove
Channel watchdog should be canceled before the rest of dma remove stuff.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:40 -07:00
Maciej Sosnowski 8b794b141c I/OAT: fail initialization on zero channels detection
On some systems with I/OAT ver.2, when DCA is disabled in the BIOS, zero
DMA channels are detected instead of four.
To avoid a kernel panic the driver should fail gracefully with an
appropriate message.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:39 -07:00
Maciej Sosnowski ea9c717d01 I/OAT: do not set DCACTRL_CMPL_WRITE_ENABLE for I/OAT ver.3
Flag DCACTRL_CMPL_WRITE_ENABLE is valid only for I/OAT ver.2
so it should not be set for I/OAT ver.3.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:39 -07:00
Maciej Sosnowski 49bc46360d I/OAT: add verification for proper APICID_TAG_MAP setting by BIOS
BIOS versions have been found on systems with I/OAT ver.2 that fail to
program APICID_TAG_MAP for DCA.
The ioatdma driver should recognize an incorrectly set APICID_TAG_MAP
and disable DCA in that case.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-04 16:04:39 -07:00
Russell King bdf602bd73 [ARM] fix lots of ARM __devexit sillyness
`iop_adma_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv_xor_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_unmap_regs' referenced in section `.devinit.text' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`orion_nand_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`pxafb_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o

Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-03 21:04:04 +00:00
Dan Williams 287d859222 atmel-mci: fix initialization of dma slave data
The conversion of atmel-mci to dma_request_channel missed the
initialization of the channel dma_slave information.  The filter_fn passed
to dma_request_channel is responsible for initializing the channel's
private data.  This implementation has the additional benefit of enabling
a generic client-channel data passing mechanism.

Reviewed-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-18 15:37:55 -08:00
Guennadi Liakhovetski 5296b56d1b i.MX31: Image Processing Unit DMA and IRQ drivers
i.MX3x SoCs contain an Image Processing Unit, consisting of a Control
Module (CM), Display Interface (DI), Synchronous Display Controller (SDC),
Asynchronous Display Controller (ADC), Image Converter (IC), Post-Filter
(PF), Camera Sensor Interface (CSI), and an Image DMA Controller (IDMAC).
CM contains, among other blocks, an Interrupt Generator (IG) and a Clock
and Reset Control Unit (CRCU). This driver serves IDMAC and IG. They are
supported over dmaengine and irq-chip APIs respectively.

IDMAC is a specialised DMA controller; its DMA channels cannot be used for
general-purpose operations, even though it might be possible to configure
a memory-to-memory channel for memcpy operations. This driver will not work
with generic dmaengine clients; clients wishing to use it must use the
respective wrapper structures, and they must also specify which channels
they require, as channels are hard-wired to specific IPU functions.

Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-19 15:36:21 -07:00
Dan Williams 83436a0560 dmaengine: kill some dubious WARN_ONCEs
dma_find_channel and dma_issue_pending_all are good places to warn about
improper api usage.  However, warning correctly means synchronizing with
dma_list_mutex, i.e. too much overhead for these fast-path calls.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-19 14:39:10 -07:00
Peter Korsgaard 169d5f6637 fsldma: print correct IRQ on mpc83xx
The mpc83xx variant uses a shared IRQ for all channels, so the individual
channel nodes don't have an interrupt property. Fix the code to print the
controller IRQ instead if there isn't any for the channel.

Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-15 23:50:22 -07:00
Peter Korsgaard 6782dfe44a fsldma: check for NO_IRQ in fsl_dma_chan_remove()
There's no per-channel IRQ on mpc83xx, so only call free_irq if we have one.

Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-14 22:32:58 -07:00
Atsushi Nemoto d86be86e9a dmatest: Use custom map/unmap for destination buffer
The dmatest driver should use DMA_BIDIRECTIONAL on the destination buffer
to ensure that the poison values are written to RAM and not just written
to cache and discarded.

Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-13 09:22:20 -07:00
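
A sketch of the mapping-direction change (dev, buf and len stand in for
dmatest's actual variables):

  #include <linux/dma-mapping.h>

  static dma_addr_t map_dst(struct device *dev, void *buf, size_t len)
  {
          /* DMA_BIDIRECTIONAL (rather than DMA_FROM_DEVICE) flushes the
           * poisoned destination bytes out to RAM before the engine
           * overwrites them, so the verification step sees real data */
          return dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);
  }
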
Dan Williams 6527de6d6d fsldma: use a valid 'device' for dma_pool_create
The dmaengine sysfs implementation was fixed to support proper
lifetime rules which means that the current:

new_fsl_chan->dev = &new_fsl_chan->common.dev->device;

...retrieves a NULL pointer because new_fsl_chan->common.dev has not
been allocated at this point.  So, set new_fsl_chan->dev to a valid
device.

Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Reported-by: Ira Snyder <iws@ovro.caltech.edu>
Tested-by: Ira Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-12 15:20:42 -07:00
Yuri Tikhonov dd59b8537f dmaengine: fix dependency chaining
In dmaengine we track the dependencies between the descriptors
using the 'next' pointers of the structure. These pointers are
set to NULL as soon as the corresponding descriptor has been
submitted to the channel (in dma_run_dependencies()).

But the first 'next' in the chain remains set, regardless of the fact
that tx->next has already been submitted.  This may lead to
multiple submissions of the same descriptor. This patch fixes this.

Actually, an earlier implementation of the xxx_run_dependencies()
function already had this fix in place.  The fdb..0eaf3 commit, besides the
correct things, broke this.

Cc: <stable@kernel.org>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-12 15:19:29 -07:00
Dan Williams b9bdcbba01 ioat: fix self test for multi-channel case
In the multiple device case we need to re-arm the completion and protect
against concurrent self-tests.  The printk from the test callback is
removed as it can arbitrarily delay completion of the test.

Cc: <stable@kernel.org>
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:22 -07:00
Dan Williams 652afc27b2 dmaengine: bump initcall level to arch_initcall
There are dmaengine users that would like to register dma devices at
subsys_initcall time to ensure channels are available by device_initcall
time.

Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:22 -07:00
Dan Williams e2346677af dmaengine: advertise all channels on a device to dma_filter_fn
Allow dma_filter_fn routines to disambiguate multiple channels on a device
rather than assuming that all channels on a device are equal.

Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Reported-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:21 -07:00
Dan Williams 864498aaa9 dmaengine: use idr for registering dma device numbers
This brings some predictability to dma device numbers, i.e. an rmmod/insmod
cycle may now result in /sys/class/dma/dma0chan0 being restored rather than
/sys/class/dma/dma1chan0 appearing.

Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:21 -07:00
Dan Williams 41d5e59c12 dmaengine: add a release for dma class devices and dependent infrastructure
Resolves:
WARNING: at drivers/base/core.c:122 device_release+0x4d/0x52()
Device 'dma0chan0' does not have a release() function, it is broken and must be fixed.

The dma_chan_dev object is introduced to gear-match sysfs kobject and
dmaengine channel lifetimes.  When a channel is removed, accesses to the
sysfs entries return -ENODEV until the kobject can be released.

The bulk of the change is updates to existing code to handle the extra
layer of indirection between a dma_chan and its struct device.

Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:21 -07:00
Dan Williams 4fac7fa57c ioat: do not perform removal actions at shutdown
Unregistering services should only happen at "remove" time.  This prevents
the device from being unregistered while dmaengine clients are still
active.  Also, the comment on ioat_remove is stale since removal is prevented
while a channel may be in use.

Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:20 -07:00
Dan Williams 630738b9a5 iop-adma: enable module removal
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:20 -07:00
Dan Williams 0d603f611d iop-adma: kill debug BUG_ON
This BUG_ON caught problems in early development but now it is in the
way as it invalidly triggers when trying to remove the module.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:20 -07:00
Dan Williams f38822033d iop-adma: let devm do its job, don't duplicate free
No need to free stuff that the devm infrastructure will take care of...

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:19 -07:00
Dan Williams 7dd6025101 dmaengine: kill enum dma_state_client
DMA_NAK is now useless.  We can just use a bool instead.

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:19 -07:00
Dan Williams f27c580c36 dmaengine: remove 'bigref' infrastructure
Reference counting is done at the module level so clients need not worry
that a channel will leave while they are actively using dmaengine.

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:18 -07:00
Dan Williams aa1e6f1a38 dmaengine: kill struct dma_client and supporting infrastructure
All users have been converted to either the general-purpose allocator,
dma_find_channel, or dma_request_channel.

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:17 -07:00
Dan Williams 209b84a88f dmaengine: replace dma_async_client_register with dmaengine_get
Now that clients no longer need to be notified of channel arrival
dma_async_client_register can simply increment the dmaengine_ref_count.
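
A minimal sketch of the resulting client pattern (shown for a
memcpy-offload user):

    struct dma_chan *chan;

    dmaengine_get();                  /* pin dmaengine modules */
    chan = dma_find_channel(DMA_MEMCPY);
    if (chan) {
            /* ... issue copies on 'chan' ... */
    }
    dmaengine_put();                  /* allow module removal again */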

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:17 -07:00
Dan Williams 74465b4ff9 atmel-mci: convert to dma_request_channel and down-level dma_slave
dma_request_channel provides an exclusive channel, so we no longer need to
pass slave data through dmaengine.

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:16 -07:00
Dan Williams 33df8ca068 dmatest: convert to dma_request_channel
Replace the client registration infrastructure with a custom loop to
poll for channels.  Once dma_request_channel returns NULL stop asking
for channels.  A userspace side effect of this change is that loading
the dmatest module before loading a dma driver will result in no
channels being found; previously dmatest would get a callback.  To
facilitate testing in the built-in case dmatest_init is marked as a
late_initcall.  Another side effect is that channels under test cannot
be used for any other purpose.

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:15 -07:00
Dan Williams 59b5ec2144 dmaengine: introduce dma_request_channel and private channels
This interface is primarily for device-to-memory clients which need to
search for dma channels with platform-specific characteristics.  The
prototype is:

struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                     dma_filter_fn filter_fn,
                                     void *filter_param);

When the optional 'filter_fn' parameter is set to NULL
dma_request_channel simply returns the first channel that satisfies the
capability mask.  Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
disposition the available channels in the system. The filter_fn routine
is called once for each free channel in the system.  Upon seeing a
suitable channel filter_fn returns DMA_ACK which flags that channel to
be the return value from dma_request_channel.  A channel allocated via
this interface is exclusive to the caller, until dma_release_channel()
is called.

To ensure that all channels are not consumed by the general-purpose
allocator the DMA_PRIVATE capability is provided to exclude a dma_device
from general-purpose (memory-to-memory) consideration.
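
A rough usage sketch (written against the bool-returning filter form
adopted shortly after this commit; the channel test itself is purely
illustrative):

    static bool my_filter(struct dma_chan *chan, void *param)
    {
            int *want_chan_id = param;      /* caller-defined data */

            return chan->chan_id == *want_chan_id;
    }

    dma_cap_mask_t mask;
    struct dma_chan *chan;
    int want = 0;

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);
    chan = dma_request_channel(mask, my_filter, &want);
    if (chan) {
            /* exclusive use of 'chan' ... */
            dma_release_channel(chan);
    }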

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:15 -07:00
Dan Williams 2ba05622b8 dmaengine: provide a common 'issue_pending_all' implementation
async_tx and net_dma each have open-coded versions of issue_pending_all,
so provide a common routine in dmaengine.

The implementation needs to walk the global device list, so implement
rcu to allow dma_issue_pending_all to run lockless.  Clients protect
themselves from channel removal events by holding a dmaengine reference.
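
Conceptually the lockless walk looks like this (a sketch; the list and
member names follow the dmaengine globals and are shown for illustration
only):

    struct dma_device *device;
    struct dma_chan *chan;

    rcu_read_lock();
    list_for_each_entry_rcu(device, &dma_device_list, global_node)
            list_for_each_entry(chan, &device->channels, device_node)
                    if (chan->client_count)
                            device->device_issue_pending(chan);
    rcu_read_unlock();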

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:14 -07:00
Dan Williams bec085134e dmaengine: centralize channel allocation, introduce dma_find_channel
Allowing multiple clients to each define their own channel allocation
scheme quickly leads to a pathological situation.  For memory-to-memory
offload all clients can share a central allocator.

This simply moves the existing async_tx allocator to dmaengine with
minimal fixups:
* async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
* async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
* split out common code from async_tx.c:__async_tx_find_channel -->
  dma_find_channel
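
The resulting client-side pattern is a simple sketch like the following,
assuming the caller already holds a dmaengine reference:

    struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

    if (!chan) {
            memcpy(dest, src, len);   /* no engine available, copy in-line */
            return;
    }
    dma_async_memcpy_buf_to_buf(chan, dest, src, len);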

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:14 -07:00
Dan Williams 6f49a57aa5 dmaengine: up-level reference counting to the module level
Simply, if a client wants any dmaengine channel then prevent all dmaengine
modules from being removed.  Once the clients are done re-enable module
removal.

Why?  Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
   is currently done, requires a complicated scheme to avoid cache-line
   bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
   dma-driver can be gracefully removed ahead of its user (net, md, or
   dma-slave)
3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
   if such an engine were built one day we still would not need to notify
   clients of remove events.  The driver can simply return NULL to a
   ->prep() request, something that is much easier for a client to handle.
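
For point 3 the client-side handling amounts to something like this
sketch (memcpy is used as the example operation):

    struct dma_async_tx_descriptor *tx;

    tx = chan->device->device_prep_dma_memcpy(chan, dma_dest, dma_src,
                                               len, 0);
    if (!tx) {
            /* engine out of descriptors or being torn down;
             * fall back to a synchronous copy */
            memcpy(dest, src, len);
            return;
    }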

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-06 11:38:14 -07:00
Dan Williams 07f2211e4f dmaengine: remove dependency on async_tx
async_tx.ko is a consumer of dma channels.  A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko.  It
prevents either module from being unloaded.

Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.

Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-01-05 18:10:19 -07:00
Dan Williams a06d568f7c async_xor: dma_map destination DMA_BIDIRECTIONAL
Mapping the destination multiple times is a misuse of the dma-api.
Since the destination may be reused as a source, ensure that it is only
mapped once and that it is mapped bidirectionally.  This appears to add
ugliness on the unmap side in that it always reads back the destination
address from the descriptor, but gcc can determine that dma_unmap is a
nop and not emit the code that calculates its arguments.

Cc: <stable@kernel.org>
Cc: Saeed Bishara <saeed@marvell.com>
Acked-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-12-08 13:46:00 -07:00
Dan Williams b0b42b16ff dmaengine: protect 'id' from concurrent registrations
There is a possibility to have two devices registered with the same id.

Cc: <stable@kernel.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-12-03 17:25:36 -07:00
Dan Williams 532d3b1f86 ioat: wait for self-test completion
As part of the ioat_dma self-test it performs a printk from a completion
callback.  Depending on the system console configuration this output can
take longer than a millisecond causing the self-test to fail.  Introduce a
completion with a generous timeout to mitigate this failure.

Cc: <stable@kernel.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-12-03 17:16:55 -07:00
Kay Sievers 06190d8415 dmaengine: struct device - replace bus_id with dev_name(), dev_set_name()
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-11-11 13:12:33 -07:00
Dan Williams 65e503814d iop-adma: use iop_paranoia() for debug BUG_ONs
Now that the critical read back to flush the next descriptor address is
fixed we can downgrade some BUG_ONs that need only be enabled when testing
changes to the driver.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-11-11 13:12:33 -07:00
Dan Williams 137cb55c6d iop-adma: add a dummy read to flush next descriptor update
The current dummy read references the wrong address allowing the next
descriptor address update to linger in the store buffer and get passed
by an 'append' event.

This issue was uncovered by the change from strongly-ordered to device
memory for the adma registers.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-11-11 13:12:33 -07:00
Maciej Sosnowski 12ccea24e3 [3/4] I/OAT: fix async_tx.callback checking
async_tx.callback should be checked for the first,
not the last, descriptor in the chain.

Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-10 15:01:00 -08:00
Maciej Sosnowski c2c0b4c543 [2/4] I/OAT: fix dma_pin_iovec_pages() error handling
Error handling needs to be modified in dma_pin_iovec_pages().
It should return NULL instead of ERR_PTR
(pinned_list is checked for NULL in tcp_recvmsg() to determine
if iovec pages have been successfully pinned down).
In case of error for the first iovec,
local_list->nr_iovecs needs to be initialized.

Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-10 15:00:56 -08:00
Maciej Sosnowski c3d4f44f50 [1/4] I/OAT: fix channel resources free for not allocated channels
If the ioatdma driver is loaded but not used it does not allocate descriptors.
Before it frees channel resources it should first be sure
that they have been previously allocated.

Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Tested-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-10 15:00:51 -08:00
Len Brown 9fb3c5ca3d Merge branch 'i7300_idle' into release 2008-10-25 04:07:44 -04:00
Venki Pallipadi f371be6352 i7300_idle: Fix compile warning CONFIG_I7300_IDLE_IOAT_CHANNEL not defined
When the i7300_idle driver is not configured, there is a compile-time
warning about IDLE_IOAT_CHANNEL not being defined. Fix it.

Reported-by: Suresh Siddha <suresh.b.siddha@intel.com>
Reported-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2008-10-24 12:59:47 -04:00
Venki Pallipadi 3ad0b02e4c i7300_idle: Disable ioat channel only on platforms where idle driver can load
Based on input from Andi Kleen:
share the platform detection code with ioat_dma and disable the channel in
dma engine only for specific platforms.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2008-10-24 12:54:18 -04:00
Len Brown 057316cc6a Merge branch 'linus' into test
Conflicts:
	MAINTAINERS
	arch/x86/kernel/acpi/boot.c
	arch/x86/kernel/acpi/sleep.c
	drivers/acpi/Kconfig
	drivers/pnp/Makefile
	drivers/pnp/quirks.c

Signed-off-by: Len Brown <len.brown@intel.com>
2008-10-23 00:11:07 -04:00
Andy Henroid 27471fdb32 i7300_idle driver v1.55
The Intel 7300 Memory Controller supports dynamic throttling of memory, which can
be used to save power when the system is idle. This driver does the memory
throttling when all CPUs are idle on such a system.

Refer to "Intel 7300 Memory Controller Hub (MCH)" datasheet
for the config space description.

Signed-off-by: Andy Henroid <andrew.d.henroid@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
2008-10-21 23:58:41 -04:00
Linus Torvalds b91385236c Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  fsldma: allow Freescale Elo DMA driver to be compiled as a module
  fsldma: remove internal self-test from Freescale Elo DMA driver
  drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
  dmatest: properly handle duplicate DMA channels
  drivers/dma/ioat_dma.c: drop code after return
  async_tx: make async_tx_run_dependencies() easier to read
2008-10-20 12:54:30 -07:00
Haavard Skinnemoen 7fe7b2f4ec dw_dmac: fix copy/paste bug in tasklet
The tasklet checks RAW.BLOCK twice, and does not check RAW.XFER. This is
obviously wrong, and could theoretically cause the driver to hang.

Reported-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-03 18:22:18 -07:00
Timur Tabi 77cd62e808 fsldma: allow Freescale Elo DMA driver to be compiled as a module
Modify the Freescale Elo / Elo Plus DMA driver so that it can be compiled as
a module.

The primary change is to stop treating the DMA controller as a bus, and the
DMA channels as devices on the bus.  This is because the Open Firmware (OF)
kernel code does not allow busses to be removed, so although we can call
of_platform_bus_probe() to probe the DMA channels, there is no
of_platform_bus_remove().  Instead, the DMA channels are manually probed,
similar to what fsl_elbc_nand.c does.

Cc: Scott Wood <scottwood@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-09-26 17:00:11 -07:00
Timur Tabi 59f647c25a fsldma: remove internal self-test from Freescale Elo DMA driver
The Freescale Elo DMA driver runs an internal self-test before registering
the channels with the DMA engine.  This self-test has a fundamental flaw in
that it calls the DMA engine's callback functions directly before the
registration.  However, the registration initializes some variables that the
callback functions use, namely the device struct.

The code works today because there are two device structs: the one created
by the DMA engine, and one created by the Open Firmware (OF) subsystem.  The
self-test currently uses the device struct created by OF.  However, in the
future, some of the device structs created by OF will be eliminated.
This means that the self-test will only have access to the device struct
created by the DMA engine.  But this device struct isn't initialized when
the self-test runs, and this causes a kernel panic.

Since there is already a DMA test module (dmatest), the internal self-test
code is not useful anyway.  It is extremely unlikely that the test will fail
in normal usage.  It may have been helpful during development, but not any more.

Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Li Yang <leoli@freescale.com>
Cc: Scott Wood <scottwood@freescale.com>
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-09-23 15:55:56 -07:00
Andrew Morton 6fdb8bd471 drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
It was needlessly using the unreliable GFP_ATOMIC.

Cc: Timur Tabi <timur@freescale.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-09-19 04:16:23 -07:00
Timur Tabi 6b3141962d dmatest: properly handle duplicate DMA channels
Update the dmatest driver so that it handles duplicate DMA channels
properly.

When a DMA client is notified of an available DMA channel, it must check if it
has already allocated resources for that channel.  If so, it should return
DMA_DUP.  This can happen, for example, if a DMA driver calls
dma_async_device_register() more than once.

Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-09-19 04:16:19 -07:00
Julia Lawall 89f72a0633 drivers/dma/ioat_dma.c: drop code after return
The break after the return serves no purpose.

Signed-off-by: Julia Lawall <julia@diku.dk>
Reviewed-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-09-13 20:05:34 -07:00
Russell King 492c71dd54 Merge branch 'for-rmk' of git://git.marvell.com/orion 2008-08-09 18:03:13 +01:00
Lennert Buytenhek 6f088f1d21 [ARM] Move include/asm-arm/plat-orion to arch/arm/plat-orion/include/plat
This patch performs the equivalent include directory shuffle for
plat-orion, and fixes up all users.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-09 13:44:58 +02:00
Linus Torvalds 49b75b87ce Merge branch 'for-linus-merged' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus-merged' of master.kernel.org:/home/rmk/linux-2.6-arm:
  [ARM] 5177/1: arm/mach-sa1100/Makefile: remove CONFIG_SA1100_USB
  [ARM] 5166/1: magician: add MAINTAINERS entry
  [ARM] fix pnx4008 build errors
  [ARM] Fix SMP booting with non-zero PHYS_OFFSET
  [ARM] 5185/1: Fix spi num_chipselect for lubbock
  [ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach
  [ARM] Add support for arch/arm/mach-*/include and arch/arm/plat-*/include
  [ARM] Remove asm/hardware.h, use asm/arch/hardware.h instead
  [ARM] Eliminate useless includes of asm/mach-types.h
  [ARM] Fix circular include dependency with IRQ headers
  avr32: Use <mach/foo.h> instead of <asm/arch/foo.h>
  avr32: Introduce arch/avr32/mach-*/include/mach
  avr32: Move include/asm-avr32 to arch/avr32/include/asm
  [ARM] sa1100_wdt: use reset_status to remember watchdog reset status
  [ARM] pxa: introduce reset_status and clear_reset_status for driver's usage
  [ARM] pxa: introduce reset.h for reset specific header information
2008-08-08 11:38:42 -07:00
Luis R. Rodriguez 7d283aee50 list.h: Add list_splice_tail() and list_splice_tail_init()
If you are using linked lists for queues list_splice() will not do what
you would expect even if you use the elements passed reversed. We need
to handle these differently. We add list_splice_tail() and
list_splice_tail_init().
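
Typical queue usage of the new helpers (a minimal sketch):

    LIST_HEAD(queue);     /* existing FIFO */
    LIST_HEAD(incoming);  /* newly built entries, oldest first */

    /* append 'incoming' at the tail so FIFO order is preserved;
     * plain list_splice() would put the entries at the head */
    list_splice_tail_init(&incoming, &queue);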

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2008-08-07 09:49:42 -04:00
Russell King a09e64fbc0 [ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach
This just leaves include/asm-arm/plat-* to deal with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-08-07 09:55:48 +01:00
Maciej Sosnowski 7f1b358a23 I/OAT: I/OAT version 3.0 support
This patch adds to ioatdma and dca modules
support for Intel I/OAT DMA engine ver.3 (aka CB3 device).
The main features of I/OAT ver.3 are:
 * 8 single channel DMA devices (8 channels total)
 * 8 DCA providers, each can accept 2 requesters
 * 8-bit TAG values and 32-bit extended APIC IDs

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-22 17:30:57 -07:00
Maciej Sosnowski 16a37acaaf I/OAT: tcp_dma_copybreak default value dependent on I/OAT version
I/OAT DMA performance tuning showed different optimal values of
tcp_dma_copybreak for different I/OAT versions (4096 for 1.2 and 2048
for 2.0).  This patch lets the ioatdma driver set the tcp_dma_copybreak
value according to these results.

[dan.j.williams@intel.com: remove some ifdefs]
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-22 17:30:57 -07:00
Maciej Sosnowski 09177e85d6 I/OAT: Add watchdog/reset functionality to ioatdma
Due to occasional DMA channel hangs observed for I/OAT versions 1.2 and 2.0,
a watchdog has been introduced to check every 2 seconds
whether all channels progress normally.
If a stuck channel is detected, the driver resets it.
The reset is done in two parts. The second part is scheduled
by the first one to reinitialize the channel after the restart.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-22 10:07:33 -07:00
Dan Williams 5eb907aaaf iop_adma: document how to calculate the minimum descriptor pool size
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-17 17:59:56 -07:00
Dan Williams c7141d005a iop_adma: directly reclaim descriptors on allocation failure
Force callers that trigger an "out of descriptors" condition to run the
cleanup loop directly.  Alleviates the requirement to have soft-irqs
enabled when polling for a descriptor in async_xor.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-17 17:59:56 -07:00
Haavard Skinnemoen 3bfb1d20b5 dmaengine: Driver for the Synopsys DesignWare DMA controller
This adds a driver for the Synopsys DesignWare DMA controller (aka
DMACA on AVR32 systems.) This DMA controller can be found integrated
on the AT32AP7000 chip and is primarily meant for peripheral DMA
transfer, but can also be used for memory-to-memory transfers.

This patch is based on a driver from David Brownell which was based on
an older version of the DMA Engine framework. It also implements the
proposed extensions to the DMA Engine API for slave DMA operations.

The dmatest client shows no problems, but there may still be room for
improvement performance-wise. DMA slave transfer performance is
definitely "good enough"; reading 100 MiB from an SD card running at ~20
MHz yields ~7.2 MiB/s average transfer rate.

Full documentation for this controller can be found in the Synopsys
DW AHB DMAC Databook:

http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf

The controller has lots of implementation options, so it's usually a
good idea to check the data sheet of the chip it's integrated on as
well. The AT32AP7000 data sheet can be found here:

http://www.atmel.com/dyn/products/datasheets.asp?family_id=682


Changes since v4:
  * Use client_count instead of dma_chan_is_in_use()
  * Add missing include
  * Unmap buffers unless client told us not to

Changes since v3:
  * Update to latest DMA engine and DMA slave APIs
  * Embed the hw descriptor into the sw descriptor
  * Clean up and update MODULE_DESCRIPTION, copyright date, etc.

Changes since v2:
  * Dequeue all pending transfers in terminate_all()
  * Rename dw_dmac.h -> dw_dmac_regs.h
  * Define and use controller-specific dma_slave data
  * Fix up a few outdated comments
  * Define hardware registers as structs (doesn't generate better
    code, unfortunately, but it looks nicer.)
  * Get number of channels from platform_data instead of hardcoding it
    based on CONFIG_WHATEVER_CPU.
  * Give slave clients exclusive access to the channel

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:59:42 -07:00
Haavard Skinnemoen dc0ee6435c dmaengine: Add slave DMA interface
This patch adds the necessary interfaces to the DMA Engine framework
to use functionality found on most embedded DMA controllers: DMA from
and to I/O registers with hardware handshaking.

In this context, hardware handshaking means that the peripheral that
owns the I/O registers in question is able to tell the DMA controller
when more data is available for reading, or when there is room for
more data to be written. This usually happens internally on the chip,
but these signals may also be exported outside the chip for things
like IDE DMA, etc.

A new struct dma_slave is introduced. This contains information that
the DMA engine driver needs to set up slave transfers to and from a
slave device. Most engines supporting DMA slave transfers will want to
extend this structure with controller-specific parameters.  This
additional information is usually passed from the platform/board code
through the client driver.

A "slave" pointer is added to the dma_client struct. This must point
to a valid dma_slave structure iff the DMA_SLAVE capability is
requested.  The DMA engine driver may use this information in its
device_alloc_chan_resources hook to configure the DMA controller for
slave transfers from and to the given slave device.

A new operation for preparing slave DMA transfers is added to struct
dma_device. This takes a scatterlist and returns a single descriptor
representing the whole transfer.

Another new operation for terminating all pending transfers is added as
well. The latter is needed because there may be errors outside the scope
of the DMA Engine framework that may require DMA operations to be
terminated prematurely.

DMA Engine drivers may extend the dma_device, dma_chan and/or
dma_slave_descriptor structures to allow controller-specific
operations. The client driver can detect such extensions by looking at
the DMA Engine's struct device, or it can request a specific DMA
Engine device by setting the dma_dev field in struct dma_slave.
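
Roughly, the description looks like the sketch below; only 'dma_dev' and
'reg_width' are named in this changelog, and the remaining field names
and types are placeholders for whatever a given controller needs:

    struct dma_slave {
            struct device   *dma_dev;   /* optional: bind to one controller */
            dma_addr_t      tx_reg;     /* peripheral FIFO addresses */
            dma_addr_t      rx_reg;
            unsigned int    reg_width;  /* register width for the transfers */
    };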

dmaslave interface changes since v4:
  * Fix checkpatch errors
  * Fix changelog (there are no slave descriptors anymore)

dmaslave interface changes since v3:
  * Use dma_data_direction instead of a new enum
  * Submit slave transfers as scatterlists
  * Remove the DMA slave descriptor struct

dmaslave interface changes since v2:
  * Add a dma_dev field to struct dma_slave. If set, the client can
    only be bound to the DMA controller that corresponds to this
    device.  This allows controller-specific extensions of the
    dma_slave structure; if the device matches, the controller may
    safely assume its extensions are present.
  * Move reg_width into struct dma_slave as there are currently no
    users that need to be able to set the width on a per-transfer
    basis.

dmaslave interface changes since v1:
  * Drop the set_direction and set_width descriptor hooks. Pass the
    direction and width to the prep function instead.
  * Declare a dma_slave struct with fixed information about a slave,
    i.e. register addresses, handshake interfaces and such.
  * Add pointer to a dma_slave struct to dma_client. Can be NULL if
    the DMA_SLAVE capability isn't requested.
  * Drop the set_slave device hook since the alloc_chan_resources hook
    now has enough information to set up the channel for slave
    transfers.

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:59:35 -07:00
Dan Williams e1d181efb1 dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap
In some cases client code may need the dma-driver to skip the unmap of source
and/or destination buffers.  Setting these flags indicates to the driver to
skip the unmap step.  In this regard async_xor is currently broken in that it
allows the destination buffer to be unmapped while an operation is still in
progress, i.e. when the number of sources exceeds the hardware channel's
maximum (fixed in a subsequent patch).
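
A sketch of the intended use (the memcpy prep call is just one example of
an operation that accepts these flags):

    struct dma_async_tx_descriptor *tx;
    unsigned long flags;

    flags = DMA_CTRL_ACK | DMA_COMPL_SKIP_DEST_UNMAP;
    tx = chan->device->device_prep_dma_memcpy(chan, dma_dest, dma_src,
                                               len, flags);
    /* the destination stays mapped so it can safely be reused as a
     * source by the next operation in the chain */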

Acked-by: Saeed Bishara <saeed@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:59:12 -07:00
Haavard Skinnemoen 848c536a37 dmaengine: Add dma_client parameter to device_alloc_chan_resources
A DMA controller capable of doing slave transfers may need to know a
few things about the slave when preparing the channel. We don't want
to add this information to struct dma_channel since the channel hasn't
yet been bound to a client at this point.

Instead, pass a reference to the client requesting the channel to the
driver's device_alloc_chan_resources hook so that it can pick the
necessary information from the dma_client struct by itself.
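
In other words, the hook grows a second argument, roughly:

    int (*device_alloc_chan_resources)(struct dma_chan *chan,
                                       struct dma_client *client);

so a slave-capable driver can inspect client-supplied data before
committing channel resources.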

[dan.j.williams@intel.com: fixed up fsldma and mv_xor]
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:58 -07:00
Haavard Skinnemoen 4a776f0aa9 dmatest: Simple DMA memcpy test client
This client tests DMA memcpy using various lengths and various offsets
into the source and destination buffers. It will initialize both
buffers with a repeatable pattern and verify that the DMA engine copies
the requested region and nothing more. It will also verify that the
bytes aren't swapped around, and that the source buffer isn't modified.

The dmatest module can be configured to test a specific device or a
specific channel. It can also test multiple channels at the same time,
and it can start multiple threads competing for the same channel.

Changes since v2:
  * Support testing multiple channels at the same time
  * Support testing with multiple threads competing for the same channel
  * Use counting test patterns in order to catch byte ordering issues

Changes since v1:
  * Remove extra dashes around "help"
  * Remove "default n" from Kconfig
  * Turn TEST_BUF_SIZE into a module parameter
  * Return DMA_NAK instead of DMA_DUP
  * Print unhandled events
  * Support testing specific channels and devices
  * Move to the end of the Makefile

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:45 -07:00
Saeed Bishara ff7b04796d dmaengine: DMA engine driver for Marvell XOR engine
The XOR engine found in Marvell's SoCs and system controllers
provides XOR and DMA operation, iSCSI CRC32C calculation, memory
initialization, and memory ECC error cleanup operation support.

This driver implements the DMA engine API and supports the following
capabilities:
- memcpy
- xor
- memset

The XOR engine can be used by DMA engine clients implemented in the
kernel, one of those clients is the RAID module.  In that case, I
observed 20% improvement in the raid5 write throughput, and 40%
decrease in the CPU utilization when doing array construction; those
results were obtained on a 5182 running at 500MHz.

When enabling the NET DMA client, the performance decreased, so for now
it is recommended to keep this client off.

Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:36 -07:00
Kay Sievers ebabe27626 iop-adma: fix platform driver hotplug/coldplug
Since 43cc71eed1, the platform
modalias is prefixed with "platform:". Add MODULE_ALIAS() to most
of the hotpluggable platform drivers, to re-enable auto loading.

Cc: <stable@kernel.org>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:28 -07:00
Dan Williams 7cc5bf9a3a dmaengine: track the number of clients using a channel
Haavard's dma-slave interface would like to test for exclusive access to a
channel.  The standard channel refcounting is not sufficient in that it
tracks more than just client references, it is also inaccurate as reference
counts are percpu until the channel is removed.

This change also enables a future fix to deallocate resources when a client
declines to use a capable channel.

Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:21 -07:00
Dan Williams 9c402f4e19 dmaengine: remove arch dependency from DMADEVICES
The dependency is redundant since all drivers set their specific arch
dependencies.  The NET_DMA option is modified to be enabled only on platforms
where it is known to have a positive effect.  HAS_DMA is added as an explicit
dependency for the DMADEVICES menu.

Acked-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:12 -07:00
Haavard Skinnemoen 1099dc7924 dmaengine: Couple DMA channels to their physical DMA device
Set the 'parent' field of channel class devices to point to the
physical DMA device initialized by the DMA engine driver.

This allows drivers to use chan->dev.parent for syncing DMA buffers
and adds a 'device' symlink to the real device in
/sys/class/dma/dmaXchanY.

Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:58:05 -07:00
Li Yang 51ee87f27a fsldma: fix incorrect exit path for initialization
Signed-off-by: Li Yang <leoli@freescale.com>
Acked-by: Zhang Wei <zw@zh-kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-07-08 11:57:45 -07:00
Christophe Jaillet eccf2144e1 iop-adma: fixup some kzalloc/memset confusions
1) Remove an explicit memset(.., 0, ...) to a variable allocated with
kzalloc (i.e. 'dest').

2) Allocate 'src' with kmalloc instead of kzalloc as all elements of the
'src' buffer are initialized in a 'for(...)' loop just after.

3) remove useless 'sizeof(u8)', which always returns 1, when computing the
size of the memory to be allocated.
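
For reference, the pattern after the cleanup looks like this (buffer
names and sizes are illustrative):

    void *dest, *src;
    int i;

    dest = kzalloc(PAGE_SIZE, GFP_KERNEL);  /* already zeroed */
    src = kmalloc(PAGE_SIZE, GFP_KERNEL);   /* every byte set below */
    if (!dest || !src)
            goto out;
    for (i = 0; i < PAGE_SIZE; i++)
            ((u8 *)src)[i] = (u8)i;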

Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-05-20 13:51:20 -07:00
Sebastian Siewior 8a5703f846 DMA engine: typo fixes
Spelling fixes for dmaengine.[ch]

Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
2008-04-21 22:38:45 +00:00
Dan Williams 636bdeaa12 dmaengine: ack to flags: make use of the unused bits in the 'ack' field
'ack' is currently a simple integer that flags whether or not a client is done
touching fields in the given descriptor.  It is effectively just a single bit
of information.  Converting this to a flags parameter allows the other bits to
be put to use to control completion actions, like dma-unmap, and capture
results, like xor-zero-sum == 0.

Changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack
   and async_tx_test_ack.
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype
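
For item 1 the conversion is mechanical, along these lines:

    bool done;

    /* before: open-coded use of the old integer field */
    tx->ack = 1;
    done = tx->ack;

    /* after: the bit lives behind accessors on tx->flags */
    async_tx_ack(tx);
    done = async_tx_test_ack(tx);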

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:54 -07:00
Dan Williams c4fe15541d iop-adma: remove the workaround for missed interrupts on iop3xx
This workaround was covering the dependency submission bug in async_tx.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:54 -07:00
Dan Williams ce4d65a5db async_tx: kill ->device_dependency_added
DMA drivers no longer need to be notified of dependency submission
events as async_tx_run_dependencies and async_tx_channel_switch will
handle the scheduling and execution of dependent operations.

[sfr@canb.auug.org.au: extend this for fsldma]
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:54 -07:00
Dan Williams 19242d7233 async_tx: fix multiple dependency submission
Shrink struct dma_async_tx_descriptor and introduce
async_tx_channel_switch to properly inject a channel switch interrupt in
the descriptor stream.  This simplifies the locking model as drivers no
longer need to handle dma_async_tx_descriptor.lock.

Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:05 -07:00
Zhang Wei 1c62979ed2 fsldma: Split the MPC83xx event from MPC85xx and refine irq codes.
Split the MPC83xx EOCDI event from the MPC85xx EOLNI event, which also
needs to update the cookie and start the next transfer.
The DMA channel irq handler function code is refined.
The patch is tested on MPC8377MDS board.

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:22:16 -07:00
Zhang Wei 411e23dbe9 fsldma: Remove CONFIG_FSL_DMA_SELFTEST, keep fsl_dma_self_test() running always.
Always enable fsl_dma_self_test() to ensure the DMA controller
works well after the driver is probed.

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:22:15 -07:00
Kumar Gala 049c9d4553 [POWERPC] fsldma: Use compatible binding as spec
Documentation/powerpc/booting-without-of.txt specifies the
compatibles we should bind to for this driver (elo, eloplus).
Use these instead of the extremely specific 'mpc8540' and 'mpc8349'
compatibles.

Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-03-31 11:45:41 -05:00
Al Viro a4e6d5d381 fix the broken annotations in fsldma
a) every bitwise declaration will give a unique type; use typedefs.

 b) no need to bother with the stuff pointed to by iomem pointers,
    unless it's accessed directly.  noderef will force us to use helpers
    anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-03-30 14:20:24 -07:00
Al Viro 53a0c98e11 ioat_dca __iomem annotations
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-03-30 14:18:41 -07:00
Zhang Wei f79abb627f fsldma: Fix the DMA halt when using DMA_INTERRUPT async_tx transfer.
The DMA_INTERRUPT async_tx is a NULL transfer, thus the BCR (count register)
is 0. When a transfer is started with a byte count of zero, the DMA
controller will trigger a PE (programming error) event and halt, not a normal
interrupt. I added special code for the PE event and DMA_INTERRUPT
async_tx testing.

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-18 17:00:59 -07:00
Harvey Harrison 3d9b525b69 iop-adma.c: replace remaining __FUNCTION__ occurrences
__FUNCTION__ is gcc-specific, use __func__

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:09 -07:00
Zhang Wei 9c98718e73 fsldma: Add a completed cookie updated action in DMA finish interrupt.
The patch 'fsldma: do not cleanup descriptors in hardirq context'
(commit 222ccf9ab8) moved the descriptor
cleanup function to the tasklet, but the completed cookie was not updated.
Thus, the DMA controller will get lots of duplicated transfer
interrupts. Just update the completed cookie in the interrupt handler
and keep the other cleanup jobs in the tasklet function.

Tested-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:09 -07:00
Zhang Wei 2187c269ad fsldma: Add device_prep_dma_interrupt support to fsldma.c
This is a bug: I assigned the DMA_INTERRUPT capability to fsldma
but the device_prep_dma_interrupt function was missing. Due to a bug in
dmaengine.c the driver still passed the BUG_ON() checking. The patch fixes it.

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:08 -07:00
Zhang Wei 9b941c6660 dmaengine: Fix a bug about BUG_ON() on DMA engine capability DMA_INTERRUPT.
The device->device_prep_dma_interrupt function is used by
DMA_INTERRUPT capability, not DMA_ZERO_SUM.

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:07 -07:00
Zhang Wei 56822843ff fsldma: Fix fsldma.c warning messages when it's compiled under PPC64.
There are warning messages reported by Stephen Rothwell with
ARCH=powerpc allmodconfig build:

drivers/dma/fsldma.c: In function 'fsl_dma_prep_memcpy':
drivers/dma/fsldma.c:439: warning: comparison of distinct pointer types
lacks a cast
drivers/dma/fsldma.c: In function 'fsl_chan_xfer_ld_queue':
drivers/dma/fsldma.c:584: warning: format '%016llx' expects type 'long long
unsigned int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c: In function 'fsl_dma_chan_do_interrupt':
drivers/dma/fsldma.c:668: warning: format '%x' expects type 'unsigned int',
but argument 5 has type 'dma_addr_t'
drivers/dma/fsldma.c:684: warning: format '%016llx' expects type 'long long
unsigned int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c:684: warning: format '%016llx' expects type 'long long
unsigned int', but argument 5 has type 'dma_addr_t'
drivers/dma/fsldma.c:701: warning: format '%02x' expects type 'unsigned
int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c: In function 'fsl_dma_self_test':
drivers/dma/fsldma.c:840: warning: format '%d' expects type 'int', but
argument 5 has type 'size_t'
drivers/dma/fsldma.c: In function 'of_fsl_dma_probe':
drivers/dma/fsldma.c:1010: warning: format '%08x' expects type 'unsigned
int', but argument 5 has type 'resource_size_t'

This patch fixed the above warning messages.
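
The usual fix for this class of warning is an explicit cast, since
dma_addr_t, size_t and resource_size_t change width between 32-bit and
64-bit builds; for example (names illustrative):

    dev_dbg(fsl_chan->dev, "descriptor at 0x%llx\n",
            (unsigned long long)desc_addr);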

Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:07 -07:00
Dan Williams 6497dcffe0 ioat: fix 'ack' handling, driver must ensure that 'ack' is zero
Initialize 'ack' to zero in case the descriptor has been recycled.

Prevents "kernel BUG at crypto/async_tx/async_xor.c:185!"

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: stable@kernel.org
2008-03-04 10:16:46 -07:00
Dan Williams 222ccf9ab8 fsldma: do not cleanup descriptors in hardirq context
"Cleaning" descriptors involves calling pending callbacks and clients
assume that their callback will only ever happen in softirq context.
Delay cleanup to the tasklet.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Zhang Wei <wei.zhang@freescale.com>
2008-03-04 10:16:46 -07:00
Zhang Wei 173acc7ce8 dmaengine: add driver for Freescale MPC85xx DMA controller
The driver implements DMA engine API for Freescale MPC85xx DMA controller,
which could be used by devices in the silicon.  The driver supports the
Basic mode of Freescale MPC85xx DMA controller.  The MPC85xx processors
supported include MPC8540/60, MPC8555, MPC8548, MPC8641 and so on.

The MPC83xx(MPC8349, MPC8360) are also supported.

[kamalesh@linux.vnet.ibm.com: build fix]
[dan.j.williams@intel.com: merge mm fixes, rebase on async_tx-2.6.25]
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Ebony Zhu <ebony.zhu@freescale.com>
Acked-by: Kumar Gala <galak@gate.crashing.org>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-04 10:16:46 -07:00
Dan Williams d4c56f97ff async_tx: replace 'int_en' with operation preparation flags
Pass a full set of flags to drivers' per-operation 'prep' routines.
Currently the only flag passed is DMA_PREP_INTERRUPT.  The expectation is
that arch-specific async_tx_find_channel() implementations can exploit this
capability to find the best channel for an operation.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
2008-02-06 10:12:18 -07:00
Dan Williams 0036731c88 async_tx: kill tx_set_src and tx_set_dest methods
The tx_set_src and tx_set_dest methods were originally implemented to allow
an array of addresses to be passed down from async_xor to the dmaengine
driver while minimizing stack overhead.  Removing these methods allows
drivers to have all transaction parameters available at 'prep' time, saves
two function pointers in struct dma_async_tx_descriptor, and reduces the
number of indirect branches.

A consequence of moving this data to the 'prep' routine is that
multi-source routines like async_xor need temporary storage to convert an
array of linear addresses into an array of dma addresses.  In order to keep
the same stack footprint of the previous implementation the input array is
reused as storage for the dma addresses.  This requires that
sizeof(dma_addr_t) be less than or equal to sizeof(void *).  As a
consequence CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G.  It also
requires that drivers be able to make descriptor resources available when
the 'prep' routine is polled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
2008-02-06 10:12:17 -07:00
Denis Cheng e73ef9acfd iop-adma: use LIST_HEAD instead of LIST_HEAD_INIT
These three list_head variables are all local, so they can use LIST_HEAD.

Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-02-06 10:12:17 -07:00
Tony Jones 891f78ea83 DMA: Convert from class_device to device for DMA engine
Signed-off-by: Tony Jones <tonyj@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-01-24 20:40:05 -08:00
Shannon Nelson bb8e8bcce7 I/OAT: fix null device in call to dev_err()
We can't use the device in a dev_err() after a kzalloc failure or after the
kfree, so simplify it to the pdev that was originally passed in.

Cc: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-12-17 19:28:17 -08:00
Shannon Nelson 711924b105 I/OAT: fixups from code comments
A few fixups from Andrew's code comments.
  - removed "static inline" forward-declares
  - changed use of min() to min_t()
  - removed some unnecessary NULL initializations
  - removed a couple of BUG() calls

Fixes this:

drivers/dma/ioat_dma.c: In function `ioat1_tx_submit':
drivers/dma/ioat_dma.c:177: sorry, unimplemented: inlining failed in call to '__ioat1_dma_memcpy_issue_pending': function body not available
drivers/dma/ioat_dma.c:268: sorry, unimplemented: called from here

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: "Williams, Dan J" <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-12-17 19:28:17 -08:00
Haavard Skinnemoen 6d4f5879b6 dmaengine: correct invalid assumptions in the Kconfig text
This patch corrects recently changed (and now invalid) Kconfig descriptions
for the DMA engine framework:

 - Non-Intel(R) hardware also has DMA engines;
 - DMA is used for more than memcpy and RAID offloading.

In fact, on most platforms memcpy and RAID aren't factors, and DMA
exists so that peripherals can transfer data to/from memory while
the CPU does other work.

Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-11-29 09:24:53 -08:00
Shannon Nelson 7bb67c14fd I/OAT: Add support for version 2 of ioatdma device
Add support for version 2 of the ioatdma device.  This device handles
the descriptor chain and DCA services slightly differently:
 - Instead of moving the dma descriptors between a busy and an idle chain,
   this new version uses a single circular chain so that we don't have to
   rewrite the next_descriptor pointers as we add new requests, and the
   device doesn't need to re-read the last descriptor.
 - The new device has the DCA tags defined internally instead of needing
   them defined statically.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: "Williams, Dan J" <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-11-14 18:45:41 -08:00
Haavard Skinnemoen 348badf1e8 dmaengine: fix broken device refcounting
When a DMA device is unregistered, its reference count is decremented twice
for each channel: once in dma_class_dev_release() and once in
dma_chan_cleanup().  This may result in the DMA device driver's remove()
function completing before all channels have been cleaned up, causing lots
of use-after-free fun.

Fix it by incrementing the device's reference count twice for each
channel during registration.

[dan.j.williams@intel.com: kill unnecessary client refcounting]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-11-14 18:45:39 -08:00
Andi Kleen 4138f08d1c Remove bogus default y for DMAR and NET_DMA
No reason I can think of for making them default y.  Most people don't have
the hardware, and with default y they just pollute lots of configs during
make oldconfig.

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jeff Garzik <jeff@garzik.org>
Acked-by: "Nelson, Shannon" <shannon.nelson@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-30 08:06:55 -07:00
Shannon Nelson 952184304f I/OAT: Add completion callback for async_tx interface use
The async_tx interface includes a completion callback.  This adds support
for using that callback, including using interrupts on completion.

[akpm@linux-foundation.org: various fixes]
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-18 14:37:32 -07:00
Shannon Nelson 7f2b291f56 I/OAT: Tighten descriptor setup performance
The change to the async_tx interface cost this driver some performance by
spreading the descriptor setup across several functions, including multiple
passes over the new descriptor chain.  Here we bring the work back into one
primary function and only do one pass.

[akpm@linux-foundation.org: cleanups, uninline]
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-18 14:37:32 -07:00
Shannon Nelson 5149fd010f I/OAT: clean up error handling and some print messages
Make better use of dev_err(), and catch an error where the transaction
creation might fail.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-18 14:37:32 -07:00
Shannon Nelson dfe2299e7b I/OAT: clean up of dca provider start and stop
Don't start ioat_dca if ioat_dma didn't start, and then stop ioat_dca
before stopping ioat_dma.  Since the ioat_dma side does the pci device
work, this takes care of ioat_dca trying to use a bad device reference.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-18 14:37:32 -07:00
Shannon Nelson 7df7cf0676 I/OAT: cleanup pci issues
Reorder the pci release actions
    Letting go of the resources in the right order helps get rid of
    occasional kernel complaints.

Fix the pci_driver object name [Randy Dunlap]
    Rename the struct pci_driver data so that false section mismatch
    warnings won't be produced.

Cc: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-18 14:37:32 -07:00
Rusty Russell af49d9248f Remove "unsafe" from module struct
Adrian Bunk points out that "unsafe" was used to mark modules touched by
the deprecated MOD_INC_USE_COUNT interface, which has long gone.  It's time
to remove the member from the module structure, as well.

If you want a module which can't unload, don't register an exit function.

(Vlad Yasevich says SCTP is now safe to unload, so just remove the
__unsafe there).
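
For illustration, a module that must never unload simply omits the exit
handler (minimal sketch):

#include <linux/init.h>
#include <linux/module.h>

static int __init pinned_init(void)
{
	return 0;	/* no module_exit() registered, so unloading is refused */
}
module_init(pinned_init);
MODULE_LICENSE("GPL");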

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Cc: Sridhar Samudrala <sri@us.ibm.com>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17 08:42:49 -07:00
Shannon Nelson 2ed6dc34f9 I/OAT: Add DCA services
Add code to connect to the DCA driver and provide cpu tags for use by
drivers that would like to use Direct Cache Access hints.

    [Adrian Bunk]                Several Kconfig cleanup items
    [Andrew Morton, Chris Leech] Fix for using cpu_physical_id() even when
			         built for uni-processor

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson 3e037454bc I/OAT: Add support for MSI and MSI-X
Add support for MSI and MSI-X interrupt handling, including the ability
to choose the desired interrupt method.
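
The usual fallback order, sketched with illustrative names (the enum and
function below are not the driver's actual identifiers):

enum ioat_irq_mode { IOAT_MSIX, IOAT_MSI, IOAT_INTX };

static enum ioat_irq_mode ioat_pick_irq_mode(struct pci_dev *pdev,
					     struct msix_entry *msix, int nvec)
{
	if (pci_enable_msix(pdev, msix, nvec) == 0)
		return IOAT_MSIX;	/* one vector per channel */
	if (pci_enable_msi(pdev) == 0)
		return IOAT_MSI;	/* single message for the device */
	return IOAT_INTX;		/* fall back to the legacy pin */
}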

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
[bunk@kernel.org: drivers/dma/ioat_dma.c: make 3 functions static]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson 8ab89567da I/OAT: Split PCI startup from DMA handling code
Split the general PCI startup from the DMA handling code in order to
prepare for adding support for DCA services and future versions of the
ioatdma device.

    [Rusty Russell] Removal of __unsafe() usage.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson 43d6e369d4 I/OAT: code cleanup from checkpatch output
Take care of a bunch of little code nits in the ioatdma files.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson 1fda5f4e96 I/OAT: Rename the source file
Rename the ioatdma.c file in preparation for splitting into multiple files,
which will make it easier to add new functionality.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson 223758c77a I/OAT: New device ids
Add device ids for new revs of the Intel I/OAT DMA engine

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:43:09 -07:00
Shannon Nelson e422397634 [IOAT]: ioatdma needs to play nice in a multi-dma-client world
Now that the DMA engine has a multi-client interface, fix the ioatdma
driver to play along.  At the same time, remove a couple of unnecessary
reads and writes.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-26 18:35:40 -07:00
Shannon Nelson 54a09feb0e [IOAT]: Remove redundant struct member to avoid descriptor cache miss
The layout for struct ioat_desc_sw is non-optimal and causes an extra
cache miss for every descriptor processed.  By tightening up the struct
layout and removing one item, we pull in the fields that get used in
the speedpath and get a little better performance.


Before:
-------
struct ioat_desc_sw {
	struct ioat_dma_descriptor * hw;                 /*     0     8 */
	struct list_head           node;                 /*     8    16 */
	int                        tx_cnt;               /*    24     4 */

	/* XXX 4 bytes hole, try to pack */

	dma_addr_t                 src;                  /*    32     8 */
	__u32                      src_len;              /*    40     4 */

	/* XXX 4 bytes hole, try to pack */

	dma_addr_t                 dst;                  /*    48     8 */
	__u32                      dst_len;              /*    56     4 */

	/* XXX 4 bytes hole, try to pack */

	/* --- cacheline 1 boundary (64 bytes) --- */
	struct dma_async_tx_descriptor async_tx;         /*    64   144 */
	/* --- cacheline 3 boundary (192 bytes) was 16 bytes ago --- */

	/* size: 208, cachelines: 4 */
	/* sum members: 196, holes: 3, sum holes: 12 */
	/* last cacheline: 16 bytes */
};	/* definitions: 1 */


After:
------

struct ioat_desc_sw {
	struct ioat_dma_descriptor * hw;                 /*     0     8 */
	struct list_head           node;                 /*     8    16 */
	int                        tx_cnt;               /*    24     4 */
	__u32                      len;                  /*    28     4 */
	dma_addr_t                 src;                  /*    32     8 */
	dma_addr_t                 dst;                  /*    40     8 */
	struct dma_async_tx_descriptor async_tx;         /*    48   144 */
	/* --- cacheline 3 boundary (192 bytes) --- */

	/* size: 192, cachelines: 3 */
};	/* definitions: 1 */


Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-14 17:36:31 -07:00
Shannon Nelson 342ff7b24f [NET_DMA]: remove unused dma_memcpy_to_kernel_iovec
Al Viro pointed out that dma_memcpy_to_kernel_iovec() really was
unreachable and thus unused.  The code was originally there to support
in-kernel dma needs, but since it remains unused, we'll pull it out.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-31 02:28:03 -07:00
Dan Williams 1b0fac4587 dma-mapping: prevent dma dependent code from linking on !HAS_DMA archs
Continuing the work started in 411f0f3edc ...

This enables code with a dma path that compiles away to build without
requiring additional code factoring.  It also prevents code that calls
dma_alloc_coherent and dma_free_coherent from linking whereas previously
the code would hit a BUG() at run time.  Finally, it allows archs that set
!HAS_DMA to delete their asm/dma-mapping.h file.
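
The mechanism is roughly the following header split (a sketch; the generic
fallback header's contents are from memory and may differ):

/* linux/dma-mapping.h, in outline */
#ifdef CONFIG_HAS_DMA
#include <asm/dma-mapping.h>
#else
/* declared but never defined: any caller that survives into the final
 * image fails at link time instead of hitting a BUG() at run time */
extern void *dma_alloc_coherent(struct device *dev, size_t size,
				dma_addr_t *handle, gfp_t gfp);
extern void dma_free_coherent(struct device *dev, size_t size,
			      void *cpu_addr, dma_addr_t handle);
#endif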

Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: John W. Linville <linville@tuxdriver.com>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: James Bottomley <James.Bottomley@SteelEye.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: <geert@linux-m68k.org>
Cc: <zippel@linux-m68k.org>
Cc: <spyro@f2s.com>
Cc: <ysato@users.sourceforge.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16 09:05:45 -07:00
Dan Williams 3039f0735a ioatdma: add the unisys "i/oat" pci vendor/device id
Cc: John Magolan <john.magolan@unisys.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-07-13 08:06:19 -07:00
Dan Williams c211092313 dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines
The Intel(R) IOP series of i/o processors integrate an Xscale core with
raid acceleration engines.  The capabilities per platform are:

iop219:
 (2) copy engines
iop321:
 (2) copy engines
 (1) xor and block fill engine
iop33x:
 (2) copy and crc32c engines
 (1) xor, xor zero sum, pq, pq zero sum, and block fill engine
iop34x (iop13xx):
 (2) copy, crc32c, xor, xor zero sum, and block fill engines
 (1) copy, crc32c, xor, xor zero sum, pq, pq zero sum, and block fill engine

The driver supports the features of the async_tx api:
* asynchronous notification of operation completion
* implicit (interrupt triggered) handling of inter-channel transaction
  dependencies

The driver adapts to the platform it is running on by two methods:
1/ #include <asm/arch/adma.h> which defines the hardware specific
   iop_chan_* and iop_desc_* routines as a series of static inline
   functions
2/ The private platform data attached to the platform_device defines the
   capabilities of the channels
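
Method 2 amounts to handing the driver something like the following through
the platform_device (field names recalled from the iop-adma platform header
and possibly inexact):

struct iop_adma_platform_data {
	int hw_id;			/* which engine on the die */
	dma_cap_mask_t cap_mask;	/* DMA_MEMCPY, DMA_XOR, ... */
	size_t pool_size;		/* descriptor pool to allocate */
};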

20070626: Callbacks are run in a tasklet.  Given the recent discussion on
LKML about killing tasklets in favor of workqueues I did a quick conversion
of the driver.  Raid5 resync performance dropped from 50MB/s to 30MB/s, so
the tasklet implementation remains until a generic softirq interface is
available.

Changelog:
* fixed a slot allocation bug in do_iop13xx_adma_xor that caused too few
slots to be requested eventually leading to data corruption
* enabled the slot allocation routine to attempt to free slots before
returning -ENOMEM
* switched the cleanup routine to solely use the software chain and the
status register to determine if a descriptor is complete.  This is
necessary to support other IOP engines that do not have status writeback
capability
* make the driver iop generic
* modified the allocation routines to understand allocating a group of
slots for a single operation
* added a null xor initialization operation for the xor only channel on
iop3xx
* support xor operations on buffers larger than the hardware maximum
* split the do_* routines into separate prep, src/dest set, submit stages
* added async_tx support (dependent operations initiation at cleanup time)
* simplified group handling
* added interrupt support (callbacks via tasklets)
* brought the pending depth inline with ioat (i.e. 4 descriptors)
* drop dma mapping methods, suggested by Chris Leech
* don't use inline in C files, Adrian Bunk
* remove static tasklet declarations
* make iop_adma_alloc_slots easier to read and remove chances for a
  corrupted descriptor chain
* fix locking bug in iop_adma_alloc_chan_resources, Benjamin Herrenschmidt
* convert capabilities over to dma_cap_mask_t
* fixup sparse warnings
* add descriptor flush before iop_chan_enable
* checkpatch.pl fixes
* gpl v2 only correction
* move set_src, set_dest, submit to async_tx methods
* move group_list and phys to async_tx

Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-07-13 08:06:18 -07:00
Dan Williams 9bc89cd82d async_tx: add the async_tx api
The async_tx api provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the api can optimize for asynchronous operation and the
api will fit the chain of operations to the available offload resources. 
 
	I imagine that any piece of ADMA hardware would register with the
	'async_*' subsystem, and a call to async_X would be routed as
	appropriate, or be run in-line. - Neil Brown

async_tx exploits the capabilities of struct dma_async_tx_descriptor to
provide an api of the following general format:

struct dma_async_tx_descriptor *
async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			dma_async_tx_callback cb_fn, void *cb_param)
{
	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
	struct dma_device *device = chan ? chan->device : NULL;
	int int_en = cb_fn ? 1 : 0;
	struct dma_async_tx_descriptor *tx = device ?
		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

	if (tx) { /* run <operation> asynchronously */
		...
		tx->tx_set_dest(addr, tx, index);
		...
		tx->tx_set_src(addr, tx, index);
		...
		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
	} else { /* run <operation> synchronously */
		...
		<operation>
		...
		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
	}

	return tx;
}

async_tx_find_channel() returns a capable channel from its pool.  The
channel pool is organized as a per-cpu array of channel pointers.  The
async_tx_rebalance() routine is tasked with managing these arrays.  In the
uniprocessor case async_tx_rebalance() tries to spread responsibility
evenly over channels of similar capabilities.  For example if there are two
copy+xor channels, one will handle copy operations and the other will
handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
channel0 while cpu1 gets copy channel 1 and xor channel 1.  When a
dependency is specified async_tx_find_channel defaults to keeping the
operation on the same channel.  An xor->copy->xor chain will stay on one
channel if it supports both operation types, otherwise the transaction will
transition between a copy and an xor resource.
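
The channel lookup described above reduces to a table access, shown here
with a flat array for clarity (the real code keeps the table in per-cpu
storage; names are illustrative):

/* one channel pointer per (cpu, operation) pair, filled in by
 * async_tx_rebalance(); the lookup itself is lock-free */
static struct dma_chan *channel_table[NR_CPUS][DMA_TX_TYPE_END];

static struct dma_chan *find_channel(enum dma_transaction_type op)
{
	return channel_table[smp_processor_id()][op];
}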

Currently the raid5 implementation in the MD raid456 driver has been
converted to the async_tx api.  A driver for the offload engines on the
Intel Xscale series of I/O processors, iop-adma, is provided in a later
commit.  With the iop-adma driver and async_tx, raid456 is able to offload
copy, xor, and xor-zero-sum operations to hardware engines.
 
On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
improvement) and sequential reads to a degraded array (40 - 55%
improvement).  For the other cases performance was roughly equal, +/- a few
percentage points.  On a x86-smp platform the performance of the async_tx
implementation (in synchronous mode) was also +/- a few percentage points
of the original implementation.  According to 'top' on iop342 CPU
utilization drops from ~50% to ~15% during a 'resync' while the speed
according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
 
The tiobench command line used for testing was: tiobench --size 2048
--block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available

Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
  async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will
  fire at operation completion time and the callback will occur in a
  tasklet.  If the channel does not support interrupts then a live
  polling wait will be performed
* the api is written as a dmaengine client that requests all available
  channels
* In support of dependencies the api implicitly schedules channel-switch
  interrupts.  The interrupt triggers the cleanup tasklet which causes
  pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software
  xor routine.  To the software routine the destination address is an implied
  source, whereas engines treat it as a write-only destination.  This patch
  modifies the xor_blocks routine to take an explicit destination address
  to mirror the hardware.

Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and
  interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities
  evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to
  the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and
  the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
2007-07-13 08:06:14 -07:00
Dan Williams d379b01e90 dmaengine: make clients responsible for managing channels
The current implementation assumes that a channel will only be used by one
client at a time.  In order to enable channel sharing the dmaengine core is
changed to a model where clients subscribe to channel-available-events.
Instead of tracking how many channels a client wants and how many it has
received the core just broadcasts the available channels and lets the
clients optionally take a reference.  The core learns about the clients'
needs at dma_event_callback time.

In support of multiple operation types, clients can specify a capability
mask to only be notified of channels that satisfy a certain set of
capabilities.
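
In outline, a client under this model looks something like the following
(the callback signature, enum values, and cap_mask field follow the
dmaengine API of this period as best recalled; treat them as approximate):

static enum dma_state_client
my_event(struct dma_client *client, struct dma_chan *chan,
	 enum dma_state state)
{
	if (state == DMA_RESOURCE_AVAILABLE)
		return DMA_ACK;		/* take a reference on this channel */
	return DMA_NAK;			/* decline; DMA_DUP = already holding it */
}

static struct dma_client my_client = {
	.event_callback = my_event,
	/* cap_mask limits notifications to channels with these capabilities */
};

	dma_cap_set(DMA_MEMCPY, my_client.cap_mask);
	dma_async_client_register(&my_client);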

Changelog:
* removed DMA_TX_ARRAY_INIT, no longer needed
* dma_client_chan_free -> dma_chan_release: switch to global reference
  counting only at device unregistration time, before it was also happening
  at client unregistration time
* clients now return dma_state_client to dmaengine (ack, dup, nak)
* checkpatch.pl fixes
* fixup merge with git-ioat

Cc: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
2007-07-13 08:06:13 -07:00
Dan Williams 7405f74bad dmaengine: refactor dmaengine around dma_async_tx_descriptor
The current dmaengine interface defines multiple routines per operation,
i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page etc.  Adding
more operation types (xor, crc, etc) to this model would result in an
unmanageable number of method permutations.

	Are we really going to add a set of hooks for each DMA engine
	whizbang feature?
		- Jeff Garzik

The descriptor creation process is refactored using the new common
dma_async_tx_descriptor structure.  Instead of per driver
do_<operation>_<dest>_to_<src> methods, drivers integrate
dma_async_tx_descriptor into their private software descriptor and then
define a 'prep' routine per operation.  The prep routine allocates a
descriptor and ensures that the tx_set_src, tx_set_dest, tx_submit routines
are valid.  Descriptor creation and submission becomes:

struct dma_device *dev;
struct dma_chan *chan;
struct dma_async_tx_descriptor *tx;

tx = dev->device_prep_dma_<operation>(chan, len, int_flag)
tx->tx_set_src(dma_addr_t, tx, index /* for multi-source ops */)
tx->tx_set_dest(dma_addr_t, tx, index)
tx->tx_submit(tx)

In addition to the refactoring, dma_async_tx_descriptor also lays the
groundwork for defining cross-channel-operation dependencies, and a
callback facility for asynchronous notification of operation completion.

Changelog:
* drop dma mapping methods, suggested by Chris Leech
* fix ioat_dma_dependency_added, also caught by Andrew Morton
* fix dma_sync_wait, change from Andrew Morton
* uninline large functions, change from Andrew Morton
* add tx->callback = NULL to dmaengine calls to interoperate with async_tx
  calls
* hookup ioat_tx_submit
* convert channel capabilities to a 'cpumask_t like' bitmap
* removed DMA_TX_ARRAY_INIT, no longer needed
* checkpatch.pl fixes
* make set_src, set_dest, and tx_submit descriptor specific methods
* fixup git-ioat merge
* move group_list and phys to dma_async_tx_descriptor

Cc: Jeff Garzik <jeff@garzik.org>
Cc: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
2007-07-13 08:06:11 -07:00
Dan Aloni 428ed6024f I/OAT: fix I/OAT for kexec
Under kexec, I/OAT initialization breaks over busy resources because the
previous kernel did not release them.

I'm not sure this fix can be considered a complete one, but it works for me.
I guess something similar to the *_remove method should occur there.

Signed-off-by: Dan Aloni <da-x@monatomic.org>
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2007-07-11 16:10:53 -07:00
Chris Leech 70774b4739 ioatdma: Remove the use of writeq from the ioatdma driver
There's only one now anyway, and it's not in a performance path,
so make it behave the same on 32-bit and 64-bit CPUs.
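
The replacement is the usual pair of 32-bit writes (an illustrative helper;
the register name and dword ordering here are only an example):

static inline void write_chainaddr(void __iomem *reg, u64 addr)
{
	/* identical behaviour on 32-bit and 64-bit kernels */
	writel((u32)addr, reg);
	writel((u32)(addr >> 32), reg + 4);
}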

Signed-off-by: Chris Leech <christopher.leech@intel.com>
2007-07-11 15:39:04 -07:00
Chris Leech e38288117c ioatdma: Remove the wrappers around read(bwl)/write(bwl) in ioatdma
Signed-off-by: Chris Leech <christopher.leech@intel.com>
2007-07-11 15:39:04 -07:00
Jeff Garzik ff487fb773 drivers/dma: handle sysfs errors
From: Jeff Garzik <jeff@garzik.org>

Signed-off-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Chris Leech <christopher.leech@intel.com>
2007-07-11 15:39:03 -07:00
Chris Leech 000725d56a ioatdma: Push pending transactions to hardware more frequently
Appending only once every 20 descriptors turns out to be too infrequent for
newer/faster CPUs.  Pushing every 4 still cuts down on MMIO writes to an
acceptable level without letting the DMA engine run out of work.
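
Conceptually the issue-pending path becomes (field, constant, and register
names here are approximations, not the exact driver symbols):

#define IOAT_PENDING_LEVEL 4

	if (++ioat_chan->pending >= IOAT_PENDING_LEVEL) {
		ioat_chan->pending = 0;
		writeb(IOAT_CHANCMD_APPEND,	/* tell hw about the new descriptors */
		       ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
	}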

Signed-off-by: Chris Leech <christopher.leech@intel.com>
2007-07-11 15:39:03 -07:00
Randy Dunlap 92504f79a7 IOATDMA: fix section mismatches
Rename struct pci_driver data so that false section mismatch warnings won't
be produced.

Sam, ISTM that depending on variable names is the weakest & worst part of
modpost section checking.  Should __init_refok work here?  I got build
errors when I tried to use it, probably because the struct pci_driver probe
and remove methods are not marked "__init_refok".

WARNING: drivers/dma/ioatdma.o(.data+0x10): Section mismatch: reference to .init.text: (between 'ioat_pci_drv' and 'ioat_pci_tbl')
WARNING: drivers/dma/ioatdma.o(.data+0x14): Section mismatch: reference to .exit.text: (between 'ioat_pci_drv' and 'ioat_pci_tbl')

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Acked-by: Chris Leech <christopher.leech@intel.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-06-28 11:34:53 -07:00
Martin Schwidefsky 9556fb73ed [S390] Kconfig: unwanted menus for s390.
Disable some more menus in the configuration files that are of no
interest to a s390 machine.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-05-10 15:46:07 +02:00
David Brownell 765e3d8a71 [PATCH] rm pointless dmaengine exports
This removes several pointless exports from drivers/dma/dmaengine.c; the
dma_async_memcpy_*() functions are inlined by <linux/dmaengine.h> so those
exports are inappropriate.

It also moves the existing EXPORT_SYMBOL declarations next to their functions,
so it's now trivial to confirm one-to-one correspondence between exports and
nonstatic symbols.

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-03-16 19:25:03 -07:00
Christoph Lameter e94b176609 [PATCH] slab: remove SLAB_KERNEL
SLAB_KERNEL is an alias of GFP_KERNEL.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07 08:39:24 -08:00
Al Viro 47b16539e1 [PATCH] drivers/dma trivial annotations
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-10 15:37:21 -07:00
David Howells 7d12e780e0 IRQ: Maintain regs pointer globally rather than passing to IRQ handlers
Maintain a per-CPU global "struct pt_regs *" variable which can be used instead
of passing regs around manually through all ~1800 interrupt handlers in the
Linux kernel.

The regs pointer is used in few places, but it potentially costs both stack
space and code to pass it around.  On the FRV arch, removing the regs parameter
from all the genirq functions results in a 20% speed-up of the IRQ exit path
(ie: from leaving timer_interrupt() to leaving do_IRQ()).

Where appropriate, an arch may override the generic storage facility and do
something different with the variable.  On FRV, for instance, the address is
maintained in GR28 at all times inside the kernel as part of general exception
handling.

Having looked over the code, it appears that the parameter may be handed down
through up to twenty or so layers of functions.  Consider a USB character
device attached to a USB hub, attached to a USB controller that posts its
interrupts through a cascaded auxiliary interrupt controller.  A character
device driver may want to pass regs to the sysrq handler through the input
layer which adds another few layers of parameter passing.

I've built this code with allyesconfig for x86_64 and i386.  I've run-tested the
main part of the code on FRV and i386, though I can't test most of the drivers.
I've also done partial conversion for powerpc and MIPS - these at least compile
with minimal configurations.

This will affect all archs.  Mostly the changes should be relatively easy.
Take do_IRQ(), store the regs pointer at the beginning, saving the old one:

	struct pt_regs *old_regs = set_irq_regs(regs);

And put the old one back at the end:

	set_irq_regs(old_regs);

Don't pass regs through to generic_handle_irq() or __do_IRQ().

In timer_interrupt(), this sort of change will be necessary:

	-	update_process_times(user_mode(regs));
	-	profile_tick(CPU_PROFILING, regs);
	+	update_process_times(user_mode(get_irq_regs()));
	+	profile_tick(CPU_PROFILING);

I'd like to move update_process_times()'s use of get_irq_regs() into itself,
except that i386, alone of the archs, uses something other than user_mode().

Some notes on the interrupt handling in the drivers:

 (*) input_dev() is now gone entirely.  The regs pointer is no longer stored in
     the input_dev struct.

 (*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking.  It does
     something different depending on whether it's been supplied with a regs
     pointer or not.

 (*) Various IRQ handler function pointers have been moved to type
     irq_handler_t.

Signed-Off-By: David Howells <dhowells@redhat.com>
(cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)
2006-10-05 15:10:12 +01:00
Henrik Kretzschmar b826315813 [I/OAT]: Remove pci_module_init() from Intel I/OAT DMA engine
Changes pci_module_init() to pci_register_driver().

Signed-off-by: Henrik Kretzschmar <henne@nachtwindheim.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-21 14:50:13 -07:00
Randy Dunlap 6508871edd [IOAT]: fix kernel-doc in source files
Fix kernel-doc warnings in drivers/dma/:
- use correct function & parameter names
- add descriptions where omitted

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-03 19:45:31 -07:00
Benoit Boissinot 518d1c9679 [IOAT]: Fix a warning in ioatdma
drivers/dma/ioatdma.c: In function 'ioat_init_module':
drivers/dma/ioatdma.c:830: warning: control reaches end of non-void function

Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-03 19:28:13 -07:00
Adrian Bunk 56e0873b7b [IOAT]: drivers/dma/iovlock.c: make num_pages_spanned() static
This patch makes the needlessly global num_pages_spanned() static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-03 19:27:20 -07:00
Randy Dunlap c1b4df5d2a [IOAT]: fix sparse ulong warning
Fix sparse warning:
drivers/dma/ioatdma.c:444:32: warning: constant 0xFFFFFFFFFFFFFFC0 is so big it is unsigned long

Also needs a MAINTAINERS entry.

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-03 19:24:19 -07:00
Thomas Gleixner dace145374 [PATCH] irq-flags: misc drivers: Use the new IRQF_ constants
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-02 13:58:50 -07:00
David S. Miller 8070b2b1ec [IOAT]: Do not dereference THIS_MODULE directly to set unsafe.
Use the __unsafe() macro instead.

Noticed by Miles Lane.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-26 00:10:46 -07:00
Andrew Morton 17f3ae08b6 [I/OAT]: Do not use for_each_cpu().
for_each_cpu() is going away (and is gone in -mm).

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:25:58 -07:00
Chris Leech de5506e155 [I/OAT]: Utility functions for offloading sk_buff to iovec copies
Provides for pinning user space pages in memory, copying to iovecs,
and copying from sk_buffs including fragmented and chained sk_buffs.

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:25:46 -07:00
Chris Leech db21733488 [I/OAT]: Setup the networking subsystem as a DMA client
Attempts to allocate per-CPU DMA channels

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:24:58 -07:00
David S. Miller 57c651f74c [I/OAT]: Move PCI_DEVICE_ID_INTEL_IOAT to linux/pci_ids.h
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:18:50 -07:00
David S. Miller 6b00c92c4b [I/OAT]: ioatdma.c needs linux/dma-mapping.h
For DMA_*_MASK defines.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:18:48 -07:00
Chris Leech 0bbd5f4e97 [I/OAT]: Driver for the Intel(R) I/OAT DMA engine
Adds a new ioatdma driver

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:18:46 -07:00
Chris Leech c13c8260da [I/OAT]: DMA memcpy subsystem
Provides an API for offloading memory copies to DMA devices

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 21:18:43 -07:00