Commit Graph

70 Commits

Linus Walleij 29eb7bd01e mmc: card: do away with indirection pointer
We have enough vtables in the kernel as it is; we don't need
this one creating even more artificial separation of concerns.

As is proved by the Makefile:

obj-$(CONFIG_MMC_BLOCK)         += mmc_block.o
mmc_block-objs                  := block.o queue.o

block.c and queue.c are baked into the same mmc_block.o object.
So why would one of these objects access a function in the
other object by dereferencing a pointer?

Create a new block.h header file for the single function shared
from block to queue, remove the function pointer, and just call
the queue request function directly.

Apart from making the code more readable, this also makes link
optimizations possible and probably speeds up the call as well.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-09-26 21:31:31 +02:00
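
A minimal sketch of the pattern this commit describes; everything beyond
the names quoted in the commit text is illustrative:

    /* Before: queue.c reached block.c through a stored function pointer */
    mq->issue_fn(mq, req);

    /* After: block.h declares the shared function ... */
    int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);

    /* ... and queue.c calls it directly */
    mmc_blk_issue_rq(mq, req);
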
Adrian Hunter 869c554808 mmc: fix use-after-free of struct request
We call mmc_req_is_special() after having processed a request, but
by then the request may already have been freed. Evaluate it ahead
of time instead, and use the cached value.

Reported-by: Hans de Goede <hdegoede@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Fixes: c2df40dfb8 ("drivers: use req op accessor")

Signed-off-by: Jens Axboe <axboe@fb.com>
2016-08-25 14:11:43 -06:00
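
The shape of the fix, simplified (mmc_req_is_special() is from the
commit text; the surrounding calls are illustrative):

    /* Before: req may already be freed once it has been issued */
    mmc_blk_issue_rq(mq, req);
    if (mmc_req_is_special(req))      /* use-after-free */
        mmc_queue_wake(mq);           /* illustrative */

    /* After: evaluate the predicate while req is still valid */
    bool req_is_special = mmc_req_is_special(req);

    mmc_blk_issue_rq(mq, req);
    if (req_is_special)
        mmc_queue_wake(mq);           /* illustrative */
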
Adrian Hunter 7afafc8a44 block: Fix secure erase
Commit 288dab8a35 ("block: add a separate operation type for secure
erase") split REQ_OP_SECURE_ERASE from REQ_OP_DISCARD without considering
all the places REQ_OP_DISCARD was being used to mean either. Fix those.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Fixes: 288dab8a35 ("block: add a separate operation type for secure erase")
Signed-off-by: Jens Axboe <axboe@fb.com>
2016-08-16 09:16:51 -06:00
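
A sketch of the kind of check being fixed; the callee name is
illustrative:

    /* Before: only discard was recognized */
    if (req_op(req) == REQ_OP_DISCARD)
        mmc_blk_issue_discard_rq(mq, req);

    /* After: secure erase takes the same path wherever both apply */
    if (req_op(req) == REQ_OP_DISCARD ||
        req_op(req) == REQ_OP_SECURE_ERASE)
        mmc_blk_issue_discard_rq(mq, req);
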
Christoph Hellwig 288dab8a35 block: add a separate operation type for secure erase
Instead of overloading the discard support with the REQ_SECURE flag,
add a separate operation type. Use the opportunity to rename the
queue flag as well, and remove the dead checks for this flag in the
RAID 1 and RAID 10 drivers, which don't claim support for secure erase.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
2016-06-09 09:52:25 -06:00
Mike Christie c2df40dfb8 drivers: use req op accessor
The request operation REQ_OP is now separate from the rq_flag_bits
definition. This converts the block layer drivers to
use req_op() to get the op from the request struct.

Signed-off-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2016-06-07 13:41:38 -06:00
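
The conversion pattern, sketched (the callee is illustrative):

    /* Before: the operation was read out of the flag bits */
    if (rq->cmd_flags & REQ_DISCARD)
        ret = mmc_handle_discard(rq);   /* illustrative */

    /* After: the accessor hides the encoding */
    if (req_op(rq) == REQ_OP_DISCARD)
        ret = mmc_handle_discard(rq);   /* illustrative */
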
Dan Williams da81ed16bd scatterlist: remove open coded sg_unmark_end instances
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
[hch: split from a larger patch by Dan]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-17 08:12:56 -06:00
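
The pattern being removed, as a hedged sketch (0x02 is the scatterlist
end-marker bit of this era):

    /* Before: clearing the end marker by hand */
    sg->page_link &= ~0x02;

    /* After: the helper from <linux/scatterlist.h> */
    sg_unmark_end(sg);
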
Jens Axboe 2bb4cd5cc4 block: have drivers use blk_queue_max_discard_sectors()
Some drivers use it now, others just set the limits field manually.
But in preparation for splitting this into a hard and soft limit,
ensure that they all call the proper function for setting the hw
limit for discards.

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-07-17 08:41:53 -06:00
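
The conversion, sketched:

    /* Before: poking the limits field directly */
    q->limits.max_discard_sectors = max_discard;

    /* After: going through the proper setter */
    blk_queue_max_discard_sectors(q, max_discard);
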
Rabin Vincent a8c27c0bea mmc: queue: prevent soft lockups on PREEMPT=n
On systems with CONFIG_PREEMPT=n, under certain circumstances, mmcqd
can continuously process requests for several seconds without blocking,
triggering the soft lockup watchdog.  For example, this can happen if
mmcqd runs on the CPU which services the controller's interrupt, and
a process on a different CPU continuously writes to the MMC block
device.

 NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mmcqd/0:664]
 CPU: 0 PID: 664 Comm: mmcqd/0 Not tainted 4.1.0-rc7+ #4
 PC is at _raw_spin_unlock_irqrestore+0x24/0x28
 LR is at mmc_start_request+0x104/0x134
 ...
 [<805112a8>] (_raw_spin_unlock_irqrestore) from [<803db664>] (mmc_start_request+0x104/0x134)
 [<803db664>] (mmc_start_request) from [<803dc008>] (mmc_start_req+0x274/0x394)
 [<803dc008>] (mmc_start_req) from [<803eb2c4>] (mmc_blk_issue_rw_rq+0xd0/0xb98)
 [<803eb2c4>] (mmc_blk_issue_rw_rq) from [<803ebe8c>] (mmc_blk_issue_rq+0x100/0x470)
 [<803ebe8c>] (mmc_blk_issue_rq) from [<803ecab8>] (mmc_queue_thread+0xd0/0x170)
 [<803ecab8>] (mmc_queue_thread) from [<8003fd14>] (kthread+0xe0/0xfc)
 [<8003fd14>] (kthread) from [<8000f768>] (ret_from_fork+0x14/0x2c)

Fix it by adding a cond_resched() in the request handling loop so that
other processes get a chance to run.

Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-06-18 09:21:04 +02:00
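
A trimmed sketch of the fixed loop; the fetch helper is illustrative,
the rest follows the names in the trace above:

    static int mmc_queue_thread(void *d)
    {
        struct mmc_queue *mq = d;
        struct request *req;

        while (!kthread_should_stop()) {
            req = mmc_fetch_next_request(mq);   /* illustrative */
            if (req)
                mq->issue_fn(mq, req);
            cond_resched();   /* the fix: let other tasks run on PREEMPT=n */
        }
        return 0;
    }
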
Fabian Frederick 7551847ca0 mmc: queue: use swap() in mmc_queue_thread()
Use kernel.h macro definition.

Thanks to Julia Lawall for Coccinelle scripting support.

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-06-15 10:26:29 +02:00
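
The substitution, sketched on the queue's request pair:

    /* Before: open-coded three-step swap */
    struct mmc_queue_req *tmp;

    tmp = mq->mqrq_cur;
    mq->mqrq_cur = mq->mqrq_prev;
    mq->mqrq_prev = tmp;

    /* After: the kernel.h helper */
    swap(mq->mqrq_cur, mq->mqrq_prev);
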
Chuanxiao Dong 4e93b9a6ab mmc: card: Don't access RPMB partitions for normal read/write
During kernel boot, the kernel tries to read some logical sectors
of each block device node, looking for a possible partition table.

But since the RPMB partition is special and cannot be accessed by
normal eMMC read/write commands, this causes the error messages
below during kernel boot:
...
 mmc0: Got data interrupt 0x00000002 even though no data operation was in progress.
 mmcblk0rpmb: error -110 transferring data, sector 0, nr 32, cmd response 0x900, card status 0xb00
 mmcblk0rpmb: retrying using single block read
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
 end_request: I/O error, dev mmcblk0rpmb, sector 0
 Buffer I/O error on device mmcblk0rpmb, logical block 0
 end_request: I/O error, dev mmcblk0rpmb, sector 8
 Buffer I/O error on device mmcblk0rpmb, logical block 1
 end_request: I/O error, dev mmcblk0rpmb, sector 16
 Buffer I/O error on device mmcblk0rpmb, logical block 2
 end_request: I/O error, dev mmcblk0rpmb, sector 24
 Buffer I/O error on device mmcblk0rpmb, logical block 3
...

This patch discards an access request in the eMMC queue if it is
an RPMB partition access request. This avoids triggering the
error messages above.

Fixes: 090d25fe22 ("mmc: core: Expose access to RPMB partition")
Signed-off-by: Yunpeng Gao <yunpeng.gao@intel.com>
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Tested-by: Michael Shigorin <mike@altlinux.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-05-06 14:55:12 +02:00
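
A hedged sketch of the check added in the issue path; the exact test
in block.c of this era is abbreviated here:

    /* Error out block-layer I/O aimed at the RPMB partition before
     * it reaches the card */
    if (req && (md->area_type & MMC_BLK_DATA_AREA_RPMB)) {
        blk_end_request_all(req, -EIO);
        return 0;
    }
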
Bhuvanesh Surachari fdb409f636 mmc: queue: Improve error handling during allocation of bounce buffers
Allocating the previous bounce buffer in mmc_init_queue when the
current bounce buffer allocation fails led to a crash later in
__blk_segment_map_sg. Improve the error handling by allocating the
previous bounce buffer only if the current allocation succeeds.

Signed-off-by: Bhuvanesh Surachari <bhuvanesh_surachari@mentor.com>
Signed-off-by: Harish Jenny K N <harish_kandiga@mentor.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2014-12-05 10:33:17 +01:00
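
A sketch of the corrected ordering (field names follow the mmc queue
code of this era):

    mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
    if (mqrq_cur->bounce_buf) {
        /* Only now attempt the previous buffer, so a half-initialized
         * pair never reaches __blk_segment_map_sg */
        mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
        if (!mqrq_prev->bounce_buf) {
            kfree(mqrq_cur->bounce_buf);
            mqrq_cur->bounce_buf = NULL;
        }
    }
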
Linus Torvalds e75437fb93 Merge branch 'for-3.18/drivers' of git://git.kernel.dk/linux-block
Pull block layer driver update from Jens Axboe:
 "This is the block driver pull request for 3.18.  Not a lot in there
  this round, and nothing earth shattering.

   - A round of drbd fixes from the linbit team, and an improvement in
     asender performance.

   - Removal of deprecated (and unused) IRQF_DISABLED flag in rsxx and
     hd from Michael Opdenacker.

   - Disable entropy collection from flash devices by default, from Mike
     Snitzer.

   - A small collection of xen blkfront/back fixes from Roger Pau Monné
     and Vitaly Kuznetsov"

* 'for-3.18/drivers' of git://git.kernel.dk/linux-block:
  block: disable entropy contributions for nonrot devices
  xen, blkfront: factor out flush-related checks from do_blkif_request()
  xen-blkback: fix leak on grant map error path
  xen/blkback: unmap all persistent grants when frontend gets disconnected
  rsxx: Remove deprecated IRQF_DISABLED
  block: hd: remove deprecated IRQF_DISABLED
  drbd: use RB_DECLARE_CALLBACKS() to define augment callbacks
  drbd: compute the end before rb_insert_augmented()
  drbd: Add missing newline in resync progress display in /proc/drbd
  drbd: reduce lock contention in drbd_worker
  drbd: Improve asender performance
  drbd: Get rid of the WORK_PENDING macro
  drbd: Get rid of the __no_warn and __cond_lock macros
  drbd: Avoid inconsistent locking warning
  drbd: Remove superfluous newline from "resync_extents" debugfs entry.
  drbd: Use consistent names for all the bi_end_io callbacks
  drbd: Use better variable names
2014-10-18 12:12:45 -07:00
Mike Snitzer b277da0a8a block: disable entropy contributions for nonrot devices
Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
QUEUE_FLAG_NONROT.

Historically, all block devices have automatically made entropy
contributions.  But as previously stated in commit e2e1a148 ("block: add
sysfs knob for turning off disk entropy contributions"):
    - On SSD disks, the completion times aren't as random as they
      are for rotational drives. So it's questionable whether they
      should contribute to the random pool in the first place.
    - Calling add_disk_randomness() has a lot of overhead.

There are more reliable sources for randomness than non-rotational block
devices.  From a security perspective it is better to err on the side of
caution than to allow entropy contributions from unreliable "random"
sources.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2014-10-04 10:55:32 -06:00
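
The per-driver change, sketched with the queue-flag helpers of this
era:

    /* Non-rotational media: mark it as such and opt out of entropy
     * accounting in the same place */
    queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
    queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
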
Joe Perches 6606110d89 mmc: Convert pr_warning to pr_warn
Use the much more common pr_warn instead of pr_warning.

Other miscellanea:

o Coalesce formats
o Realign arguments
o Remove extra spaces when coalescing formats

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2014-09-24 10:13:09 +02:00
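
A sketch of the conversion, including a coalesced format (the message
text is illustrative):

    /* Before: deprecated name, split format string */
    pr_warning("mmc%d: bounce buffer "
               "allocation failed\n", host->index);

    /* After: pr_warn with the format coalesced */
    pr_warn("mmc%d: bounce buffer allocation failed\n", host->index);
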
Russell King e83b366487 Fix uses of dma_max_pfn() when converting to a limiting address
We must use a 64-bit type for this, otherwise overflowed bits get
lost, and that can result in a lower value being set than intended.

Fixes: 8e0cb8a1f6 ("ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations")
Fixes: 7d35496dd9 ("ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations")
Tested-Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-17 23:08:41 +00:00
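
The corrected calculation, sketched:

    /* Widen before shifting so the high bits survive the conversion
     * from pfn to limiting address */
    u64 limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;

    blk_queue_bounce_limit(mq->queue, limit);
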
Santosh Shilimkar 8e0cb8a1f6 ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
The DMA bounce limit is the maximum directly DMA'able memory, beyond
which bounce buffers have to be used to perform DMA operations. The MMC
queue layer relies on dma_mask, but its calculation is based on
max_*pfn, which doesn't have a uniform meaning across architectures. So
make use of dma_max_pfn(), which is expected to return the maximum
DMAable pfn value across architectures.

Cc: Chris Ball <cjb@laptop.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31 14:49:27 +00:00
Maya Erez 775a9362b5 mmc: card: Adding support for sanitize in eMMC 4.5
Sanitize support is added as a user-app ioctl call, and was removed
from the block-device request path, since its purpose is to be
invoked by a user rather than via the file system.

This feature deletes the unmapped memory region of the eMMC card,
by writing to a specific register in the EXT_CSD.

The unmapped region is the memory region that was previously deleted
(by an erase, trim or discard operation).

In order to avoid timeouts when sanitizing large-capacity cards,
the timeout for the sanitize operation is set to 240 seconds.

Signed-off-by: Yaniv Gardi <ygardi@codeaurora.org>
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2013-05-26 14:23:13 -04:00
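
A hedged sketch of the trigger: sanitize is started by writing the
SANITIZE_START byte in the EXT_CSD via a switch command; the constant
names follow the MMC core of this era and should be treated as
assumptions:

    err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
                     EXT_CSD_SANITIZE_START, 1,
                     MMC_SANITIZE_REQ_TIMEOUT);  /* 240 s per the text above */
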
Seungwon Jeon ef3a69c7a4 mmc: block: fix the host's claim-release in special request
For a normal request, mmc_blk_issue_rq is called twice with
asynchronous transfer (cur and prev). The host's claim and release
can be done in each mmc_blk_issue_rq. However, special requests are
currently excluded from asynchronous transfer. After a special request
finishes, if there is no new request, mmc_release_host won't be called
in mmc_blk_issue_rq. The problem was found during mmc_suspend:

[<c0541124>] (__schedule+0x0/0x78c) from [<c05419e8>] (schedule+0x38/0x78)
[<c05419b0>] (schedule+0x0/0x78) from [<c03a843c>] (__mmc_claim_host+0xac/0x1b4)
[<c03a8390>] (__mmc_claim_host+0x0/0x1b4) from [<c03ac98c>] (mmc_suspend+0x28/0x9c)
[<c03ac964>] (mmc_suspend+0x0/0x9c) from [<c03aad24>] (mmc_suspend_host+0xb4/0x194)
...

Reported-by: Johan Rudholm <jrudholm@gmail.com>
Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
Tested-by: Johan Rudholm <johan.rudholm@stericsson.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2013-03-22 13:29:36 -04:00
Seungwon Jeon ce39f9d17c mmc: support packed write command for eMMC4.5 devices
This patch supports the packed write command of eMMC 4.5 devices.
Several writes can be grouped into a packed command and all the data
of the individual commands can be sent in a single transfer on the
bus. One large data transfer, rather than several small ones, is more
efficient for eMMC writes internally. As a result, packed commands
improve write throughput. The following tables show the results
of packed write.

Type A:
test     none |  packed
iozone   25.8 |  31
tiotest  27.6 |  31.2
lmdd     31.2 |  35.4

Type B:
test     none |  packed
iozone   44.1 |  51.1
tiotest  47.9 |  52.5
lmdd     51.6 |  59.2

Type C:
test     none |  packed
iozone   19.5 |  32
tiotest  19.9 |  34.5
lmdd     22.8 |  40.7

Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
Reviewed-by: Maya Erez <merez@codeaurora.org>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2013-02-24 14:37:16 -05:00
Konstantin Dorfman 2220eedfd7 mmc: fix async request mechanism for sequential read scenarios
When the current request is running on the bus and the next request
fetched by mmcqd is NULL, the mmc context (mmcqd thread) gets blocked
until the current request completes. This means that if a new request
comes in while the mmcqd thread is blocked, this new request cannot be
prepared in parallel with the current ongoing request. This may delay
the new request's execution and increase its latency.

This change allows the MMC thread to be woken up when a new request
arrives. Once the MMC thread is woken up, the new request can be
fetched and prepared in parallel with the current running request,
which means it can be started immediately after the current running
request completes.

With this change read throughput is improved by 16%.

Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Reviewed-by: Seungwon Jeon <tgih.jun@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2013-02-11 13:28:49 -05:00
Seungwon Jeon 369d321ed1 mmc: queue: exclude asynchronous transfer for special request
Unlike normal r/w requests, special requests (discard, flush) are
finished with a one-time issue_fn. Switching the request to
mqrq_prev causes an unnecessary call.

Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
Reviewed-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2013-02-11 13:28:48 -05:00
Seungwon Jeon 45c5a914e6 mmc: queue: amend buffer swap for non-blocking transfer
In case both 'req' and 'mq->mqrq_prev->req' are NULL, there is no
request to be processed, which means there is no need to switch
buffers. Switching buffers is required only after 'issue_fn' has
finished.

Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
Reviewed-by: Per Forlin <per.forlin@stericsson.com>
Tested-by: Johan Rudholm <johan.rudholm@stericsson.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2012-12-06 13:54:37 -05:00
Venkatraman S b41b6f1d1c mmc: queue: remove redundant memsets
No need to memset these: they are pointers and are assigned proper
values on the next line anyway.

Signed-off-by: Venkatraman S <svenkatr@ti.com>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2012-05-09 10:10:46 -04:00
Venkatraman S 1b50f5f392 mmc: queue: rename mmc_request function
The name mmc_request is used for both the issue function
and a data structure, which creates conflicts in symbol lookups
in editors. Rename the function to mmc_request_fn.

Signed-off-by: Venkatraman S <svenkatr@ti.com>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2012-05-09 10:08:54 -04:00
Adrian Hunter 7194efb8f0 mmc: fixes for eMMC v4.5 discard operation
eMMC v4.5 discard operation is significantly different from the
existing trim operation because it is not guaranteed to work with
the new sanitize operation.  Consequently mmc_can_trim() is
separated from mmc_can_discard().

Also the new discard operation does not result in the sectors being
set to all-zeros, so discard_zeroes_data must not be set.

In addition, the new discard has the same timeout as trim, but from
v4.5 trim is defined to use the hc timeout.  The timeout calculation
is adjusted accordingly.

Fixes apply to linux 3.2 on.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: <stable@vger.kernel.org>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2012-04-20 20:28:55 -04:00
Sujit Reddy Thumma a8ad82cc1b mmc: card: Kill block requests if card is removed
Kill block requests when the host realizes that the card is
removed from the slot and is sure that subsequent requests
are bound to fail. Do this silently so that the block
layer doesn't output unnecessary error messages.

Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2012-01-11 23:58:44 -05:00
Kyungmin Park d9ddd62943 mmc: core: mmc sanitize feature support for v4.5
In v4.5, there is no secure erase & trim support.
Instead, the sanitize feature is supported.

Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-10-26 16:32:26 -04:00
Girish K S a3c76eb9d4 mmc: replace printk with appropriate display macro
All uses of the printk function for displaying kernel messages in
the mmc driver have been replaced with the corresponding pr_* macros.

Signed-off-by: Girish K S <girish.shivananjappa@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-10-26 16:32:22 -04:00
Venkatraman S 7513cd7af8 mmc: queue: declare mmc_alloc_sg as static
Fix the sparse warning "drivers/mmc/card/queue.c:111:20: warning:
symbol 'mmc_alloc_sg' was not declared. Should it be static?"

Signed-off-by: Venkatraman S <svenkatr@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-10-26 15:43:35 -04:00
Per Forlin ee8a43a51c mmc: block: add handling for two parallel block requests in issue_rw_rq
Change mmc_blk_issue_rw_rq() to become asynchronous.
The execution flow looks like this:

* The mmc-queue calls issue_rw_rq(), which sends the request
  to the host and returns back to the mmc-queue.
* The mmc-queue calls issue_rw_rq() again with a new request.
* This new request is prepared in issue_rw_rq(), then it waits for
  the active request to complete before pushing it to the host.
* When the mmc-queue is empty it will call issue_rw_rq() with a NULL
  req to finish off the active request without starting a new request.

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:15 -04:00
Per Forlin 04296b7bfd mmc: queue: add a second mmc queue request member
Add an additional mmc queue request instance to make way for two active
block requests. One request may be active while the other request is
being prepared.

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:15 -04:00
Per Forlin 97868a2bdf mmc: block: add member in mmc queue struct to hold request data
The way the request data is organized in the mmc queue struct allows
only one request to be processed at a time.  This patch adds a new
struct to hold mmc queue request data, such as the sg list, request,
blk request and bounce buffers, and updates any functions depending
on the mmc queue struct. This prepares for using multiple active
requests in one mmc queue.

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:13 -04:00
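
A trimmed sketch of the layout described above; take the exact fields
as an approximation of the queue.h struct of this era:

    struct mmc_queue_req {
        struct request          *req;           /* blk request */
        struct mmc_blk_request  brq;            /* mmc block request */
        struct scatterlist      *sg;            /* sg list */
        char                    *bounce_buf;    /* bounce buffers */
        struct scatterlist      *bounce_sg;
        unsigned int            bounce_sg_len;
    };
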
Adrian Hunter e056a1b5b6 mmc: queue: let host controllers specify maximum discard timeout
Some host controllers will not operate without a hardware
timeout that is limited in value.  However large discards
require large timeouts, so there needs to be a way to
specify the maximum discard size.

A host controller driver may now specify the maximum discard
timeout possible so that max_discard_sectors can be calculated.

However, for eMMC when the High Capacity Erase Group Size
is not in use, the timeout calculation depends on clock
rate which may change.  For that case Preferred Erase Size
is used instead.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:03 -04:00
Adrian Hunter c31b55cd4e mmc: queue: bring discard_granularity/alignment into line with SCSI
SCSI defines discard alignment as the offset to the first
optimal discard.  In the case of SD/MMC, that is always zero
which is the default.

SCSI defines discard granularity as a hint of an optimal
discard size.  That is much better expressed by the MMC
"preferred erase size" (pref_erase) field.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-06-25 18:53:05 -04:00
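
The resulting queue settings, sketched in 512-byte sectors (field
names follow the block limits struct; the pref_erase shift is an
assumption about the units):

    /* First optimal discard is at offset zero ... */
    q->limits.discard_alignment = 0;
    /* ... and the preferred erase size is the granularity hint */
    q->limits.discard_granularity = card->pref_erase << 9;
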
Adrian Hunter d09408ade0 mmc: queue: append partition subname to queue thread name
For example, an eMMC with 2 boot partitions will have 3 threads.
The names change from:

   40 ?        00:00:00 mmcqd/0
   41 ?        00:00:00 mmcqd/0
   42 ?        00:00:00 mmcqd/0

to:

   40 ?        00:00:00 mmcqd/0
   41 ?        00:00:00 mmcqd/0boot0
   42 ?        00:00:00 mmcqd/0boot1

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andrei Warkentin <andreiw@motorola.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-06-25 18:52:57 -04:00
John Ogness 0b38c4ebf0 mmc: remove redundant irq disabling
There is no need to disable irqs when using the sg_copy_*_buffer()
functions because those functions do that already. There are also
no races on the mmc_queue struct here that would require irqs to be
disabled before calling sg_copy_*_buffer().

Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-05-24 20:59:17 -04:00
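
The removal, sketched:

    /* Before: needless irq fencing around a helper that already
     * disables interrupts internally */
    local_irq_save(flags);
    sg_copy_from_buffer(sg, sg_len, buf, len);
    local_irq_restore(flags);

    /* After */
    sg_copy_from_buffer(sg, sg_len, buf, len);
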
Jens Axboe 7eaceaccab block: remove per-queue plugging
Code has been converted over to the new explicit on-stack plugging,
and delay users have been converted to use the new API for that.
So let's kill off the old plugging along with aops->sync_page().

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
2011-03-10 08:52:07 +01:00
Ethan Du de528fa3f9 mmc: name mmc queue thread by host index
Usually there are multiple mmc host controllers; rename mmc queue thread
by host index so we can easily identify which controller it belongs to.

Signed-off-by: Ethan Du <ethan.too@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2010-10-23 21:11:16 +08:00
Thomas Gleixner 632cf92a72 mmc: Convert "mutex" to semaphore
Get rid of init_MUTEX[_LOCKED]() and use sema_init() instead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mmc@vger.kernel.org
Signed-off-by: Chris Ball <cjb@laptop.org>
2010-10-23 21:11:12 +08:00
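
In the mmc queue this comes down to one line (thread_sem is the
queue's semaphore of this era):

    /* Before */
    init_MUTEX(&mq->thread_sem);

    /* After: same semantics, initial count of 1 */
    sema_init(&mq->thread_sem, 1);
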
Martin K. Petersen a36274e018 mmc: Remove distinction between hw and phys segments
We have deprecated the distinction between hardware and physical
segments in the block layer.  Consolidate the two limits into one in
drivers/mmc/.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2010-10-23 21:11:11 +08:00
Tejun Heo 4913efe456 block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush()
Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
-EOPNOTSUPP and blk_queue_ordered() is replaced with simpler
blk_queue_flush().

blk_queue_flush() takes combinations of REQ_FLUSH and FUA.  If a
device has write cache and can flush it, it should set REQ_FLUSH.  If
the device can handle FUA writes, it should also set REQ_FUA.

All blk_queue_ordered() users are converted.

* ORDERED_DRAIN is mapped to 0 which is the default value.
* ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
* ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Boaz Harrosh <bharrosh@panasas.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
2010-09-10 12:35:36 +02:00
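
Per the rules above, a device with a flushable write cache that also
handles FUA writes advertises both (sketch):

    blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
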
Adrian Hunter 4980454868 mmc_block: add support for secure discard
Secure discard is implemented by Secure Trim if the discard is unaligned
or Secure Erase otherwise.

Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Madhusudhan Chikkature <madhu.cr@ti.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Gardiner <bengardiner@nanometrics.ca>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-12 08:43:30 -07:00
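
A sketch of the dispatch rule above, using identifiers from the MMC
core of this era (treat the exact combination as an assumption):

    unsigned int arg;
    int err;

    /* Unaligned secure discard falls back to Secure Trim */
    if (mmc_can_trim(card) && !mmc_erase_group_aligned(card, from, nr))
        arg = MMC_SECURE_TRIM1_ARG;
    else
        arg = MMC_SECURE_ERASE_ARG;
    err = mmc_erase(card, from, nr, arg);
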
Adrian Hunter bd788c9665 mmc_block: add discard support
Enable MMC to service discard requests.  In the case of SD and MMC cards
that do not support trim, discards become erases.  In the case of cards
(MMC) that only allow erases in multiples of erase group size, round to
the nearest completely discarded erase group.

Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Madhusudhan Chikkature <madhu.cr@ti.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Gardiner <bengardiner@nanometrics.ca>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-12 08:43:30 -07:00
FUJITA Tomonori 00fff26539 block: remove q->prepare_flush_fn completely
This removes q->prepare_flush_fn completely (changes the
blk_queue_ordered API).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
2010-08-07 18:24:15 +02:00
Christoph Hellwig 33659ebbae block: remove wrappers for request type/flags
Remove all the trivial wrappers for the cmd_type and cmd_flags fields
in struct request.  This allows much easier grepping for different
request types instead of unwinding through macros.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
2010-08-07 18:17:56 +02:00
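
One representative wrapper removal, sketched (the callee is
illustrative):

    /* Before: opaque wrapper */
    if (blk_fs_request(rq))
        return mmc_issue_fs_request(rq);   /* illustrative */

    /* After: a greppable field test */
    if (rq->cmd_type == REQ_TYPE_FS)
        return mmc_issue_fs_request(rq);   /* illustrative */
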
Tejun Heo 5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability.  As this
conversion needs to touch a large number of source files, the
following script is used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there.  ie. if only gfp is used,
  gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition, while for others adding it to an
   implementation .h or embedding .c file was more appropriate.  This
   step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   wildly available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
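
The kind of edit the script makes, sketched: a file that calls
kmalloc() states the dependency itself instead of inheriting it
through percpu.h:

    #include <linux/slab.h>    /* kmalloc, kfree */
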
Martin K. Petersen 8a78362c4e block: Consolidate phys_segment and hw_segment limits
Except for SCSI no device drivers distinguish between physical and
hardware segment limits.  Consolidate the two into a single segment
limit.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-26 13:58:08 +01:00
Martin K. Petersen 086fa5ff08 block: Rename blk_queue_max_sectors to blk_queue_max_hw_sectors
The block layer calling convention is blk_queue_<limit name>.
blk_queue_max_sectors predates this practice, leading to some confusion.
Rename the function to appropriately reflect that its intended use is to
set max_hw_sectors.

Also introduce a temporary wrapper for backwards compatibility.  This can
be removed after the merge window is closed.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2010-02-26 13:58:08 +01:00
Adrian Hunter 5fa83ce284 mmc_block: fix queue cleanup
The main bug was that 'blk_cleanup_queue()' was called while the block
device could still be in use, for example, because the card was removed
while files were still open.

In addition, to be sure that 'mmc_request()' will get called for all new
requests (so it can error them out), the queue is emptied during cleanup.
This is done after the worker thread is stopped to avoid racing with it.

Finally, it is not a device error for this to be happening, so quiet the
(sometimes very many) error messages.

Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Cc: <linux-mmc@vger.kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-01-11 09:34:06 -08:00
Tejun Heo 9934c8c045 block: implement and enforce request peek/start/fetch
Till now the block layer has allowed two separate modes of request execution.
A request is always acquired from the request queue via
elv_next_request().  After that, drivers are free to either dequeue it
or process it without dequeueing.  Dequeue allows elv_next_request()
to return the next request so that multiple requests can be in flight.

Executing requests without dequeueing has its merits, mostly in
allowing drivers for simpler devices which can't do sg to deal only
with segments, without considering request boundaries.  However, the
benefit this brings is dubious and declining, while the cost of the API
ambiguity is increasing.  Segment-based drivers are usually for very
old or limited devices, and as converting to the dequeueing model isn't
difficult, it doesn't justify the API overhead it puts on the block
layer and its more modern users.

Previous patches converted all block low level drivers to dequeueing
model.  This patch completes the API transition by...

* renaming elv_next_request() to blk_peek_request()

* renaming blkdev_dequeue_request() to blk_start_request()

* adding blk_fetch_request() which is combination of peek and start

* disallowing completion of queued (not started) requests

* applying new API to all LLDs

Renamings are for consistency and to break out-of-tree code so that
it's apparent that out-of-tree drivers need updating.

[ Impact: block request issue API cleanup, no functional change ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2009-05-11 09:52:18 +02:00
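
The dequeueing model under the new names, sketched:

    struct request *rq;

    rq = blk_peek_request(q);      /* was elv_next_request(q) */
    if (rq)
        blk_start_request(rq);     /* was blkdev_dequeue_request(rq) */

    /* or peek and start combined */
    rq = blk_fetch_request(q);
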