linux-sg2042/drivers/mmc/core/queue.c

/*
 * Copyright (C) 2003 Russell King, All Rights Reserved.
 * Copyright 2006-2007 Pierre Ossman
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

#include <linux/mmc/card.h>
#include <linux/mmc/host.h>

#include "queue.h"
#include "block.h"
#define MMC_QUEUE_BOUNCESZ 65536

/*
 * Prepare a MMC request. This just filters out odd stuff.
 */
static int mmc_prep_request(struct request_queue *q, struct request *req)
{
	struct mmc_queue *mq = q->queuedata;

	/*
	 * We only like normal block requests and discards.
	 */
	if (req->cmd_type != REQ_TYPE_FS && req_op(req) != REQ_OP_DISCARD &&
	    req_op(req) != REQ_OP_SECURE_ERASE) {
		blk_dump_rq_flags(req, "MMC bad request");
		return BLKPREP_KILL;
	}
	/*
	 * Reject requests for removed cards, and any request to an RPMB
	 * partition: RPMB cannot be accessed with normal read/write
	 * commands, so e.g. partition-table scans at boot would only
	 * produce I/O errors.
	 */
	if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq)))
		return BLKPREP_KILL;

	req->rq_flags |= RQF_DONTPREP;

	return BLKPREP_OK;
}
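
/*
 * Worker thread that fetches requests from the block queue and hands
 * them to mmc_blk_issue_rq(). Two mmc_queue_req slots (mqrq_cur and
 * mqrq_prev) are swapped on each iteration so a new request can be
 * prepared while the previous one completes; the thread sleeps when
 * the dispatch queue is empty.
 */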
static int mmc_queue_thread(void *d)
{
	struct mmc_queue *mq = d;
	struct request_queue *q = mq->queue;
	struct mmc_context_info *cntx = &mq->card->host->context_info;

	current->flags |= PF_MEMALLOC;

	down(&mq->thread_sem);
	do {
		struct request *req = NULL;

		spin_lock_irq(q->queue_lock);
		set_current_state(TASK_INTERRUPTIBLE);
		req = blk_fetch_request(q);
		mq->asleep = false;
		cntx->is_waiting_last_req = false;
		cntx->is_new_req = false;
		if (!req) {
			/*
			 * Dispatch queue is empty so set flags for
			 * mmc_request_fn() to wake us up.
			 */
			if (mq->mqrq_prev->req)
				cntx->is_waiting_last_req = true;
			else
				mq->asleep = true;
		}
		mq->mqrq_cur->req = req;
		spin_unlock_irq(q->queue_lock);

		if (req || mq->mqrq_prev->req) {
			bool req_is_special = mmc_req_is_special(req);

			set_current_state(TASK_RUNNING);
			mmc_blk_issue_rq(mq, req);
			cond_resched();
			if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
				mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
				continue; /* fetch again */
			}

			/*
			 * Current request becomes previous request
			 * and vice versa.
			 * In case of special requests, current request
			 * has been finished. Do not assign it to previous
			 * request.
			 */
			if (req_is_special)
				mq->mqrq_cur->req = NULL;

			mq->mqrq_prev->brq.mrq.data = NULL;
			mq->mqrq_prev->req = NULL;
			swap(mq->mqrq_prev, mq->mqrq_cur);
		} else {
			if (kthread_should_stop()) {
				set_current_state(TASK_RUNNING);
				break;
			}
			up(&mq->thread_sem);
			schedule();
			down(&mq->thread_sem);
		}
	} while (1);
	up(&mq->thread_sem);

	return 0;
}

/*
 * Generic MMC request handler. This is called for any queue on a
 * particular host. When the host is not busy, we look for a request
 * on any queue on this host, and attempt to issue it. This may
 * not be the queue we were asked to process.
 */
static void mmc_request_fn(struct request_queue *q)
{
	struct mmc_queue *mq = q->queuedata;
	struct request *req;
	struct mmc_context_info *cntx;

	if (!mq) {
		while ((req = blk_fetch_request(q)) != NULL) {
			req->rq_flags |= RQF_QUIET;
			__blk_end_request_all(req, -EIO);
		}
		return;
	}

	cntx = &mq->card->host->context_info;

	if (cntx->is_waiting_last_req) {
		cntx->is_new_req = true;
		wake_up_interruptible(&cntx->wait);
	}

	if (mq->asleep)
		wake_up_process(mq->thread);
}
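
/*
 * Allocate and initialise a scatterlist of sg_len entries, setting
 * *err to 0 on success or -ENOMEM on allocation failure.
 */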
static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
{
	struct scatterlist *sg;

	sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL);
	if (!sg)
		*err = -ENOMEM;
	else {
		*err = 0;
		sg_init_table(sg, sg_len);
	}

	return sg;
}
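
/*
 * Derive the block layer discard settings (max discard sectors,
 * granularity, secure erase support) from the card's erase parameters.
 */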
static void mmc_queue_setup_discard(struct request_queue *q,
				    struct mmc_card *card)
{
	unsigned max_discard;

	max_discard = mmc_calc_max_discard(card);
	if (!max_discard)
		return;

	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
	blk_queue_max_discard_sectors(q, max_discard);
	if (card->erased_byte == 0 && !mmc_can_discard(card))
		q->limits.discard_zeroes_data = 1;
	q->limits.discard_granularity = card->pref_erase << 9;

	/* granularity must not be greater than max. discard */
	if (card->pref_erase > max_discard)
		q->limits.discard_granularity = 0;

	if (mmc_can_secure_erase_trim(card))
		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
}
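
/*
 * Bounce buffers allow hosts that can only do single-segment DMA to
 * serve multi-segment requests: data is staged through one contiguous
 * buffer per queue slot (see mmc_queue_bounce_pre/post below).
 */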
#ifdef CONFIG_MMC_BLOCK_BOUNCE
static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
					unsigned int bouncesz)
{
	int i;

	for (i = 0; i < mq->qdepth; i++) {
		mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
		if (!mq->mqrq[i].bounce_buf)
			goto out_err;
	}

	return true;

out_err:
	while (--i >= 0) {
		kfree(mq->mqrq[i].bounce_buf);
		mq->mqrq[i].bounce_buf = NULL;
	}
	pr_warn("%s: unable to allocate bounce buffers\n",
		mmc_card_name(mq->card));
	return false;
}

static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
				      unsigned int bouncesz)
{
	int i, ret;

	for (i = 0; i < mq->qdepth; i++) {
		mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
		if (ret)
			return ret;

		mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
		if (ret)
			return ret;
	}

	return 0;
}
#endif
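
/*
 * Allocate a scatterlist of max_segs entries for each queue slot
 * (used when bounce buffers are not in use).
 */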
static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
{
	int i, ret;

	for (i = 0; i < mq->qdepth; i++) {
		mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
		if (ret)
			return ret;
	}

	return 0;
}
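
/*
 * Free the scatterlists and bounce buffer attached to a queue slot.
 */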
static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
{
	kfree(mqrq->bounce_sg);
	mqrq->bounce_sg = NULL;

	kfree(mqrq->sg);
	mqrq->sg = NULL;

	kfree(mqrq->bounce_buf);
	mqrq->bounce_buf = NULL;
}

static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
{
	int i;

	for (i = 0; i < mq->qdepth; i++)
		mmc_queue_req_free_bufs(&mq->mqrq[i]);
}

/**
 * mmc_init_queue - initialise a queue structure.
 * @mq: mmc queue
 * @card: mmc card to attach this queue
 * @lock: queue lock
 * @subname: partition subname
 *
 * Initialise a MMC card request queue.
 */
int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
		   spinlock_t *lock, const char *subname)
{
	struct mmc_host *host = card->host;
	u64 limit = BLK_BOUNCE_HIGH;
	bool bounce = false;
	int ret = -ENOMEM;

	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;

	mq->card = card;
	mq->queue = blk_init_queue(mmc_request_fn, lock);
	if (!mq->queue)
		return -ENOMEM;

	mq->qdepth = 2;
	mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
			   GFP_KERNEL);
	if (!mq->mqrq)
		goto blk_cleanup;
	mq->mqrq_cur = &mq->mqrq[0];
	mq->mqrq_prev = &mq->mqrq[1];
	mq->queue->queuedata = mq;

	blk_queue_prep_rq(mq->queue, mmc_prep_request);
	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
	if (mmc_can_erase(card))
		mmc_queue_setup_discard(mq->queue, card);
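
	/*
	 * Hosts limited to a single segment use a bounce buffer; its size
	 * is capped by the host's max request size, max segment size and
	 * max block count, and bouncing is only used above one 512-byte
	 * block.
	 */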
#ifdef CONFIG_MMC_BLOCK_BOUNCE
	if (host->max_segs == 1) {
		unsigned int bouncesz;

		bouncesz = MMC_QUEUE_BOUNCESZ;

		if (bouncesz > host->max_req_size)
			bouncesz = host->max_req_size;
		if (bouncesz > host->max_seg_size)
			bouncesz = host->max_seg_size;
		if (bouncesz > (host->max_blk_count * 512))
			bouncesz = host->max_blk_count * 512;

		if (bouncesz > 512 &&
		    mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
			blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
			blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
			blk_queue_max_segments(mq->queue, bouncesz / 512);
			blk_queue_max_segment_size(mq->queue, bouncesz);

			ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
			if (ret)
				goto cleanup_queue;
			bounce = true;
		}
	}
#endif

	if (!bounce) {
		blk_queue_bounce_limit(mq->queue, limit);
		blk_queue_max_hw_sectors(mq->queue,
			min(host->max_blk_count, host->max_req_size / 512));
		blk_queue_max_segments(mq->queue, host->max_segs);
		blk_queue_max_segment_size(mq->queue, host->max_seg_size);

		ret = mmc_queue_alloc_sgs(mq, host->max_segs);
		if (ret)
			goto cleanup_queue;
	}

	sema_init(&mq->thread_sem, 1);

	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
		host->index, subname ? subname : "");

	if (IS_ERR(mq->thread)) {
		ret = PTR_ERR(mq->thread);
		goto cleanup_queue;
	}

	return 0;

cleanup_queue:
	mmc_queue_reqs_free_bufs(mq);
	kfree(mq->mqrq);
	mq->mqrq = NULL;
blk_cleanup:
	blk_cleanup_queue(mq->queue);
	return ret;
}
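
/*
 * mmc_cleanup_queue - shut down an MMC request queue: resume it if
 * suspended, stop the worker thread, drain the block queue and free
 * the per-request buffers.
 */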
void mmc_cleanup_queue(struct mmc_queue *mq)
{
	struct request_queue *q = mq->queue;
	unsigned long flags;

	/* Make sure the queue isn't suspended, as that will deadlock */
	mmc_queue_resume(mq);

	/* Then terminate our worker thread */
	kthread_stop(mq->thread);

	/* Empty the queue */
	spin_lock_irqsave(q->queue_lock, flags);
	q->queuedata = NULL;
	blk_start_queue(q);
	spin_unlock_irqrestore(q->queue_lock, flags);

	mmc_queue_reqs_free_bufs(mq);
	kfree(mq->mqrq);
	mq->mqrq = NULL;

	mq->card = NULL;
}
EXPORT_SYMBOL(mmc_cleanup_queue);

/**
 * mmc_queue_suspend - suspend a MMC request queue
 * @mq: MMC queue to suspend
 *
 * Stop the block request queue, and wait for our thread to
 * complete any outstanding requests. This ensures that we
 * won't suspend while a request is being processed.
 */
void mmc_queue_suspend(struct mmc_queue *mq)
{
	struct request_queue *q = mq->queue;
	unsigned long flags;

	if (!(mq->flags & MMC_QUEUE_SUSPENDED)) {
		mq->flags |= MMC_QUEUE_SUSPENDED;

		spin_lock_irqsave(q->queue_lock, flags);
		blk_stop_queue(q);
		spin_unlock_irqrestore(q->queue_lock, flags);

		down(&mq->thread_sem);
	}
}

/**
 * mmc_queue_resume - resume a previously suspended MMC request queue
 * @mq: MMC queue to resume
 */
void mmc_queue_resume(struct mmc_queue *mq)
{
	struct request_queue *q = mq->queue;
	unsigned long flags;

	if (mq->flags & MMC_QUEUE_SUSPENDED) {
		mq->flags &= ~MMC_QUEUE_SUSPENDED;

		up(&mq->thread_sem);

		spin_lock_irqsave(q->queue_lock, flags);
		blk_start_queue(q);
		spin_unlock_irqrestore(q->queue_lock, flags);
	}
}

/*
 * Prepare the sg list(s) to be handed off to the host driver
 */
unsigned int mmc_queue_map_sg(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
{
	unsigned int sg_len;
	size_t buflen;
	struct scatterlist *sg;
	int i;

	if (!mqrq->bounce_buf)
		return blk_rq_map_sg(mq->queue, mqrq->req, mqrq->sg);
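
	/*
	 * Map the request onto the bounce scatterlist, then present the
	 * contiguous bounce buffer to the host as a single sg entry.
	 */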
	sg_len = blk_rq_map_sg(mq->queue, mqrq->req, mqrq->bounce_sg);

	mqrq->bounce_sg_len = sg_len;

	buflen = 0;
	for_each_sg(mqrq->bounce_sg, sg, sg_len, i)
		buflen += sg->length;

	sg_init_one(mqrq->sg, mqrq->bounce_buf, buflen);

	return 1;
}

/*
 * If writing, bounce the data to the buffer before the request
 * is sent to the host driver
 */
void mmc_queue_bounce_pre(struct mmc_queue_req *mqrq)
{
	if (!mqrq->bounce_buf)
		return;

	if (rq_data_dir(mqrq->req) != WRITE)
		return;

	sg_copy_to_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
		mqrq->bounce_buf, mqrq->sg[0].length);
}

/*
 * If reading, bounce the data from the buffer after the request
 * has been handled by the host driver
 */
void mmc_queue_bounce_post(struct mmc_queue_req *mqrq)
{
	if (!mqrq->bounce_buf)
		return;

	if (rq_data_dir(mqrq->req) != READ)
		return;

	sg_copy_from_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
		mqrq->bounce_buf, mqrq->sg[0].length);
}