mmc: sdio: Use mmc_pre_req() / mmc_post_req()

SDHCI changed from using a tasklet to finish requests, to using an IRQ
thread i.e. commit c07a48c265 ("mmc: sdhci: Remove finish_tasklet").
Because this increased the latency to complete requests, a preparatory
change was made to complete the request from the IRQ handler if
possible i.e. commit 19d2f695f4 ("mmc: sdhci: Call mmc_request_done()
from IRQ handler if possible").  That alleviated the situation for MMC
block devices because the MMC block driver makes use of mmc_pre_req()
and mmc_post_req() so that successful requests are completed in the IRQ
handler and any DMA unmapping is handled separately in mmc_post_req().
However SDIO was still affected, and an example has been reported with
up to 20% degradation in performance.

Looking at SDIO I/O helper functions, sdio_io_rw_ext_helper() appeared
to be a possible candidate for making use of asynchronous requests
within its I/O loops, but analysis revealed that these loops almost
never iterate more than once, so the complexity of the change would not
be warranted.

Instead, mmc_pre_req() and mmc_post_req() are added before and after I/O
submission (mmc_wait_for_req) in mmc_io_rw_extended().  This still has
the potential benefit of reducing the duration of interrupt handlers, as
well as addressing the latency issue for SDHCI.  It also seems a more
reasonable solution than forcing drivers to do everything in the IRQ
handler.

Reported-by: Dmitry Osipenko <digetx@gmail.com>
Fixes: c07a48c265 ("mmc: sdhci: Remove finish_tasklet")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200903082007.18715-1-adrian.hunter@intel.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Author: Adrian Hunter, 2020-09-03 11:20:07 +03:00; committed by Ulf Hansson
parent 060522d897
commit f0c393e210
1 changed file with 22 additions and 17 deletions


@@ -121,6 +121,7 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
 	struct sg_table sgtable;
 	unsigned int nents, left_size, i;
 	unsigned int seg_size = card->host->max_seg_size;
+	int err;
 
 	WARN_ON(blksz == 0);
 
@@ -170,28 +171,32 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
 
 	mmc_set_data_timeout(&data, card);
 
+	mmc_pre_req(card->host, &mrq);
+
 	mmc_wait_for_req(card->host, &mrq);
 
+	if (cmd.error)
+		err = cmd.error;
+	else if (data.error)
+		err = data.error;
+	else if (mmc_host_is_spi(card->host))
+		/* host driver already reported errors */
+		err = 0;
+	else if (cmd.resp[0] & R5_ERROR)
+		err = -EIO;
+	else if (cmd.resp[0] & R5_FUNCTION_NUMBER)
+		err = -EINVAL;
+	else if (cmd.resp[0] & R5_OUT_OF_RANGE)
+		err = -ERANGE;
+	else
+		err = 0;
+
+	mmc_post_req(card->host, &mrq, err);
+
 	if (nents > 1)
 		sg_free_table(&sgtable);
 
-	if (cmd.error)
-		return cmd.error;
-	if (data.error)
-		return data.error;
-
-	if (mmc_host_is_spi(card->host)) {
-		/* host driver already reported errors */
-	} else {
-		if (cmd.resp[0] & R5_ERROR)
-			return -EIO;
-		if (cmd.resp[0] & R5_FUNCTION_NUMBER)
-			return -EINVAL;
-		if (cmd.resp[0] & R5_OUT_OF_RANGE)
-			return -ERANGE;
-	}
-
-	return 0;
+	return err;
 }
 
 int sdio_reset(struct mmc_host *host)