Merge tag 'mmc-v4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
 "MMC core:
   - Continue to refactor the mmc block code to prepare for blkmq
   - Move mmc block debugfs into block module
   - Next step for eMMC CMDQ by adding a new mmc host interface for it
   - Move Kconfig option MMC_DEBUG from core to host
   - Some additional minor improvements

  MMC host:
   - Declare structs as const when applicable
   - Explicitly request exclusive reset control when applicable
   - Improve some error paths and other various cleanups
   - sdhci: Preparations to support SDHCI OMAP
   - sdhci: Improve some PM related code
   - sdhci: Re-factoring and modernizations
   - sdhci-xenon: Add runtime PM and system sleep support
   - sdhci-xenon: Add support for eMMC HS400 Enhanced Strobe
   - sdhci-cadence: Add system sleep support
   - sdhci-of-at91: Improve system sleep support
   - dw_mmc: Add support for Hisilicon hi3660
   - sunxi: Add support for A83T eMMC
   - sunxi: Add support for DDR52 mode
   - meson-gx: Add support for UHS-I SD-cards
   - meson-gx: Cleanups and improvements
   - tmio: Fix CMD12 (STOP) handling
   - tmio: Cleanups and improvements
   - renesas_sdhi: Add r8a7743/5 support
   - renesas-sdhi: Add support for R-Car Gen3 SDHI DMAC
   - renesas_sdhi: Cleanups and improvements"

* tag 'mmc-v4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (145 commits)
  mmc: renesas_sdhi: Add r8a7743/5 support
  mmc: meson-gx: fix __ffsdi2 undefined on arm32
  mmc: sdhci-xenon: add runtime pm support and reimplement standby
  mmc: core: Move mmc_start_areq() declaration
  mmc: mmci: stop building qcom dml as module
  mmc: sunxi: Reset the device at probe time
  clk: sunxi-ng: Provide a default reset hook
  mmc: meson-gx: rework tuning function
  mmc: meson-gx: change default tx phase
  mmc: meson-gx: implement voltage switch callback
  mmc: meson-gx: use CCF to handle the clock phases
  mmc: meson-gx: implement card_busy callback
  mmc: meson-gx: simplify interrupt handler
  mmc: meson-gx: work around clk-stop issue
  mmc: meson-gx: fix dual data rate mode frequencies
  mmc: meson-gx: rework clock init function
  mmc: meson-gx: rework clk_set function
  mmc: meson-gx: rework set_ios function
  mmc: meson-gx: cfg init overwrite values
  mmc: meson-gx: initialize sane clk default before clock register
  ...
commit 15d8ffc964
@@ -11,6 +11,8 @@ Required properties:
 	- "renesas,mmcif-r7s72100" for the MMCIF found in r7s72100 SoCs
 	- "renesas,mmcif-r8a73a4" for the MMCIF found in r8a73a4 SoCs
 	- "renesas,mmcif-r8a7740" for the MMCIF found in r8a7740 SoCs
+	- "renesas,mmcif-r8a7743" for the MMCIF found in r8a7743 SoCs
+	- "renesas,mmcif-r8a7745" for the MMCIF found in r8a7745 SoCs
 	- "renesas,mmcif-r8a7778" for the MMCIF found in r8a7778 SoCs
 	- "renesas,mmcif-r8a7790" for the MMCIF found in r8a7790 SoCs
 	- "renesas,mmcif-r8a7791" for the MMCIF found in r8a7791 SoCs

@@ -21,7 +23,7 @@ Required properties:
 - interrupts: Some SoCs have only 1 shared interrupt, while others have either
   2 or 3 individual interrupts (error, int, card detect). Below is the number
   of interrupts for each SoC:
-    1: r8a73a4, r8a7778, r8a7790, r8a7791, r8a7793, r8a7794
+    1: r8a73a4, r8a7743, r8a7745, r8a7778, r8a7790, r8a7791, r8a7793, r8a7794
     2: r8a7740, sh73a0
     3: r7s72100
@@ -15,6 +15,7 @@ Required Properties:
 	- "rockchip,rk3288-dw-mshc": for Rockchip RK3288
+	- "rockchip,rv1108-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RV1108
 	- "rockchip,rk3036-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3036
 	- "rockchip,rk3228-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK322x
 	- "rockchip,rk3328-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3328
 	- "rockchip,rk3368-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3368
 	- "rockchip,rk3399-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3399
@@ -12,6 +12,7 @@ Required properties:
 	* "allwinner,sun4i-a10-mmc"
 	* "allwinner,sun5i-a13-mmc"
 	* "allwinner,sun7i-a20-mmc"
+	* "allwinner,sun8i-a83t-emmc"
 	* "allwinner,sun9i-a80-mmc"
 	* "allwinner,sun50i-a64-emmc"
 	* "allwinner,sun50i-a64-mmc"
@@ -15,6 +15,8 @@ Required properties:
 	"renesas,sdhi-r7s72100" - SDHI IP on R7S72100 SoC
 	"renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC
 	"renesas,sdhi-r8a7740" - SDHI IP on R8A7740 SoC
+	"renesas,sdhi-r8a7743" - SDHI IP on R8A7743 SoC
+	"renesas,sdhi-r8a7745" - SDHI IP on R8A7745 SoC
 	"renesas,sdhi-r8a7778" - SDHI IP on R8A7778 SoC
 	"renesas,sdhi-r8a7779" - SDHI IP on R8A7779 SoC
 	"renesas,sdhi-r8a7790" - SDHI IP on R8A7790 SoC

@@ -33,10 +35,8 @@ Required properties:
 	If 2 clocks are specified by the hardware, you must name them as
 	"core" and "cd". If the controller only has 1 clock, naming is not
 	required.
-	Below is the number clocks for each supported SoC:
-	1: SH73A0, R8A73A4, R8A7740, R8A7778, R8A7779, R8A7790
-	   R8A7791, R8A7792, R8A7793, R8A7794, R8A7795, R8A7796
-	2: R7S72100
+	Devices which have more than 1 clock are listed below:
+	2: R7S72100
 
 Optional properties:
 - toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
@@ -101,7 +101,6 @@
 	mmc@0x15000 {
 		compatible = "altr,socfpga-dw-mshc";
 		reg = < 0x15000 0x400 >;
-		num-slots = < 1 >;
 		fifo-depth = < 16 >;
 		card-detect-delay = < 200 >;
 		clocks = <&apbclk>, <&mmcclk>;
@@ -104,7 +104,6 @@
 	mmc@0x15000 {
 		compatible = "snps,dw-mshc";
 		reg = <0x15000 0x400>;
-		num-slots = <1>;
 		fifo-depth = <1024>;
 		card-detect-delay = <200>;
 		clocks = <&apbclk>, <&mmcclk>;
@@ -1,5 +1,6 @@
 # Common objects
 lib-$(CONFIG_SUNXI_CCU)		+= ccu_common.o
+lib-$(CONFIG_SUNXI_CCU)		+= ccu_mmc_timing.o
 lib-$(CONFIG_SUNXI_CCU)		+= ccu_reset.o
 
 # Base clock types
@@ -418,14 +418,8 @@ static SUNXI_CCU_PHASE(mmc1_sample_clk, "mmc1-sample", "mmc1",
 static SUNXI_CCU_PHASE(mmc1_output_clk, "mmc1-output", "mmc1",
 		       0x08c, 8, 3, 0);
 
-/* TODO Support MMC2 clock's new timing mode. */
-static SUNXI_CCU_MP_WITH_MUX_GATE(mmc2_clk, "mmc2", mod0_default_parents,
-				  0x090,
-				  0, 4,		/* M */
-				  16, 2,	/* P */
-				  24, 2,	/* mux */
-				  BIT(31),	/* gate */
-				  0);
+static SUNXI_CCU_MP_MMC_WITH_MUX_GATE(mmc2_clk, "mmc2", mod0_default_parents,
+				      0x090, 0);
 
 static SUNXI_CCU_PHASE(mmc2_sample_clk, "mmc2-sample", "mmc2",
 		       0x090, 20, 3, 0);
@@ -23,6 +23,10 @@
 #define CCU_FEATURE_FIXED_POSTDIV	BIT(3)
 #define CCU_FEATURE_ALL_PREDIV		BIT(4)
 #define CCU_FEATURE_LOCK_REG		BIT(5)
+#define CCU_FEATURE_MMC_TIMING_SWITCH	BIT(6)
+
+/* MMC timing mode switch bit */
+#define CCU_MMC_NEW_TIMING_MODE		BIT(30)
 
 struct device_node;
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) 2017 Chen-Yu Tsai. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/clk/sunxi-ng.h>
+
+#include "ccu_common.h"
+
+/**
+ * sunxi_ccu_set_mmc_timing_mode: Configure the MMC clock timing mode
+ * @clk: clock to be configured
+ * @new_mode: true for new timing mode introduced in A83T and later
+ *
+ * Returns 0 on success, -ENOTSUPP if the clock does not support
+ * switching modes.
+ */
+int sunxi_ccu_set_mmc_timing_mode(struct clk *clk, bool new_mode)
+{
+	struct clk_hw *hw = __clk_get_hw(clk);
+	struct ccu_common *cm = hw_to_ccu_common(hw);
+	unsigned long flags;
+	u32 val;
+
+	if (!(cm->features & CCU_FEATURE_MMC_TIMING_SWITCH))
+		return -ENOTSUPP;
+
+	spin_lock_irqsave(cm->lock, flags);
+
+	val = readl(cm->base + cm->reg);
+	if (new_mode)
+		val |= CCU_MMC_NEW_TIMING_MODE;
+	else
+		val &= ~CCU_MMC_NEW_TIMING_MODE;
+	writel(val, cm->base + cm->reg);
+
+	spin_unlock_irqrestore(cm->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(sunxi_ccu_set_mmc_timing_mode);
+
+/**
+ * sunxi_ccu_get_mmc_timing_mode: Get the current MMC clock timing mode
+ * @clk: clock to query
+ *
+ * Returns 0 if the clock is in old timing mode, > 0 if it is in
+ * new timing mode, and -ENOTSUPP if the clock does not support
+ * this function.
+ */
+int sunxi_ccu_get_mmc_timing_mode(struct clk *clk)
+{
+	struct clk_hw *hw = __clk_get_hw(clk);
+	struct ccu_common *cm = hw_to_ccu_common(hw);
+
+	if (!(cm->features & CCU_FEATURE_MMC_TIMING_SWITCH))
+		return -ENOTSUPP;
+
+	return !!(readl(cm->base + cm->reg) & CCU_MMC_NEW_TIMING_MODE);
+}
+EXPORT_SYMBOL_GPL(sunxi_ccu_get_mmc_timing_mode);
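Usage note for the new API above: a clock consumer (such as an MMC host driver) would switch a supporting mod clock into the new timing mode roughly as sketched below. Only the two sunxi_ccu_*_mmc_timing_mode() calls and the <linux/clk/sunxi-ng.h> header come from this patch; the probe-time helper, the device pointer and the "mmc" clock name are illustrative assumptions.

/* Hypothetical consumer-side sketch, not part of this series. */
#include <linux/clk.h>
#include <linux/clk/sunxi-ng.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_enable_new_timing_mode(struct device *dev)
{
	struct clk *mmc_clk;
	int ret;

	/* "mmc" is an assumed clock name for this sketch */
	mmc_clk = devm_clk_get(dev, "mmc");
	if (IS_ERR(mmc_clk))
		return PTR_ERR(mmc_clk);

	/*
	 * Ask the CCU to flip the divider layout; -ENOTSUPP means the
	 * clock was not registered with CCU_FEATURE_MMC_TIMING_SWITCH.
	 */
	ret = sunxi_ccu_set_mmc_timing_mode(mmc_clk, true);
	if (ret == -ENOTSUPP)
		return 0;	/* old-mode-only clock, nothing to do */
	if (ret)
		return ret;

	/* > 0 from the getter means the new timing mode is now active */
	return sunxi_ccu_get_mmc_timing_mode(mmc_clk) > 0 ? 0 : -EIO;
}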
@@ -172,3 +172,83 @@ const struct clk_ops ccu_mp_ops = {
 	.recalc_rate	= ccu_mp_recalc_rate,
 	.set_rate	= ccu_mp_set_rate,
 };
+
+/*
+ * Support for MMC timing mode switching
+ *
+ * The MMC clocks on some SoCs support switching between old and
+ * new timing modes. A platform specific API is provided to query
+ * and set the timing mode on supported SoCs.
+ *
+ * In addition, a special class of ccu_mp_ops is provided, which
+ * takes into account the timing mode switch. When the new timing
+ * mode is active, the clock output rate is halved. This new class
+ * is a wrapper around the generic ccu_mp_ops. When clock rates
+ * are passed through to ccu_mp_ops callbacks, they are doubled
+ * if the new timing mode bit is set, to account for the post
+ * divider. Conversely, when clock rates are passed back, they
+ * are halved if the mode bit is set.
+ */
+
+static unsigned long ccu_mp_mmc_recalc_rate(struct clk_hw *hw,
+					    unsigned long parent_rate)
+{
+	unsigned long rate = ccu_mp_recalc_rate(hw, parent_rate);
+	struct ccu_common *cm = hw_to_ccu_common(hw);
+	u32 val = readl(cm->base + cm->reg);
+
+	if (val & CCU_MMC_NEW_TIMING_MODE)
+		return rate / 2;
+	return rate;
+}
+
+static int ccu_mp_mmc_determine_rate(struct clk_hw *hw,
+				     struct clk_rate_request *req)
+{
+	struct ccu_common *cm = hw_to_ccu_common(hw);
+	u32 val = readl(cm->base + cm->reg);
+	int ret;
+
+	/* adjust the requested clock rate */
+	if (val & CCU_MMC_NEW_TIMING_MODE) {
+		req->rate *= 2;
+		req->min_rate *= 2;
+		req->max_rate *= 2;
+	}
+
+	ret = ccu_mp_determine_rate(hw, req);
+
+	/* re-adjust the requested clock rate back */
+	if (val & CCU_MMC_NEW_TIMING_MODE) {
+		req->rate /= 2;
+		req->min_rate /= 2;
+		req->max_rate /= 2;
+	}
+
+	return ret;
+}
+
+static int ccu_mp_mmc_set_rate(struct clk_hw *hw, unsigned long rate,
+			       unsigned long parent_rate)
+{
+	struct ccu_common *cm = hw_to_ccu_common(hw);
+	u32 val = readl(cm->base + cm->reg);
+
+	if (val & CCU_MMC_NEW_TIMING_MODE)
+		rate *= 2;
+
+	return ccu_mp_set_rate(hw, rate, parent_rate);
+}
+
+const struct clk_ops ccu_mp_mmc_ops = {
+	.disable	= ccu_mp_disable,
+	.enable		= ccu_mp_enable,
+	.is_enabled	= ccu_mp_is_enabled,
+
+	.get_parent	= ccu_mp_get_parent,
+	.set_parent	= ccu_mp_set_parent,
+
+	.determine_rate	= ccu_mp_mmc_determine_rate,
+	.recalc_rate	= ccu_mp_mmc_recalc_rate,
+	.set_rate	= ccu_mp_mmc_set_rate,
+};
@@ -14,6 +14,7 @@
 #ifndef _CCU_MP_H_
 #define _CCU_MP_H_
 
+#include <linux/bitops.h>
 #include <linux/clk-provider.h>
 
 #include "ccu_common.h"

@@ -74,4 +75,33 @@ static inline struct ccu_mp *hw_to_ccu_mp(struct clk_hw *hw)
 
 extern const struct clk_ops ccu_mp_ops;
 
+/*
+ * Special class of M-P clock that supports MMC timing modes
+ *
+ * Since the MMC clock registers all follow the same layout, we can
+ * simplify the macro for this particular case. In addition, as
+ * switching modes also affects the output clock rate, we need to
+ * have CLK_GET_RATE_NOCACHE for all these types of clocks.
+ */
+
+#define SUNXI_CCU_MP_MMC_WITH_MUX_GATE(_struct, _name, _parents, _reg,	\
+				       _flags)				\
+	struct ccu_mp _struct = {					\
+		.enable	= BIT(31),					\
+		.m	= _SUNXI_CCU_DIV(0, 4),				\
+		.p	= _SUNXI_CCU_DIV(16, 2),			\
+		.mux	= _SUNXI_CCU_MUX(24, 2),			\
+		.common	= {						\
+			.reg		= _reg,				\
+			.features	= CCU_FEATURE_MMC_TIMING_SWITCH, \
+			.hw.init	= CLK_HW_INIT_PARENTS(_name,	\
+							      _parents, \
+							      &ccu_mp_mmc_ops, \
+							      CLK_GET_RATE_NOCACHE | \
+							      _flags),	\
+		}							\
+	}
+
+extern const struct clk_ops ccu_mp_mmc_ops;
+
 #endif /* _CCU_MP_H_ */
@@ -8,6 +8,7 @@
  * the License, or (at your option) any later version.
  */
 
+#include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/reset-controller.h>
 

@@ -49,7 +50,18 @@ static int ccu_reset_deassert(struct reset_controller_dev *rcdev,
 	return 0;
 }
 
+static int ccu_reset_reset(struct reset_controller_dev *rcdev,
+			   unsigned long id)
+{
+	ccu_reset_assert(rcdev, id);
+	udelay(10);
+	ccu_reset_deassert(rcdev, id);
+
+	return 0;
+}
+
 const struct reset_control_ops ccu_reset_ops = {
 	.assert		= ccu_reset_assert,
 	.deassert	= ccu_reset_deassert,
+	.reset		= ccu_reset_reset,
 };
@@ -12,13 +12,6 @@ menuconfig MMC
 	  If you want MMC/SD/SDIO support, you should say Y here and
 	  also to your specific host controller driver.
 
-config MMC_DEBUG
-	bool "MMC debugging"
-	depends on MMC != n
-	help
-	  This is an option for use by developers; most people should
-	  say N here. This enables MMC core and driver debugging.
-
 if MMC
 
 source "drivers/mmc/core/Kconfig"
@@ -2,7 +2,5 @@
 # Makefile for the kernel mmc device drivers.
 #
 
-subdir-ccflags-$(CONFIG_MMC_DEBUG) := -DDEBUG
-
 obj-$(CONFIG_MMC)		+= core/
 obj-$(subst m,y,$(CONFIG_MMC))	+= host/
@ -36,6 +36,7 @@
|
|||
#include <linux/compat.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/debugfs.h>
|
||||
|
||||
#include <linux/mmc/ioctl.h>
|
||||
#include <linux/mmc/card.h>
|
||||
|
@ -126,7 +127,7 @@ module_param(perdev_minors, int, 0444);
|
|||
MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
|
||||
|
||||
static inline int mmc_blk_part_switch(struct mmc_card *card,
|
||||
struct mmc_blk_data *md);
|
||||
unsigned int part_type);
|
||||
|
||||
static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
|
||||
{
|
||||
|
@ -188,7 +189,6 @@ static ssize_t power_ro_lock_store(struct device *dev,
|
|||
{
|
||||
int ret;
|
||||
struct mmc_blk_data *md, *part_md;
|
||||
struct mmc_card *card;
|
||||
struct mmc_queue *mq;
|
||||
struct request *req;
|
||||
unsigned long set;
|
||||
|
@ -201,7 +201,6 @@ static ssize_t power_ro_lock_store(struct device *dev,
|
|||
|
||||
md = mmc_blk_get(dev_to_disk(dev));
|
||||
mq = &md->queue;
|
||||
card = md->queue.card;
|
||||
|
||||
/* Dispatch locking to the block layer */
|
||||
req = blk_get_request(mq->queue, REQ_OP_DRV_OUT, __GFP_RECLAIM);
|
||||
|
@ -489,7 +488,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
|
|||
|
||||
mrq.cmd = &cmd;
|
||||
|
||||
err = mmc_blk_part_switch(card, md);
|
||||
err = mmc_blk_part_switch(card, md->part_type);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
@ -554,35 +553,20 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
|
|||
return err;
|
||||
}
|
||||
|
||||
static int mmc_blk_ioctl_cmd(struct block_device *bdev,
|
||||
static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
|
||||
struct mmc_ioc_cmd __user *ic_ptr)
|
||||
{
|
||||
struct mmc_blk_ioc_data *idata;
|
||||
struct mmc_blk_ioc_data *idatas[1];
|
||||
struct mmc_blk_data *md;
|
||||
struct mmc_queue *mq;
|
||||
struct mmc_card *card;
|
||||
int err = 0, ioc_err = 0;
|
||||
struct request *req;
|
||||
|
||||
/*
|
||||
* The caller must have CAP_SYS_RAWIO, and must be calling this on the
|
||||
* whole block device, not on a partition. This prevents overspray
|
||||
* between sibling partitions.
|
||||
*/
|
||||
if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
|
||||
return -EPERM;
|
||||
|
||||
idata = mmc_blk_ioctl_copy_from_user(ic_ptr);
|
||||
if (IS_ERR(idata))
|
||||
return PTR_ERR(idata);
|
||||
|
||||
md = mmc_blk_get(bdev->bd_disk);
|
||||
if (!md) {
|
||||
err = -EINVAL;
|
||||
goto cmd_err;
|
||||
}
|
||||
|
||||
card = md->queue.card;
|
||||
if (IS_ERR(card)) {
|
||||
err = PTR_ERR(card);
|
||||
|
@ -598,7 +582,7 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
|
|||
__GFP_RECLAIM);
|
||||
idatas[0] = idata;
|
||||
req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_IOCTL;
|
||||
req_to_mmc_queue_req(req)->idata = idatas;
|
||||
req_to_mmc_queue_req(req)->drv_op_data = idatas;
|
||||
req_to_mmc_queue_req(req)->ioc_count = 1;
|
||||
blk_execute_rq(mq->queue, NULL, req, 0);
|
||||
ioc_err = req_to_mmc_queue_req(req)->drv_op_result;
|
||||
|
@ -606,33 +590,22 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
|
|||
blk_put_request(req);
|
||||
|
||||
cmd_done:
|
||||
mmc_blk_put(md);
|
||||
cmd_err:
|
||||
kfree(idata->buf);
|
||||
kfree(idata);
|
||||
return ioc_err ? ioc_err : err;
|
||||
}
|
||||
|
||||
static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
|
||||
static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
|
||||
struct mmc_ioc_multi_cmd __user *user)
|
||||
{
|
||||
struct mmc_blk_ioc_data **idata = NULL;
|
||||
struct mmc_ioc_cmd __user *cmds = user->cmds;
|
||||
struct mmc_card *card;
|
||||
struct mmc_blk_data *md;
|
||||
struct mmc_queue *mq;
|
||||
int i, err = 0, ioc_err = 0;
|
||||
__u64 num_of_cmds;
|
||||
struct request *req;
|
||||
|
||||
/*
|
||||
* The caller must have CAP_SYS_RAWIO, and must be calling this on the
|
||||
* whole block device, not on a partition. This prevents overspray
|
||||
* between sibling partitions.
|
||||
*/
|
||||
if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
|
||||
return -EPERM;
|
||||
|
||||
if (copy_from_user(&num_of_cmds, &user->num_of_cmds,
|
||||
sizeof(num_of_cmds)))
|
||||
return -EFAULT;
|
||||
|
@ -656,16 +629,10 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
|
|||
}
|
||||
}
|
||||
|
||||
md = mmc_blk_get(bdev->bd_disk);
|
||||
if (!md) {
|
||||
err = -EINVAL;
|
||||
goto cmd_err;
|
||||
}
|
||||
|
||||
card = md->queue.card;
|
||||
if (IS_ERR(card)) {
|
||||
err = PTR_ERR(card);
|
||||
goto cmd_done;
|
||||
goto cmd_err;
|
||||
}
|
||||
|
||||
|
||||
|
@ -677,7 +644,7 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
|
|||
idata[0]->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
|
||||
__GFP_RECLAIM);
|
||||
req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_IOCTL;
|
||||
req_to_mmc_queue_req(req)->idata = idata;
|
||||
req_to_mmc_queue_req(req)->drv_op_data = idata;
|
||||
req_to_mmc_queue_req(req)->ioc_count = num_of_cmds;
|
||||
blk_execute_rq(mq->queue, NULL, req, 0);
|
||||
ioc_err = req_to_mmc_queue_req(req)->drv_op_result;
|
||||
|
@ -688,8 +655,6 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
|
|||
|
||||
blk_put_request(req);
|
||||
|
||||
cmd_done:
|
||||
mmc_blk_put(md);
|
||||
cmd_err:
|
||||
for (i = 0; i < num_of_cmds; i++) {
|
||||
kfree(idata[i]->buf);
|
||||
|
@ -699,16 +664,47 @@ cmd_err:
|
|||
return ioc_err ? ioc_err : err;
|
||||
}
|
||||
|
||||
static int mmc_blk_check_blkdev(struct block_device *bdev)
|
||||
{
|
||||
/*
|
||||
* The caller must have CAP_SYS_RAWIO, and must be calling this on the
|
||||
* whole block device, not on a partition. This prevents overspray
|
||||
* between sibling partitions.
|
||||
*/
|
||||
if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
|
||||
return -EPERM;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mmc_blk_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct mmc_blk_data *md;
|
||||
int ret;
|
||||
|
||||
switch (cmd) {
|
||||
case MMC_IOC_CMD:
|
||||
return mmc_blk_ioctl_cmd(bdev,
|
||||
(struct mmc_ioc_cmd __user *)arg);
|
||||
ret = mmc_blk_check_blkdev(bdev);
|
||||
if (ret)
|
||||
return ret;
|
||||
md = mmc_blk_get(bdev->bd_disk);
|
||||
if (!md)
|
||||
return -EINVAL;
|
||||
ret = mmc_blk_ioctl_cmd(md,
|
||||
(struct mmc_ioc_cmd __user *)arg);
|
||||
mmc_blk_put(md);
|
||||
return ret;
|
||||
case MMC_IOC_MULTI_CMD:
|
||||
return mmc_blk_ioctl_multi_cmd(bdev,
|
||||
(struct mmc_ioc_multi_cmd __user *)arg);
|
||||
ret = mmc_blk_check_blkdev(bdev);
|
||||
if (ret)
|
||||
return ret;
|
||||
md = mmc_blk_get(bdev->bd_disk);
|
||||
if (!md)
|
||||
return -EINVAL;
|
||||
ret = mmc_blk_ioctl_multi_cmd(md,
|
||||
(struct mmc_ioc_multi_cmd __user *)arg);
|
||||
mmc_blk_put(md);
|
||||
return ret;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -765,29 +761,29 @@ static int mmc_blk_part_switch_post(struct mmc_card *card,
|
|||
}
|
||||
|
||||
static inline int mmc_blk_part_switch(struct mmc_card *card,
|
||||
struct mmc_blk_data *md)
|
||||
unsigned int part_type)
|
||||
{
|
||||
int ret = 0;
|
||||
struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
|
||||
|
||||
if (main_md->part_curr == md->part_type)
|
||||
if (main_md->part_curr == part_type)
|
||||
return 0;
|
||||
|
||||
if (mmc_card_mmc(card)) {
|
||||
u8 part_config = card->ext_csd.part_config;
|
||||
|
||||
ret = mmc_blk_part_switch_pre(card, md->part_type);
|
||||
ret = mmc_blk_part_switch_pre(card, part_type);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
|
||||
part_config |= md->part_type;
|
||||
part_config |= part_type;
|
||||
|
||||
ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
|
||||
EXT_CSD_PART_CONFIG, part_config,
|
||||
card->ext_csd.part_time);
|
||||
if (ret) {
|
||||
mmc_blk_part_switch_post(card, md->part_type);
|
||||
mmc_blk_part_switch_post(card, part_type);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -796,7 +792,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
|
|||
ret = mmc_blk_part_switch_post(card, main_md->part_curr);
|
||||
}
|
||||
|
||||
main_md->part_curr = md->part_type;
|
||||
main_md->part_curr = part_type;
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1139,7 +1135,7 @@ static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,
|
|||
int part_err;
|
||||
|
||||
main_md->part_curr = main_md->part_type;
|
||||
part_err = mmc_blk_part_switch(host->card, md);
|
||||
part_err = mmc_blk_part_switch(host->card, md->part_type);
|
||||
if (part_err) {
|
||||
/*
|
||||
* We have failed to get back into the correct
|
||||
|
@ -1178,6 +1174,10 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
|
|||
struct mmc_queue_req *mq_rq;
|
||||
struct mmc_card *card = mq->card;
|
||||
struct mmc_blk_data *md = mq->blkdata;
|
||||
struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
|
||||
struct mmc_blk_ioc_data **idata;
|
||||
u8 **ext_csd;
|
||||
u32 status;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
|
@ -1185,14 +1185,15 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
|
|||
|
||||
switch (mq_rq->drv_op) {
|
||||
case MMC_DRV_OP_IOCTL:
|
||||
idata = mq_rq->drv_op_data;
|
||||
for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
|
||||
ret = __mmc_blk_ioctl_cmd(card, md, mq_rq->idata[i]);
|
||||
ret = __mmc_blk_ioctl_cmd(card, md, idata[i]);
|
||||
if (ret)
|
||||
break;
|
||||
}
|
||||
/* Always switch back to main area after RPMB access */
|
||||
if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
|
||||
mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
|
||||
mmc_blk_part_switch(card, main_md->part_type);
|
||||
break;
|
||||
case MMC_DRV_OP_BOOT_WP:
|
||||
ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
|
||||
|
@ -1206,6 +1207,15 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
|
|||
card->ext_csd.boot_ro_lock |=
|
||||
EXT_CSD_BOOT_WP_B_PWR_WP_EN;
|
||||
break;
|
||||
case MMC_DRV_OP_GET_CARD_STATUS:
|
||||
ret = mmc_send_status(card, &status);
|
||||
if (!ret)
|
||||
ret = status;
|
||||
break;
|
||||
case MMC_DRV_OP_GET_EXT_CSD:
|
||||
ext_csd = mq_rq->drv_op_data;
|
||||
ret = mmc_get_ext_csd(card, ext_csd);
|
||||
break;
|
||||
default:
|
||||
pr_err("%s: unknown driver specific operation\n",
|
||||
md->disk->disk_name);
|
||||
|
@ -1943,7 +1953,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
|
|||
/* claim host only for the first request */
|
||||
mmc_get_card(card);
|
||||
|
||||
ret = mmc_blk_part_switch(card, md);
|
||||
ret = mmc_blk_part_switch(card, md->part_type);
|
||||
if (ret) {
|
||||
if (req) {
|
||||
blk_end_request_all(req, BLK_STS_IOERR);
|
||||
|
@ -2024,8 +2034,20 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
|
|||
int devidx, ret;
|
||||
|
||||
devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
|
||||
if (devidx < 0)
|
||||
if (devidx < 0) {
|
||||
/*
|
||||
* We get -ENOSPC because there are no more any available
|
||||
* devidx. The reason may be that, either userspace haven't yet
|
||||
* unmounted the partitions, which postpones mmc_blk_release()
|
||||
* from being called, or the device has more partitions than
|
||||
* what we support.
|
||||
*/
|
||||
if (devidx == -ENOSPC)
|
||||
dev_err(mmc_dev(card->host),
|
||||
"no more device IDs available\n");
|
||||
|
||||
return ERR_PTR(devidx);
|
||||
}
|
||||
|
||||
md = kzalloc(sizeof(struct mmc_blk_data), GFP_KERNEL);
|
||||
if (!md) {
|
||||
|
@ -2283,6 +2305,134 @@ force_ro_fail:
|
|||
return ret;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
|
||||
static int mmc_dbg_card_status_get(void *data, u64 *val)
|
||||
{
|
||||
struct mmc_card *card = data;
|
||||
struct mmc_blk_data *md = dev_get_drvdata(&card->dev);
|
||||
struct mmc_queue *mq = &md->queue;
|
||||
struct request *req;
|
||||
int ret;
|
||||
|
||||
/* Ask the block layer about the card status */
|
||||
req = blk_get_request(mq->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
|
||||
req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
|
||||
blk_execute_rq(mq->queue, NULL, req, 0);
|
||||
ret = req_to_mmc_queue_req(req)->drv_op_result;
|
||||
if (ret >= 0) {
|
||||
*val = ret;
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
DEFINE_SIMPLE_ATTRIBUTE(mmc_dbg_card_status_fops, mmc_dbg_card_status_get,
|
||||
NULL, "%08llx\n");
|
||||
|
||||
/* That is two digits * 512 + 1 for newline */
|
||||
#define EXT_CSD_STR_LEN 1025
|
||||
|
||||
static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct mmc_card *card = inode->i_private;
|
||||
struct mmc_blk_data *md = dev_get_drvdata(&card->dev);
|
||||
struct mmc_queue *mq = &md->queue;
|
||||
struct request *req;
|
||||
char *buf;
|
||||
ssize_t n = 0;
|
||||
u8 *ext_csd;
|
||||
int err, i;
|
||||
|
||||
buf = kmalloc(EXT_CSD_STR_LEN + 1, GFP_KERNEL);
|
||||
if (!buf)
|
||||
return -ENOMEM;
|
||||
|
||||
/* Ask the block layer for the EXT CSD */
|
||||
req = blk_get_request(mq->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
|
||||
req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD;
|
||||
req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
|
||||
blk_execute_rq(mq->queue, NULL, req, 0);
|
||||
err = req_to_mmc_queue_req(req)->drv_op_result;
|
||||
if (err) {
|
||||
pr_err("FAILED %d\n", err);
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
for (i = 0; i < 512; i++)
|
||||
n += sprintf(buf + n, "%02x", ext_csd[i]);
|
||||
n += sprintf(buf + n, "\n");
|
||||
|
||||
if (n != EXT_CSD_STR_LEN) {
|
||||
err = -EINVAL;
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
filp->private_data = buf;
|
||||
kfree(ext_csd);
|
||||
return 0;
|
||||
|
||||
out_free:
|
||||
kfree(buf);
|
||||
return err;
|
||||
}
|
||||
|
||||
static ssize_t mmc_ext_csd_read(struct file *filp, char __user *ubuf,
|
||||
size_t cnt, loff_t *ppos)
|
||||
{
|
||||
char *buf = filp->private_data;
|
||||
|
||||
return simple_read_from_buffer(ubuf, cnt, ppos,
|
||||
buf, EXT_CSD_STR_LEN);
|
||||
}
|
||||
|
||||
static int mmc_ext_csd_release(struct inode *inode, struct file *file)
|
||||
{
|
||||
kfree(file->private_data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct file_operations mmc_dbg_ext_csd_fops = {
|
||||
.open = mmc_ext_csd_open,
|
||||
.read = mmc_ext_csd_read,
|
||||
.release = mmc_ext_csd_release,
|
||||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
static int mmc_blk_add_debugfs(struct mmc_card *card)
|
||||
{
|
||||
struct dentry *root;
|
||||
|
||||
if (!card->debugfs_root)
|
||||
return 0;
|
||||
|
||||
root = card->debugfs_root;
|
||||
|
||||
if (mmc_card_mmc(card) || mmc_card_sd(card)) {
|
||||
if (!debugfs_create_file("status", S_IRUSR, root, card,
|
||||
&mmc_dbg_card_status_fops))
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
if (mmc_card_mmc(card)) {
|
||||
if (!debugfs_create_file("ext_csd", S_IRUSR, root, card,
|
||||
&mmc_dbg_ext_csd_fops))
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
#else
|
||||
|
||||
static int mmc_blk_add_debugfs(struct mmc_card *card)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
#endif /* CONFIG_DEBUG_FS */
|
||||
|
||||
static int mmc_blk_probe(struct mmc_card *card)
|
||||
{
|
||||
struct mmc_blk_data *md, *part_md;
|
||||
|
@ -2319,6 +2469,9 @@ static int mmc_blk_probe(struct mmc_card *card)
|
|||
goto out;
|
||||
}
|
||||
|
||||
/* Add two debugfs entries */
|
||||
mmc_blk_add_debugfs(card);
|
||||
|
||||
pm_runtime_set_autosuspend_delay(&card->dev, 3000);
|
||||
pm_runtime_use_autosuspend(&card->dev);
|
||||
|
||||
|
@ -2346,7 +2499,7 @@ static void mmc_blk_remove(struct mmc_card *card)
|
|||
mmc_blk_remove_parts(card, md);
|
||||
pm_runtime_get_sync(&card->dev);
|
||||
mmc_claim_host(card->host);
|
||||
mmc_blk_part_switch(card, md);
|
||||
mmc_blk_part_switch(card, md->part_type);
|
||||
mmc_release_host(card->host);
|
||||
if (card->type != MMC_TYPE_SD_COMBO)
|
||||
pm_runtime_disable(&card->dev);
|
||||
|
|
|
@ -260,6 +260,9 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
|
|||
|
||||
trace_mmc_request_start(host, mrq);
|
||||
|
||||
if (host->cqe_on)
|
||||
host->cqe_ops->cqe_off(host);
|
||||
|
||||
host->ops->request(host, mrq);
|
||||
}
|
||||
|
||||
|
@ -295,10 +298,8 @@ static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
|
|||
|
||||
static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
|
||||
{
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
unsigned int i, sz;
|
||||
unsigned int i, sz = 0;
|
||||
struct scatterlist *sg;
|
||||
#endif
|
||||
|
||||
if (mrq->cmd) {
|
||||
mrq->cmd->error = 0;
|
||||
|
@ -314,13 +315,12 @@ static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
|
|||
mrq->data->blocks > host->max_blk_count ||
|
||||
mrq->data->blocks * mrq->data->blksz > host->max_req_size)
|
||||
return -EINVAL;
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
sz = 0;
|
||||
|
||||
for_each_sg(mrq->data->sg, sg, mrq->data->sg_len, i)
|
||||
sz += sg->length;
|
||||
if (sz != mrq->data->blocks * mrq->data->blksz)
|
||||
return -EINVAL;
|
||||
#endif
|
||||
|
||||
mrq->data->error = 0;
|
||||
mrq->data->mrq = mrq;
|
||||
if (mrq->stop) {
|
||||
|
@ -736,8 +736,8 @@ void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card)
|
|||
if (data->flags & MMC_DATA_WRITE)
|
||||
mult <<= card->csd.r2w_factor;
|
||||
|
||||
data->timeout_ns = card->csd.tacc_ns * mult;
|
||||
data->timeout_clks = card->csd.tacc_clks * mult;
|
||||
data->timeout_ns = card->csd.taac_ns * mult;
|
||||
data->timeout_clks = card->csd.taac_clks * mult;
|
||||
|
||||
/*
|
||||
* SD cards also have an upper limit on the timeout.
|
||||
|
@ -766,7 +766,7 @@ void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card)
|
|||
/*
|
||||
* SDHC cards always use these fixed values.
|
||||
*/
|
||||
if (timeout_us > limit_us || mmc_card_blockaddr(card)) {
|
||||
if (timeout_us > limit_us) {
|
||||
data->timeout_ns = limit_us * 1000;
|
||||
data->timeout_clks = 0;
|
||||
}
|
||||
|
@ -982,6 +982,9 @@ int mmc_execute_tuning(struct mmc_card *card)
|
|||
if (!host->ops->execute_tuning)
|
||||
return 0;
|
||||
|
||||
if (host->cqe_on)
|
||||
host->cqe_ops->cqe_off(host);
|
||||
|
||||
if (mmc_card_mmc(card))
|
||||
opcode = MMC_SEND_TUNING_BLOCK_HS200;
|
||||
else
|
||||
|
@ -1021,6 +1024,9 @@ void mmc_set_bus_width(struct mmc_host *host, unsigned int width)
|
|||
*/
|
||||
void mmc_set_initial_state(struct mmc_host *host)
|
||||
{
|
||||
if (host->cqe_on)
|
||||
host->cqe_ops->cqe_off(host);
|
||||
|
||||
mmc_retune_disable(host);
|
||||
|
||||
if (mmc_host_is_spi(host))
|
||||
|
@ -1137,11 +1143,11 @@ int mmc_of_parse_voltage(struct device_node *np, u32 *mask)
|
|||
voltage_ranges = of_get_property(np, "voltage-ranges", &num_ranges);
|
||||
num_ranges = num_ranges / sizeof(*voltage_ranges) / 2;
|
||||
if (!voltage_ranges) {
|
||||
pr_debug("%s: voltage-ranges unspecified\n", np->full_name);
|
||||
pr_debug("%pOF: voltage-ranges unspecified\n", np);
|
||||
return 0;
|
||||
}
|
||||
if (!num_ranges) {
|
||||
pr_err("%s: voltage-ranges empty\n", np->full_name);
|
||||
pr_err("%pOF: voltage-ranges empty\n", np);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
|
@ -1153,8 +1159,8 @@ int mmc_of_parse_voltage(struct device_node *np, u32 *mask)
|
|||
be32_to_cpu(voltage_ranges[j]),
|
||||
be32_to_cpu(voltage_ranges[j + 1]));
|
||||
if (!ocr_mask) {
|
||||
pr_err("%s: voltage-range #%d is invalid\n",
|
||||
np->full_name, i);
|
||||
pr_err("%pOF: voltage-range #%d is invalid\n",
|
||||
np, i);
|
||||
return -EINVAL;
|
||||
}
|
||||
*mask |= ocr_mask;
|
||||
|
@ -1769,13 +1775,6 @@ void mmc_detach_bus(struct mmc_host *host)
|
|||
static void _mmc_detect_change(struct mmc_host *host, unsigned long delay,
|
||||
bool cd_irq)
|
||||
{
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
unsigned long flags;
|
||||
spin_lock_irqsave(&host->lock, flags);
|
||||
WARN_ON(host->removed);
|
||||
spin_unlock_irqrestore(&host->lock, flags);
|
||||
#endif
|
||||
|
||||
/*
|
||||
* If the device is configured as wakeup, we prevent a new sleep for
|
||||
* 5 s to give provision for user space to consume the event.
|
||||
|
@ -1869,14 +1868,14 @@ static unsigned int mmc_mmc_erase_timeout(struct mmc_card *card,
|
|||
} else {
|
||||
/* CSD Erase Group Size uses write timeout */
|
||||
unsigned int mult = (10 << card->csd.r2w_factor);
|
||||
unsigned int timeout_clks = card->csd.tacc_clks * mult;
|
||||
unsigned int timeout_clks = card->csd.taac_clks * mult;
|
||||
unsigned int timeout_us;
|
||||
|
||||
/* Avoid overflow: e.g. tacc_ns=80000000 mult=1280 */
|
||||
if (card->csd.tacc_ns < 1000000)
|
||||
timeout_us = (card->csd.tacc_ns * mult) / 1000;
|
||||
/* Avoid overflow: e.g. taac_ns=80000000 mult=1280 */
|
||||
if (card->csd.taac_ns < 1000000)
|
||||
timeout_us = (card->csd.taac_ns * mult) / 1000;
|
||||
else
|
||||
timeout_us = (card->csd.tacc_ns / 1000) * mult;
|
||||
timeout_us = (card->csd.taac_ns / 1000) * mult;
|
||||
|
||||
/*
|
||||
* ios.clock is only a target. The real clock rate might be
|
||||
|
@ -2446,10 +2445,9 @@ static int mmc_rescan_try_freq(struct mmc_host *host, unsigned freq)
|
|||
{
|
||||
host->f_init = freq;
|
||||
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
pr_info("%s: %s: trying to init card at %u Hz\n",
|
||||
pr_debug("%s: %s: trying to init card at %u Hz\n",
|
||||
mmc_hostname(host), __func__, host->f_init);
|
||||
#endif
|
||||
|
||||
mmc_power_up(host, host->ocr_avail);
|
||||
|
||||
/*
|
||||
|
@ -2646,12 +2644,6 @@ void mmc_start_host(struct mmc_host *host)
|
|||
|
||||
void mmc_stop_host(struct mmc_host *host)
|
||||
{
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
unsigned long flags;
|
||||
spin_lock_irqsave(&host->lock, flags);
|
||||
host->removed = 1;
|
||||
spin_unlock_irqrestore(&host->lock, flags);
|
||||
#endif
|
||||
if (host->slot.cd_irq >= 0) {
|
||||
if (host->slot.cd_wake_enabled)
|
||||
disable_irq_wake(host->slot.cd_irq);
|
||||
|
@ -2686,9 +2678,7 @@ int mmc_power_save_host(struct mmc_host *host)
|
|||
{
|
||||
int ret = 0;
|
||||
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
pr_info("%s: %s: powering down\n", mmc_hostname(host), __func__);
|
||||
#endif
|
||||
pr_debug("%s: %s: powering down\n", mmc_hostname(host), __func__);
|
||||
|
||||
mmc_bus_get(host);
|
||||
|
||||
|
@ -2712,9 +2702,7 @@ int mmc_power_restore_host(struct mmc_host *host)
|
|||
{
|
||||
int ret;
|
||||
|
||||
#ifdef CONFIG_MMC_DEBUG
|
||||
pr_info("%s: %s: powering up\n", mmc_hostname(host), __func__);
|
||||
#endif
|
||||
pr_debug("%s: %s: powering up\n", mmc_hostname(host), __func__);
|
||||
|
||||
mmc_bus_get(host);
|
||||
|
||||
|
|
|
@@ -107,6 +107,12 @@ static inline void mmc_unregister_pm_notifier(struct mmc_host *host) { }
 void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq);
 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
 
+struct mmc_async_req;
+
+struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
+				     struct mmc_async_req *areq,
+				     enum mmc_blk_status *ret_stat);
+
 int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
 	      unsigned int arg);
 int mmc_can_erase(struct mmc_card *card);
|
||||
|
|
|
@ -281,85 +281,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host)
|
|||
debugfs_remove_recursive(host->debugfs_root);
|
||||
}
|
||||
|
||||
static int mmc_dbg_card_status_get(void *data, u64 *val)
|
||||
{
|
||||
struct mmc_card *card = data;
|
||||
u32 status;
|
||||
int ret;
|
||||
|
||||
mmc_get_card(card);
|
||||
|
||||
ret = mmc_send_status(data, &status);
|
||||
if (!ret)
|
||||
*val = status;
|
||||
|
||||
mmc_put_card(card);
|
||||
|
||||
return ret;
|
||||
}
|
||||
DEFINE_SIMPLE_ATTRIBUTE(mmc_dbg_card_status_fops, mmc_dbg_card_status_get,
|
||||
NULL, "%08llx\n");
|
||||
|
||||
#define EXT_CSD_STR_LEN 1025
|
||||
|
||||
static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct mmc_card *card = inode->i_private;
|
||||
char *buf;
|
||||
ssize_t n = 0;
|
||||
u8 *ext_csd;
|
||||
int err, i;
|
||||
|
||||
buf = kmalloc(EXT_CSD_STR_LEN + 1, GFP_KERNEL);
|
||||
if (!buf)
|
||||
return -ENOMEM;
|
||||
|
||||
mmc_get_card(card);
|
||||
err = mmc_get_ext_csd(card, &ext_csd);
|
||||
mmc_put_card(card);
|
||||
if (err)
|
||||
goto out_free;
|
||||
|
||||
for (i = 0; i < 512; i++)
|
||||
n += sprintf(buf + n, "%02x", ext_csd[i]);
|
||||
n += sprintf(buf + n, "\n");
|
||||
|
||||
if (n != EXT_CSD_STR_LEN) {
|
||||
err = -EINVAL;
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
filp->private_data = buf;
|
||||
kfree(ext_csd);
|
||||
return 0;
|
||||
|
||||
out_free:
|
||||
kfree(buf);
|
||||
return err;
|
||||
}
|
||||
|
||||
static ssize_t mmc_ext_csd_read(struct file *filp, char __user *ubuf,
|
||||
size_t cnt, loff_t *ppos)
|
||||
{
|
||||
char *buf = filp->private_data;
|
||||
|
||||
return simple_read_from_buffer(ubuf, cnt, ppos,
|
||||
buf, EXT_CSD_STR_LEN);
|
||||
}
|
||||
|
||||
static int mmc_ext_csd_release(struct inode *inode, struct file *file)
|
||||
{
|
||||
kfree(file->private_data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct file_operations mmc_dbg_ext_csd_fops = {
|
||||
.open = mmc_ext_csd_open,
|
||||
.read = mmc_ext_csd_read,
|
||||
.release = mmc_ext_csd_release,
|
||||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
void mmc_add_card_debugfs(struct mmc_card *card)
|
||||
{
|
||||
struct mmc_host *host = card->host;
|
||||
|
@ -382,16 +303,6 @@ void mmc_add_card_debugfs(struct mmc_card *card)
|
|||
if (!debugfs_create_x32("state", S_IRUSR, root, &card->state))
|
||||
goto err;
|
||||
|
||||
if (mmc_card_mmc(card) || mmc_card_sd(card))
|
||||
if (!debugfs_create_file("status", S_IRUSR, root, card,
|
||||
&mmc_dbg_card_status_fops))
|
||||
goto err;
|
||||
|
||||
if (mmc_card_mmc(card))
|
||||
if (!debugfs_create_file("ext_csd", S_IRUSR, root, card,
|
||||
&mmc_dbg_ext_csd_fops))
|
||||
goto err;
|
||||
|
||||
return;
|
||||
|
||||
err:
|
||||
|
|
|
@@ -111,6 +111,12 @@ void mmc_retune_hold(struct mmc_host *host)
 	host->hold_retune += 1;
 }
 
+void mmc_retune_hold_now(struct mmc_host *host)
+{
+	host->retune_now = 0;
+	host->hold_retune += 1;
+}
+
 void mmc_retune_release(struct mmc_host *host)
 {
 	if (host->hold_retune)
|
||||
|
|
|
@@ -19,6 +19,7 @@ void mmc_unregister_host_class(void);
 void mmc_retune_enable(struct mmc_host *host);
 void mmc_retune_disable(struct mmc_host *host);
 void mmc_retune_hold(struct mmc_host *host);
+void mmc_retune_hold_now(struct mmc_host *host);
 void mmc_retune_release(struct mmc_host *host);
 int mmc_retune(struct mmc_host *host);
 void mmc_retune_pause(struct mmc_host *host);
|
||||
|
|
|
@@ -41,11 +41,11 @@ static const unsigned char tran_mant[] = {
 	35, 40, 45, 50, 55, 60, 70, 80,
 };
 
-static const unsigned int tacc_exp[] = {
+static const unsigned int taac_exp[] = {
 	1, 10, 100, 1000, 10000, 100000, 1000000, 10000000,
 };
 
-static const unsigned int tacc_mant[] = {
+static const unsigned int taac_mant[] = {
 	0, 10, 12, 13, 15, 20, 25, 30,
 	35, 40, 45, 50, 55, 60, 70, 80,
 };

@@ -153,8 +153,8 @@ static int mmc_decode_csd(struct mmc_card *card)
 	csd->mmca_vsn = UNSTUFF_BITS(resp, 122, 4);
 	m = UNSTUFF_BITS(resp, 115, 4);
 	e = UNSTUFF_BITS(resp, 112, 3);
-	csd->tacc_ns = (tacc_exp[e] * tacc_mant[m] + 9) / 10;
-	csd->tacc_clks = UNSTUFF_BITS(resp, 104, 8) * 100;
+	csd->taac_ns = (taac_exp[e] * taac_mant[m] + 9) / 10;
+	csd->taac_clks = UNSTUFF_BITS(resp, 104, 8) * 100;
 
 	m = UNSTUFF_BITS(resp, 99, 4);
 	e = UNSTUFF_BITS(resp, 96, 3);

@@ -1790,29 +1790,6 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	 */
 	card->reenable_cmdq = card->ext_csd.cmdq_en;
 
-	/*
-	 * The mandatory minimum values are defined for packed command.
-	 * read: 5, write: 3
-	 */
-	if (card->ext_csd.max_packed_writes >= 3 &&
-	    card->ext_csd.max_packed_reads >= 5 &&
-	    host->caps2 & MMC_CAP2_PACKED_CMD) {
-		err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
-				 EXT_CSD_EXP_EVENTS_CTRL,
-				 EXT_CSD_PACKED_EVENT_EN,
-				 card->ext_csd.generic_cmd6_time);
-		if (err && err != -EBADMSG)
-			goto free_card;
-		if (err) {
-			pr_warn("%s: Enabling packed event failed\n",
-				mmc_hostname(card->host));
-			card->ext_csd.packed_event_en = 0;
-			err = 0;
-		} else {
-			card->ext_csd.packed_event_en = 1;
-		}
-	}
-
 	if (!oldcard)
 		host->card = card;
|
||||
|
||||
|
|
|
@@ -83,6 +83,7 @@ int mmc_send_status(struct mmc_card *card, u32 *status)
 {
 	return __mmc_send_status(card, status, MMC_CMD_RETRIES);
 }
+EXPORT_SYMBOL_GPL(mmc_send_status);
 
 static int _mmc_select_card(struct mmc_host *host, struct mmc_card *card)
 {

@@ -946,7 +947,7 @@ static int mmc_read_bkops_status(struct mmc_card *card)
 /**
  *	mmc_start_bkops - start BKOPS for supported cards
  *	@card: MMC card to start BKOPS
- *	@form_exception: A flag to indicate if this function was
+ *	@from_exception: A flag to indicate if this function was
  *			 called due to an exception raised by the card
  *
  *	Start background operations whenever requested.
|
||||
|
|
|
@ -800,38 +800,44 @@ static int mmc_test_check_broken_result(struct mmc_test_card *test,
|
|||
return ret;
|
||||
}
|
||||
|
||||
struct mmc_test_req {
|
||||
struct mmc_request mrq;
|
||||
struct mmc_command sbc;
|
||||
struct mmc_command cmd;
|
||||
struct mmc_command stop;
|
||||
struct mmc_command status;
|
||||
struct mmc_data data;
|
||||
};
|
||||
|
||||
/*
|
||||
* Tests nonblock transfer with certain parameters
|
||||
*/
|
||||
static void mmc_test_nonblock_reset(struct mmc_request *mrq,
|
||||
struct mmc_command *cmd,
|
||||
struct mmc_command *stop,
|
||||
struct mmc_data *data)
|
||||
static void mmc_test_req_reset(struct mmc_test_req *rq)
|
||||
{
|
||||
memset(mrq, 0, sizeof(struct mmc_request));
|
||||
memset(cmd, 0, sizeof(struct mmc_command));
|
||||
memset(data, 0, sizeof(struct mmc_data));
|
||||
memset(stop, 0, sizeof(struct mmc_command));
|
||||
memset(rq, 0, sizeof(struct mmc_test_req));
|
||||
|
||||
mrq->cmd = cmd;
|
||||
mrq->data = data;
|
||||
mrq->stop = stop;
|
||||
rq->mrq.cmd = &rq->cmd;
|
||||
rq->mrq.data = &rq->data;
|
||||
rq->mrq.stop = &rq->stop;
|
||||
}
|
||||
|
||||
static struct mmc_test_req *mmc_test_req_alloc(void)
|
||||
{
|
||||
struct mmc_test_req *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
|
||||
|
||||
if (rq)
|
||||
mmc_test_req_reset(rq);
|
||||
|
||||
return rq;
|
||||
}
|
||||
|
||||
|
||||
static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
|
||||
struct scatterlist *sg, unsigned sg_len,
|
||||
unsigned dev_addr, unsigned blocks,
|
||||
unsigned blksz, int write, int count)
|
||||
{
|
||||
struct mmc_request mrq1;
|
||||
struct mmc_command cmd1;
|
||||
struct mmc_command stop1;
|
||||
struct mmc_data data1;
|
||||
|
||||
struct mmc_request mrq2;
|
||||
struct mmc_command cmd2;
|
||||
struct mmc_command stop2;
|
||||
struct mmc_data data2;
|
||||
|
||||
struct mmc_test_req *rq1, *rq2;
|
||||
struct mmc_test_async_req test_areq[2];
|
||||
struct mmc_async_req *done_areq;
|
||||
struct mmc_async_req *cur_areq = &test_areq[0].areq;
|
||||
|
@ -843,12 +849,16 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
|
|||
test_areq[0].test = test;
|
||||
test_areq[1].test = test;
|
||||
|
||||
mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1);
|
||||
mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2);
|
||||
rq1 = mmc_test_req_alloc();
|
||||
rq2 = mmc_test_req_alloc();
|
||||
if (!rq1 || !rq2) {
|
||||
ret = RESULT_FAIL;
|
||||
goto err;
|
||||
}
|
||||
|
||||
cur_areq->mrq = &mrq1;
|
||||
cur_areq->mrq = &rq1->mrq;
|
||||
cur_areq->err_check = mmc_test_check_result_async;
|
||||
other_areq->mrq = &mrq2;
|
||||
other_areq->mrq = &rq2->mrq;
|
||||
other_areq->err_check = mmc_test_check_result_async;
|
||||
|
||||
for (i = 0; i < count; i++) {
|
||||
|
@ -861,14 +871,10 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
|
|||
goto err;
|
||||
}
|
||||
|
||||
if (done_areq) {
|
||||
if (done_areq->mrq == &mrq2)
|
||||
mmc_test_nonblock_reset(&mrq2, &cmd2,
|
||||
&stop2, &data2);
|
||||
else
|
||||
mmc_test_nonblock_reset(&mrq1, &cmd1,
|
||||
&stop1, &data1);
|
||||
}
|
||||
if (done_areq)
|
||||
mmc_test_req_reset(container_of(done_areq->mrq,
|
||||
struct mmc_test_req, mrq));
|
||||
|
||||
swap(cur_areq, other_areq);
|
||||
dev_addr += blocks;
|
||||
}
|
||||
|
@ -877,8 +883,9 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
|
|||
if (status != MMC_BLK_SUCCESS)
|
||||
ret = RESULT_FAIL;
|
||||
|
||||
return ret;
|
||||
err:
|
||||
kfree(rq1);
|
||||
kfree(rq2);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -2329,28 +2336,6 @@ static int mmc_test_reset(struct mmc_test_card *test)
|
|||
return RESULT_FAIL;
|
||||
}
|
||||
|
||||
struct mmc_test_req {
|
||||
struct mmc_request mrq;
|
||||
struct mmc_command sbc;
|
||||
struct mmc_command cmd;
|
||||
struct mmc_command stop;
|
||||
struct mmc_command status;
|
||||
struct mmc_data data;
|
||||
};
|
||||
|
||||
static struct mmc_test_req *mmc_test_req_alloc(void)
|
||||
{
|
||||
struct mmc_test_req *rq = kzalloc(sizeof(*rq), GFP_KERNEL);
|
||||
|
||||
if (rq) {
|
||||
rq->mrq.cmd = &rq->cmd;
|
||||
rq->mrq.data = &rq->data;
|
||||
rq->mrq.stop = &rq->stop;
|
||||
}
|
||||
|
||||
return rq;
|
||||
}
|
||||
|
||||
static int mmc_test_send_status(struct mmc_test_card *test,
|
||||
struct mmc_command *cmd)
|
||||
{
|
||||
|
|
|
@@ -36,10 +36,14 @@ struct mmc_blk_request {
  * enum mmc_drv_op - enumerates the operations in the mmc_queue_req
  * @MMC_DRV_OP_IOCTL: ioctl operation
  * @MMC_DRV_OP_BOOT_WP: write protect boot partitions
+ * @MMC_DRV_OP_GET_CARD_STATUS: get card status
+ * @MMC_DRV_OP_GET_EXT_CSD: get the EXT CSD from an eMMC card
  */
 enum mmc_drv_op {
 	MMC_DRV_OP_IOCTL,
 	MMC_DRV_OP_BOOT_WP,
+	MMC_DRV_OP_GET_CARD_STATUS,
+	MMC_DRV_OP_GET_EXT_CSD,
 };
 
 struct mmc_queue_req {

@@ -51,7 +55,7 @@ struct mmc_queue_req {
 	struct mmc_async_req	areq;
 	enum mmc_drv_op		drv_op;
 	int			drv_op_result;
-	struct mmc_blk_ioc_data	**idata;
+	void			*drv_op_data;
 	unsigned int		ioc_count;
 };
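The new MMC_DRV_OP_* values above are consumed by the block driver's driver-operation path. As a condensed illustration (mirroring the mmc_dbg_card_status_get() hunk earlier in this diff, and assuming the mmc block driver's internal headers such as queue.h and linux/blkdev.h), a core-internal caller dispatches such an op through the block layer roughly as follows; the helper name is illustrative only:

/* Illustrative sketch: mirrors the pattern used in block.c above. */
static int example_get_card_status(struct mmc_queue *mq, u32 *status)
{
	struct request *req;
	int ret;

	/* Dispatch the driver op to the block layer */
	req = blk_get_request(mq->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);

	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
	blk_execute_rq(mq->queue, NULL, req, 0);

	/* The issuing side stores the outcome in drv_op_result */
	ret = req_to_mmc_queue_req(req)->drv_op_result;
	if (ret >= 0) {
		*status = ret;
		ret = 0;
	}
	blk_put_request(req);
	return ret;
}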
|
||||
|
||||
|
|
|
@@ -39,11 +39,11 @@ static const unsigned char tran_mant[] = {
 	35, 40, 45, 50, 55, 60, 70, 80,
 };
 
-static const unsigned int tacc_exp[] = {
+static const unsigned int taac_exp[] = {
 	1, 10, 100, 1000, 10000, 100000, 1000000, 10000000,
 };
 
-static const unsigned int tacc_mant[] = {
+static const unsigned int taac_mant[] = {
 	0, 10, 12, 13, 15, 20, 25, 30,
 	35, 40, 45, 50, 55, 60, 70, 80,
 };

@@ -111,8 +111,8 @@ static int mmc_decode_csd(struct mmc_card *card)
 	case 0:
 		m = UNSTUFF_BITS(resp, 115, 4);
 		e = UNSTUFF_BITS(resp, 112, 3);
-		csd->tacc_ns = (tacc_exp[e] * tacc_mant[m] + 9) / 10;
-		csd->tacc_clks = UNSTUFF_BITS(resp, 104, 8) * 100;
+		csd->taac_ns = (taac_exp[e] * taac_mant[m] + 9) / 10;
+		csd->taac_clks = UNSTUFF_BITS(resp, 104, 8) * 100;
 
 		m = UNSTUFF_BITS(resp, 99, 4);
 		e = UNSTUFF_BITS(resp, 96, 3);

@@ -148,8 +148,8 @@ static int mmc_decode_csd(struct mmc_card *card)
 		 */
 		mmc_card_set_blockaddr(card);
 
-		csd->tacc_ns = 0; /* Unused */
-		csd->tacc_clks = 0; /* Unused */
+		csd->taac_ns = 0; /* Unused */
+		csd->taac_clks = 0; /* Unused */
 
 		m = UNSTUFF_BITS(resp, 99, 4);
 		e = UNSTUFF_BITS(resp, 96, 3);
|
||||
|
|
|
@@ -4,6 +4,15 @@
 
 comment "MMC/SD/SDIO Host Controller Drivers"
 
+config MMC_DEBUG
+	bool "MMC host drivers debugging"
+	depends on MMC != n
+	help
+	  This is an option for use by developers; most people should
+	  say N here. This enables MMC host driver debugging. Newly
+	  added host drivers should not invent their own private macros
+	  for debugging.
+
 config MMC_ARMMMCI
 	tristate "ARM AMBA Multimedia Card Interface support"
 	depends on ARM_AMBA

@@ -15,7 +24,7 @@ config MMC_ARMMMCI
 	  If unsure, say N.
 
 config MMC_QCOM_DML
-	tristate "Qualcomm Data Mover for SD Card Controller"
+	bool "Qualcomm Data Mover for SD Card Controller"
 	depends on MMC_ARMMMCI && QCOM_BAM_DMA
 	default y
 	help

@@ -354,7 +363,7 @@ config MMC_MOXART
 
 config MMC_SDHCI_ST
 	tristate "SDHCI support on STMicroelectronics SoC"
-	depends on ARCH_STI
+	depends on ARCH_STI || FSP2
 	depends on MMC_SDHCI_PLTFM
 	select MMC_SDHCI_IO_ACCESSORS
 	help

@@ -494,7 +503,7 @@ config MMC_GOLDFISH
 
 config MMC_SPI
 	tristate "MMC/SD/SDIO over SPI"
-	depends on SPI_MASTER && !HIGHMEM && HAS_DMA
+	depends on SPI_MASTER && HAS_DMA
 	select CRC7
 	select CRC_ITU_T
 	help

@@ -575,10 +584,29 @@ config MMC_SDHI
-	depends on SUPERH || ARM || ARM64
+	depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
 	select MMC_TMIO_CORE
+	select MMC_SDHI_SYS_DMAC if (SUPERH || ARM)
+	select MMC_SDHI_INTERNAL_DMAC if ARM64
 	help
 	  This provides support for the SDHI SD/SDIO controller found in
 	  Renesas SuperH, ARM and ARM64 based SoCs
 
+config MMC_SDHI_SYS_DMAC
+	tristate "DMA for SDHI SD/SDIO controllers using SYS-DMAC"
+	depends on MMC_SDHI
+	help
+	  This provides DMA support for SDHI SD/SDIO controllers
+	  using SYS-DMAC via DMA Engine. This supports the controllers
+	  found in SuperH and Renesas ARM based SoCs.
+
+config MMC_SDHI_INTERNAL_DMAC
+	tristate "DMA for SDHI SD/SDIO controllers using on-chip bus mastering"
+	depends on ARM64 || COMPILE_TEST
+	depends on MMC_SDHI
+	help
+	  This provides DMA support for SDHI SD/SDIO controllers
+	  using on-chip bus mastering. This supports the controllers
+	  found in arm64 based SoCs.
+
 config MMC_CB710
 	tristate "ENE CB710 MMC/SD Interface support"
 	depends on PCI
|
||||
|
|
|
@@ -2,8 +2,9 @@
 # Makefile for MMC/SD host controller drivers
 #
 
-obj-$(CONFIG_MMC_ARMMMCI)	+= mmci.o
-obj-$(CONFIG_MMC_QCOM_DML)	+= mmci_qcom_dml.o
+obj-$(CONFIG_MMC_ARMMMCI)	+= armmmci.o
+armmmci-y			:= mmci.o
+armmmci-$(CONFIG_MMC_QCOM_DML)	+= mmci_qcom_dml.o
 obj-$(CONFIG_MMC_PXA)		+= pxamci.o
 obj-$(CONFIG_MMC_MXC)		+= mxcmmc.o
 obj-$(CONFIG_MMC_MXS)		+= mxs-mmc.o

@@ -36,7 +37,13 @@ obj-$(CONFIG_MMC_S3C)		+= s3cmci.o
 obj-$(CONFIG_MMC_SDRICOH_CS)	+= sdricoh_cs.o
 obj-$(CONFIG_MMC_TMIO)		+= tmio_mmc.o
 obj-$(CONFIG_MMC_TMIO_CORE)	+= tmio_mmc_core.o
-obj-$(CONFIG_MMC_SDHI)		+= renesas_sdhi_core.o renesas_sdhi_sys_dmac.o
+obj-$(CONFIG_MMC_SDHI)		+= renesas_sdhi_core.o
+ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_SYS_DMAC)),y)
+obj-$(CONFIG_MMC_SDHI)		+= renesas_sdhi_sys_dmac.o
+endif
+ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_INTERNAL_DMAC)),y)
+obj-$(CONFIG_MMC_SDHI)		+= renesas_sdhi_internal_dmac.o
+endif
 obj-$(CONFIG_MMC_CB710)		+= cb710-mmc.o
 obj-$(CONFIG_MMC_VIA_SDMMC)	+= via-sdmmc.o
 obj-$(CONFIG_SDH_BFIN)		+= bfin_sdh.o
|
||||
|
|
|
@@ -290,7 +290,6 @@ static irqreturn_t goldfish_mmc_irq(int irq, void *dev_id)
        u16 status;
        int end_command = 0;
        int end_transfer = 0;
        int transfer_error = 0;
        int state_changed = 0;
        int cmd_timeout = 0;

@@ -322,9 +321,7 @@ static irqreturn_t goldfish_mmc_irq(int irq, void *dev_id)
        if (end_command)
                goldfish_mmc_cmd_done(host, host->cmd);

        if (transfer_error)
                goldfish_mmc_xfer_done(host, host->data);
        else if (end_transfer) {
        if (end_transfer) {
                host->dma_done = 1;
                goldfish_mmc_end_of_data(host, host->data);
        } else if (host->data != NULL) {

@@ -347,8 +344,7 @@ static irqreturn_t goldfish_mmc_irq(int irq, void *dev_id)
                mmc_detect_change(host->mmc, 0);
        }

        if (!end_command && !end_transfer &&
            !transfer_error && !state_changed && !cmd_timeout) {
        if (!end_command && !end_transfer && !state_changed && !cmd_timeout) {
                status = GOLDFISH_MMC_READ(host, MMC_INT_STATUS);
                dev_info(mmc_dev(host->mmc), "spurious irq 0x%04x\n", status);
                if (status != 0) {

@@ -665,14 +665,15 @@ atmci_of_init(struct platform_device *pdev)

        for_each_child_of_node(np, cnp) {
                if (of_property_read_u32(cnp, "reg", &slot_id)) {
                        dev_warn(&pdev->dev, "reg property is missing for %s\n",
                                 cnp->full_name);
                        dev_warn(&pdev->dev, "reg property is missing for %pOF\n",
                                 cnp);
                        continue;
                }

                if (slot_id >= ATMCI_MAX_NR_SLOTS) {
                        dev_warn(&pdev->dev, "can't have more than %d slots\n",
                                 ATMCI_MAX_NR_SLOTS);
                        of_node_put(cnp);
                        break;
                }

@@ -1083,7 +1084,6 @@ static u32
atmci_prepare_data_pdc(struct atmel_mci *host, struct mmc_data *data)
{
        u32 iflags, tmp;
        unsigned int sg_len;
        int i;

        data->error = -EINPROGRESS;

@@ -1108,8 +1108,8 @@ atmci_prepare_data_pdc(struct atmel_mci *host, struct mmc_data *data)

        /* Configure PDC */
        host->data_size = data->blocks * data->blksz;
        sg_len = dma_map_sg(&host->pdev->dev, data->sg, data->sg_len,
                            mmc_get_dma_dir(data));
        dma_map_sg(&host->pdev->dev, data->sg, data->sg_len,
                   mmc_get_dma_dir(data));

        if ((!host->caps.has_rwproof)
            && (host->data->flags & MMC_DATA_WRITE)) {

@@ -1252,7 +1252,7 @@ static void bcm2835_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
        mutex_unlock(&host->mutex);
}

static struct mmc_host_ops bcm2835_ops = {
static const struct mmc_host_ops bcm2835_ops = {
        .request = bcm2835_request,
        .set_ios = bcm2835_set_ios,
        .hw_reset = bcm2835_reset,

@@ -342,18 +342,7 @@ static struct platform_driver octeon_mmc_driver = {
        },
};

static int __init octeon_mmc_init(void)
{
        return platform_driver_register(&octeon_mmc_driver);
}

static void __exit octeon_mmc_cleanup(void)
{
        platform_driver_unregister(&octeon_mmc_driver);
}

module_init(octeon_mmc_init);
module_exit(octeon_mmc_cleanup);
module_platform_driver(octeon_mmc_driver);

MODULE_AUTHOR("Cavium Inc. <support@cavium.com>");
MODULE_DESCRIPTION("Low-level driver for Cavium OCTEON MMC/SSD card");

@@ -957,14 +957,12 @@ static int cvm_mmc_of_parse(struct device *dev, struct cvm_mmc_slot *slot)

        ret = of_property_read_u32(node, "reg", &id);
        if (ret) {
                dev_err(dev, "Missing or invalid reg property on %s\n",
                        of_node_full_name(node));
                dev_err(dev, "Missing or invalid reg property on %pOF\n", node);
                return ret;
        }

        if (id >= CAVIUM_MAX_MMC || slot->host->slot[id]) {
                dev_err(dev, "Invalid reg property on %s\n",
                        of_node_full_name(node));
                dev_err(dev, "Invalid reg property on %pOF\n", node);
                return -EINVAL;
        }

@@ -1062,7 +1062,7 @@ static void mmc_davinci_enable_sdio_irq(struct mmc_host *mmc, int enable)
        }
}

static struct mmc_host_ops mmc_davinci_ops = {
static const struct mmc_host_ops mmc_davinci_ops = {
        .request = mmc_davinci_request,
        .set_ios = mmc_davinci_set_ios,
        .get_cd = mmc_davinci_get_cd,

@ -8,6 +8,8 @@
|
|||
* (at your option) any later version.
|
||||
*/
|
||||
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/mmc/host.h>
|
||||
|
@ -28,7 +30,35 @@
|
|||
#define AO_SCTRL_SEL18 BIT(10)
|
||||
#define AO_SCTRL_CTRL3 0x40C
|
||||
|
||||
#define DWMMC_SDIO_ID 2
|
||||
|
||||
#define SOC_SCTRL_SCPERCTRL5 (0x314)
|
||||
#define SDCARD_IO_SEL18 BIT(2)
|
||||
|
||||
#define SDCARD_RD_THRESHOLD (512)
|
||||
|
||||
#define GENCLK_DIV (7)
|
||||
|
||||
#define GPIO_CLK_ENABLE BIT(16)
|
||||
#define GPIO_CLK_DIV_MASK GENMASK(11, 8)
|
||||
#define GPIO_USE_SAMPLE_DLY_MASK GENMASK(13, 13)
|
||||
#define UHS_REG_EXT_SAMPLE_PHASE_MASK GENMASK(20, 16)
|
||||
#define UHS_REG_EXT_SAMPLE_DRVPHASE_MASK GENMASK(25, 21)
|
||||
#define UHS_REG_EXT_SAMPLE_DLY_MASK GENMASK(30, 26)
|
||||
|
||||
#define TIMING_MODE 3
|
||||
#define TIMING_CFG_NUM 10
|
||||
|
||||
#define NUM_PHASES (40)
|
||||
|
||||
#define ENABLE_SHIFT_MIN_SMPL (4)
|
||||
#define ENABLE_SHIFT_MAX_SMPL (12)
|
||||
#define USE_DLY_MIN_SMPL (11)
|
||||
#define USE_DLY_MAX_SMPL (14)
|
||||
|
||||
struct k3_priv {
|
||||
int ctrl_id;
|
||||
u32 cur_speed;
|
||||
struct regmap *reg;
|
||||
};
|
||||
|
||||
|
@ -38,6 +68,41 @@ static unsigned long dw_mci_hi6220_caps[] = {
|
|||
0
|
||||
};
|
||||
|
||||
struct hs_timing {
|
||||
u32 drv_phase;
|
||||
u32 smpl_dly;
|
||||
u32 smpl_phase_max;
|
||||
u32 smpl_phase_min;
|
||||
};
|
||||
|
||||
struct hs_timing hs_timing_cfg[TIMING_MODE][TIMING_CFG_NUM] = {
|
||||
{ /* reserved */ },
|
||||
{ /* SD */
|
||||
{7, 0, 15, 15,}, /* 0: LEGACY 400k */
|
||||
{6, 0, 4, 4,}, /* 1: MMC_HS */
|
||||
{6, 0, 3, 3,}, /* 2: SD_HS */
|
||||
{6, 0, 15, 15,}, /* 3: SDR12 */
|
||||
{6, 0, 2, 2,}, /* 4: SDR25 */
|
||||
{4, 0, 11, 0,}, /* 5: SDR50 */
|
||||
{6, 4, 15, 0,}, /* 6: SDR104 */
|
||||
{0}, /* 7: DDR50 */
|
||||
{0}, /* 8: DDR52 */
|
||||
{0}, /* 9: HS200 */
|
||||
},
|
||||
{ /* SDIO */
|
||||
{7, 0, 15, 15,}, /* 0: LEGACY 400k */
|
||||
{0}, /* 1: MMC_HS */
|
||||
{6, 0, 15, 15,}, /* 2: SD_HS */
|
||||
{6, 0, 15, 15,}, /* 3: SDR12 */
|
||||
{6, 0, 0, 0,}, /* 4: SDR25 */
|
||||
{4, 0, 12, 0,}, /* 5: SDR50 */
|
||||
{5, 4, 15, 0,}, /* 6: SDR104 */
|
||||
{0}, /* 7: DDR50 */
|
||||
{0}, /* 8: DDR52 */
|
||||
{0}, /* 9: HS200 */
|
||||
}
|
||||
};
|
||||
|
||||
static void dw_mci_k3_set_ios(struct dw_mci *host, struct mmc_ios *ios)
|
||||
{
|
||||
int ret;
|
||||
|
@ -66,6 +131,10 @@ static int dw_mci_hi6220_parse_dt(struct dw_mci *host)
|
|||
if (IS_ERR(priv->reg))
|
||||
priv->reg = NULL;
|
||||
|
||||
priv->ctrl_id = of_alias_get_id(host->dev->of_node, "mshc");
|
||||
if (priv->ctrl_id < 0)
|
||||
priv->ctrl_id = 0;
|
||||
|
||||
host->priv = priv;
|
||||
return 0;
|
||||
}
|
||||
|
@ -144,7 +213,236 @@ static const struct dw_mci_drv_data hi6220_data = {
|
|||
.execute_tuning = dw_mci_hi6220_execute_tuning,
|
||||
};
|
||||
|
||||
static void dw_mci_hs_set_timing(struct dw_mci *host, int timing,
|
||||
int smpl_phase)
|
||||
{
|
||||
u32 drv_phase;
|
||||
u32 smpl_dly;
|
||||
u32 use_smpl_dly = 0;
|
||||
u32 enable_shift = 0;
|
||||
u32 reg_value;
|
||||
int ctrl_id;
|
||||
struct k3_priv *priv;
|
||||
|
||||
priv = host->priv;
|
||||
ctrl_id = priv->ctrl_id;
|
||||
|
||||
drv_phase = hs_timing_cfg[ctrl_id][timing].drv_phase;
|
||||
smpl_dly = hs_timing_cfg[ctrl_id][timing].smpl_dly;
|
||||
if (smpl_phase == -1)
|
||||
smpl_phase = (hs_timing_cfg[ctrl_id][timing].smpl_phase_max +
|
||||
hs_timing_cfg[ctrl_id][timing].smpl_phase_min) / 2;
|
||||
|
||||
switch (timing) {
|
||||
case MMC_TIMING_UHS_SDR104:
|
||||
if (smpl_phase >= USE_DLY_MIN_SMPL &&
|
||||
smpl_phase <= USE_DLY_MAX_SMPL)
|
||||
use_smpl_dly = 1;
|
||||
/* fallthrough */
|
||||
case MMC_TIMING_UHS_SDR50:
|
||||
if (smpl_phase >= ENABLE_SHIFT_MIN_SMPL &&
|
||||
smpl_phase <= ENABLE_SHIFT_MAX_SMPL)
|
||||
enable_shift = 1;
|
||||
break;
|
||||
}
|
||||
|
||||
mci_writel(host, GPIO, 0x0);
|
||||
usleep_range(5, 10);
|
||||
|
||||
reg_value = FIELD_PREP(UHS_REG_EXT_SAMPLE_PHASE_MASK, smpl_phase) |
|
||||
FIELD_PREP(UHS_REG_EXT_SAMPLE_DLY_MASK, smpl_dly) |
|
||||
FIELD_PREP(UHS_REG_EXT_SAMPLE_DRVPHASE_MASK, drv_phase);
|
||||
mci_writel(host, UHS_REG_EXT, reg_value);
|
||||
|
||||
mci_writel(host, ENABLE_SHIFT, enable_shift);
|
||||
|
||||
reg_value = FIELD_PREP(GPIO_CLK_DIV_MASK, GENCLK_DIV) |
|
||||
FIELD_PREP(GPIO_USE_SAMPLE_DLY_MASK, use_smpl_dly);
|
||||
mci_writel(host, GPIO, (unsigned int)reg_value | GPIO_CLK_ENABLE);
|
||||
|
||||
/* Wait 1 ms for the timing setting to take effect. */
|
||||
usleep_range(1000, 2000);
|
||||
}
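
As an aside, the FIELD_PREP() calls above pack the drive phase, sample delay and sample phase into UHS_REG_EXT according to the UHS_REG_EXT_SAMPLE_* GENMASK() defines near the top of the file. A minimal user-space sketch of the same packing, with FIELD_PREP() open-coded because linux/bitfield.h is kernel-only and the local SAMPLE_* names standing in for those defines; the values correspond to the SD SDR104 entry {6, 4, 15, 0} of hs_timing_cfg[], whose default sample phase is (15 + 0) / 2 = 7:

#include <stdint.h>
#include <stdio.h>

#define SAMPLE_PHASE_MASK 0x001f0000u /* GENMASK(20, 16) */
#define SAMPLE_DRV_MASK   0x03e00000u /* GENMASK(25, 21) */
#define SAMPLE_DLY_MASK   0x7c000000u /* GENMASK(30, 26) */

/* open-coded FIELD_PREP(): shift the value into the mask's position */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
        return (val << __builtin_ctz(mask)) & mask;
}

int main(void)
{
        uint32_t reg = field_prep(SAMPLE_PHASE_MASK, 7) |
                       field_prep(SAMPLE_DLY_MASK, 4) |
                       field_prep(SAMPLE_DRV_MASK, 6);

        printf("UHS_REG_EXT = 0x%08x\n", (unsigned int)reg); /* 0x10c70000 */
        return 0;
}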
|
||||
|
||||
static int dw_mci_hi3660_init(struct dw_mci *host)
|
||||
{
|
||||
mci_writel(host, CDTHRCTL, SDMMC_SET_THLD(SDCARD_RD_THRESHOLD,
|
||||
SDMMC_CARD_RD_THR_EN));
|
||||
|
||||
dw_mci_hs_set_timing(host, MMC_TIMING_LEGACY, -1);
|
||||
host->bus_hz /= (GENCLK_DIV + 1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dw_mci_set_sel18(struct dw_mci *host, bool set)
|
||||
{
|
||||
int ret;
|
||||
unsigned int val;
|
||||
struct k3_priv *priv;
|
||||
|
||||
priv = host->priv;
|
||||
|
||||
val = set ? SDCARD_IO_SEL18 : 0;
|
||||
ret = regmap_update_bits(priv->reg, SOC_SCTRL_SCPERCTRL5,
|
||||
SDCARD_IO_SEL18, val);
|
||||
if (ret) {
|
||||
dev_err(host->dev, "sel18 %u error\n", val);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void dw_mci_hi3660_set_ios(struct dw_mci *host, struct mmc_ios *ios)
|
||||
{
|
||||
int ret;
|
||||
unsigned long wanted;
|
||||
unsigned long actual;
|
||||
struct k3_priv *priv = host->priv;
|
||||
|
||||
if (!ios->clock || ios->clock == priv->cur_speed)
|
||||
return;
|
||||
|
||||
wanted = ios->clock * (GENCLK_DIV + 1);
|
||||
ret = clk_set_rate(host->ciu_clk, wanted);
|
||||
if (ret) {
|
||||
dev_err(host->dev, "failed to set rate %luHz\n", wanted);
|
||||
return;
|
||||
}
|
||||
actual = clk_get_rate(host->ciu_clk);
|
||||
|
||||
dw_mci_hs_set_timing(host, ios->timing, -1);
|
||||
host->bus_hz = actual / (GENCLK_DIV + 1);
|
||||
host->current_speed = 0;
|
||||
priv->cur_speed = host->bus_hz;
|
||||
}
|
||||
|
||||
static int dw_mci_get_best_clksmpl(unsigned int sample_flag)
|
||||
{
|
||||
int i;
|
||||
int interval;
|
||||
unsigned int v;
|
||||
unsigned int len;
|
||||
unsigned int range_start = 0;
|
||||
unsigned int range_length = 0;
|
||||
unsigned int middle_range = 0;
|
||||
|
||||
if (!sample_flag)
|
||||
return -EIO;
|
||||
|
||||
if (~sample_flag == 0)
|
||||
return 0;
|
||||
|
||||
i = ffs(sample_flag) - 1;
|
||||
|
||||
/*
|
||||
* A clock cycle is divided into 32 phases, each represented by one bit;
* find the longest run of working phases and use its midpoint as the
* optimal sample phase.
|
||||
*/
|
||||
while (i < 32) {
|
||||
v = ror32(sample_flag, i);
|
||||
len = ffs(~v) - 1;
|
||||
|
||||
if (len > range_length) {
|
||||
range_length = len;
|
||||
range_start = i;
|
||||
}
|
||||
|
||||
interval = ffs(v >> len) - 1;
|
||||
if (interval < 0)
|
||||
break;
|
||||
|
||||
i += len + interval;
|
||||
}
|
||||
|
||||
middle_range = range_start + range_length / 2;
|
||||
if (middle_range >= 32)
|
||||
middle_range %= 32;
|
||||
|
||||
return middle_range;
|
||||
}
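
For illustration only, here is a stand-alone sketch of the same "longest run of good phases, pick its middle" selection that dw_mci_get_best_clksmpl() performs, written with plain loops instead of the kernel's ror32()/ffs() helpers; the sample bitmap in main() is made up:

#include <stdint.h>
#include <stdio.h>

/* Pick the middle of the longest run of set bits in a 32-bit phase map. */
static int best_phase(uint32_t map)
{
        int best_start = 0, best_len = 0, start = -1;

        if (map == 0)
                return -1;              /* no phase worked */
        if (map == 0xffffffffu)
                return 0;               /* every phase worked, 0 is fine */

        for (int i = 0; i < 64; i++) {  /* walk twice to handle wrap-around */
                if (map & (1u << (i % 32))) {
                        if (start < 0)
                                start = i;
                        if (i - start + 1 > best_len) {
                                best_len = i - start + 1;
                                best_start = start;
                        }
                } else {
                        start = -1;
                }
        }

        return (best_start + best_len / 2) % 32;
}

int main(void)
{
        /* bits 10..17 good, everything else bad -> midpoint is phase 14 */
        printf("%d\n", best_phase(0x0003fc00u));
        return 0;
}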
|
||||
|
||||
static int dw_mci_hi3660_execute_tuning(struct dw_mci_slot *slot, u32 opcode)
|
||||
{
|
||||
int i = 0;
|
||||
struct dw_mci *host = slot->host;
|
||||
struct mmc_host *mmc = slot->mmc;
|
||||
int smpl_phase = 0;
|
||||
u32 tuning_sample_flag = 0;
|
||||
int best_clksmpl = 0;
|
||||
|
||||
for (i = 0; i < NUM_PHASES; ++i, ++smpl_phase) {
|
||||
smpl_phase %= 32;
|
||||
|
||||
mci_writel(host, TMOUT, ~0);
|
||||
dw_mci_hs_set_timing(host, mmc->ios.timing, smpl_phase);
|
||||
|
||||
if (!mmc_send_tuning(mmc, opcode, NULL))
|
||||
tuning_sample_flag |= (1 << smpl_phase);
|
||||
else
|
||||
tuning_sample_flag &= ~(1 << smpl_phase);
|
||||
}
|
||||
|
||||
best_clksmpl = dw_mci_get_best_clksmpl(tuning_sample_flag);
|
||||
if (best_clksmpl < 0) {
|
||||
dev_err(host->dev, "All phases bad!\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
dw_mci_hs_set_timing(host, mmc->ios.timing, best_clksmpl);
|
||||
|
||||
dev_info(host->dev, "tuning ok best_clksmpl %u tuning_sample_flag %x\n",
|
||||
best_clksmpl, tuning_sample_flag);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dw_mci_hi3660_switch_voltage(struct mmc_host *mmc,
|
||||
struct mmc_ios *ios)
|
||||
{
|
||||
int ret = 0;
|
||||
struct dw_mci_slot *slot = mmc_priv(mmc);
|
||||
struct k3_priv *priv;
|
||||
struct dw_mci *host;
|
||||
|
||||
host = slot->host;
|
||||
priv = host->priv;
|
||||
|
||||
if (!priv || !priv->reg)
|
||||
return 0;
|
||||
|
||||
if (priv->ctrl_id == DWMMC_SDIO_ID)
|
||||
return 0;
|
||||
|
||||
if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330)
|
||||
ret = dw_mci_set_sel18(host, 0);
|
||||
else if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180)
|
||||
ret = dw_mci_set_sel18(host, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!IS_ERR(mmc->supply.vqmmc)) {
|
||||
ret = mmc_regulator_set_vqmmc(mmc, ios);
|
||||
if (ret) {
|
||||
dev_err(host->dev, "Regulator set error %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct dw_mci_drv_data hi3660_data = {
|
||||
.init = dw_mci_hi3660_init,
|
||||
.set_ios = dw_mci_hi3660_set_ios,
|
||||
.parse_dt = dw_mci_hi6220_parse_dt,
|
||||
.execute_tuning = dw_mci_hi3660_execute_tuning,
|
||||
.switch_voltage = dw_mci_hi3660_switch_voltage,
|
||||
};
|
||||
|
||||
static const struct of_device_id dw_mci_k3_match[] = {
|
||||
{ .compatible = "hisilicon,hi3660-dw-mshc", .data = &hi3660_data, },
|
||||
{ .compatible = "hisilicon,hi4511-dw-mshc", .data = &k3_drv_data, },
|
||||
{ .compatible = "hisilicon,hi6220-dw-mshc", .data = &hi6220_data, },
|
||||
{},
|
||||
|
|
|
@ -398,6 +398,21 @@ static u32 dw_mci_prep_stop_abort(struct dw_mci *host, struct mmc_command *cmd)
|
|||
return cmdr;
|
||||
}
|
||||
|
||||
static inline void dw_mci_set_cto(struct dw_mci *host)
|
||||
{
|
||||
unsigned int cto_clks;
|
||||
unsigned int cto_ms;
|
||||
|
||||
cto_clks = mci_readl(host, TMOUT) & 0xff;
|
||||
cto_ms = DIV_ROUND_UP(cto_clks, host->bus_hz / 1000);
|
||||
|
||||
/* add a bit spare time */
|
||||
cto_ms += 10;
|
||||
|
||||
mod_timer(&host->cto_timer,
|
||||
jiffies + msecs_to_jiffies(cto_ms) + 1);
|
||||
}
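
To make the arithmetic above concrete (the numbers are only an example): with the 8-bit response-timeout field at its maximum of 0xff and a 50 MHz card clock, the timer fires roughly 10 ms after the hardware timeout would:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned int cto_clks = 0xff;         /* TMOUT[7:0] */
        unsigned int bus_hz = 50000000;       /* example card clock */
        unsigned int cto_ms = DIV_ROUND_UP(cto_clks, bus_hz / 1000);

        cto_ms += 10;                         /* spare time, as in dw_mci_set_cto() */
        printf("cto timer: %u ms\n", cto_ms); /* 1 + 10 = 11 ms */
        return 0;
}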
|
||||
|
||||
static void dw_mci_start_command(struct dw_mci *host,
|
||||
struct mmc_command *cmd, u32 cmd_flags)
|
||||
{
|
||||
|
@ -410,6 +425,10 @@ static void dw_mci_start_command(struct dw_mci *host,
|
|||
wmb(); /* drain writebuffer */
|
||||
dw_mci_wait_while_busy(host, cmd_flags);
|
||||
|
||||
/* response expected command only */
|
||||
if (cmd_flags & SDMMC_CMD_RESP_EXP)
|
||||
dw_mci_set_cto(host);
|
||||
|
||||
mci_writel(host, CMD, cmd_flags | SDMMC_CMD_START);
|
||||
}
|
||||
|
||||
|
@ -2599,6 +2618,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
|
|||
}
|
||||
|
||||
if (pending & DW_MCI_CMD_ERROR_FLAGS) {
|
||||
del_timer(&host->cto_timer);
|
||||
mci_writel(host, RINTSTS, DW_MCI_CMD_ERROR_FLAGS);
|
||||
host->cmd_status = pending;
|
||||
smp_wmb(); /* drain writebuffer */
|
||||
|
@ -2642,6 +2662,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
|
|||
}
|
||||
|
||||
if (pending & SDMMC_INT_CMD_DONE) {
|
||||
del_timer(&host->cto_timer);
|
||||
mci_writel(host, RINTSTS, SDMMC_INT_CMD_DONE);
|
||||
dw_mci_cmd_interrupt(host, pending);
|
||||
}
|
||||
|
@ -2914,6 +2935,30 @@ static void dw_mci_cmd11_timer(unsigned long arg)
|
|||
tasklet_schedule(&host->tasklet);
|
||||
}
|
||||
|
||||
static void dw_mci_cto_timer(unsigned long arg)
|
||||
{
|
||||
struct dw_mci *host = (struct dw_mci *)arg;
|
||||
|
||||
switch (host->state) {
|
||||
case STATE_SENDING_CMD11:
|
||||
case STATE_SENDING_CMD:
|
||||
case STATE_SENDING_STOP:
|
||||
/*
|
||||
* If CMD_DONE interrupt does NOT come in sending command
|
||||
* state, we should notify the driver to terminate current
|
||||
* transfer and report a command timeout to the core.
|
||||
*/
|
||||
host->cmd_status = SDMMC_INT_RTO;
|
||||
set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
|
||||
tasklet_schedule(&host->tasklet);
|
||||
break;
|
||||
default:
|
||||
dev_warn(host->dev, "Unexpected command timeout, state %d\n",
|
||||
host->state);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
static void dw_mci_dto_timer(unsigned long arg)
|
||||
{
|
||||
struct dw_mci *host = (struct dw_mci *)arg;
|
||||
|
@ -2950,7 +2995,7 @@ static struct dw_mci_board *dw_mci_parse_dt(struct dw_mci *host)
|
|||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
/* find reset controller when exist */
|
||||
pdata->rstc = devm_reset_control_get_optional(dev, "reset");
|
||||
pdata->rstc = devm_reset_control_get_optional_exclusive(dev, "reset");
|
||||
if (IS_ERR(pdata->rstc)) {
|
||||
if (PTR_ERR(pdata->rstc) == -EPROBE_DEFER)
|
||||
return ERR_PTR(-EPROBE_DEFER);
|
||||
|
@ -3067,6 +3112,12 @@ int dw_mci_probe(struct dw_mci *host)
|
|||
goto err_clk_ciu;
|
||||
}
|
||||
|
||||
if (!IS_ERR(host->pdata->rstc)) {
|
||||
reset_control_assert(host->pdata->rstc);
|
||||
usleep_range(10, 50);
|
||||
reset_control_deassert(host->pdata->rstc);
|
||||
}
|
||||
|
||||
if (drv_data && drv_data->init) {
|
||||
ret = drv_data->init(host);
|
||||
if (ret) {
|
||||
|
@ -3076,15 +3127,12 @@ int dw_mci_probe(struct dw_mci *host)
|
|||
}
|
||||
}
|
||||
|
||||
if (!IS_ERR(host->pdata->rstc)) {
|
||||
reset_control_assert(host->pdata->rstc);
|
||||
usleep_range(10, 50);
|
||||
reset_control_deassert(host->pdata->rstc);
|
||||
}
|
||||
|
||||
setup_timer(&host->cmd11_timer,
|
||||
dw_mci_cmd11_timer, (unsigned long)host);
|
||||
|
||||
setup_timer(&host->cto_timer,
|
||||
dw_mci_cto_timer, (unsigned long)host);
|
||||
|
||||
setup_timer(&host->dto_timer,
|
||||
dw_mci_dto_timer, (unsigned long)host);
|
||||
|
||||
|
|
|
@ -126,6 +126,7 @@ struct dw_mci_dma_slave {
|
|||
* @irq: The irq value to be passed to request_irq.
|
||||
* @sdio_id0: Number of slot0 in the SDIO interrupt registers.
|
||||
* @cmd11_timer: Timer for SD3.0 voltage switch over scheme.
|
||||
* @cto_timer: Timer for broken command transfer over scheme.
|
||||
* @dto_timer: Timer for broken data transfer over scheme.
|
||||
*
|
||||
* Locking
|
||||
|
@ -232,6 +233,7 @@ struct dw_mci {
|
|||
int sdio_id0;
|
||||
|
||||
struct timer_list cmd11_timer;
|
||||
struct timer_list cto_timer;
|
||||
struct timer_list dto_timer;
|
||||
};
|
||||
|
||||
|
@ -314,6 +316,8 @@ struct dw_mci_board {
|
|||
#define SDMMC_DSCADDR 0x094
|
||||
#define SDMMC_BUFADDR 0x098
|
||||
#define SDMMC_CDTHRCTL 0x100
|
||||
#define SDMMC_UHS_REG_EXT 0x108
|
||||
#define SDMMC_ENABLE_SHIFT 0x110
|
||||
#define SDMMC_DATA(x) (x)
|
||||
/*
|
||||
* Registers to support idmac 64-bit address mode
|
||||
|
|
|
@ -42,22 +42,18 @@
|
|||
|
||||
#define SD_EMMC_CLOCK 0x0
|
||||
#define CLK_DIV_MASK GENMASK(5, 0)
|
||||
#define CLK_DIV_MAX 63
|
||||
#define CLK_SRC_MASK GENMASK(7, 6)
|
||||
#define CLK_SRC_XTAL 0 /* external crystal */
|
||||
#define CLK_SRC_XTAL_RATE 24000000
|
||||
#define CLK_SRC_PLL 1 /* FCLK_DIV2 */
|
||||
#define CLK_SRC_PLL_RATE 1000000000
|
||||
#define CLK_CORE_PHASE_MASK GENMASK(9, 8)
|
||||
#define CLK_TX_PHASE_MASK GENMASK(11, 10)
|
||||
#define CLK_RX_PHASE_MASK GENMASK(13, 12)
|
||||
#define CLK_PHASE_0 0
|
||||
#define CLK_PHASE_90 1
|
||||
#define CLK_PHASE_180 2
|
||||
#define CLK_PHASE_270 3
|
||||
#define CLK_TX_DELAY_MASK GENMASK(19, 16)
|
||||
#define CLK_RX_DELAY_MASK GENMASK(23, 20)
|
||||
#define CLK_DELAY_STEP_PS 200
|
||||
#define CLK_PHASE_STEP 30
|
||||
#define CLK_PHASE_POINT_NUM (360 / CLK_PHASE_STEP)
|
||||
#define CLK_ALWAYS_ON BIT(24)
|
||||
|
||||
#define SD_EMMC_DElAY 0x4
|
||||
#define SD_EMMC_DELAY 0x4
|
||||
#define SD_EMMC_ADJUST 0x8
|
||||
#define SD_EMMC_CALOUT 0x10
|
||||
#define SD_EMMC_START 0x40
|
||||
|
@ -81,18 +77,25 @@
|
|||
|
||||
#define SD_EMMC_STATUS 0x48
|
||||
#define STATUS_BUSY BIT(31)
|
||||
#define STATUS_DATI GENMASK(23, 16)
|
||||
|
||||
#define SD_EMMC_IRQ_EN 0x4c
|
||||
#define IRQ_EN_MASK GENMASK(13, 0)
|
||||
#define IRQ_RXD_ERR_MASK GENMASK(7, 0)
|
||||
#define IRQ_TXD_ERR BIT(8)
|
||||
#define IRQ_DESC_ERR BIT(9)
|
||||
#define IRQ_RESP_ERR BIT(10)
|
||||
#define IRQ_CRC_ERR \
|
||||
(IRQ_RXD_ERR_MASK | IRQ_TXD_ERR | IRQ_DESC_ERR | IRQ_RESP_ERR)
|
||||
#define IRQ_RESP_TIMEOUT BIT(11)
|
||||
#define IRQ_DESC_TIMEOUT BIT(12)
|
||||
#define IRQ_TIMEOUTS \
|
||||
(IRQ_RESP_TIMEOUT | IRQ_DESC_TIMEOUT)
|
||||
#define IRQ_END_OF_CHAIN BIT(13)
|
||||
#define IRQ_RESP_STATUS BIT(14)
|
||||
#define IRQ_SDIO BIT(15)
|
||||
#define IRQ_EN_MASK \
|
||||
(IRQ_CRC_ERR | IRQ_TIMEOUTS | IRQ_END_OF_CHAIN | IRQ_RESP_STATUS |\
|
||||
IRQ_SDIO)
|
||||
|
||||
#define SD_EMMC_CMD_CFG 0x50
|
||||
#define SD_EMMC_CMD_ARG 0x54
|
||||
|
@ -118,12 +121,6 @@
|
|||
|
||||
#define MUX_CLK_NUM_PARENTS 2
|
||||
|
||||
struct meson_tuning_params {
|
||||
u8 core_phase;
|
||||
u8 tx_phase;
|
||||
u8 rx_phase;
|
||||
};
|
||||
|
||||
struct sd_emmc_desc {
|
||||
u32 cmd_cfg;
|
||||
u32 cmd_arg;
|
||||
|
@ -139,12 +136,14 @@ struct meson_host {
|
|||
spinlock_t lock;
|
||||
void __iomem *regs;
|
||||
struct clk *core_clk;
|
||||
struct clk_mux mux;
|
||||
struct clk *mux_clk;
|
||||
unsigned long current_clock;
|
||||
struct clk *mmc_clk;
|
||||
struct clk *rx_clk;
|
||||
struct clk *tx_clk;
|
||||
unsigned long req_rate;
|
||||
|
||||
struct clk_divider cfg_div;
|
||||
struct clk *cfg_div_clk;
|
||||
struct pinctrl *pinctrl;
|
||||
struct pinctrl_state *pins_default;
|
||||
struct pinctrl_state *pins_clk_gate;
|
||||
|
||||
unsigned int bounce_buf_size;
|
||||
void *bounce_buf;
|
||||
|
@ -152,7 +151,6 @@ struct meson_host {
|
|||
struct sd_emmc_desc *descs;
|
||||
dma_addr_t descs_dma_addr;
|
||||
|
||||
struct meson_tuning_params tp;
|
||||
bool vqmmc_enabled;
|
||||
};
|
||||
|
||||
|
@ -179,6 +177,90 @@ struct meson_host {
|
|||
#define CMD_RESP_MASK GENMASK(31, 1)
|
||||
#define CMD_RESP_SRAM BIT(0)
|
||||
|
||||
struct meson_mmc_phase {
|
||||
struct clk_hw hw;
|
||||
void __iomem *reg;
|
||||
unsigned long phase_mask;
|
||||
unsigned long delay_mask;
|
||||
unsigned int delay_step_ps;
|
||||
};
|
||||
|
||||
#define to_meson_mmc_phase(_hw) container_of(_hw, struct meson_mmc_phase, hw)
|
||||
|
||||
static int meson_mmc_clk_get_phase(struct clk_hw *hw)
|
||||
{
|
||||
struct meson_mmc_phase *mmc = to_meson_mmc_phase(hw);
|
||||
unsigned int phase_num = 1 << hweight_long(mmc->phase_mask);
|
||||
unsigned long period_ps, p, d;
|
||||
int degrees;
|
||||
u32 val;
|
||||
|
||||
val = readl(mmc->reg);
|
||||
p = (val & mmc->phase_mask) >> __ffs(mmc->phase_mask);
|
||||
degrees = p * 360 / phase_num;
|
||||
|
||||
if (mmc->delay_mask) {
|
||||
period_ps = DIV_ROUND_UP((unsigned long)NSEC_PER_SEC * 1000,
|
||||
clk_get_rate(hw->clk));
|
||||
d = (val & mmc->delay_mask) >> __ffs(mmc->delay_mask);
|
||||
degrees += d * mmc->delay_step_ps * 360 / period_ps;
|
||||
degrees %= 360;
|
||||
}
|
||||
|
||||
return degrees;
|
||||
}
|
||||
|
||||
static void meson_mmc_apply_phase_delay(struct meson_mmc_phase *mmc,
|
||||
unsigned int phase,
|
||||
unsigned int delay)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
val = readl(mmc->reg);
|
||||
val &= ~mmc->phase_mask;
|
||||
val |= phase << __ffs(mmc->phase_mask);
|
||||
|
||||
if (mmc->delay_mask) {
|
||||
val &= ~mmc->delay_mask;
|
||||
val |= delay << __ffs(mmc->delay_mask);
|
||||
}
|
||||
|
||||
writel(val, mmc->reg);
|
||||
}
|
||||
|
||||
static int meson_mmc_clk_set_phase(struct clk_hw *hw, int degrees)
|
||||
{
|
||||
struct meson_mmc_phase *mmc = to_meson_mmc_phase(hw);
|
||||
unsigned int phase_num = 1 << hweight_long(mmc->phase_mask);
|
||||
unsigned long period_ps, d = 0, r;
|
||||
uint64_t p;
|
||||
|
||||
p = degrees % 360;
|
||||
|
||||
if (!mmc->delay_mask) {
|
||||
p = DIV_ROUND_CLOSEST_ULL(p, 360 / phase_num);
|
||||
} else {
|
||||
period_ps = DIV_ROUND_UP((unsigned long)NSEC_PER_SEC * 1000,
|
||||
clk_get_rate(hw->clk));
|
||||
|
||||
/* First compute the phase index (p), the remainder (r) is the
|
||||
* part we'll try to acheive using the delays (d).
|
||||
*/
|
||||
r = do_div(p, 360 / phase_num);
|
||||
d = DIV_ROUND_CLOSEST(r * period_ps,
|
||||
360 * mmc->delay_step_ps);
|
||||
d = min(d, mmc->delay_mask >> __ffs(mmc->delay_mask));
|
||||
}
|
||||
|
||||
meson_mmc_apply_phase_delay(mmc, p, d);
|
||||
return 0;
|
||||
}
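
A worked example of the conversion above, mirroring the arithmetic of meson_mmc_clk_set_phase() with plain integer division instead of do_div(); the 2-bit phase field (so 4 steps of 90 degrees) and the 200 ps delay step come from the defines earlier in the file, while the 200 MHz clock and the 120 degree request are made-up example numbers:

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

int main(void)
{
        unsigned long phase_num = 4;        /* 2-bit phase field -> 90 degree steps */
        unsigned long period_ps = 5000;     /* 200 MHz clock */
        unsigned long delay_step_ps = 200;  /* CLK_DELAY_STEP_PS */
        unsigned long degrees = 120;

        unsigned long p = degrees / (360 / phase_num); /* 1 -> 90 degrees */
        unsigned long r = degrees % (360 / phase_num); /* 30 degrees remain */
        unsigned long d = DIV_ROUND_CLOSEST(r * period_ps,
                                            360 * delay_step_ps); /* ~2 steps */

        printf("phase index %lu, delay steps %lu\n", p, d);
        return 0;
}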
|
||||
|
||||
static const struct clk_ops meson_mmc_clk_phase_ops = {
|
||||
.get_phase = meson_mmc_clk_get_phase,
|
||||
.set_phase = meson_mmc_clk_set_phase,
|
||||
};
|
||||
|
||||
static unsigned int meson_mmc_get_timeout_msecs(struct mmc_data *data)
|
||||
{
|
||||
unsigned int timeout = data->timeout_ns / NSEC_PER_MSEC;
|
||||
|
@ -271,58 +353,102 @@ static void meson_mmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
|
|||
mmc_get_dma_dir(data));
|
||||
}
|
||||
|
||||
static int meson_mmc_clk_set(struct meson_host *host, unsigned long clk_rate)
|
||||
static bool meson_mmc_timing_is_ddr(struct mmc_ios *ios)
|
||||
{
|
||||
if (ios->timing == MMC_TIMING_MMC_DDR52 ||
|
||||
ios->timing == MMC_TIMING_UHS_DDR50 ||
|
||||
ios->timing == MMC_TIMING_MMC_HS400)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Gating the clock on this controller is tricky. It seems the mmc clock
|
||||
* is also used by the controller. It may crash during some operation if the
|
||||
* clock is stopped. The safest thing to do, whenever possible, is to keep
|
||||
* clock running and stop it at the pad using the pinmux.
|
||||
*/
|
||||
static void meson_mmc_clk_gate(struct meson_host *host)
|
||||
{
|
||||
struct mmc_host *mmc = host->mmc;
|
||||
int ret;
|
||||
u32 cfg;
|
||||
|
||||
if (clk_rate) {
|
||||
if (WARN_ON(clk_rate > mmc->f_max))
|
||||
clk_rate = mmc->f_max;
|
||||
else if (WARN_ON(clk_rate < mmc->f_min))
|
||||
clk_rate = mmc->f_min;
|
||||
}
|
||||
|
||||
if (clk_rate == host->current_clock)
|
||||
return 0;
|
||||
|
||||
/* stop clock */
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
if (!(cfg & CFG_STOP_CLOCK)) {
|
||||
if (host->pins_clk_gate) {
|
||||
pinctrl_select_state(host->pinctrl, host->pins_clk_gate);
|
||||
} else {
|
||||
/*
|
||||
* If the pinmux is not provided - default to the classic and
|
||||
* unsafe method
|
||||
*/
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
cfg |= CFG_STOP_CLOCK;
|
||||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
}
|
||||
}
|
||||
|
||||
dev_dbg(host->dev, "change clock rate %u -> %lu\n",
|
||||
mmc->actual_clock, clk_rate);
|
||||
static void meson_mmc_clk_ungate(struct meson_host *host)
|
||||
{
|
||||
u32 cfg;
|
||||
|
||||
if (!clk_rate) {
|
||||
if (host->pins_clk_gate)
|
||||
pinctrl_select_state(host->pinctrl, host->pins_default);
|
||||
|
||||
/* Make sure the clock is not stopped in the controller */
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
cfg &= ~CFG_STOP_CLOCK;
|
||||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
}
|
||||
|
||||
static int meson_mmc_clk_set(struct meson_host *host, struct mmc_ios *ios)
|
||||
{
|
||||
struct mmc_host *mmc = host->mmc;
|
||||
unsigned long rate = ios->clock;
|
||||
int ret;
|
||||
u32 cfg;
|
||||
|
||||
/* DDR modes require higher module clock */
|
||||
if (meson_mmc_timing_is_ddr(ios))
|
||||
rate <<= 1;
|
||||
|
||||
/* Same request - bail-out */
|
||||
if (host->req_rate == rate)
|
||||
return 0;
|
||||
|
||||
/* stop clock */
|
||||
meson_mmc_clk_gate(host);
|
||||
host->req_rate = 0;
|
||||
|
||||
if (!rate) {
|
||||
mmc->actual_clock = 0;
|
||||
host->current_clock = 0;
|
||||
/* return with clock being stopped */
|
||||
return 0;
|
||||
}
|
||||
|
||||
ret = clk_set_rate(host->cfg_div_clk, clk_rate);
|
||||
/* Stop the clock during rate change to avoid glitches */
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
cfg |= CFG_STOP_CLOCK;
|
||||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
|
||||
ret = clk_set_rate(host->mmc_clk, rate);
|
||||
if (ret) {
|
||||
dev_err(host->dev, "Unable to set cfg_div_clk to %lu. ret=%d\n",
|
||||
clk_rate, ret);
|
||||
rate, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
mmc->actual_clock = clk_get_rate(host->cfg_div_clk);
|
||||
host->current_clock = clk_rate;
|
||||
host->req_rate = rate;
|
||||
mmc->actual_clock = clk_get_rate(host->mmc_clk);
|
||||
|
||||
if (clk_rate != mmc->actual_clock)
|
||||
dev_dbg(host->dev,
|
||||
"divider requested rate %lu != actual rate %u\n",
|
||||
clk_rate, mmc->actual_clock);
|
||||
/* We should report the real output frequency of the controller */
|
||||
if (meson_mmc_timing_is_ddr(ios))
|
||||
mmc->actual_clock >>= 1;
|
||||
|
||||
dev_dbg(host->dev, "clk rate: %u Hz\n", mmc->actual_clock);
|
||||
if (ios->clock != mmc->actual_clock)
|
||||
dev_dbg(host->dev, "requested rate was %u\n", ios->clock);
|
||||
|
||||
/* (re)start clock */
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
cfg &= ~CFG_STOP_CLOCK;
|
||||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
meson_mmc_clk_ungate(host);
|
||||
|
||||
return 0;
|
||||
}
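
The rate doubling and halving around DDR can be confusing, so a concrete case with purely illustrative numbers: for eMMC DDR52 the core asks for 52 MHz, meson_mmc_clk_set() programs the module clock for 104 MHz, and then halves the value it reports back in actual_clock:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        unsigned long requested = 52000000; /* ios->clock for DDR52 */
        bool ddr = true;

        unsigned long module_rate = ddr ? requested << 1 : requested; /* 104 MHz */
        unsigned long actual = module_rate; /* assume clk_set_rate() hit it exactly */

        if (ddr)
                actual >>= 1;               /* report the card-facing rate */

        printf("module clock %lu Hz, reported actual_clock %lu Hz\n",
               module_rate, actual);
        return 0;
}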
|
||||
|
@ -335,11 +461,21 @@ static int meson_mmc_clk_set(struct meson_host *host, unsigned long clk_rate)
|
|||
static int meson_mmc_clk_init(struct meson_host *host)
|
||||
{
|
||||
struct clk_init_data init;
|
||||
struct clk_mux *mux;
|
||||
struct clk_divider *div;
|
||||
struct meson_mmc_phase *core, *tx, *rx;
|
||||
struct clk *clk;
|
||||
char clk_name[32];
|
||||
int i, ret = 0;
|
||||
const char *mux_parent_names[MUX_CLK_NUM_PARENTS];
|
||||
const char *clk_div_parents[1];
|
||||
u32 clk_reg, cfg;
|
||||
const char *clk_parent[1];
|
||||
u32 clk_reg;
|
||||
|
||||
/* init SD_EMMC_CLOCK to sane defaults w/min clock rate */
|
||||
clk_reg = 0;
|
||||
clk_reg |= CLK_ALWAYS_ON;
|
||||
clk_reg |= CLK_DIV_MASK;
|
||||
writel(clk_reg, host->regs + SD_EMMC_CLOCK);
|
||||
|
||||
/* get the mux parents */
|
||||
for (i = 0; i < MUX_CLK_NUM_PARENTS; i++) {
|
||||
|
@ -358,103 +494,238 @@ static int meson_mmc_clk_init(struct meson_host *host)
|
|||
}
|
||||
|
||||
/* create the mux */
|
||||
mux = devm_kzalloc(host->dev, sizeof(*mux), GFP_KERNEL);
|
||||
if (!mux)
|
||||
return -ENOMEM;
|
||||
|
||||
snprintf(clk_name, sizeof(clk_name), "%s#mux", dev_name(host->dev));
|
||||
init.name = clk_name;
|
||||
init.ops = &clk_mux_ops;
|
||||
init.flags = 0;
|
||||
init.parent_names = mux_parent_names;
|
||||
init.num_parents = MUX_CLK_NUM_PARENTS;
|
||||
host->mux.reg = host->regs + SD_EMMC_CLOCK;
|
||||
host->mux.shift = __bf_shf(CLK_SRC_MASK);
|
||||
host->mux.mask = CLK_SRC_MASK;
|
||||
host->mux.flags = 0;
|
||||
host->mux.table = NULL;
|
||||
host->mux.hw.init = &init;
|
||||
|
||||
host->mux_clk = devm_clk_register(host->dev, &host->mux.hw);
|
||||
if (WARN_ON(IS_ERR(host->mux_clk)))
|
||||
return PTR_ERR(host->mux_clk);
|
||||
mux->reg = host->regs + SD_EMMC_CLOCK;
|
||||
mux->shift = __ffs(CLK_SRC_MASK);
|
||||
mux->mask = CLK_SRC_MASK >> mux->shift;
|
||||
mux->hw.init = &init;
|
||||
|
||||
clk = devm_clk_register(host->dev, &mux->hw);
|
||||
if (WARN_ON(IS_ERR(clk)))
|
||||
return PTR_ERR(clk);
|
||||
|
||||
/* create the divider */
|
||||
div = devm_kzalloc(host->dev, sizeof(*div), GFP_KERNEL);
|
||||
if (!div)
|
||||
return -ENOMEM;
|
||||
|
||||
snprintf(clk_name, sizeof(clk_name), "%s#div", dev_name(host->dev));
|
||||
init.name = clk_name;
|
||||
init.ops = &clk_divider_ops;
|
||||
init.flags = CLK_SET_RATE_PARENT;
|
||||
clk_div_parents[0] = __clk_get_name(host->mux_clk);
|
||||
init.parent_names = clk_div_parents;
|
||||
init.num_parents = ARRAY_SIZE(clk_div_parents);
|
||||
clk_parent[0] = __clk_get_name(clk);
|
||||
init.parent_names = clk_parent;
|
||||
init.num_parents = 1;
|
||||
|
||||
host->cfg_div.reg = host->regs + SD_EMMC_CLOCK;
|
||||
host->cfg_div.shift = __bf_shf(CLK_DIV_MASK);
|
||||
host->cfg_div.width = __builtin_popcountl(CLK_DIV_MASK);
|
||||
host->cfg_div.hw.init = &init;
|
||||
host->cfg_div.flags = CLK_DIVIDER_ONE_BASED |
|
||||
CLK_DIVIDER_ROUND_CLOSEST | CLK_DIVIDER_ALLOW_ZERO;
|
||||
div->reg = host->regs + SD_EMMC_CLOCK;
|
||||
div->shift = __ffs(CLK_DIV_MASK);
|
||||
div->width = __builtin_popcountl(CLK_DIV_MASK);
|
||||
div->hw.init = &init;
|
||||
div->flags = (CLK_DIVIDER_ONE_BASED |
|
||||
CLK_DIVIDER_ROUND_CLOSEST);
|
||||
|
||||
host->cfg_div_clk = devm_clk_register(host->dev, &host->cfg_div.hw);
|
||||
if (WARN_ON(PTR_ERR_OR_ZERO(host->cfg_div_clk)))
|
||||
return PTR_ERR(host->cfg_div_clk);
|
||||
clk = devm_clk_register(host->dev, &div->hw);
|
||||
if (WARN_ON(IS_ERR(clk)))
|
||||
return PTR_ERR(clk);
|
||||
|
||||
/* create the mmc core clock */
|
||||
core = devm_kzalloc(host->dev, sizeof(*core), GFP_KERNEL);
|
||||
if (!core)
|
||||
return -ENOMEM;
|
||||
|
||||
snprintf(clk_name, sizeof(clk_name), "%s#core", dev_name(host->dev));
|
||||
init.name = clk_name;
|
||||
init.ops = &meson_mmc_clk_phase_ops;
|
||||
init.flags = CLK_SET_RATE_PARENT;
|
||||
clk_parent[0] = __clk_get_name(clk);
|
||||
init.parent_names = clk_parent;
|
||||
init.num_parents = 1;
|
||||
|
||||
core->reg = host->regs + SD_EMMC_CLOCK;
|
||||
core->phase_mask = CLK_CORE_PHASE_MASK;
|
||||
core->hw.init = &init;
|
||||
|
||||
host->mmc_clk = devm_clk_register(host->dev, &core->hw);
|
||||
if (WARN_ON(PTR_ERR_OR_ZERO(host->mmc_clk)))
|
||||
return PTR_ERR(host->mmc_clk);
|
||||
|
||||
/* create the mmc tx clock */
|
||||
tx = devm_kzalloc(host->dev, sizeof(*tx), GFP_KERNEL);
|
||||
if (!tx)
|
||||
return -ENOMEM;
|
||||
|
||||
snprintf(clk_name, sizeof(clk_name), "%s#tx", dev_name(host->dev));
|
||||
init.name = clk_name;
|
||||
init.ops = &meson_mmc_clk_phase_ops;
|
||||
init.flags = 0;
|
||||
clk_parent[0] = __clk_get_name(host->mmc_clk);
|
||||
init.parent_names = clk_parent;
|
||||
init.num_parents = 1;
|
||||
|
||||
tx->reg = host->regs + SD_EMMC_CLOCK;
|
||||
tx->phase_mask = CLK_TX_PHASE_MASK;
|
||||
tx->delay_mask = CLK_TX_DELAY_MASK;
|
||||
tx->delay_step_ps = CLK_DELAY_STEP_PS;
|
||||
tx->hw.init = &init;
|
||||
|
||||
host->tx_clk = devm_clk_register(host->dev, &tx->hw);
|
||||
if (WARN_ON(PTR_ERR_OR_ZERO(host->tx_clk)))
|
||||
return PTR_ERR(host->tx_clk);
|
||||
|
||||
/* create the mmc rx clock */
|
||||
rx = devm_kzalloc(host->dev, sizeof(*rx), GFP_KERNEL);
|
||||
if (!rx)
|
||||
return -ENOMEM;
|
||||
|
||||
snprintf(clk_name, sizeof(clk_name), "%s#rx", dev_name(host->dev));
|
||||
init.name = clk_name;
|
||||
init.ops = &meson_mmc_clk_phase_ops;
|
||||
init.flags = 0;
|
||||
clk_parent[0] = __clk_get_name(host->mmc_clk);
|
||||
init.parent_names = clk_parent;
|
||||
init.num_parents = 1;
|
||||
|
||||
rx->reg = host->regs + SD_EMMC_CLOCK;
|
||||
rx->phase_mask = CLK_RX_PHASE_MASK;
|
||||
rx->delay_mask = CLK_RX_DELAY_MASK;
|
||||
rx->delay_step_ps = CLK_DELAY_STEP_PS;
|
||||
rx->hw.init = &init;
|
||||
|
||||
host->rx_clk = devm_clk_register(host->dev, &rx->hw);
|
||||
if (WARN_ON(PTR_ERR_OR_ZERO(host->rx_clk)))
|
||||
return PTR_ERR(host->rx_clk);
|
||||
|
||||
/* init SD_EMMC_CLOCK to sane defaults w/min clock rate */
|
||||
clk_reg = 0;
|
||||
clk_reg |= FIELD_PREP(CLK_CORE_PHASE_MASK, host->tp.core_phase);
|
||||
clk_reg |= FIELD_PREP(CLK_TX_PHASE_MASK, host->tp.tx_phase);
|
||||
clk_reg |= FIELD_PREP(CLK_RX_PHASE_MASK, host->tp.rx_phase);
|
||||
clk_reg |= FIELD_PREP(CLK_SRC_MASK, CLK_SRC_XTAL);
|
||||
clk_reg |= FIELD_PREP(CLK_DIV_MASK, CLK_DIV_MAX);
|
||||
clk_reg &= ~CLK_ALWAYS_ON;
|
||||
writel(clk_reg, host->regs + SD_EMMC_CLOCK);
|
||||
|
||||
/* Ensure clock starts in "auto" mode, not "always on" */
|
||||
cfg = readl(host->regs + SD_EMMC_CFG);
|
||||
cfg &= ~CFG_CLK_ALWAYS_ON;
|
||||
cfg |= CFG_AUTO_CLK;
|
||||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
|
||||
ret = clk_prepare_enable(host->cfg_div_clk);
|
||||
host->mmc->f_min = clk_round_rate(host->mmc_clk, 400000);
|
||||
ret = clk_set_rate(host->mmc_clk, host->mmc->f_min);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Get the nearest minimum clock to 400KHz */
|
||||
host->mmc->f_min = clk_round_rate(host->cfg_div_clk, 400000);
|
||||
/*
|
||||
* Set phases: these values are mostly the datasheet recommended ones,
* except for the Tx phase. The datasheet recommends 180 but some cards
* fail at initialisation with it. 270 works just fine, it fixes these
* initialisation issues and enables eMMC DDR52 mode.
|
||||
*/
|
||||
clk_set_phase(host->mmc_clk, 180);
|
||||
clk_set_phase(host->tx_clk, 270);
|
||||
clk_set_phase(host->rx_clk, 0);
|
||||
|
||||
ret = meson_mmc_clk_set(host, host->mmc->f_min);
|
||||
if (ret)
|
||||
clk_disable_unprepare(host->cfg_div_clk);
|
||||
|
||||
return ret;
|
||||
return clk_prepare_enable(host->mmc_clk);
|
||||
}
|
||||
|
||||
static void meson_mmc_set_tuning_params(struct mmc_host *mmc)
|
||||
static void meson_mmc_shift_map(unsigned long *map, unsigned long shift)
|
||||
{
|
||||
DECLARE_BITMAP(left, CLK_PHASE_POINT_NUM);
|
||||
DECLARE_BITMAP(right, CLK_PHASE_POINT_NUM);
|
||||
|
||||
/*
|
||||
* shift the bitmap right and reintroduce the dropped bits on the left
|
||||
* of the bitmap
|
||||
*/
|
||||
bitmap_shift_right(right, map, shift, CLK_PHASE_POINT_NUM);
|
||||
bitmap_shift_left(left, map, CLK_PHASE_POINT_NUM - shift,
|
||||
CLK_PHASE_POINT_NUM);
|
||||
bitmap_or(map, left, right, CLK_PHASE_POINT_NUM);
|
||||
}
|
||||
|
||||
static void meson_mmc_find_next_region(unsigned long *map,
|
||||
unsigned long *start,
|
||||
unsigned long *stop)
|
||||
{
|
||||
*start = find_next_bit(map, CLK_PHASE_POINT_NUM, *start);
|
||||
*stop = find_next_zero_bit(map, CLK_PHASE_POINT_NUM, *start);
|
||||
}
|
||||
|
||||
static int meson_mmc_find_tuning_point(unsigned long *test)
|
||||
{
|
||||
unsigned long shift, stop, offset = 0, start = 0, size = 0;
|
||||
|
||||
/* Get the all good/all bad situation out the way */
|
||||
if (bitmap_full(test, CLK_PHASE_POINT_NUM))
|
||||
return 0; /* All points are good so point 0 will do */
|
||||
else if (bitmap_empty(test, CLK_PHASE_POINT_NUM))
|
||||
return -EIO; /* No successful tuning point */
|
||||
|
||||
/*
|
||||
* Now we know there is at least one good region. Make sure it does
* not wrap by shifting the bitmap if necessary.
|
||||
*/
|
||||
shift = find_first_zero_bit(test, CLK_PHASE_POINT_NUM);
|
||||
if (shift != 0)
|
||||
meson_mmc_shift_map(test, shift);
|
||||
|
||||
while (start < CLK_PHASE_POINT_NUM) {
|
||||
meson_mmc_find_next_region(test, &start, &stop);
|
||||
|
||||
if ((stop - start) > size) {
|
||||
offset = start;
|
||||
size = stop - start;
|
||||
}
|
||||
|
||||
start = stop;
|
||||
}
|
||||
|
||||
/* Get the center point of the region */
|
||||
offset += (size / 2);
|
||||
|
||||
/* Shift the result back */
|
||||
offset = (offset + shift) % CLK_PHASE_POINT_NUM;
|
||||
|
||||
return offset;
|
||||
}
|
||||
|
||||
static int meson_mmc_clk_phase_tuning(struct mmc_host *mmc, u32 opcode,
|
||||
struct clk *clk)
|
||||
{
|
||||
int point, ret;
|
||||
DECLARE_BITMAP(test, CLK_PHASE_POINT_NUM);
|
||||
|
||||
dev_dbg(mmc_dev(mmc), "%s phase/delay tunning...\n",
|
||||
__clk_get_name(clk));
|
||||
bitmap_zero(test, CLK_PHASE_POINT_NUM);
|
||||
|
||||
/* Explore tuning points */
|
||||
for (point = 0; point < CLK_PHASE_POINT_NUM; point++) {
|
||||
clk_set_phase(clk, point * CLK_PHASE_STEP);
|
||||
ret = mmc_send_tuning(mmc, opcode, NULL);
|
||||
if (!ret)
|
||||
set_bit(point, test);
|
||||
}
|
||||
|
||||
/* Find the optimal tuning point and apply it */
|
||||
point = meson_mmc_find_tuning_point(test);
|
||||
if (point < 0)
|
||||
return point; /* tuning failed */
|
||||
|
||||
clk_set_phase(clk, point * CLK_PHASE_STEP);
|
||||
dev_dbg(mmc_dev(mmc), "success with phase: %d\n",
|
||||
clk_get_phase(clk));
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
|
||||
{
|
||||
struct meson_host *host = mmc_priv(mmc);
|
||||
u32 regval;
|
||||
|
||||
/* stop clock */
|
||||
regval = readl(host->regs + SD_EMMC_CFG);
|
||||
regval |= CFG_STOP_CLOCK;
|
||||
writel(regval, host->regs + SD_EMMC_CFG);
|
||||
|
||||
regval = readl(host->regs + SD_EMMC_CLOCK);
|
||||
regval &= ~CLK_CORE_PHASE_MASK;
|
||||
regval |= FIELD_PREP(CLK_CORE_PHASE_MASK, host->tp.core_phase);
|
||||
regval &= ~CLK_TX_PHASE_MASK;
|
||||
regval |= FIELD_PREP(CLK_TX_PHASE_MASK, host->tp.tx_phase);
|
||||
regval &= ~CLK_RX_PHASE_MASK;
|
||||
regval |= FIELD_PREP(CLK_RX_PHASE_MASK, host->tp.rx_phase);
|
||||
writel(regval, host->regs + SD_EMMC_CLOCK);
|
||||
|
||||
/* start clock */
|
||||
regval = readl(host->regs + SD_EMMC_CFG);
|
||||
regval &= ~CFG_STOP_CLOCK;
|
||||
writel(regval, host->regs + SD_EMMC_CFG);
|
||||
return meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
|
||||
}
|
||||
|
||||
static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
|
||||
{
|
||||
struct meson_host *host = mmc_priv(mmc);
|
||||
u32 bus_width;
|
||||
u32 val, orig;
|
||||
u32 bus_width, val;
|
||||
int err;
|
||||
|
||||
/*
|
||||
* GPIO regulator, only controls switching between 1v8 and
|
||||
|
@ -482,18 +753,17 @@ static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
|
|||
int ret = regulator_enable(mmc->supply.vqmmc);
|
||||
|
||||
if (ret < 0)
|
||||
dev_err(mmc_dev(mmc),
|
||||
dev_err(host->dev,
|
||||
"failed to enable vqmmc regulator\n");
|
||||
else
|
||||
host->vqmmc_enabled = true;
|
||||
}
|
||||
|
||||
/* Reset rx phase */
|
||||
clk_set_phase(host->rx_clk, 0);
|
||||
break;
|
||||
}
|
||||
|
||||
|
||||
meson_mmc_clk_set(host, ios->clock);
|
||||
|
||||
/* Bus width */
|
||||
switch (ios->bus_width) {
|
||||
case MMC_BUS_WIDTH_1:
|
||||
|
@ -512,26 +782,23 @@ static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
|
|||
}
|
||||
|
||||
val = readl(host->regs + SD_EMMC_CFG);
|
||||
orig = val;
|
||||
|
||||
val &= ~CFG_BUS_WIDTH_MASK;
|
||||
val |= FIELD_PREP(CFG_BUS_WIDTH_MASK, bus_width);
|
||||
|
||||
val &= ~CFG_DDR;
|
||||
if (ios->timing == MMC_TIMING_UHS_DDR50 ||
|
||||
ios->timing == MMC_TIMING_MMC_DDR52 ||
|
||||
ios->timing == MMC_TIMING_MMC_HS400)
|
||||
if (meson_mmc_timing_is_ddr(ios))
|
||||
val |= CFG_DDR;
|
||||
|
||||
val &= ~CFG_CHK_DS;
|
||||
if (ios->timing == MMC_TIMING_MMC_HS400)
|
||||
val |= CFG_CHK_DS;
|
||||
|
||||
if (val != orig) {
|
||||
writel(val, host->regs + SD_EMMC_CFG);
|
||||
dev_dbg(host->dev, "%s: SD_EMMC_CFG: 0x%08x -> 0x%08x\n",
|
||||
__func__, orig, val);
|
||||
}
|
||||
err = meson_mmc_clk_set(host, ios);
|
||||
if (err)
|
||||
dev_err(host->dev, "Failed to set clock: %d\n,", err);
|
||||
|
||||
writel(val, host->regs + SD_EMMC_CFG);
|
||||
dev_dbg(host->dev, "SD_EMMC_CFG: 0x%08x\n", val);
|
||||
}
|
||||
|
||||
static void meson_mmc_request_done(struct mmc_host *mmc,
|
||||
|
@ -729,57 +996,40 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
|
|||
struct mmc_command *cmd;
|
||||
struct mmc_data *data;
|
||||
u32 irq_en, status, raw_status;
|
||||
irqreturn_t ret = IRQ_HANDLED;
|
||||
irqreturn_t ret = IRQ_NONE;
|
||||
|
||||
if (WARN_ON(!host))
|
||||
if (WARN_ON(!host) || WARN_ON(!host->cmd))
|
||||
return IRQ_NONE;
|
||||
|
||||
cmd = host->cmd;
|
||||
|
||||
if (WARN_ON(!cmd))
|
||||
return IRQ_NONE;
|
||||
|
||||
data = cmd->data;
|
||||
|
||||
spin_lock(&host->lock);
|
||||
|
||||
cmd = host->cmd;
|
||||
data = cmd->data;
|
||||
irq_en = readl(host->regs + SD_EMMC_IRQ_EN);
|
||||
raw_status = readl(host->regs + SD_EMMC_STATUS);
|
||||
status = raw_status & irq_en;
|
||||
|
||||
if (!status) {
|
||||
dev_warn(host->dev, "Spurious IRQ! status=0x%08x, irq_en=0x%08x\n",
|
||||
raw_status, irq_en);
|
||||
ret = IRQ_NONE;
|
||||
cmd->error = 0;
|
||||
if (status & IRQ_CRC_ERR) {
|
||||
dev_dbg(host->dev, "CRC Error - status 0x%08x\n", status);
|
||||
cmd->error = -EILSEQ;
|
||||
ret = IRQ_HANDLED;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (status & IRQ_TIMEOUTS) {
|
||||
dev_dbg(host->dev, "Timeout - status 0x%08x\n", status);
|
||||
cmd->error = -ETIMEDOUT;
|
||||
ret = IRQ_HANDLED;
|
||||
goto out;
|
||||
}
|
||||
|
||||
meson_mmc_read_resp(host->mmc, cmd);
|
||||
|
||||
cmd->error = 0;
|
||||
if (status & IRQ_RXD_ERR_MASK) {
|
||||
dev_dbg(host->dev, "Unhandled IRQ: RXD error\n");
|
||||
cmd->error = -EILSEQ;
|
||||
if (status & IRQ_SDIO) {
|
||||
dev_dbg(host->dev, "IRQ: SDIO TODO.\n");
|
||||
ret = IRQ_HANDLED;
|
||||
}
|
||||
if (status & IRQ_TXD_ERR) {
|
||||
dev_dbg(host->dev, "Unhandled IRQ: TXD error\n");
|
||||
cmd->error = -EILSEQ;
|
||||
}
|
||||
if (status & IRQ_DESC_ERR)
|
||||
dev_dbg(host->dev, "Unhandled IRQ: Descriptor error\n");
|
||||
if (status & IRQ_RESP_ERR) {
|
||||
dev_dbg(host->dev, "Unhandled IRQ: Response error\n");
|
||||
cmd->error = -EILSEQ;
|
||||
}
|
||||
if (status & IRQ_RESP_TIMEOUT) {
|
||||
dev_dbg(host->dev, "Unhandled IRQ: Response timeout\n");
|
||||
cmd->error = -ETIMEDOUT;
|
||||
}
|
||||
if (status & IRQ_DESC_TIMEOUT) {
|
||||
dev_dbg(host->dev, "Unhandled IRQ: Descriptor timeout\n");
|
||||
cmd->error = -ETIMEDOUT;
|
||||
}
|
||||
if (status & IRQ_SDIO)
|
||||
dev_dbg(host->dev, "Unhandled IRQ: SDIO.\n");
|
||||
|
||||
if (status & (IRQ_END_OF_CHAIN | IRQ_RESP_STATUS)) {
|
||||
if (data && !cmd->error)
|
||||
|
@ -787,26 +1037,20 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
|
|||
if (meson_mmc_bounce_buf_read(data) ||
|
||||
meson_mmc_get_next_command(cmd))
|
||||
ret = IRQ_WAKE_THREAD;
|
||||
} else {
|
||||
dev_warn(host->dev, "Unknown IRQ! status=0x%04x: MMC CMD%u arg=0x%08x flags=0x%08x stop=%d\n",
|
||||
status, cmd->opcode, cmd->arg,
|
||||
cmd->flags, cmd->mrq->stop ? 1 : 0);
|
||||
if (cmd->data) {
|
||||
struct mmc_data *data = cmd->data;
|
||||
|
||||
dev_warn(host->dev, "\tblksz %u blocks %u flags 0x%08x (%s%s)",
|
||||
data->blksz, data->blocks, data->flags,
|
||||
data->flags & MMC_DATA_WRITE ? "write" : "",
|
||||
data->flags & MMC_DATA_READ ? "read" : "");
|
||||
}
|
||||
else
|
||||
ret = IRQ_HANDLED;
|
||||
}
|
||||
|
||||
out:
|
||||
/* ack all (enabled) interrupts */
|
||||
writel(status, host->regs + SD_EMMC_STATUS);
|
||||
/* ack all enabled interrupts */
|
||||
writel(irq_en, host->regs + SD_EMMC_STATUS);
|
||||
|
||||
if (ret == IRQ_HANDLED)
|
||||
meson_mmc_request_done(host->mmc, cmd->mrq);
|
||||
else if (ret == IRQ_NONE)
|
||||
dev_warn(host->dev,
|
||||
"Unexpected IRQ! status=0x%08x, irq_en=0x%08x\n",
|
||||
raw_status, irq_en);
|
||||
|
||||
spin_unlock(&host->lock);
|
||||
return ret;
|
||||
|
@ -839,29 +1083,6 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id)
|
|||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
|
||||
{
|
||||
struct meson_host *host = mmc_priv(mmc);
|
||||
struct meson_tuning_params tp_old = host->tp;
|
||||
int ret = -EINVAL, i, cmd_error;
|
||||
|
||||
dev_info(mmc_dev(mmc), "(re)tuning...\n");
|
||||
|
||||
for (i = CLK_PHASE_0; i <= CLK_PHASE_270; i++) {
|
||||
host->tp.rx_phase = i;
|
||||
/* exclude the active parameter set if retuning */
|
||||
if (!memcmp(&tp_old, &host->tp, sizeof(tp_old)) &&
|
||||
mmc->doing_retune)
|
||||
continue;
|
||||
meson_mmc_set_tuning_params(mmc);
|
||||
ret = mmc_send_tuning(mmc, opcode, &cmd_error);
|
||||
if (!ret)
|
||||
break;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* NOTE: we only need this until the GPIO/pinctrl driver can handle
|
||||
* interrupts. For now, the MMC core will use this for polling.
|
||||
|
@ -888,6 +1109,38 @@ static void meson_mmc_cfg_init(struct meson_host *host)
|
|||
writel(cfg, host->regs + SD_EMMC_CFG);
|
||||
}
|
||||
|
||||
static int meson_mmc_card_busy(struct mmc_host *mmc)
|
||||
{
|
||||
struct meson_host *host = mmc_priv(mmc);
|
||||
u32 regval;
|
||||
|
||||
regval = readl(host->regs + SD_EMMC_STATUS);
|
||||
|
||||
/* We are only interested in lines 0 to 3, so mask the other ones */
|
||||
return !(FIELD_GET(STATUS_DATI, regval) & 0xf);
|
||||
}
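
For reference, the busy test above only looks at DAT[3:0] in the status register. An open-coded equivalent of the FIELD_GET(), with the mask value copied from the STATUS_DATI define and an invented example status value:

#include <stdint.h>
#include <stdio.h>

#define STATUS_DATI 0x00ff0000u /* GENMASK(23, 16): DAT line levels */

int main(void)
{
        uint32_t status = 0x00f00000; /* DAT7..4 high, DAT3..0 low */
        uint32_t dat = (status & STATUS_DATI) >> 16;

        /* the card is busy while it drives DAT0 (and friends) low */
        printf("card_busy = %d\n", !(dat & 0xf)); /* prints 1 */
        return 0;
}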
|
||||
|
||||
static int meson_mmc_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
|
||||
{
|
||||
/* vqmmc regulator is available */
|
||||
if (!IS_ERR(mmc->supply.vqmmc)) {
|
||||
/*
|
||||
* The usual amlogic setup uses a GPIO to switch from one
|
||||
* regulator to the other. While the voltage ramp up is
|
||||
* pretty fast, care must be taken when switching from 3.3v
|
||||
* to 1.8v. Please make sure the regulator framework is aware
|
||||
* of your own regulator constraints
|
||||
*/
|
||||
return mmc_regulator_set_vqmmc(mmc, ios);
|
||||
}
|
||||
|
||||
/* no vqmmc regulator, assume fixed regulator at 3/3.3V */
|
||||
if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330)
|
||||
return 0;
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static const struct mmc_host_ops meson_mmc_ops = {
|
||||
.request = meson_mmc_request,
|
||||
.set_ios = meson_mmc_set_ios,
|
||||
|
@ -895,6 +1148,8 @@ static const struct mmc_host_ops meson_mmc_ops = {
|
|||
.pre_req = meson_mmc_pre_req,
|
||||
.post_req = meson_mmc_post_req,
|
||||
.execute_tuning = meson_mmc_execute_tuning,
|
||||
.card_busy = meson_mmc_card_busy,
|
||||
.start_signal_voltage_switch = meson_mmc_voltage_switch,
|
||||
};
|
||||
|
||||
static int meson_mmc_probe(struct platform_device *pdev)
|
||||
|
@ -941,6 +1196,27 @@ static int meson_mmc_probe(struct platform_device *pdev)
|
|||
goto free_host;
|
||||
}
|
||||
|
||||
host->pinctrl = devm_pinctrl_get(&pdev->dev);
|
||||
if (IS_ERR(host->pinctrl)) {
|
||||
ret = PTR_ERR(host->pinctrl);
|
||||
goto free_host;
|
||||
}
|
||||
|
||||
host->pins_default = pinctrl_lookup_state(host->pinctrl,
|
||||
PINCTRL_STATE_DEFAULT);
|
||||
if (IS_ERR(host->pins_default)) {
|
||||
ret = PTR_ERR(host->pins_default);
|
||||
goto free_host;
|
||||
}
|
||||
|
||||
host->pins_clk_gate = pinctrl_lookup_state(host->pinctrl,
|
||||
"clk-gate");
|
||||
if (IS_ERR(host->pins_clk_gate)) {
|
||||
dev_warn(&pdev->dev,
|
||||
"can't get clk-gate pinctrl, using clk_stop bit\n");
|
||||
host->pins_clk_gate = NULL;
|
||||
}
|
||||
|
||||
host->core_clk = devm_clk_get(&pdev->dev, "core");
|
||||
if (IS_ERR(host->core_clk)) {
|
||||
ret = PTR_ERR(host->core_clk);
|
||||
|
@ -951,30 +1227,28 @@ static int meson_mmc_probe(struct platform_device *pdev)
|
|||
if (ret)
|
||||
goto free_host;
|
||||
|
||||
host->tp.core_phase = CLK_PHASE_180;
|
||||
host->tp.tx_phase = CLK_PHASE_0;
|
||||
host->tp.rx_phase = CLK_PHASE_0;
|
||||
|
||||
ret = meson_mmc_clk_init(host);
|
||||
if (ret)
|
||||
goto err_core_clk;
|
||||
|
||||
/* set config to sane default */
|
||||
meson_mmc_cfg_init(host);
|
||||
|
||||
/* Stop execution */
|
||||
writel(0, host->regs + SD_EMMC_START);
|
||||
|
||||
/* clear, ack, enable all interrupts */
|
||||
/* clear, ack and enable interrupts */
|
||||
writel(0, host->regs + SD_EMMC_IRQ_EN);
|
||||
writel(IRQ_EN_MASK, host->regs + SD_EMMC_STATUS);
|
||||
writel(IRQ_EN_MASK, host->regs + SD_EMMC_IRQ_EN);
|
||||
|
||||
/* set config to sane default */
|
||||
meson_mmc_cfg_init(host);
|
||||
writel(IRQ_CRC_ERR | IRQ_TIMEOUTS | IRQ_END_OF_CHAIN,
|
||||
host->regs + SD_EMMC_STATUS);
|
||||
writel(IRQ_CRC_ERR | IRQ_TIMEOUTS | IRQ_END_OF_CHAIN,
|
||||
host->regs + SD_EMMC_IRQ_EN);
|
||||
|
||||
ret = devm_request_threaded_irq(&pdev->dev, irq, meson_mmc_irq,
|
||||
meson_mmc_irq_thread, IRQF_SHARED,
|
||||
NULL, host);
|
||||
if (ret)
|
||||
goto err_div_clk;
|
||||
goto err_init_clk;
|
||||
|
||||
mmc->caps |= MMC_CAP_CMD23;
|
||||
mmc->max_blk_count = CMD_CFG_LENGTH_MASK;
|
||||
|
@ -990,7 +1264,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
|
|||
if (host->bounce_buf == NULL) {
|
||||
dev_err(host->dev, "Unable to map allocate DMA bounce buffer.\n");
|
||||
ret = -ENOMEM;
|
||||
goto err_div_clk;
|
||||
goto err_init_clk;
|
||||
}
|
||||
|
||||
host->descs = dma_alloc_coherent(host->dev, SD_EMMC_DESC_BUF_LEN,
|
||||
|
@ -1009,8 +1283,8 @@ static int meson_mmc_probe(struct platform_device *pdev)
|
|||
err_bounce_buf:
|
||||
dma_free_coherent(host->dev, host->bounce_buf_size,
|
||||
host->bounce_buf, host->bounce_dma_addr);
|
||||
err_div_clk:
|
||||
clk_disable_unprepare(host->cfg_div_clk);
|
||||
err_init_clk:
|
||||
clk_disable_unprepare(host->mmc_clk);
|
||||
err_core_clk:
|
||||
clk_disable_unprepare(host->core_clk);
|
||||
free_host:
|
||||
|
@ -1032,7 +1306,7 @@ static int meson_mmc_remove(struct platform_device *pdev)
|
|||
dma_free_coherent(host->dev, host->bounce_buf_size,
|
||||
host->bounce_buf, host->bounce_dma_addr);
|
||||
|
||||
clk_disable_unprepare(host->cfg_div_clk);
|
||||
clk_disable_unprepare(host->mmc_clk);
|
||||
clk_disable_unprepare(host->core_clk);
|
||||
|
||||
mmc_free_host(host->mmc);
|
||||
|
|
|
@@ -1904,7 +1904,7 @@ static const struct dev_pm_ops mmci_dev_pm_ops = {
        SET_RUNTIME_PM_OPS(mmci_runtime_suspend, mmci_runtime_resume, NULL)
};

static struct amba_id mmci_ids[] = {
static const struct amba_id mmci_ids[] = {
        {
                .id = 0x00041180,
                .mask = 0xff0fffff,

@@ -546,7 +546,7 @@ static int moxart_get_ro(struct mmc_host *mmc)
        return !!(readl(host->base + REG_STATUS) & WRITE_PROT);
}

static struct mmc_host_ops moxart_ops = {
static const struct mmc_host_ops moxart_ops = {
        .request = moxart_request,
        .set_ios = moxart_set_ios,
        .get_ro = moxart_get_ro,

@@ -1579,12 +1579,13 @@ static void msdc_hw_reset(struct mmc_host *mmc)
        sdr_clr_bits(host->base + EMMC_IOCON, 1);
}

static struct mmc_host_ops mt_msdc_ops = {
static const struct mmc_host_ops mt_msdc_ops = {
        .post_req = msdc_post_req,
        .pre_req = msdc_pre_req,
        .request = msdc_ops_request,
        .set_ios = msdc_ops_set_ios,
        .get_ro = mmc_gpio_get_ro,
        .get_cd = mmc_gpio_get_cd,
        .start_signal_voltage_switch = msdc_ops_switch_volt,
        .card_busy = msdc_card_busy,
        .execute_tuning = msdc_execute_tuning,

@ -681,6 +681,9 @@ static void mxcmci_data_done(struct mxcmci_host *host, unsigned int stat)
|
|||
|
||||
spin_unlock_irqrestore(&host->lock, flags);
|
||||
|
||||
if (data_error)
|
||||
return;
|
||||
|
||||
mxcmci_read_response(host, stat);
|
||||
host->cmd = NULL;
|
||||
|
||||
|
@ -1014,8 +1017,10 @@ static int mxcmci_probe(struct platform_device *pdev)
|
|||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
irq = platform_get_irq(pdev, 0);
|
||||
if (irq < 0)
|
||||
return -EINVAL;
|
||||
if (irq < 0) {
|
||||
dev_err(&pdev->dev, "failed to get IRQ: %d\n", irq);
|
||||
return irq;
|
||||
}
|
||||
|
||||
mmc = mmc_alloc_host(sizeof(*host), &pdev->dev);
|
||||
if (!mmc)
|
||||
|
@ -1098,8 +1103,13 @@ static int mxcmci_probe(struct platform_device *pdev)
|
|||
goto out_free;
|
||||
}
|
||||
|
||||
clk_prepare_enable(host->clk_per);
|
||||
clk_prepare_enable(host->clk_ipg);
|
||||
ret = clk_prepare_enable(host->clk_per);
|
||||
if (ret)
|
||||
goto out_free;
|
||||
|
||||
ret = clk_prepare_enable(host->clk_ipg);
|
||||
if (ret)
|
||||
goto out_clk_per_put;
|
||||
|
||||
mxcmci_softreset(host);
|
||||
|
||||
|
@ -1168,8 +1178,9 @@ out_free_dma:
|
|||
dma_release_channel(host->dma);
|
||||
|
||||
out_clk_put:
|
||||
clk_disable_unprepare(host->clk_per);
|
||||
clk_disable_unprepare(host->clk_ipg);
|
||||
out_clk_per_put:
|
||||
clk_disable_unprepare(host->clk_per);
|
||||
|
||||
out_free:
|
||||
mmc_free_host(mmc);
|
||||
|
@ -1212,10 +1223,17 @@ static int __maybe_unused mxcmci_resume(struct device *dev)
|
|||
{
|
||||
struct mmc_host *mmc = dev_get_drvdata(dev);
|
||||
struct mxcmci_host *host = mmc_priv(mmc);
|
||||
int ret;
|
||||
|
||||
clk_prepare_enable(host->clk_per);
|
||||
clk_prepare_enable(host->clk_ipg);
|
||||
return 0;
|
||||
ret = clk_prepare_enable(host->clk_per);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = clk_prepare_enable(host->clk_ipg);
|
||||
if (ret)
|
||||
clk_disable_unprepare(host->clk_per);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static SIMPLE_DEV_PM_OPS(mxcmci_pm_ops, mxcmci_suspend, mxcmci_resume);
|
||||
|
|
|
@@ -71,7 +71,7 @@ struct mmc_spi_platform_data *mmc_spi_get_pdata(struct spi_device *spi)
        struct device *dev = &spi->dev;
        struct device_node *np = dev->of_node;
        struct of_mmc_spi *oms;
        const u32 *voltage_ranges;
        const __be32 *voltage_ranges;
        int num_ranges;
        int i;

@@ -2076,9 +2076,9 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
host->dbclk = NULL;
}

/* Since we do only SG emulation, we can have as many segs
 * as we want. */
mmc->max_segs = 1024;
/* Set this to a value that allows allocating an entire descriptor
 * list within a page (zero order allocation). */
mmc->max_segs = 64;

mmc->max_blk_size = 512; /* Block Length at max can be 1024 */
mmc->max_blk_count = 0xFFFF; /* No. of Blocks is 16 bits */

@@ -2322,7 +2322,7 @@ static int omap_hsmmc_runtime_resume(struct device *dev)
return 0;
}

static struct dev_pm_ops omap_hsmmc_dev_pm_ops = {
static const struct dev_pm_ops omap_hsmmc_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(omap_hsmmc_suspend, omap_hsmmc_resume)
.runtime_suspend = omap_hsmmc_runtime_suspend,
.runtime_resume = omap_hsmmc_runtime_resume,
@@ -31,6 +31,8 @@ struct renesas_sdhi_of_data {
int scc_offset;
struct renesas_sdhi_scc *taps;
int taps_num;
unsigned int max_blk_count;
unsigned short max_segs;
};

int renesas_sdhi_probe(struct platform_device *pdev,
@@ -40,6 +40,7 @@
#define EXT_ACC 0xe4

#define SDHI_VER_GEN2_SDR50 0x490c
#define SDHI_VER_RZ_A1 0x820b
/* very old datasheets said 0x490c for SDR104, too. They are wrong! */
#define SDHI_VER_GEN2_SDR104 0xcb0d
#define SDHI_VER_GEN3_SD 0xcc10

@@ -398,12 +399,14 @@ static void renesas_sdhi_hw_reset(struct tmio_mmc_host *host)
sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
}

static int renesas_sdhi_wait_idle(struct tmio_mmc_host *host)
static int renesas_sdhi_wait_idle(struct tmio_mmc_host *host, u32 bit)
{
int timeout = 1000;
/* CBSY is set when busy, SCLKDIVEN is cleared when busy */
u32 wait_state = (bit == TMIO_STAT_CMD_BUSY ? TMIO_STAT_CMD_BUSY : 0);

while (--timeout && !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS)
& TMIO_STAT_SCLKDIVEN))
while (--timeout && (sd_ctrl_read16_and_16_as_32(host, CTL_STATUS)
& bit) == wait_state)
udelay(1);

if (!timeout) {

@@ -416,17 +419,22 @@ static int renesas_sdhi_wait_idle(struct tmio_mmc_host *host)

static int renesas_sdhi_write16_hook(struct tmio_mmc_host *host, int addr)
{
u32 bit = TMIO_STAT_SCLKDIVEN;

switch (addr) {
case CTL_SD_CMD:
case CTL_STOP_INTERNAL_ACTION:
case CTL_XFER_BLK_COUNT:
case CTL_SD_CARD_CLK_CTL:
case CTL_SD_XFER_LEN:
case CTL_SD_MEM_CARD_OPT:
case CTL_TRANSACTION_CTL:
case CTL_DMA_ENABLE:
case EXT_ACC:
return renesas_sdhi_wait_idle(host);
if (host->pdata->flags & TMIO_MMC_HAVE_CBSY)
bit = TMIO_STAT_CMD_BUSY;
/* fallthrough */
case CTL_SD_CARD_CLK_CTL:
return renesas_sdhi_wait_idle(host, bit);
}

return 0;

@@ -452,10 +460,11 @@ static int renesas_sdhi_multi_io_quirk(struct mmc_card *card,

static void renesas_sdhi_enable_dma(struct tmio_mmc_host *host, bool enable)
{
sd_ctrl_write16(host, CTL_DMA_ENABLE, enable ? 2 : 0);
/* Iff regs are 8 byte apart, sdbuf is 64 bit. Otherwise always 32. */
int width = (host->bus_shift == 2) ? 64 : 32;

/* enable 32bit access if DMA mode if possibile */
renesas_sdhi_sdbuf_width(host, enable ? 32 : 16);
sd_ctrl_write16(host, CTL_DMA_ENABLE, enable ? DMA_ENABLE_DMASDRW : 0);
renesas_sdhi_sdbuf_width(host, enable ? width : 16);
}

int renesas_sdhi_probe(struct platform_device *pdev,

@@ -526,6 +535,8 @@ int renesas_sdhi_probe(struct platform_device *pdev,
mmc_data->capabilities |= of_data->capabilities;
mmc_data->capabilities2 |= of_data->capabilities2;
mmc_data->dma_rx_offset = of_data->dma_rx_offset;
mmc_data->max_blk_count = of_data->max_blk_count;
mmc_data->max_segs = of_data->max_segs;
dma_priv->dma_buswidth = of_data->dma_buswidth;
host->bus_shift = of_data->bus_shift;
}

@@ -579,6 +590,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
if (ret < 0)
goto efree;

/* One Gen2 SDHI incarnation does NOT have a CBSY bit */
if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;

/* Enable tuning iff we have an SCC and a supported mode */
if (of_data && of_data->scc_offset &&
(host->mmc->caps & MMC_CAP_UHS_SDR104 ||
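The wait_idle refactor above polls either CBSY (set while busy) or SCLKDIVEN (cleared while busy), depending on what the controller variant provides. A rough sketch of that generalized busy-poll, with a placeholder status accessor:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Placeholder for the controller's 32-bit status read. */
extern u32 example_read_status(void);

/*
 * Wait until @bit leaves its busy state. The value that means "busy"
 * depends on the bit: CBSY-style bits are set while busy, SCLKDIVEN-style
 * bits are cleared while busy, so the caller passes the busy value.
 */
static int example_wait_idle(u32 bit, u32 busy_state)
{
	int timeout = 1000;

	while (--timeout && (example_read_status() & bit) == busy_state)
		udelay(1);

	return timeout ? 0 : -EBUSY;
}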
@@ -0,0 +1,287 @@
/*
 * DMA support for Internal DMAC with SDHI SD/SDIO controller
 *
 * Copyright (C) 2016-17 Renesas Electronics Corporation
 * Copyright (C) 2016-17 Horms Solutions, Simon Horman
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/mfd/tmio.h>
#include <linux/mmc/host.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/pagemap.h>
#include <linux/scatterlist.h>
#include <linux/sys_soc.h>

#include "renesas_sdhi.h"
#include "tmio_mmc.h"

#define DM_CM_DTRAN_MODE 0x820
#define DM_CM_DTRAN_CTRL 0x828
#define DM_CM_RST 0x830
#define DM_CM_INFO1 0x840
#define DM_CM_INFO1_MASK 0x848
#define DM_CM_INFO2 0x850
#define DM_CM_INFO2_MASK 0x858
#define DM_DTRAN_ADDR 0x880

/* DM_CM_DTRAN_MODE */
#define DTRAN_MODE_CH_NUM_CH0 0 /* "downstream" = for write commands */
#define DTRAN_MODE_CH_NUM_CH1 BIT(16) /* "uptream" = for read commands */
#define DTRAN_MODE_BUS_WID_TH (BIT(5) | BIT(4))
#define DTRAN_MODE_ADDR_MODE BIT(0) /* 1 = Increment address */

/* DM_CM_DTRAN_CTRL */
#define DTRAN_CTRL_DM_START BIT(0)

/* DM_CM_RST */
#define RST_DTRANRST1 BIT(9)
#define RST_DTRANRST0 BIT(8)
#define RST_RESERVED_BITS GENMASK_ULL(32, 0)

/* DM_CM_INFO1 and DM_CM_INFO1_MASK */
#define INFO1_CLEAR 0
#define INFO1_DTRANEND1 BIT(17)
#define INFO1_DTRANEND0 BIT(16)

/* DM_CM_INFO2 and DM_CM_INFO2_MASK */
#define INFO2_DTRANERR1 BIT(17)
#define INFO2_DTRANERR0 BIT(16)

/*
 * Specification of this driver:
 * - host->chan_{rx,tx} will be used as a flag of enabling/disabling the dma
 * - Since this SDHI DMAC register set has 16 but 32-bit width, we
 *   need a custom accessor.
 */

/* Definitions for sampling clocks */
static struct renesas_sdhi_scc rcar_gen3_scc_taps[] = {
{
.clk_rate = 0,
.tap = 0x00000300,
},
};

static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
TMIO_MMC_CLK_ACTUAL | TMIO_MMC_HAVE_CBSY |
TMIO_MMC_MIN_RCAR2,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_CMD23,
.bus_shift = 2,
.scc_offset = 0x1000,
.taps = rcar_gen3_scc_taps,
.taps_num = ARRAY_SIZE(rcar_gen3_scc_taps),
/* Gen3 SDHI DMAC can handle 0xffffffff blk count, but seg = 1 */
.max_blk_count = 0xffffffff,
.max_segs = 1,
};

static const struct of_device_id renesas_sdhi_internal_dmac_of_match[] = {
{ .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, },
{ .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, },
{},
};
MODULE_DEVICE_TABLE(of, renesas_sdhi_internal_dmac_of_match);

static void
renesas_sdhi_internal_dmac_dm_write(struct tmio_mmc_host *host,
int addr, u64 val)
{
writeq(val, host->ctl + addr);
}

static void
renesas_sdhi_internal_dmac_enable_dma(struct tmio_mmc_host *host, bool enable)
{
if (!host->chan_tx || !host->chan_rx)
return;

if (!enable)
renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO1,
INFO1_CLEAR);

if (host->dma->enable)
host->dma->enable(host, enable);
}

static void
renesas_sdhi_internal_dmac_abort_dma(struct tmio_mmc_host *host) {
u64 val = RST_DTRANRST1 | RST_DTRANRST0;

renesas_sdhi_internal_dmac_enable_dma(host, false);

renesas_sdhi_internal_dmac_dm_write(host, DM_CM_RST,
RST_RESERVED_BITS & ~val);
renesas_sdhi_internal_dmac_dm_write(host, DM_CM_RST,
RST_RESERVED_BITS | val);

renesas_sdhi_internal_dmac_enable_dma(host, true);
}

static void
renesas_sdhi_internal_dmac_dataend_dma(struct tmio_mmc_host *host) {
tasklet_schedule(&host->dma_complete);
}

static void
renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host,
struct mmc_data *data)
{
struct scatterlist *sg = host->sg_ptr;
u32 dtran_mode = DTRAN_MODE_BUS_WID_TH | DTRAN_MODE_ADDR_MODE;
enum dma_data_direction dir;
int ret;
u32 irq_mask;

/* This DMAC cannot handle if sg_len is not 1 */
WARN_ON(host->sg_len > 1);

/* This DMAC cannot handle if buffer is not 8-bytes alignment */
if (!IS_ALIGNED(sg->offset, 8)) {
host->force_pio = true;
renesas_sdhi_internal_dmac_enable_dma(host, false);
return;
}

if (data->flags & MMC_DATA_READ) {
dtran_mode |= DTRAN_MODE_CH_NUM_CH1;
dir = DMA_FROM_DEVICE;
irq_mask = TMIO_STAT_RXRDY;
} else {
dtran_mode |= DTRAN_MODE_CH_NUM_CH0;
dir = DMA_TO_DEVICE;
irq_mask = TMIO_STAT_TXRQ;
}

ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, dir);
if (ret < 0)
return;

renesas_sdhi_internal_dmac_enable_dma(host, true);

/* disable PIO irqs to avoid "PIO IRQ in DMA mode!" */
tmio_mmc_disable_mmc_irqs(host, irq_mask);

/* set dma parameters */
renesas_sdhi_internal_dmac_dm_write(host, DM_CM_DTRAN_MODE,
dtran_mode);
renesas_sdhi_internal_dmac_dm_write(host, DM_DTRAN_ADDR,
sg->dma_address);
}

static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
{
struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;

tmio_mmc_enable_mmc_irqs(host, TMIO_STAT_DATAEND);

/* start the DMAC */
renesas_sdhi_internal_dmac_dm_write(host, DM_CM_DTRAN_CTRL,
DTRAN_CTRL_DM_START);
}

static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
{
struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
enum dma_data_direction dir;

spin_lock_irq(&host->lock);

if (!host->data)
goto out;

if (host->data->flags & MMC_DATA_READ)
dir = DMA_FROM_DEVICE;
else
dir = DMA_TO_DEVICE;

renesas_sdhi_internal_dmac_enable_dma(host, false);
dma_unmap_sg(&host->pdev->dev, host->sg_ptr, host->sg_len, dir);

tmio_mmc_do_data_irq(host);
out:
spin_unlock_irq(&host->lock);
}

static void
renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
struct tmio_mmc_data *pdata)
{
/* Each value is set to non-zero to assume "enabling" each DMA */
host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;

tasklet_init(&host->dma_complete,
renesas_sdhi_internal_dmac_complete_tasklet_fn,
(unsigned long)host);
tasklet_init(&host->dma_issue,
renesas_sdhi_internal_dmac_issue_tasklet_fn,
(unsigned long)host);
}

static void
renesas_sdhi_internal_dmac_release_dma(struct tmio_mmc_host *host)
{
/* Each value is set to zero to assume "disabling" each DMA */
host->chan_rx = host->chan_tx = NULL;
}

static const struct tmio_mmc_dma_ops renesas_sdhi_internal_dmac_dma_ops = {
.start = renesas_sdhi_internal_dmac_start_dma,
.enable = renesas_sdhi_internal_dmac_enable_dma,
.request = renesas_sdhi_internal_dmac_request_dma,
.release = renesas_sdhi_internal_dmac_release_dma,
.abort = renesas_sdhi_internal_dmac_abort_dma,
.dataend = renesas_sdhi_internal_dmac_dataend_dma,
};

/*
 * Whitelist of specific R-Car Gen3 SoC ES versions to use this DMAC
 * implementation as others may use a different implementation.
 */
static const struct soc_device_attribute gen3_soc_whitelist[] = {
{ .soc_id = "r8a7795", .revision = "ES1.*" },
{ .soc_id = "r8a7795", .revision = "ES2.0" },
{ .soc_id = "r8a7796", .revision = "ES1.0" },
{ /* sentinel */ }
};

static int renesas_sdhi_internal_dmac_probe(struct platform_device *pdev)
{
if (!soc_device_match(gen3_soc_whitelist))
return -ENODEV;

return renesas_sdhi_probe(pdev, &renesas_sdhi_internal_dmac_dma_ops);
}

static const struct dev_pm_ops renesas_sdhi_internal_dmac_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
pm_runtime_force_resume)
SET_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
tmio_mmc_host_runtime_resume,
NULL)
};

static struct platform_driver renesas_internal_dmac_sdhi_driver = {
.driver = {
.name = "renesas_sdhi_internal_dmac",
.pm = &renesas_sdhi_internal_dmac_dev_pm_ops,
.of_match_table = renesas_sdhi_internal_dmac_of_match,
},
.probe = renesas_sdhi_internal_dmac_probe,
.remove = renesas_sdhi_remove,
};

module_platform_driver(renesas_internal_dmac_sdhi_driver);

MODULE_DESCRIPTION("Renesas SDHI driver for internal DMAC");
MODULE_AUTHOR("Yoshihiro Shimoda");
MODULE_LICENSE("GPL v2");
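The new internal-DMAC driver above gates its probe on a SoC/ES whitelist. A minimal sketch of that soc_device_match() gating pattern, with a made-up SoC id:

#include <linux/errno.h>
#include <linux/sys_soc.h>

/* Hypothetical whitelist: only bind on "exampleSoC" revision ES1.x. */
static const struct soc_device_attribute example_whitelist[] = {
	{ .soc_id = "exampleSoC", .revision = "ES1.*" },
	{ /* sentinel */ }
};

static int example_probe_gate(void)
{
	/* soc_device_match() returns the matching entry or NULL. */
	if (!soc_device_match(example_whitelist))
		return -ENODEV;	/* let another implementation claim the device */

	return 0;
}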
@@ -1,5 +1,5 @@
/*
 * DMA function for TMIO MMC implementations
 * DMA support use of SYS DMAC with SDHI SD/SDIO controller
 *
 * Copyright (C) 2016-17 Renesas Electronics Corporation
 * Copyright (C) 2016-17 Sang Engineering, Wolfram Sang

@@ -18,8 +18,10 @@
#include <linux/mmc/host.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/pagemap.h>
#include <linux/scatterlist.h>
#include <linux/sys_soc.h>

#include "renesas_sdhi.h"
#include "tmio_mmc.h"

@@ -31,7 +33,8 @@ static const struct renesas_sdhi_of_data of_default_cfg = {
};

static const struct renesas_sdhi_of_data of_rz_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_32BIT_DATA_PORT,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_32BIT_DATA_PORT |
TMIO_MMC_HAVE_CBSY,
.tmio_ocr_mask = MMC_VDD_32_33,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
};

@@ -56,7 +59,8 @@ static struct renesas_sdhi_scc rcar_gen2_scc_taps[] = {

static const struct renesas_sdhi_of_data of_rcar_gen2_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
TMIO_MMC_CLK_ACTUAL | TMIO_MMC_MIN_RCAR2,
TMIO_MMC_CLK_ACTUAL | TMIO_MMC_HAVE_CBSY |
TMIO_MMC_MIN_RCAR2,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_CMD23,
.dma_buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES,

@@ -76,7 +80,8 @@ static struct renesas_sdhi_scc rcar_gen3_scc_taps[] = {

static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
TMIO_MMC_CLK_ACTUAL | TMIO_MMC_MIN_RCAR2,
TMIO_MMC_CLK_ACTUAL | TMIO_MMC_HAVE_CBSY |
TMIO_MMC_MIN_RCAR2,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_CMD23,
.bus_shift = 2,

@@ -93,6 +98,8 @@ static const struct of_device_id renesas_sdhi_sys_dmac_of_match[] = {
{ .compatible = "renesas,sdhi-r7s72100", .data = &of_rz_compatible, },
{ .compatible = "renesas,sdhi-r8a7778", .data = &of_rcar_gen1_compatible, },
{ .compatible = "renesas,sdhi-r8a7779", .data = &of_rcar_gen1_compatible, },
{ .compatible = "renesas,sdhi-r8a7743", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7745", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7790", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7791", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7792", .data = &of_rcar_gen2_compatible, },

@@ -126,6 +133,11 @@ static void renesas_sdhi_sys_dmac_abort_dma(struct tmio_mmc_host *host)
renesas_sdhi_sys_dmac_enable_dma(host, true);
}

static void renesas_sdhi_sys_dmac_dataend_dma(struct tmio_mmc_host *host)
{
complete(&host->dma_dataend);
}

static void renesas_sdhi_sys_dmac_dma_callback(void *arg)
{
struct tmio_mmc_host *host = arg;

@@ -451,10 +463,24 @@ static const struct tmio_mmc_dma_ops renesas_sdhi_sys_dmac_dma_ops = {
.request = renesas_sdhi_sys_dmac_request_dma,
.release = renesas_sdhi_sys_dmac_release_dma,
.abort = renesas_sdhi_sys_dmac_abort_dma,
.dataend = renesas_sdhi_sys_dmac_dataend_dma,
};

/*
 * Whitelist of specific R-Car Gen3 SoC ES versions to use this DMAC
 * implementation. Currently empty as all supported ES versions use
 * the internal DMAC.
 */
static const struct soc_device_attribute gen3_soc_whitelist[] = {
{ /* sentinel */ }
};

static int renesas_sdhi_sys_dmac_probe(struct platform_device *pdev)
{
if (of_device_get_match_data(&pdev->dev) == &of_rcar_gen3_compatible &&
!soc_device_match(gen3_soc_whitelist))
return -ENODEV;

return renesas_sdhi_probe(pdev, &renesas_sdhi_sys_dmac_dma_ops);
}
@@ -909,7 +909,7 @@ static int sd_set_bus_width(struct rtsx_usb_sdmmc *host,
unsigned char bus_width)
{
int err = 0;
u8 width[] = {
static const u8 width[] = {
[MMC_BUS_WIDTH_1] = SD_BUS_WIDTH_1BIT,
[MMC_BUS_WIDTH_4] = SD_BUS_WIDTH_4BIT,
[MMC_BUS_WIDTH_8] = SD_BUS_WIDTH_8BIT,
@@ -1313,7 +1313,7 @@ static void s3cmci_enable_sdio_irq(struct mmc_host *mmc, int enable)
s3cmci_check_sdio_irq(host);
}

static struct mmc_host_ops s3cmci_ops = {
static const struct mmc_host_ops s3cmci_ops = {
.request = s3cmci_request,
.set_ios = s3cmci_set_ios,
.get_ro = mmc_gpio_get_ro,
@@ -294,13 +294,10 @@ static int sdhci_acpi_sdio_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct sdhci_host *host;

if (!c || !c->host)
return 0;

host = c->host;

/* Platform specific code during sdio probe slot goes here */

return 0;

@@ -432,7 +429,6 @@ static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(const char *hid,
static int sdhci_acpi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
acpi_handle handle = ACPI_HANDLE(dev);
struct acpi_device *device, *child;
struct sdhci_acpi_host *c;
struct sdhci_host *host;

@@ -442,7 +438,8 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
const char *uid;
int err;

if (acpi_bus_get_device(handle, &device))
device = ACPI_COMPANION(dev);
if (!device)
return -ENODEV;

hid = acpi_device_hid(device);
@@ -186,7 +186,7 @@ static void sdhci_bcm_kona_init_74_clocks(struct sdhci_host *host,
udelay(740);
}

static struct sdhci_ops sdhci_bcm_kona_ops = {
static const struct sdhci_ops sdhci_bcm_kona_ops = {
.set_clock = sdhci_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,

@@ -197,7 +197,7 @@ static struct sdhci_ops sdhci_bcm_kona_ops = {
.card_event = sdhci_bcm_kona_card_event,
};

static struct sdhci_pltfm_data sdhci_pltfm_data_kona = {
static const struct sdhci_pltfm_data sdhci_pltfm_data_kona = {
.ops = &sdhci_bcm_kona_ops,
.quirks = SDHCI_QUIRK_NO_CARD_NO_RESET |
SDHCI_QUIRK_BROKEN_TIMEOUT_VAL | SDHCI_QUIRK_32BIT_DMA_ADDR |
@@ -21,41 +21,6 @@

#include "sdhci-pltfm.h"

#ifdef CONFIG_PM_SLEEP

static int sdhci_brcmstb_suspend(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int res;

if (host->tuning_mode != SDHCI_TUNING_MODE_3)
mmc_retune_needed(host->mmc);

res = sdhci_suspend_host(host);
if (res)
return res;
clk_disable_unprepare(pltfm_host->clk);
return res;
}

static int sdhci_brcmstb_resume(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int err;

err = clk_prepare_enable(pltfm_host->clk);
if (err)
return err;
return sdhci_resume_host(host);
}

#endif /* CONFIG_PM_SLEEP */

static SIMPLE_DEV_PM_OPS(sdhci_brcmstb_pmops, sdhci_brcmstb_suspend,
sdhci_brcmstb_resume);

static const struct sdhci_ops sdhci_brcmstb_ops = {
.set_clock = sdhci_set_clock,
.set_bus_width = sdhci_set_bus_width,

@@ -63,7 +28,7 @@ static const struct sdhci_ops sdhci_brcmstb_ops = {
.set_uhs_signaling = sdhci_set_uhs_signaling,
};

static struct sdhci_pltfm_data sdhci_brcmstb_pdata = {
static const struct sdhci_pltfm_data sdhci_brcmstb_pdata = {
.ops = &sdhci_brcmstb_ops,
};

@@ -131,7 +96,7 @@ MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
static struct platform_driver sdhci_brcmstb_driver = {
.driver = {
.name = "sdhci-brcmstb",
.pm = &sdhci_brcmstb_pmops,
.pm = &sdhci_pltfm_pmops,
.of_match_table = of_match_ptr(sdhci_brcm_of_match),
},
.probe = sdhci_brcmstb_probe,
@@ -67,9 +67,16 @@
 */
#define SDHCI_CDNS_MAX_TUNING_LOOP 40

struct sdhci_cdns_phy_param {
u8 addr;
u8 data;
};

struct sdhci_cdns_priv {
void __iomem *hrs_addr;
bool enhanced_strobe;
unsigned int nr_phy_params;
struct sdhci_cdns_phy_param phy_params[0];
};

struct sdhci_cdns_phy_cfg {

@@ -115,9 +122,22 @@ static int sdhci_cdns_write_phy_reg(struct sdhci_cdns_priv *priv,
return 0;
}

static int sdhci_cdns_phy_init(struct device_node *np,
struct sdhci_cdns_priv *priv)
static unsigned int sdhci_cdns_phy_param_count(struct device_node *np)
{
unsigned int count = 0;
int i;

for (i = 0; i < ARRAY_SIZE(sdhci_cdns_phy_cfgs); i++)
if (of_property_read_bool(np, sdhci_cdns_phy_cfgs[i].property))
count++;

return count;
}

static void sdhci_cdns_phy_param_parse(struct device_node *np,
struct sdhci_cdns_priv *priv)
{
struct sdhci_cdns_phy_param *p = priv->phy_params;
u32 val;
int ret, i;

@@ -127,9 +147,19 @@ static int sdhci_cdns_phy_init(struct device_node *np,
if (ret)
continue;

ret = sdhci_cdns_write_phy_reg(priv,
sdhci_cdns_phy_cfgs[i].addr,
val);
p->addr = sdhci_cdns_phy_cfgs[i].addr;
p->data = val;
p++;
}
}

static int sdhci_cdns_phy_init(struct sdhci_cdns_priv *priv)
{
int ret, i;

for (i = 0; i < priv->nr_phy_params; i++) {
ret = sdhci_cdns_write_phy_reg(priv, priv->phy_params[i].addr,
priv->phy_params[i].data);
if (ret)
return ret;
}

@@ -302,6 +332,8 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
struct sdhci_pltfm_host *pltfm_host;
struct sdhci_cdns_priv *priv;
struct clk *clk;
size_t priv_size;
unsigned int nr_phy_params;
int ret;
struct device *dev = &pdev->dev;

@@ -313,7 +345,9 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
if (ret)
return ret;

host = sdhci_pltfm_init(pdev, &sdhci_cdns_pltfm_data, sizeof(*priv));
nr_phy_params = sdhci_cdns_phy_param_count(dev->of_node);
priv_size = sizeof(*priv) + sizeof(priv->phy_params[0]) * nr_phy_params;
host = sdhci_pltfm_init(pdev, &sdhci_cdns_pltfm_data, priv_size);
if (IS_ERR(host)) {
ret = PTR_ERR(host);
goto disable_clk;

@@ -322,7 +356,8 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
pltfm_host = sdhci_priv(host);
pltfm_host->clk = clk;

priv = sdhci_cdns_priv(host);
priv = sdhci_pltfm_priv(pltfm_host);
priv->nr_phy_params = nr_phy_params;
priv->hrs_addr = host->ioaddr;
priv->enhanced_strobe = false;
host->ioaddr += SDHCI_CDNS_SRS_BASE;

@@ -336,7 +371,9 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
if (ret)
goto free;

ret = sdhci_cdns_phy_init(dev->of_node, priv);
sdhci_cdns_phy_param_parse(dev->of_node, priv);

ret = sdhci_cdns_phy_init(priv);
if (ret)
goto free;

@@ -353,6 +390,39 @@ disable_clk:
return ret;
}

#ifdef CONFIG_PM_SLEEP
static int sdhci_cdns_resume(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_cdns_priv *priv = sdhci_pltfm_priv(pltfm_host);
int ret;

ret = clk_prepare_enable(pltfm_host->clk);
if (ret)
return ret;

ret = sdhci_cdns_phy_init(priv);
if (ret)
goto disable_clk;

ret = sdhci_resume_host(host);
if (ret)
goto disable_clk;

return 0;

disable_clk:
clk_disable_unprepare(pltfm_host->clk);

return ret;
}
#endif

static const struct dev_pm_ops sdhci_cdns_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(sdhci_pltfm_suspend, sdhci_cdns_resume)
};

static const struct of_device_id sdhci_cdns_match[] = {
{ .compatible = "socionext,uniphier-sd4hc" },
{ .compatible = "cdns,sd4hc" },

@@ -363,7 +433,7 @@ MODULE_DEVICE_TABLE(of, sdhci_cdns_match);
static struct platform_driver sdhci_cdns_driver = {
.driver = {
.name = "sdhci-cdns",
.pm = &sdhci_pltfm_pmops,
.pm = &sdhci_cdns_pm_ops,
.of_match_table = sdhci_cdns_match,
},
.probe = sdhci_cdns_probe,
@@ -54,6 +54,9 @@
#define ESDHC_CLOCK_HCKEN 0x00000002
#define ESDHC_CLOCK_IPGEN 0x00000001

/* Host Controller Capabilities Register 2 */
#define ESDHC_CAPABILITIES_1 0x114

/* Tuning Block Control Register */
#define ESDHC_TBCTL 0x120
#define ESDHC_TB_EN 0x00000004
@@ -611,7 +611,7 @@ static void msm_hc_select_hs400(struct sdhci_host *host)
 * HS400 - divided clock (free running MCLK/2)
 * All other modes - default (free running MCLK)
 */
void sdhci_msm_hc_select_mode(struct sdhci_host *host)
static void sdhci_msm_hc_select_mode(struct sdhci_host *host)
{
struct mmc_ios ios = host->mmc->ios;

@@ -1049,7 +1049,7 @@ static unsigned int sdhci_msm_get_min_clock(struct sdhci_host *host)
 * instead directly control the GCC clock as per
 * HW recommendation.
 **/
void __sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
static void __sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
{
u16 clk;
/*

@@ -1133,6 +1133,7 @@ static int sdhci_msm_probe(struct platform_device *pdev)
if (IS_ERR(host))
return PTR_ERR(host);

host->sdma_boundary = 0;
pltfm_host = sdhci_priv(host);
msm_host = sdhci_pltfm_priv(pltfm_host);
msm_host->mmc = host->mmc;
@@ -216,13 +216,13 @@ static void sdhci_arasan_hs400_enhanced_strobe(struct mmc_host *mmc,
u32 vendor;
struct sdhci_host *host = mmc_priv(mmc);

vendor = readl(host->ioaddr + SDHCI_ARASAN_VENDOR_REGISTER);
vendor = sdhci_readl(host, SDHCI_ARASAN_VENDOR_REGISTER);
if (ios->enhanced_strobe)
vendor |= VENDOR_ENHANCED_STROBE;
else
vendor &= ~VENDOR_ENHANCED_STROBE;

writel(vendor, host->ioaddr + SDHCI_ARASAN_VENDOR_REGISTER);
sdhci_writel(host, vendor, SDHCI_ARASAN_VENDOR_REGISTER);
}

static void sdhci_arasan_reset(struct sdhci_host *host, u8 mask)

@@ -262,7 +262,7 @@ static int sdhci_arasan_voltage_switch(struct mmc_host *mmc,
return -EINVAL;
}

static struct sdhci_ops sdhci_arasan_ops = {
static const struct sdhci_ops sdhci_arasan_ops = {
.set_clock = sdhci_arasan_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,

@@ -271,7 +271,7 @@ static struct sdhci_ops sdhci_arasan_ops = {
.set_uhs_signaling = sdhci_set_uhs_signaling,
};

static struct sdhci_pltfm_data sdhci_arasan_pdata = {
static const struct sdhci_pltfm_data sdhci_arasan_pdata = {
.ops = &sdhci_arasan_ops,
.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
@@ -42,6 +42,7 @@ struct sdhci_at91_priv {
struct clk *hclock;
struct clk *gck;
struct clk *mainck;
bool restore_needed;
};

static void sdhci_at91_set_force_card_detect(struct sdhci_host *host)

@@ -146,6 +147,100 @@ static const struct of_device_id sdhci_at91_dt_match[] = {
};
MODULE_DEVICE_TABLE(of, sdhci_at91_dt_match);

static int sdhci_at91_set_clks_presets(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
int ret;
unsigned int caps0, caps1;
unsigned int clk_base, clk_mul;
unsigned int gck_rate, real_gck_rate;
unsigned int preset_div;

/*
 * The mult clock is provided by as a generated clock by the PMC
 * controller. In order to set the rate of gck, we have to get the
 * base clock rate and the clock mult from capabilities.
 */
clk_prepare_enable(priv->hclock);
caps0 = readl(host->ioaddr + SDHCI_CAPABILITIES);
caps1 = readl(host->ioaddr + SDHCI_CAPABILITIES_1);
clk_base = (caps0 & SDHCI_CLOCK_V3_BASE_MASK) >> SDHCI_CLOCK_BASE_SHIFT;
clk_mul = (caps1 & SDHCI_CLOCK_MUL_MASK) >> SDHCI_CLOCK_MUL_SHIFT;
gck_rate = clk_base * 1000000 * (clk_mul + 1);
ret = clk_set_rate(priv->gck, gck_rate);
if (ret < 0) {
dev_err(dev, "failed to set gck");
clk_disable_unprepare(priv->hclock);
return ret;
}
/*
 * We need to check if we have the requested rate for gck because in
 * some cases this rate could be not supported. If it happens, the rate
 * is the closest one gck can provide. We have to update the value
 * of clk mul.
 */
real_gck_rate = clk_get_rate(priv->gck);
if (real_gck_rate != gck_rate) {
clk_mul = real_gck_rate / (clk_base * 1000000) - 1;
caps1 &= (~SDHCI_CLOCK_MUL_MASK);
caps1 |= ((clk_mul << SDHCI_CLOCK_MUL_SHIFT) &
SDHCI_CLOCK_MUL_MASK);
/* Set capabilities in r/w mode. */
writel(SDMMC_CACR_KEY | SDMMC_CACR_CAPWREN,
host->ioaddr + SDMMC_CACR);
writel(caps1, host->ioaddr + SDHCI_CAPABILITIES_1);
/* Set capabilities in ro mode. */
writel(0, host->ioaddr + SDMMC_CACR);
dev_info(dev, "update clk mul to %u as gck rate is %u Hz\n",
clk_mul, real_gck_rate);
}

/*
 * We have to set preset values because it depends on the clk_mul
 * value. Moreover, SDR104 is supported in a degraded mode since the
 * maximum sd clock value is 120 MHz instead of 208 MHz. For that
 * reason, we need to use presets to support SDR104.
 */
preset_div = DIV_ROUND_UP(real_gck_rate, 24000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR12);
preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR25);
preset_div = DIV_ROUND_UP(real_gck_rate, 100000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR50);
preset_div = DIV_ROUND_UP(real_gck_rate, 120000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR104);
preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_DDR50);

clk_prepare_enable(priv->mainck);
clk_prepare_enable(priv->gck);

return 0;
}

#ifdef CONFIG_PM_SLEEP
static int sdhci_at91_suspend(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
int ret;

ret = pm_runtime_force_suspend(dev);

priv->restore_needed = true;

return ret;
}
#endif /* CONFIG_PM_SLEEP */

#ifdef CONFIG_PM
static int sdhci_at91_runtime_suspend(struct device *dev)
{

@@ -173,6 +268,15 @@ static int sdhci_at91_runtime_resume(struct device *dev)
struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
int ret;

if (priv->restore_needed) {
ret = sdhci_at91_set_clks_presets(dev);
if (ret)
return ret;

priv->restore_needed = false;
goto out;
}

ret = clk_prepare_enable(priv->mainck);
if (ret) {
dev_err(dev, "can't enable mainck\n");

@@ -191,13 +295,13 @@ static int sdhci_at91_runtime_resume(struct device *dev)
return ret;
}

out:
return sdhci_runtime_resume_host(host);
}
#endif /* CONFIG_PM */

static const struct dev_pm_ops sdhci_at91_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
pm_runtime_force_resume)
SET_SYSTEM_SLEEP_PM_OPS(sdhci_at91_suspend, pm_runtime_force_resume)
SET_RUNTIME_PM_OPS(sdhci_at91_runtime_suspend,
sdhci_at91_runtime_resume,
NULL)

@@ -210,11 +314,7 @@ static int sdhci_at91_probe(struct platform_device *pdev)
struct sdhci_host *host;
struct sdhci_pltfm_host *pltfm_host;
struct sdhci_at91_priv *priv;
unsigned int caps0, caps1;
unsigned int clk_base, clk_mul;
unsigned int gck_rate, real_gck_rate;
int ret;
unsigned int preset_div;

match = of_match_device(sdhci_at91_dt_match, &pdev->dev);
if (!match)

@@ -246,66 +346,11 @@ static int sdhci_at91_probe(struct platform_device *pdev)
return PTR_ERR(priv->gck);
}

/*
 * The mult clock is provided by as a generated clock by the PMC
 * controller. In order to set the rate of gck, we have to get the
 * base clock rate and the clock mult from capabilities.
 */
clk_prepare_enable(priv->hclock);
caps0 = readl(host->ioaddr + SDHCI_CAPABILITIES);
caps1 = readl(host->ioaddr + SDHCI_CAPABILITIES_1);
clk_base = (caps0 & SDHCI_CLOCK_V3_BASE_MASK) >> SDHCI_CLOCK_BASE_SHIFT;
clk_mul = (caps1 & SDHCI_CLOCK_MUL_MASK) >> SDHCI_CLOCK_MUL_SHIFT;
gck_rate = clk_base * 1000000 * (clk_mul + 1);
ret = clk_set_rate(priv->gck, gck_rate);
if (ret < 0) {
dev_err(&pdev->dev, "failed to set gck");
goto hclock_disable_unprepare;
}
/*
 * We need to check if we have the requested rate for gck because in
 * some cases this rate could be not supported. If it happens, the rate
 * is the closest one gck can provide. We have to update the value
 * of clk mul.
 */
real_gck_rate = clk_get_rate(priv->gck);
if (real_gck_rate != gck_rate) {
clk_mul = real_gck_rate / (clk_base * 1000000) - 1;
caps1 &= (~SDHCI_CLOCK_MUL_MASK);
caps1 |= ((clk_mul << SDHCI_CLOCK_MUL_SHIFT) & SDHCI_CLOCK_MUL_MASK);
/* Set capabilities in r/w mode. */
writel(SDMMC_CACR_KEY | SDMMC_CACR_CAPWREN, host->ioaddr + SDMMC_CACR);
writel(caps1, host->ioaddr + SDHCI_CAPABILITIES_1);
/* Set capabilities in ro mode. */
writel(0, host->ioaddr + SDMMC_CACR);
dev_info(&pdev->dev, "update clk mul to %u as gck rate is %u Hz\n",
clk_mul, real_gck_rate);
}
ret = sdhci_at91_set_clks_presets(&pdev->dev);
if (ret)
goto sdhci_pltfm_free;

/*
 * We have to set preset values because it depends on the clk_mul
 * value. Moreover, SDR104 is supported in a degraded mode since the
 * maximum sd clock value is 120 MHz instead of 208 MHz. For that
 * reason, we need to use presets to support SDR104.
 */
preset_div = DIV_ROUND_UP(real_gck_rate, 24000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR12);
preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR25);
preset_div = DIV_ROUND_UP(real_gck_rate, 100000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR50);
preset_div = DIV_ROUND_UP(real_gck_rate, 120000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_SDR104);
preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
host->ioaddr + SDHCI_PRESET_FOR_DDR50);

clk_prepare_enable(priv->mainck);
clk_prepare_enable(priv->gck);
priv->restore_needed = false;

ret = mmc_of_parse(host->mmc);
if (ret)

@@ -368,8 +413,8 @@ pm_runtime_disable:
clocks_disable_unprepare:
clk_disable_unprepare(priv->gck);
clk_disable_unprepare(priv->mainck);
hclock_disable_unprepare:
clk_disable_unprepare(priv->hclock);
sdhci_pltfm_free:
sdhci_pltfm_free(pdev);
return ret;
}
@@ -86,6 +86,17 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
return ret;
}

/*
 * DTS properties of mmc host are used to enable each speed mode
 * according to soc and board capability. So clean up
 * SDR50/SDR104/DDR50 support bits here.
 */
if (spec_reg == SDHCI_CAPABILITIES_1) {
ret = value & ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_SDR104 |
SDHCI_SUPPORT_DDR50);
return ret;
}

ret = value;
return ret;
}

@@ -249,7 +260,11 @@ static u32 esdhc_be_readl(struct sdhci_host *host, int reg)
u32 ret;
u32 value;

value = ioread32be(host->ioaddr + reg);
if (reg == SDHCI_CAPABILITIES_1)
value = ioread32be(host->ioaddr + ESDHC_CAPABILITIES_1);
else
value = ioread32be(host->ioaddr + reg);

ret = esdhc_readl_fixup(host, reg, value);

return ret;

@@ -260,7 +275,11 @@ static u32 esdhc_le_readl(struct sdhci_host *host, int reg)
u32 ret;
u32 value;

value = ioread32(host->ioaddr + reg);
if (reg == SDHCI_CAPABILITIES_1)
value = ioread32(host->ioaddr + ESDHC_CAPABILITIES_1);
else
value = ioread32(host->ioaddr + reg);

ret = esdhc_readl_fixup(host, reg, value);

return ret;
@@ -35,7 +35,6 @@
#include "sdhci-pci-o2micro.h"

static int sdhci_pci_enable_dma(struct sdhci_host *host);
static void sdhci_pci_set_bus_width(struct sdhci_host *host, int width);
static void sdhci_pci_hw_reset(struct sdhci_host *host);

#ifdef CONFIG_PM_SLEEP

@@ -562,7 +561,7 @@ static const struct sdhci_ops sdhci_intel_byt_ops = {
.set_clock = sdhci_set_clock,
.set_power = sdhci_intel_set_power,
.enable_dma = sdhci_pci_enable_dma,
.set_bus_width = sdhci_pci_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.hw_reset = sdhci_pci_hw_reset,

@@ -730,6 +729,24 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
#define INTEL_MRFLD_SD 2
#define INTEL_MRFLD_SDIO 3

#ifdef CONFIG_ACPI
static void intel_mrfld_mmc_fix_up_power_slot(struct sdhci_pci_slot *slot)
{
struct acpi_device *device, *child;

device = ACPI_COMPANION(&slot->chip->pdev->dev);
if (!device)
return;

acpi_device_fix_up_power(device);
list_for_each_entry(child, &device->children, node)
if (child->status.present && child->status.enabled)
acpi_device_fix_up_power(child);
}
#else
static inline void intel_mrfld_mmc_fix_up_power_slot(struct sdhci_pci_slot *slot) {}
#endif

static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
{
unsigned int func = PCI_FUNC(slot->chip->pdev->devfn);

@@ -751,6 +768,8 @@ static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
default:
return -ENODEV;
}

intel_mrfld_mmc_fix_up_power_slot(slot);
return 0;
}

@@ -1197,7 +1216,7 @@ static int amd_probe(struct sdhci_pci_chip *chip)
static const struct sdhci_ops amd_sdhci_pci_ops = {
.set_clock = sdhci_set_clock,
.enable_dma = sdhci_pci_enable_dma,
.set_bus_width = sdhci_pci_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.platform_execute_tuning = amd_execute_tuning,

@@ -1313,29 +1332,6 @@ static int sdhci_pci_enable_dma(struct sdhci_host *host)
return 0;
}

static void sdhci_pci_set_bus_width(struct sdhci_host *host, int width)
{
u8 ctrl;

ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);

switch (width) {
case MMC_BUS_WIDTH_8:
ctrl |= SDHCI_CTRL_8BITBUS;
ctrl &= ~SDHCI_CTRL_4BITBUS;
break;
case MMC_BUS_WIDTH_4:
ctrl |= SDHCI_CTRL_4BITBUS;
ctrl &= ~SDHCI_CTRL_8BITBUS;
break;
default:
ctrl &= ~(SDHCI_CTRL_8BITBUS | SDHCI_CTRL_4BITBUS);
break;
}

sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}

static void sdhci_pci_gpio_hw_reset(struct sdhci_host *host)
{
struct sdhci_pci_slot *slot = sdhci_priv(host);

@@ -1362,7 +1358,7 @@ static void sdhci_pci_hw_reset(struct sdhci_host *host)
static const struct sdhci_ops sdhci_pci_ops = {
.set_clock = sdhci_set_clock,
.enable_dma = sdhci_pci_enable_dma,
.set_bus_width = sdhci_pci_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.hw_reset = sdhci_pci_hw_reset,
@@ -97,7 +97,7 @@ static const struct sdhci_ops pic32_sdhci_ops = {
.get_ro = pic32_sdhci_get_ro,
};

static struct sdhci_pltfm_data sdhci_pic32_pdata = {
static const struct sdhci_pltfm_data sdhci_pic32_pdata = {
.ops = &pic32_sdhci_ops,
.quirks = SDHCI_QUIRK_NO_HISPD_BIT,
.quirks2 = SDHCI_QUIRK2_NO_1_8_V,
@@ -209,22 +209,42 @@ int sdhci_pltfm_unregister(struct platform_device *pdev)
EXPORT_SYMBOL_GPL(sdhci_pltfm_unregister);

#ifdef CONFIG_PM_SLEEP
static int sdhci_pltfm_suspend(struct device *dev)
int sdhci_pltfm_suspend(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int ret;

if (host->tuning_mode != SDHCI_TUNING_MODE_3)
mmc_retune_needed(host->mmc);

return sdhci_suspend_host(host);
}
ret = sdhci_suspend_host(host);
if (ret)
return ret;

static int sdhci_pltfm_resume(struct device *dev)
clk_disable_unprepare(pltfm_host->clk);

return 0;
}
EXPORT_SYMBOL_GPL(sdhci_pltfm_suspend);

int sdhci_pltfm_resume(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int ret;

return sdhci_resume_host(host);
ret = clk_prepare_enable(pltfm_host->clk);
if (ret)
return ret;

ret = sdhci_resume_host(host);
if (ret)
clk_disable_unprepare(pltfm_host->clk);

return ret;
}
EXPORT_SYMBOL_GPL(sdhci_pltfm_resume);
#endif

const struct dev_pm_ops sdhci_pltfm_pmops = {

@@ -109,6 +109,8 @@ static inline void *sdhci_pltfm_priv(struct sdhci_pltfm_host *host)
return host->private;
}

int sdhci_pltfm_suspend(struct device *dev);
int sdhci_pltfm_resume(struct device *dev);
extern const struct dev_pm_ops sdhci_pltfm_pmops;

#endif /* _DRIVERS_MMC_SDHCI_PLTFM_H */
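With sdhci_pltfm_suspend()/sdhci_pltfm_resume() now exported and clock-aware, a platform glue driver can typically reuse the shared PM ops instead of open-coding its own, roughly as in this sketch (driver name is illustrative):

#include <linux/platform_device.h>
#include "sdhci-pltfm.h"

/* Illustrative platform driver wiring up the shared sdhci-pltfm PM ops. */
static struct platform_driver example_sdhci_driver = {
	.driver = {
		.name = "sdhci-example",
		.pm = &sdhci_pltfm_pmops,	/* generic suspend/resume */
	},
	/* .probe/.remove omitted in this sketch */
};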
@@ -178,17 +178,17 @@ static int sdhci_pxav2_probe(struct platform_device *pdev)

pltfm_host = sdhci_priv(host);

clk = clk_get(dev, "PXA-SDHCLK");
clk = devm_clk_get(dev, "PXA-SDHCLK");
if (IS_ERR(clk)) {
dev_err(dev, "failed to get io clock\n");
ret = PTR_ERR(clk);
goto err_clk_get;
goto free;
}
pltfm_host->clk = clk;
ret = clk_prepare_enable(clk);
if (ret) {
dev_err(&pdev->dev, "failed to enable io clock\n");
goto err_clk_enable;
goto free;
}

host->quirks = SDHCI_QUIRK_BROKEN_ADMA

@@ -223,34 +223,18 @@ static int sdhci_pxav2_probe(struct platform_device *pdev)
ret = sdhci_add_host(host);
if (ret) {
dev_err(&pdev->dev, "failed to add host\n");
goto err_add_host;
goto disable_clk;
}

return 0;

err_add_host:
disable_clk:
clk_disable_unprepare(clk);
err_clk_enable:
clk_put(clk);
err_clk_get:
free:
sdhci_pltfm_free(pdev);
return ret;
}

static int sdhci_pxav2_remove(struct platform_device *pdev)
{
struct sdhci_host *host = platform_get_drvdata(pdev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);

sdhci_remove_host(host, 1);

clk_disable_unprepare(pltfm_host->clk);
clk_put(pltfm_host->clk);
sdhci_pltfm_free(pdev);

return 0;
}

static struct platform_driver sdhci_pxav2_driver = {
.driver = {
.name = "sdhci-pxav2",

@@ -258,7 +242,7 @@ static struct platform_driver sdhci_pxav2_driver = {
.pm = &sdhci_pltfm_pmops,
},
.probe = sdhci_pxav2_probe,
.remove = sdhci_pxav2_remove,
.remove = sdhci_pltfm_unregister,
};

module_platform_driver(sdhci_pxav2_driver);
@@ -337,7 +337,7 @@ static const struct sdhci_ops pxav3_sdhci_ops = {
.set_uhs_signaling = pxav3_set_uhs_signaling,
};

static struct sdhci_pltfm_data sdhci_pxav3_pdata = {
static const struct sdhci_pltfm_data sdhci_pxav3_pdata = {
.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK
| SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC
| SDHCI_QUIRK_32BIT_ADMA_SIZE
@@ -414,43 +414,11 @@ static void sdhci_cmu_set_clock(struct sdhci_host *host, unsigned int clock)
sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
}

/**
 * sdhci_s3c_set_bus_width - support 8bit buswidth
 * @host: The SDHCI host being queried
 * @width: MMC_BUS_WIDTH_ macro for the bus width being requested
 *
 * We have 8-bit width support but is not a v3 controller.
 * So we add platform_bus_width() and support 8bit width.
 */
static void sdhci_s3c_set_bus_width(struct sdhci_host *host, int width)
{
u8 ctrl;

ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);

switch (width) {
case MMC_BUS_WIDTH_8:
ctrl |= SDHCI_CTRL_8BITBUS;
ctrl &= ~SDHCI_CTRL_4BITBUS;
break;
case MMC_BUS_WIDTH_4:
ctrl |= SDHCI_CTRL_4BITBUS;
ctrl &= ~SDHCI_CTRL_8BITBUS;
break;
default:
ctrl &= ~SDHCI_CTRL_4BITBUS;
ctrl &= ~SDHCI_CTRL_8BITBUS;
break;
}

sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}

static struct sdhci_ops sdhci_s3c_ops = {
.get_max_clock = sdhci_s3c_get_max_clk,
.set_clock = sdhci_s3c_set_clock,
.get_min_clock = sdhci_s3c_get_min_clock,
.set_bus_width = sdhci_s3c_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
};
@@ -146,7 +146,7 @@ retry:
return rc;
}

static struct sdhci_ops sdhci_sirf_ops = {
static const struct sdhci_ops sdhci_sirf_ops = {
.read_l = sdhci_sirf_readl_le,
.read_w = sdhci_sirf_readw_le,
.platform_execute_tuning = sdhci_sirf_execute_tuning,

@@ -157,7 +157,7 @@ static struct sdhci_ops sdhci_sirf_ops = {
.set_uhs_signaling = sdhci_set_uhs_signaling,
};

static struct sdhci_pltfm_data sdhci_sirf_pdata = {
static const struct sdhci_pltfm_data sdhci_sirf_pdata = {
.ops = &sdhci_sirf_ops,
.quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |

@@ -230,43 +230,6 @@ err_clk_prepare:
return ret;
}

#ifdef CONFIG_PM_SLEEP
static int sdhci_sirf_suspend(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int ret;

if (host->tuning_mode != SDHCI_TUNING_MODE_3)
mmc_retune_needed(host->mmc);

ret = sdhci_suspend_host(host);
if (ret)
return ret;

clk_disable(pltfm_host->clk);

return 0;
}

static int sdhci_sirf_resume(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
int ret;

ret = clk_enable(pltfm_host->clk);
if (ret) {
dev_dbg(dev, "Resume: Error enabling clock\n");
return ret;
}

return sdhci_resume_host(host);
}
#endif

static SIMPLE_DEV_PM_OPS(sdhci_sirf_pm_ops, sdhci_sirf_suspend, sdhci_sirf_resume);

static const struct of_device_id sdhci_sirf_of_match[] = {
{ .compatible = "sirf,prima2-sdhc" },
{ }

@@ -277,7 +240,7 @@ static struct platform_driver sdhci_sirf_driver = {
.driver = {
.name = "sdhci-sirf",
.of_match_table = sdhci_sirf_of_match,
.pm = &sdhci_sirf_pm_ops,
.pm = &sdhci_pltfm_pmops,
},
.probe = sdhci_sirf_probe,
.remove = sdhci_pltfm_unregister,
|
@ -371,7 +371,7 @@ static int sdhci_st_probe(struct platform_device *pdev)
|
|||
if (IS_ERR(icnclk))
|
||||
icnclk = NULL;
|
||||
|
||||
rstc = devm_reset_control_get(&pdev->dev, NULL);
|
||||
rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
|
||||
if (IS_ERR(rstc))
|
||||
rstc = NULL;
|
||||
else
|
||||
|
@ -394,8 +394,17 @@ static int sdhci_st_probe(struct platform_device *pdev)
|
|||
goto err_of;
|
||||
}
|
||||
|
||||
clk_prepare_enable(clk);
|
||||
clk_prepare_enable(icnclk);
|
||||
ret = clk_prepare_enable(clk);
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "Failed to prepare clock\n");
|
||||
goto err_of;
|
||||
}
|
||||
|
||||
ret = clk_prepare_enable(icnclk);
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "Failed to prepare icn clock\n");
|
||||
goto err_icnclk;
|
||||
}
|
||||
|
||||
/* Configure the FlashSS Top registers for setting eMMC TX/RX delay */
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
|
@ -429,6 +438,7 @@ static int sdhci_st_probe(struct platform_device *pdev)
|
|||
|
||||
err_out:
|
||||
clk_disable_unprepare(icnclk);
|
||||
err_icnclk:
|
||||
clk_disable_unprepare(clk);
|
||||
err_of:
|
||||
sdhci_pltfm_free(pdev);
|
||||
|
@ -487,9 +497,17 @@ static int sdhci_st_resume(struct device *dev)
|
|||
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
|
||||
struct st_mmc_platform_data *pdata = sdhci_pltfm_priv(pltfm_host);
|
||||
struct device_node *np = dev->of_node;
|
||||
int ret;
|
||||
|
||||
clk_prepare_enable(pltfm_host->clk);
|
||||
clk_prepare_enable(pdata->icnclk);
|
||||
ret = clk_prepare_enable(pltfm_host->clk);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = clk_prepare_enable(pdata->icnclk);
|
||||
if (ret) {
|
||||
clk_disable_unprepare(pltfm_host->clk);
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (pdata->rstc)
|
||||
reset_control_deassert(pdata->rstc);
|
||||
|
|
|
@@ -190,25 +190,6 @@ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
tegra_host->ddr_signaling = false;
}

static void tegra_sdhci_set_bus_width(struct sdhci_host *host, int bus_width)
{
u32 ctrl;

ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
if ((host->mmc->caps & MMC_CAP_8_BIT_DATA) &&
(bus_width == MMC_BUS_WIDTH_8)) {
ctrl &= ~SDHCI_CTRL_4BITBUS;
ctrl |= SDHCI_CTRL_8BITBUS;
} else {
ctrl &= ~SDHCI_CTRL_8BITBUS;
if (bus_width == MMC_BUS_WIDTH_4)
ctrl |= SDHCI_CTRL_4BITBUS;
else
ctrl &= ~SDHCI_CTRL_4BITBUS;
}
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}

static void tegra_sdhci_pad_autocalib(struct sdhci_host *host)
{
u32 val;

@@ -323,7 +304,7 @@ static const struct sdhci_ops tegra_sdhci_ops = {
.read_w = tegra_sdhci_readw,
.write_l = tegra_sdhci_writel,
.set_clock = tegra_sdhci_set_clock,
.set_bus_width = tegra_sdhci_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = tegra_sdhci_reset,
.platform_execute_tuning = tegra_sdhci_execute_tuning,
.set_uhs_signaling = tegra_sdhci_set_uhs_signaling,

@@ -371,7 +352,7 @@ static const struct sdhci_ops tegra114_sdhci_ops = {
.write_w = tegra_sdhci_writew,
.write_l = tegra_sdhci_writel,
.set_clock = tegra_sdhci_set_clock,
.set_bus_width = tegra_sdhci_set_bus_width,
.set_bus_width = sdhci_set_bus_width,
.reset = tegra_sdhci_reset,
.platform_execute_tuning = tegra_sdhci_execute_tuning,
.set_uhs_signaling = tegra_sdhci_set_uhs_signaling,

@@ -508,7 +489,8 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
clk_prepare_enable(clk);
pltfm_host->clk = clk;

tegra_host->rst = devm_reset_control_get(&pdev->dev, "sdhci");
tegra_host->rst = devm_reset_control_get_exclusive(&pdev->dev,
"sdhci");
if (IS_ERR(tegra_host->rst)) {
rc = PTR_ERR(tegra_host->rst);
dev_err(&pdev->dev, "failed to get reset control: %d\n", rc);
@@ -409,17 +409,30 @@ static int xenon_emmc_phy_config_tuning(struct sdhci_host *host)
	return 0;
}

static void xenon_emmc_phy_disable_data_strobe(struct sdhci_host *host)
static void xenon_emmc_phy_disable_strobe(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 reg;

	/* Disable SDHC Data Strobe */
	/* Disable both SDHC Data Strobe and Enhanced Strobe */
	reg = sdhci_readl(host, XENON_SLOT_EMMC_CTRL);
	reg &= ~XENON_ENABLE_DATA_STROBE;
	reg &= ~(XENON_ENABLE_DATA_STROBE | XENON_ENABLE_RESP_STROBE);
	sdhci_writel(host, reg, XENON_SLOT_EMMC_CTRL);

	/* Clear Strobe line Pull down or Pull up */
	if (priv->phy_type == EMMC_5_0_PHY) {
		reg = sdhci_readl(host, XENON_EMMC_5_0_PHY_PAD_CONTROL);
		reg &= ~(XENON_EMMC5_FC_QSP_PD | XENON_EMMC5_FC_QSP_PU);
		sdhci_writel(host, reg, XENON_EMMC_5_0_PHY_PAD_CONTROL);
	} else {
		reg = sdhci_readl(host, XENON_EMMC_PHY_PAD_CONTROL1);
		reg &= ~(XENON_EMMC5_1_FC_QSP_PD | XENON_EMMC5_1_FC_QSP_PU);
		sdhci_writel(host, reg, XENON_EMMC_PHY_PAD_CONTROL1);
	}
}

/* Set HS400 Data Strobe */
/* Set HS400 Data Strobe and Enhanced Strobe */
static void xenon_emmc_phy_strobe_delay_adj(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
@@ -439,6 +452,15 @@ static void xenon_emmc_phy_strobe_delay_adj(struct sdhci_host *host)
	/* Enable SDHC Data Strobe */
	reg = sdhci_readl(host, XENON_SLOT_EMMC_CTRL);
	reg |= XENON_ENABLE_DATA_STROBE;
	/*
	 * Enable SDHC Enhanced Strobe if supported
	 * Xenon Enhanced Strobe should be enabled only when
	 * 1. card is in HS400 mode and
	 * 2. SDCLK is higher than 52MHz
	 * 3. DLL is enabled
	 */
	if (host->mmc->ios.enhanced_strobe)
		reg |= XENON_ENABLE_RESP_STROBE;
	sdhci_writel(host, reg, XENON_SLOT_EMMC_CTRL);

	/* Set Data Strobe Pull down */
@@ -615,7 +637,7 @@ static void xenon_emmc_phy_set(struct sdhci_host *host,
		sdhci_writel(host, phy_regs->logic_timing_val,
			     phy_regs->logic_timing_adj);
	else
		xenon_emmc_phy_disable_data_strobe(host);
		xenon_emmc_phy_disable_strobe(host);

phy_init:
	xenon_emmc_phy_init(host);
@@ -705,7 +727,7 @@ void xenon_soc_pad_ctrl(struct sdhci_host *host,

/*
 * Setting PHY when card is working in High Speed Mode.
 * HS400 set data strobe line.
 * HS400 set Data Strobe and Enhanced Strobe if it is supported.
 * HS200/SDR104 set tuning config to prepare for tuning.
 */
static int xenon_hs_delay_adj(struct sdhci_host *host)

@@ -18,6 +18,8 @@
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

#include "sdhci-pltfm.h"
#include "sdhci-xenon.h"
@@ -330,7 +332,8 @@ static int xenon_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
	struct sdhci_host *host = mmc_priv(mmc);

	if (host->timing == MMC_TIMING_UHS_DDR50)
	if (host->timing == MMC_TIMING_UHS_DDR50 ||
	    host->timing == MMC_TIMING_MMC_DDR52)
		return 0;

	/*
@@ -463,7 +466,6 @@ static int xenon_probe(struct platform_device *pdev)
{
	struct sdhci_pltfm_host *pltfm_host;
	struct sdhci_host *host;
	struct xenon_priv *priv;
	int err;

	host = sdhci_pltfm_init(pdev, &sdhci_xenon_pdata,
@@ -472,7 +474,6 @@ static int xenon_probe(struct platform_device *pdev)
		return PTR_ERR(host);

	pltfm_host = sdhci_priv(host);
	priv = sdhci_pltfm_priv(pltfm_host);

	/*
	 * Link Xenon specific mmc_host_ops function,
@@ -507,13 +508,24 @@ static int xenon_probe(struct platform_device *pdev)
	if (err)
		goto err_clk;

	pm_runtime_get_noresume(&pdev->dev);
	pm_runtime_set_active(&pdev->dev);
	pm_runtime_set_autosuspend_delay(&pdev->dev, 50);
	pm_runtime_use_autosuspend(&pdev->dev);
	pm_runtime_enable(&pdev->dev);
	pm_suspend_ignore_children(&pdev->dev, 1);

	err = sdhci_add_host(host);
	if (err)
		goto remove_sdhc;

	pm_runtime_put_autosuspend(&pdev->dev);

	return 0;

remove_sdhc:
	pm_runtime_disable(&pdev->dev);
	pm_runtime_put_noidle(&pdev->dev);
	xenon_sdhc_unprepare(host);
err_clk:
	clk_disable_unprepare(pltfm_host->clk);
@@ -527,6 +539,10 @@ static int xenon_remove(struct platform_device *pdev)
	struct sdhci_host *host = platform_get_drvdata(pdev);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);

	pm_runtime_get_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	pm_runtime_put_noidle(&pdev->dev);

	sdhci_remove_host(host, 0);

	xenon_sdhc_unprepare(host);
@@ -538,6 +554,84 @@ static int xenon_remove(struct platform_device *pdev)
	return 0;
}

#ifdef CONFIG_PM_SLEEP
static int xenon_suspend(struct device *dev)
{
	struct sdhci_host *host = dev_get_drvdata(dev);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	int ret;

	ret = pm_runtime_force_suspend(dev);

	priv->restore_needed = true;
	return ret;
}
#endif

#ifdef CONFIG_PM
static int xenon_runtime_suspend(struct device *dev)
{
	struct sdhci_host *host = dev_get_drvdata(dev);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	int ret;

	ret = sdhci_runtime_suspend_host(host);
	if (ret)
		return ret;

	if (host->tuning_mode != SDHCI_TUNING_MODE_3)
		mmc_retune_needed(host->mmc);

	clk_disable_unprepare(pltfm_host->clk);
	/*
	 * Need to update the priv->clock here, or when runtime resume
	 * back, phy don't aware the clock change and won't adjust phy
	 * which will cause cmd err
	 */
	priv->clock = 0;
	return 0;
}

static int xenon_runtime_resume(struct device *dev)
{
	struct sdhci_host *host = dev_get_drvdata(dev);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	int ret;

	ret = clk_prepare_enable(pltfm_host->clk);
	if (ret) {
		dev_err(dev, "can't enable mainck\n");
		return ret;
	}

	if (priv->restore_needed) {
		ret = xenon_sdhc_prepare(host);
		if (ret)
			goto out;
		priv->restore_needed = false;
	}

	ret = sdhci_runtime_resume_host(host);
	if (ret)
		goto out;
	return 0;
out:
	clk_disable_unprepare(pltfm_host->clk);
	return ret;
}
#endif /* CONFIG_PM */

static const struct dev_pm_ops sdhci_xenon_dev_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(xenon_suspend,
				pm_runtime_force_resume)
	SET_RUNTIME_PM_OPS(xenon_runtime_suspend,
			   xenon_runtime_resume,
			   NULL)
};

static const struct of_device_id sdhci_xenon_dt_ids[] = {
	{ .compatible = "marvell,armada-ap806-sdhci",},
	{ .compatible = "marvell,armada-cp110-sdhci",},
@@ -550,7 +644,7 @@ static struct platform_driver sdhci_xenon_driver = {
	.driver = {
		.name = "xenon-sdhci",
		.of_match_table = sdhci_xenon_dt_ids,
		.pm = &sdhci_pltfm_pmops,
		.pm = &sdhci_xenon_dev_pm_ops,
	},
	.probe = xenon_probe,
	.remove = xenon_remove,

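Worth noting for the Xenon hunks above: system sleep is routed through the runtime PM path with pm_runtime_force_suspend()/pm_runtime_force_resume(), so one pair of callbacks covers both cases, and the driver only adds a thin xenon_suspend() wrapper to remember that PHY state must be restored. Below is a minimal, hedged sketch of that general idiom for an imaginary "foo" driver; it is not part of the patch and the callbacks are stubs.

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* gate clocks and quiesce the controller */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* ungate clocks and restore controller state */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	/* system sleep funnels into the same runtime PM callbacks */
	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				pm_runtime_force_resume)
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};
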
@@ -33,6 +33,7 @@
#define XENON_TUNING_STEP_DIVIDER	BIT(6)

#define XENON_SLOT_EMMC_CTRL		0x0130
#define XENON_ENABLE_RESP_STROBE	BIT(25)
#define XENON_ENABLE_DATA_STROBE	BIT(24)

#define XENON_SLOT_RETUNING_REQ_CTRL	0x0144
@@ -90,6 +91,7 @@ struct xenon_priv {
	 */
	void *phy_params;
	struct xenon_emmc_phy_regs *emmc_phy_regs;
	bool restore_needed;
};

int xenon_phy_adj(struct sdhci_host *host, struct mmc_ios *ios);

@@ -897,8 +897,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
	sdhci_set_transfer_irqs(host);

	/* Set the DMA boundary value and block size */
	sdhci_writew(host, SDHCI_MAKE_BLKSZ(SDHCI_DEFAULT_BOUNDARY_ARG,
		data->blksz), SDHCI_BLOCK_SIZE);
	sdhci_writew(host, SDHCI_MAKE_BLKSZ(host->sdma_boundary, data->blksz),
		     SDHCI_BLOCK_SIZE);
	sdhci_writew(host, data->blocks, SDHCI_BLOCK_COUNT);
}

@@ -1173,24 +1173,35 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
}
EXPORT_SYMBOL_GPL(sdhci_send_command);

static void sdhci_read_rsp_136(struct sdhci_host *host, struct mmc_command *cmd)
{
	int i, reg;

	for (i = 0; i < 4; i++) {
		reg = SDHCI_RESPONSE + (3 - i) * 4;
		cmd->resp[i] = sdhci_readl(host, reg);
	}

	if (host->quirks2 & SDHCI_QUIRK2_RSP_136_HAS_CRC)
		return;

	/* CRC is stripped so we need to do some shifting */
	for (i = 0; i < 4; i++) {
		cmd->resp[i] <<= 8;
		if (i != 3)
			cmd->resp[i] |= cmd->resp[i + 1] >> 24;
	}
}

static void sdhci_finish_command(struct sdhci_host *host)
{
	struct mmc_command *cmd = host->cmd;
	int i;

	host->cmd = NULL;

	if (cmd->flags & MMC_RSP_PRESENT) {
		if (cmd->flags & MMC_RSP_136) {
			/* CRC is stripped so we need to do some shifting. */
			for (i = 0;i < 4;i++) {
				cmd->resp[i] = sdhci_readl(host,
					SDHCI_RESPONSE + (3-i)*4) << 8;
				if (i != 3)
					cmd->resp[i] |=
						sdhci_readb(host,
							SDHCI_RESPONSE + (3-i)*4-1);
			}
			sdhci_read_rsp_136(host, cmd);
		} else {
			cmd->resp[0] = sdhci_readl(host, SDHCI_RESPONSE);
		}

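The new sdhci_read_rsp_136() helper exists because most controllers strip the CRC byte from a 136-bit (R2) response, leaving the useful bits shifted down by one byte across the four response registers; the loop shifts each 32-bit word left by 8 and ORs in the top byte of the following word to rebuild resp[0..3] in the layout the core expects. A standalone, hedged illustration of that byte shuffle with made-up register contents (plain C, not kernel code):

#include <stdio.h>

int main(void)
{
	/* resp[0] is the most significant word, as read from the registers */
	unsigned int resp[4] = { 0x00112233, 0x44556677, 0x8899aabb, 0xccddeeff };
	int i;

	/* same shifting as sdhci_read_rsp_136() when the CRC was stripped */
	for (i = 0; i < 4; i++) {
		resp[i] <<= 8;
		if (i != 3)
			resp[i] |= resp[i + 1] >> 24;
	}

	for (i = 0; i < 4; i++)
		printf("resp[%d] = 0x%08x\n", i, resp[i]);
	/* prints 0x11223344 0x55667788 0x99aabbcc 0xddeeff00 */
	return 0;
}
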
@@ -1544,10 +1555,9 @@ void sdhci_set_bus_width(struct sdhci_host *host, int width)
	ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
	if (width == MMC_BUS_WIDTH_8) {
		ctrl &= ~SDHCI_CTRL_4BITBUS;
		if (host->version >= SDHCI_SPEC_300)
			ctrl |= SDHCI_CTRL_8BITBUS;
		ctrl |= SDHCI_CTRL_8BITBUS;
	} else {
		if (host->version >= SDHCI_SPEC_300)
		if (host->mmc->caps & MMC_CAP_8_BIT_DATA)
			ctrl &= ~SDHCI_CTRL_8BITBUS;
		if (width == MMC_BUS_WIDTH_4)
			ctrl |= SDHCI_CTRL_4BITBUS;
@@ -1641,19 +1651,20 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)

	ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);

	if ((ios->timing == MMC_TIMING_SD_HS ||
	     ios->timing == MMC_TIMING_MMC_HS ||
	     ios->timing == MMC_TIMING_MMC_HS400 ||
	     ios->timing == MMC_TIMING_MMC_HS200 ||
	     ios->timing == MMC_TIMING_MMC_DDR52 ||
	     ios->timing == MMC_TIMING_UHS_SDR50 ||
	     ios->timing == MMC_TIMING_UHS_SDR104 ||
	     ios->timing == MMC_TIMING_UHS_DDR50 ||
	     ios->timing == MMC_TIMING_UHS_SDR25)
	    && !(host->quirks & SDHCI_QUIRK_NO_HISPD_BIT))
		ctrl |= SDHCI_CTRL_HISPD;
	else
		ctrl &= ~SDHCI_CTRL_HISPD;
	if (!(host->quirks & SDHCI_QUIRK_NO_HISPD_BIT)) {
		if (ios->timing == MMC_TIMING_SD_HS ||
		    ios->timing == MMC_TIMING_MMC_HS ||
		    ios->timing == MMC_TIMING_MMC_HS400 ||
		    ios->timing == MMC_TIMING_MMC_HS200 ||
		    ios->timing == MMC_TIMING_MMC_DDR52 ||
		    ios->timing == MMC_TIMING_UHS_SDR50 ||
		    ios->timing == MMC_TIMING_UHS_SDR104 ||
		    ios->timing == MMC_TIMING_UHS_DDR50 ||
		    ios->timing == MMC_TIMING_UHS_SDR25)
			ctrl |= SDHCI_CTRL_HISPD;
		else
			ctrl &= ~SDHCI_CTRL_HISPD;
	}

	if (host->version >= SDHCI_SPEC_300) {
		u16 clk, ctrl_2;
@@ -2037,6 +2048,7 @@ static void sdhci_send_tuning(struct sdhci_host *host, u32 opcode)
	struct mmc_command cmd = {};
	struct mmc_request mrq = {};
	unsigned long flags;
	u32 b = host->sdma_boundary;

	spin_lock_irqsave(&host->lock, flags);

@@ -2052,9 +2064,9 @@ static void sdhci_send_tuning(struct sdhci_host *host, u32 opcode)
	 */
	if (cmd.opcode == MMC_SEND_TUNING_BLOCK_HS200 &&
	    mmc->ios.bus_width == MMC_BUS_WIDTH_8)
		sdhci_writew(host, SDHCI_MAKE_BLKSZ(7, 128), SDHCI_BLOCK_SIZE);
		sdhci_writew(host, SDHCI_MAKE_BLKSZ(b, 128), SDHCI_BLOCK_SIZE);
	else
		sdhci_writew(host, SDHCI_MAKE_BLKSZ(7, 64), SDHCI_BLOCK_SIZE);
		sdhci_writew(host, SDHCI_MAKE_BLKSZ(b, 64), SDHCI_BLOCK_SIZE);

	/*
	 * The tuning block is sent by the card to the host controller.
@@ -2502,7 +2514,6 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
		sdhci_finish_command(host);
}

#ifdef CONFIG_MMC_DEBUG
static void sdhci_adma_show_error(struct sdhci_host *host)
{
	void *desc = host->adma_table;
@@ -2530,9 +2541,6 @@ static void sdhci_adma_show_error(struct sdhci_host *host)
			break;
	}
}
#else
static void sdhci_adma_show_error(struct sdhci_host *host) { }
#endif

static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
{
@@ -2938,7 +2946,8 @@ int sdhci_runtime_resume_host(struct sdhci_host *host)

	sdhci_init(host, 0);

	if (mmc->ios.power_mode != MMC_POWER_UNDEFINED) {
	if (mmc->ios.power_mode != MMC_POWER_UNDEFINED &&
	    mmc->ios.power_mode != MMC_POWER_OFF) {
		/* Force clock and power re-program */
		host->pwr = 0;
		host->clock = 0;
@@ -2998,7 +3007,7 @@ void sdhci_cqe_enable(struct mmc_host *mmc)
	ctrl |= SDHCI_CTRL_ADMA32;
	sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);

	sdhci_writew(host, SDHCI_MAKE_BLKSZ(SDHCI_DEFAULT_BOUNDARY_ARG, 512),
	sdhci_writew(host, SDHCI_MAKE_BLKSZ(host->sdma_boundary, 512),
		     SDHCI_BLOCK_SIZE);

	/* Set maximum timeout */
@@ -3119,6 +3128,8 @@ struct sdhci_host *sdhci_alloc_host(struct device *dev,

	host->tuning_delay = -1;

	host->sdma_boundary = SDHCI_DEFAULT_BOUNDARY_ARG;

	return host;
}

@@ -3230,6 +3241,13 @@ int sdhci_setup_host(struct sdhci_host *host)
	if (ret == -EPROBE_DEFER)
		return ret;

	DBG("Version: 0x%08x | Present: 0x%08x\n",
	    sdhci_readw(host, SDHCI_HOST_VERSION),
	    sdhci_readl(host, SDHCI_PRESENT_STATE));
	DBG("Caps: 0x%08x | Caps_1: 0x%08x\n",
	    sdhci_readl(host, SDHCI_CAPABILITIES),
	    sdhci_readl(host, SDHCI_CAPABILITIES_1));

	sdhci_read_caps(host);

	override_timeout_clk = host->timeout_clk;
@@ -3747,10 +3765,6 @@ int __sdhci_add_host(struct sdhci_host *host)
		goto untasklet;
	}

#ifdef CONFIG_MMC_DEBUG
	sdhci_dumpregs(host);
#endif

	ret = sdhci_led_register(host);
	if (ret) {
		pr_err("%s: Failed to register LED device: %d\n",

@@ -435,6 +435,8 @@ struct sdhci_host {
#define SDHCI_QUIRK2_ACMD23_BROKEN (1<<14)
/* Broken Clock divider zero in controller */
#define SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN (1<<15)
/* Controller has CRC in 136 bit Command Response */
#define SDHCI_QUIRK2_RSP_136_HAS_CRC (1<<16)

	int irq;		/* Device IRQ */
	void __iomem *ioaddr;	/* Mapped address */
@@ -541,6 +543,9 @@ struct sdhci_host {
	/* Delay (ms) between tuning commands */
	int tuning_delay;

	/* Host SDMA buffer boundary. */
	u32 sdma_boundary;

	unsigned long private[0] ____cacheline_aligned;
};

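With sdma_boundary now a per-host field (defaulted to SDHCI_DEFAULT_BOUNDARY_ARG in sdhci_alloc_host(), as the earlier sdhci.c hunk shows), a glue driver that needs a different SDMA buffer boundary can override the field before registering the host. A hedged sketch only; foo_sdhci_probe and foo_sdhci_pdata are hypothetical and the boundary value is arbitrary:

#include <linux/err.h>
#include <linux/platform_device.h>
#include "sdhci-pltfm.h"

static int foo_sdhci_probe(struct platform_device *pdev)
{
	struct sdhci_host *host;
	int ret;

	host = sdhci_pltfm_init(pdev, &foo_sdhci_pdata, 0);
	if (IS_ERR(host))
		return PTR_ERR(host);

	/* boundary code 0 selects a 4K SDMA buffer boundary instead of the 512K default */
	host->sdma_boundary = 0;

	ret = sdhci_add_host(host);
	if (ret)
		sdhci_pltfm_free(pdev);
	return ret;
}
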
@@ -385,7 +385,7 @@ static int sdricoh_get_ro(struct mmc_host *mmc)
	return (status & STATUS_CARD_LOCKED);
}

static struct mmc_host_ops sdricoh_ops = {
static const struct mmc_host_ops sdricoh_ops = {
	.request = sdricoh_request,
	.set_ios = sdricoh_set_ios,
	.get_ro = sdricoh_get_ro,

@@ -1079,7 +1079,7 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
	host->state = STATE_IDLE;
}

static struct mmc_host_ops sh_mmcif_ops = {
static const struct mmc_host_ops sh_mmcif_ops = {
	.request = sh_mmcif_request,
	.set_ios = sh_mmcif_set_ios,
	.get_cd = mmc_gpio_get_cd,

@@ -22,6 +22,7 @@
#include <linux/err.h>

#include <linux/clk.h>
#include <linux/clk/sunxi-ng.h>
#include <linux/gpio.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
@@ -259,7 +260,11 @@ struct sunxi_mmc_cfg {
	/* Does DATA0 needs to be masked while the clock is updated */
	bool mask_data0;

	/* hardware only supports new timing mode */
	bool needs_new_timings;

	/* hardware can switch between old and new timing modes */
	bool has_timings_switch;
};

struct sunxi_mmc_host {
@@ -293,6 +298,9 @@ struct sunxi_mmc_host {

	/* vqmmc */
	bool vqmmc_enabled;

	/* timings */
	bool use_new_timings;
};

static int sunxi_mmc_reset_host(struct sunxi_mmc_host *host)
@@ -714,6 +722,11 @@ static int sunxi_mmc_clk_set_phase(struct sunxi_mmc_host *host,
{
	int index;

	/* clk controller delays not used under new timings mode */
	if (host->use_new_timings)
		return 0;

	/* some old controllers don't support delays */
	if (!host->cfg->clk_delays)
		return 0;

@@ -747,7 +760,7 @@ static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host,
{
	struct mmc_host *mmc = host->mmc;
	long rate;
	u32 rval, clock = ios->clock;
	u32 rval, clock = ios->clock, div = 1;
	int ret;

	ret = sunxi_mmc_oclk_onoff(host, 0);
@@ -760,10 +773,30 @@ static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host,
	if (!ios->clock)
		return 0;

	/* 8 bit DDR requires a higher module clock */
	/*
	 * Under the old timing mode, 8 bit DDR requires the module
	 * clock to be double the card clock. Under the new timing
	 * mode, all DDR modes require a doubled module clock.
	 *
	 * We currently only support the standard MMC DDR52 mode.
	 * This block should be updated once support for other DDR
	 * modes is added.
	 */
	if (ios->timing == MMC_TIMING_MMC_DDR52 &&
	    ios->bus_width == MMC_BUS_WIDTH_8)
	    (host->use_new_timings ||
	     ios->bus_width == MMC_BUS_WIDTH_8)) {
		div = 2;
		clock <<= 1;
	}

	if (host->use_new_timings && host->cfg->has_timings_switch) {
		ret = sunxi_ccu_set_mmc_timing_mode(host->clk_mmc, true);
		if (ret) {
			dev_err(mmc_dev(mmc),
				"error setting new timing mode\n");
			return ret;
		}
	}

	rate = clk_round_rate(host->clk_mmc, clock);
	if (rate < 0) {
@@ -782,24 +815,23 @@ static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host,
		return ret;
	}

	/* clear internal divider */
	/* set internal divider */
	rval = mmc_readl(host, REG_CLKCR);
	rval &= ~0xff;
	/* set internal divider for 8 bit eMMC DDR, so card clock is right */
	if (ios->timing == MMC_TIMING_MMC_DDR52 &&
	    ios->bus_width == MMC_BUS_WIDTH_8) {
		rval |= 1;
		rate >>= 1;
	}
	rval |= div - 1;
	mmc_writel(host, REG_CLKCR, rval);

	if (host->cfg->needs_new_timings) {
	/* update card clock rate to account for internal divider */
	rate /= div;

	if (host->use_new_timings) {
		/* Don't touch the delay bits */
		rval = mmc_readl(host, REG_SD_NTSR);
		rval |= SDXC_2X_TIMING_MODE;
		mmc_writel(host, REG_SD_NTSR, rval);
	}

	/* sunxi_mmc_clk_set_phase expects the actual card clock rate */
	ret = sunxi_mmc_clk_set_phase(host, ios, rate);
	if (ret)
		return ret;
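The comment block in the hunk above is easiest to follow with numbers: in DDR52 the card samples on both clock edges, so the module clock is requested at twice the card clock and the controller's internal divider in REG_CLKCR brings the rate the card sees back down. A standalone, hedged walk-through for a 52 MHz DDR52 request (plain C, values invented, clk_round_rate() assumed to return exactly what was asked for):

#include <stdio.h>

int main(void)
{
	unsigned int clock = 52000000;	/* card clock requested by the core */
	unsigned int div = 1;
	long rate;

	/* DDR needs the module clock at twice the card clock */
	div = 2;
	clock <<= 1;			/* ask the clock controller for 104 MHz */

	rate = clock;			/* pretend the CCU granted it exactly */

	/* the internal divider (div - 1 in REG_CLKCR) halves it for the card */
	rate /= div;

	printf("module clock=%u Hz, card clock=%ld Hz\n", clock, rate);
	/* prints module clock=104000000 Hz, card clock=52000000 Hz */
	return 0;
}
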
@@ -1048,7 +1080,7 @@ static int sunxi_mmc_card_busy(struct mmc_host *mmc)
	return !!(mmc_readl(host, REG_STAS) & SDXC_CARD_DATA_BUSY);
}

static struct mmc_host_ops sunxi_mmc_ops = {
static const struct mmc_host_ops sunxi_mmc_ops = {
	.request = sunxi_mmc_request,
	.set_ios = sunxi_mmc_set_ios,
	.get_ro = mmc_gpio_get_ro,
@@ -1094,6 +1126,13 @@ static const struct sunxi_mmc_cfg sun7i_a20_cfg = {
	.can_calibrate = false,
};

static const struct sunxi_mmc_cfg sun8i_a83t_emmc_cfg = {
	.idma_des_size_bits = 16,
	.clk_delays = sunxi_mmc_clk_delays,
	.can_calibrate = false,
	.has_timings_switch = true,
};

static const struct sunxi_mmc_cfg sun9i_a80_cfg = {
	.idma_des_size_bits = 16,
	.clk_delays = sun9i_mmc_clk_delays,
@@ -1118,6 +1157,7 @@ static const struct of_device_id sunxi_mmc_of_match[] = {
	{ .compatible = "allwinner,sun4i-a10-mmc", .data = &sun4i_a10_cfg },
	{ .compatible = "allwinner,sun5i-a13-mmc", .data = &sun5i_a13_cfg },
	{ .compatible = "allwinner,sun7i-a20-mmc", .data = &sun7i_a20_cfg },
	{ .compatible = "allwinner,sun8i-a83t-emmc", .data = &sun8i_a83t_emmc_cfg },
	{ .compatible = "allwinner,sun9i-a80-mmc", .data = &sun9i_a80_cfg },
	{ .compatible = "allwinner,sun50i-a64-mmc", .data = &sun50i_a64_cfg },
	{ .compatible = "allwinner,sun50i-a64-emmc", .data = &sun50i_a64_emmc_cfg },
@@ -1172,7 +1212,8 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
		}
	}

	host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
	host->reset = devm_reset_control_get_optional_exclusive(&pdev->dev,
								"ahb");
	if (PTR_ERR(host->reset) == -EPROBE_DEFER)
		return PTR_ERR(host->reset);

@@ -1201,7 +1242,7 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
	}

	if (!IS_ERR(host->reset)) {
		ret = reset_control_deassert(host->reset);
		ret = reset_control_reset(host->reset);
		if (ret) {
			dev_err(&pdev->dev, "reset err %d\n", ret);
			goto error_disable_clk_sample;
@@ -1262,6 +1303,30 @@ static int sunxi_mmc_probe(struct platform_device *pdev)
		goto error_free_host;
	}

	if (host->cfg->has_timings_switch) {
		/*
		 * Supports both old and new timing modes.
		 * Try setting the clk to new timing mode.
		 */
		sunxi_ccu_set_mmc_timing_mode(host->clk_mmc, true);

		/* And check the result */
		ret = sunxi_ccu_get_mmc_timing_mode(host->clk_mmc);
		if (ret < 0) {
			/*
			 * For whatever reason we were not able to get
			 * the current active mode. Default to old mode.
			 */
			dev_warn(&pdev->dev, "MMC clk timing mode unknown\n");
			host->use_new_timings = false;
		} else {
			host->use_new_timings = !!ret;
		}
	} else if (host->cfg->needs_new_timings) {
		/* Supports new timing mode only */
		host->use_new_timings = true;
	}

	mmc->ops = &sunxi_mmc_ops;
	mmc->max_blk_count = 8192;
	mmc->max_blk_size = 4096;
@@ -1274,7 +1339,7 @@ static int sunxi_mmc_probe(struct platform_device *pdev)
	mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
		     MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ;

	if (host->cfg->clk_delays)
	if (host->cfg->clk_delays || host->use_new_timings)
		mmc->caps |= MMC_CAP_1_8V_DDR;

	ret = mmc_of_parse(mmc);

@@ -81,14 +81,14 @@
#define TMIO_STAT_CMD_BUSY      BIT(30)
#define TMIO_STAT_ILL_ACCESS    BIT(31)

/* Definitions for values the CTL_SD_CARD_CLK_CTL register can take */
#define CLK_CTL_DIV_MASK	0xff
#define CLK_CTL_SCLKEN		BIT(8)

/* Definitions for values the CTL_SD_MEM_CARD_OPT register can take */
#define CARD_OPT_WIDTH8		BIT(13)
#define CARD_OPT_WIDTH		BIT(15)

#define TMIO_BBS		512	/* Boot block size */

/* Definitions for values the CTL_SDIO_STATUS register can take */
#define TMIO_SDIO_STAT_IOIRQ	0x0001
#define TMIO_SDIO_STAT_EXPUB52	0x4000
@@ -97,6 +97,9 @@

#define TMIO_SDIO_SETBITS_MASK	0x0006

/* Definitions for values the CTL_DMA_ENABLE register can take */
#define DMA_ENABLE_DMASDRW	BIT(1)

/* Define some IRQ masks */
/* This is the mask used at reset by the chip */
#define TMIO_MASK_ALL		0x837f031d
@@ -122,6 +125,7 @@ struct tmio_mmc_dma_ops {
		     struct tmio_mmc_data *pdata);
	void (*release)(struct tmio_mmc_host *host);
	void (*abort)(struct tmio_mmc_host *host);
	void (*dataend)(struct tmio_mmc_host *host);
};

struct tmio_mmc_host {
@@ -151,6 +155,7 @@ struct tmio_mmc_host {
	struct dma_chan		*chan_rx;
	struct dma_chan		*chan_tx;
	struct completion	dma_dataend;
	struct tasklet_struct	dma_complete;
	struct tasklet_struct	dma_issue;
	struct scatterlist	bounce_sg;
	u8			*bounce_buf;

@@ -87,6 +87,12 @@ static inline void tmio_mmc_abort_dma(struct tmio_mmc_host *host)
		host->dma_ops->abort(host);
}

static inline void tmio_mmc_dataend_dma(struct tmio_mmc_host *host)
{
	if (host->dma_ops)
		host->dma_ops->dataend(host);
}

void tmio_mmc_enable_mmc_irqs(struct tmio_mmc_host *host, u32 i)
{
	host->sdcard_irq_mask &= ~(i & TMIO_MASK_IRQ);
@@ -201,7 +207,10 @@ static void tmio_mmc_clk_start(struct tmio_mmc_host *host)
{
	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
		sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
	msleep(host->pdata->flags & TMIO_MMC_MIN_RCAR2 ? 1 : 10);

	/* HW engineers overrode docs: no sleep needed on R-Car2+ */
	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
		msleep(10);

	if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG) {
		sd_ctrl_write16(host, CTL_CLK_AND_WAIT_CTL, 0x0100);
@@ -218,7 +227,10 @@ static void tmio_mmc_clk_stop(struct tmio_mmc_host *host)

	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
		sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
	msleep(host->pdata->flags & TMIO_MMC_MIN_RCAR2 ? 5 : 10);

	/* HW engineers overrode docs: no sleep needed on R-Car2+ */
	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
		msleep(10);
}

static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
@@ -343,12 +355,6 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host,
	int c = cmd->opcode;
	u32 irq_mask = TMIO_MASK_CMD;

	/* CMD12 is handled by hardware */
	if (cmd->opcode == MMC_STOP_TRANSMISSION && !cmd->arg) {
		sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, TMIO_STOP_STP);
		return 0;
	}

	switch (mmc_resp_type(cmd)) {
	case MMC_RSP_NONE: c |= RESP_NONE; break;
	case MMC_RSP_R1:
@@ -605,11 +611,11 @@ static void tmio_mmc_data_irq(struct tmio_mmc_host *host, unsigned int stat)

		if (done) {
			tmio_mmc_disable_mmc_irqs(host, TMIO_STAT_DATAEND);
			complete(&host->dma_dataend);
			tmio_mmc_dataend_dma(host);
		}
	} else if (host->chan_rx && (data->flags & MMC_DATA_READ) && !host->force_pio) {
		tmio_mmc_disable_mmc_irqs(host, TMIO_STAT_DATAEND);
		complete(&host->dma_dataend);
		tmio_mmc_dataend_dma(host);
	} else {
		tmio_mmc_do_data_irq(host);
		tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_READOP | TMIO_MASK_WRITEOP);
@@ -1251,10 +1257,10 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,

	mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities;
	mmc->caps2 |= pdata->capabilities2;
	mmc->max_segs = 32;
	mmc->max_segs = pdata->max_segs ? : 32;
	mmc->max_blk_size = 512;
	mmc->max_blk_count = (PAGE_SIZE / mmc->max_blk_size) *
		mmc->max_segs;
	mmc->max_blk_count = pdata->max_blk_count ? :
		(PAGE_SIZE / mmc->max_blk_size) * mmc->max_segs;
	mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count;
	mmc->max_seg_size = mmc->max_req_size;

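The probe hunk above lets platform data cap max_segs and max_blk_count, falling back to the old defaults via GCC's two-operand conditional extension ("a ? : b" means "a if a is non-zero, else b"). A standalone, hedged illustration with made-up numbers and an assumed 4 KiB page size (plain C, GCC/clang extension):

#include <stdio.h>

#define PAGE_SIZE	4096UL	/* assumed page size for the example */
#define BLK_SIZE	512UL

int main(void)
{
	unsigned short pdata_max_segs = 0;	/* platform did not set a limit */
	unsigned int pdata_max_blk_count = 0;

	/* same fallback pattern as the tmio probe code above */
	unsigned int max_segs = pdata_max_segs ? : 32;
	unsigned int max_blk_count = pdata_max_blk_count ? :
		(PAGE_SIZE / BLK_SIZE) * max_segs;

	printf("max_segs=%u max_blk_count=%u\n", max_segs, max_blk_count);
	/* prints max_segs=32 max_blk_count=256 */
	return 0;
}
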
@@ -550,7 +550,7 @@ static int toshsd_get_cd(struct mmc_host *mmc)
	return !!(ioread16(host->ioaddr + SD_CARDSTATUS) & SD_CARD_PRESENT_0);
}

static struct mmc_host_ops toshsd_ops = {
static const struct mmc_host_ops toshsd_ops = {
	.request = toshsd_request,
	.set_ios = toshsd_set_ios,
	.get_ro = toshsd_get_ro,

@@ -1185,7 +1185,7 @@ static int usdhi6_sig_volt_switch(struct mmc_host *mmc, struct mmc_ios *ios)
	return ret;
}

static struct mmc_host_ops usdhi6_ops = {
static const struct mmc_host_ops usdhi6_ops = {
	.request = usdhi6_request,
	.set_ios = usdhi6_set_ios,
	.get_cd = usdhi6_get_cd,

@@ -323,7 +323,7 @@ struct via_crdr_mmc_host {
/* some devices need a very long delay for power to stabilize */
#define VIA_CRDR_QUIRK_300MS_PWRDELAY	0x0001

static struct pci_device_id via_ids[] = {
static const struct pci_device_id via_ids[] = {
	{PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_9530,
	 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0,},
	{0,}

@@ -266,7 +266,7 @@ MODULE_PARM_DESC(firmware_rom_wait_states,
#define ELAN_VENDOR_ID		0x2201
#define VUB300_VENDOR_ID	0x0424
#define VUB300_PRODUCT_ID	0x012C
static struct usb_device_id vub300_table[] = {
static const struct usb_device_id vub300_table[] = {
	{USB_DEVICE(ELAN_VENDOR_ID, VUB300_PRODUCT_ID)},
	{USB_DEVICE(VUB300_VENDOR_ID, VUB300_PRODUCT_ID)},
	{} /* Terminating entry */
@@ -2079,7 +2079,7 @@ static void vub300_init_card(struct mmc_host *mmc, struct mmc_card *card)
		dev_info(&vub300->udev->dev, "NO host QUIRKS for this card\n");
}

static struct mmc_host_ops vub300_mmc_ops = {
static const struct mmc_host_ops vub300_mmc_ops = {
	.request = vub300_mmc_request,
	.set_ios = vub300_mmc_set_ios,
	.get_ro = vub300_mmc_get_ro,

@@ -802,10 +802,8 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
			break;

		default:
#ifdef CONFIG_MMC_DEBUG
			pr_warn("%s: Data command %d is not supported by this controller\n",
				mmc_hostname(host->mmc), cmd->opcode);
#endif
			cmd->error = -EINVAL;

			goto done;

@@ -726,7 +726,7 @@ static int wmt_mci_get_cd(struct mmc_host *mmc)
	return !(cd ^ priv->cd_inverted);
}

static struct mmc_host_ops wmt_mci_ops = {
static const struct mmc_host_ops wmt_mci_ops = {
	.request = wmt_mci_request,
	.set_ios = wmt_mci_set_ios,
	.get_ro = wmt_mci_get_ro,
@@ -856,7 +856,9 @@ static int wmt_mci_probe(struct platform_device *pdev)
		goto fail5;
	}

	clk_prepare_enable(priv->clk_sdmmc);
	ret = clk_prepare_enable(priv->clk_sdmmc);
	if (ret)
		goto fail6;

	/* configure the controller to a known 'ready' state */
	wmt_reset_hardware(mmc);
@@ -866,6 +868,8 @@ static int wmt_mci_probe(struct platform_device *pdev)
	dev_info(&pdev->dev, "WMT SDHC Controller initialized\n");

	return 0;
fail6:
	clk_put(priv->clk_sdmmc);
fail5:
	free_irq(dma_irq, priv);
fail4:

@@ -0,0 +1,35 @@
/*
 * Copyright (c) 2017 Chen-Yu Tsai. All rights reserved.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef _LINUX_CLK_SUNXI_NG_H_
#define _LINUX_CLK_SUNXI_NG_H_

#include <linux/errno.h>

#ifdef CONFIG_SUNXI_CCU
int sunxi_ccu_set_mmc_timing_mode(struct clk *clk, bool new_mode);
int sunxi_ccu_get_mmc_timing_mode(struct clk *clk);
#else
static inline int sunxi_ccu_set_mmc_timing_mode(struct clk *clk,
						bool new_mode)
{
	return -ENOTSUPP;
}

static inline int sunxi_ccu_get_mmc_timing_mode(struct clk *clk)
{
	return -ENOTSUPP;
}
#endif

#endif

@@ -107,6 +107,9 @@
 */
#define TMIO_MMC_CLK_ACTUAL		BIT(10)

/* Some controllers have a CBSY bit */
#define TMIO_MMC_HAVE_CBSY		BIT(11)

int tmio_core_mmc_enable(void __iomem *cnf, int shift, unsigned long base);
int tmio_core_mmc_resume(void __iomem *cnf, int shift, unsigned long base);
void tmio_core_mmc_pwr(void __iomem *cnf, int shift, int state);
@@ -128,6 +131,8 @@ struct tmio_mmc_data {
	unsigned int			cd_gpio;
	int				alignment_shift;
	dma_addr_t			dma_rx_offset;
	unsigned int			max_blk_count;
	unsigned short			max_segs;
	void (*set_pwr)(struct platform_device *host, int state);
	void (*set_clk_div)(struct platform_device *host, int state);
};

@@ -29,8 +29,8 @@ struct mmc_csd {
	unsigned char		structure;
	unsigned char		mmca_vsn;
	unsigned short		cmdclass;
	unsigned short		tacc_clks;
	unsigned int		tacc_ns;
	unsigned short		taac_clks;
	unsigned int		taac_ns;
	unsigned int		c_size;
	unsigned int		r2w_factor;
	unsigned int		max_dtr;

@@ -122,11 +122,18 @@ struct mmc_data {
	unsigned int		timeout_clks;	/* data timeout (in clocks) */
	unsigned int		blksz;		/* data block size */
	unsigned int		blocks;		/* number of blocks */
	unsigned int		blk_addr;	/* block address */
	int			error;		/* data error */
	unsigned int		flags;

#define MMC_DATA_WRITE	(1 << 8)
#define MMC_DATA_READ	(1 << 9)
#define MMC_DATA_WRITE	BIT(8)
#define MMC_DATA_READ	BIT(9)
/* Extra flags used by CQE */
#define MMC_DATA_QBR	BIT(10)		/* CQE queue barrier*/
#define MMC_DATA_PRIO	BIT(11)		/* CQE high priority */
#define MMC_DATA_REL_WR	BIT(12)		/* Reliable write */
#define MMC_DATA_DAT_TAG BIT(13)	/* Tag request */
#define MMC_DATA_FORCED_PRG BIT(14)	/* Forced programming */

	unsigned int		bytes_xfered;

@@ -149,18 +156,22 @@ struct mmc_request {
	struct completion	completion;
	struct completion	cmd_completion;
	void			(*done)(struct mmc_request *);/* completion function */
	/*
	 * Notify uppers layers (e.g. mmc block driver) that recovery is needed
	 * due to an error associated with the mmc_request. Currently used only
	 * by CQE.
	 */
	void			(*recovery_notifier)(struct mmc_request *);
	struct mmc_host		*host;

	/* Allow other commands during this ongoing data transfer or busy wait */
	bool			cap_cmd_during_tfr;

	int			tag;
};

struct mmc_card;
struct mmc_async_req;

struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
				     struct mmc_async_req *areq,
				     enum mmc_blk_status *ret_stat);
void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq);
int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
		     int retries);

@@ -162,6 +162,50 @@ struct mmc_host_ops {
			   unsigned int direction, int blk_size);
};

struct mmc_cqe_ops {
	/* Allocate resources, and make the CQE operational */
	int	(*cqe_enable)(struct mmc_host *host, struct mmc_card *card);
	/* Free resources, and make the CQE non-operational */
	void	(*cqe_disable)(struct mmc_host *host);
	/*
	 * Issue a read, write or DCMD request to the CQE. Also deal with the
	 * effect of ->cqe_off().
	 */
	int	(*cqe_request)(struct mmc_host *host, struct mmc_request *mrq);
	/* Free resources (e.g. DMA mapping) associated with the request */
	void	(*cqe_post_req)(struct mmc_host *host, struct mmc_request *mrq);
	/*
	 * Prepare the CQE and host controller to accept non-CQ commands. There
	 * is no corresponding ->cqe_on(), instead ->cqe_request() is required
	 * to deal with that.
	 */
	void	(*cqe_off)(struct mmc_host *host);
	/*
	 * Wait for all CQE tasks to complete. Return an error if recovery
	 * becomes necessary.
	 */
	int	(*cqe_wait_for_idle)(struct mmc_host *host);
	/*
	 * Notify CQE that a request has timed out. Return false if the request
	 * completed or true if a timeout happened in which case indicate if
	 * recovery is needed.
	 */
	bool	(*cqe_timeout)(struct mmc_host *host, struct mmc_request *mrq,
			       bool *recovery_needed);
	/*
	 * Stop all CQE activity and prepare the CQE and host controller to
	 * accept recovery commands.
	 */
	void	(*cqe_recovery_start)(struct mmc_host *host);
	/*
	 * Clear the queue and call mmc_cqe_request_done() on all requests.
	 * Requests that errored will have the error set on the mmc_request
	 * (data->error or cmd->error for DCMD). Requests that did not error
	 * will have zero data bytes transferred.
	 */
	void	(*cqe_recovery_finish)(struct mmc_host *host);
};

struct mmc_async_req {
	/* active mmc request */
	struct mmc_request	*mrq;
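struct mmc_cqe_ops above is the new host-side interface for an eMMC command queue engine; a controller driver publishes it through the cqe_ops/cqe_qdepth fields added to struct mmc_host further down, together with the new MMC_CAP2_CQE capability. A hedged sketch of the wiring for an imaginary "foo" host driver (the callbacks are stubs and the queue depth is just an assumed typical value):

#include <linux/mmc/host.h>

static int foo_cqe_enable(struct mmc_host *host, struct mmc_card *card)
{
	/* allocate descriptors, program the CQE base registers, ... */
	return 0;
}

static void foo_cqe_disable(struct mmc_host *host)
{
	/* quiesce and power down the CQE */
}

static int foo_cqe_request(struct mmc_host *host, struct mmc_request *mrq)
{
	/* map mrq->data, build a task descriptor, ring the doorbell, ... */
	return 0;
}

static void foo_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq)
{
	/* release per-request resources such as the DMA mapping */
}

static const struct mmc_cqe_ops foo_cqe_ops = {
	.cqe_enable	= foo_cqe_enable,
	.cqe_disable	= foo_cqe_disable,
	.cqe_request	= foo_cqe_request,
	.cqe_post_req	= foo_cqe_post_req,
};

static void foo_publish_cqe(struct mmc_host *mmc)
{
	/* typically done in probe, before the host is added */
	mmc->cqe_ops = &foo_cqe_ops;
	mmc->cqe_qdepth = 32;		/* assumed, hardware specific */
	mmc->caps2 |= MMC_CAP2_CQE;
}
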
@@ -291,10 +335,6 @@ struct mmc_host {
				 MMC_CAP2_HS200_1_2V_SDR)
#define MMC_CAP2_CD_ACTIVE_HIGH	(1 << 10)	/* Card-detect signal active high */
#define MMC_CAP2_RO_ACTIVE_HIGH	(1 << 11)	/* Write-protect signal active high */
#define MMC_CAP2_PACKED_RD	(1 << 12)	/* Allow packed read */
#define MMC_CAP2_PACKED_WR	(1 << 13)	/* Allow packed write */
#define MMC_CAP2_PACKED_CMD	(MMC_CAP2_PACKED_RD | \
				 MMC_CAP2_PACKED_WR)
#define MMC_CAP2_NO_PRESCAN_POWERUP (1 << 14)	/* Don't power up before scan */
#define MMC_CAP2_HS400_1_8V	(1 << 15)	/* Can support HS400 1.8V */
#define MMC_CAP2_HS400_1_2V	(1 << 16)	/* Can support HS400 1.2V */
@@ -307,6 +347,8 @@ struct mmc_host {
#define MMC_CAP2_HS400_ES	(1 << 20)	/* Host supports enhanced strobe */
#define MMC_CAP2_NO_SD		(1 << 21)	/* Do not send SD commands during initialization */
#define MMC_CAP2_NO_MMC		(1 << 22)	/* Do not send (e)MMC commands during initialization */
#define MMC_CAP2_CQE		(1 << 23)	/* Has eMMC command queue engine */
#define MMC_CAP2_CQE_DCMD	(1 << 24)	/* CQE can issue a direct command */

	mmc_pm_flag_t		pm_caps;	/* supported pm features */

@@ -328,9 +370,6 @@ struct mmc_host {
	unsigned int		use_spi_crc:1;
	unsigned int		claimed:1;	/* host exclusively claimed */
	unsigned int		bus_dead:1;	/* bus has been released */
#ifdef CONFIG_MMC_DEBUG
	unsigned int		removed:1;	/* host is being removed */
#endif
	unsigned int		can_retune:1;	/* re-tuning can be used */
	unsigned int		doing_retune:1;	/* re-tuning in progress */
	unsigned int		retune_now:1;	/* do re-tuning at next req */
@@ -393,6 +432,13 @@ struct mmc_host {
	int			dsr_req;	/* DSR value is valid */
	u32			dsr;	/* optional driver stage (DSR) value */

	/* Command Queue Engine (CQE) support */
	const struct mmc_cqe_ops *cqe_ops;
	void			*cqe_private;
	int			cqe_qdepth;
	bool			cqe_enabled;
	bool			cqe_on;

	unsigned long		private[0] ____cacheline_aligned;
};

@@ -29,8 +29,10 @@ TRACE_EVENT(mmc_request_start,
		__field(unsigned int, sbc_flags)
		__field(unsigned int, sbc_retries)
		__field(unsigned int, blocks)
		__field(unsigned int, blk_addr)
		__field(unsigned int, blksz)
		__field(unsigned int, data_flags)
		__field(int, tag)
		__field(unsigned int, can_retune)
		__field(unsigned int, doing_retune)
		__field(unsigned int, retune_now)
@@ -42,10 +44,10 @@ TRACE_EVENT(mmc_request_start,
	),

	TP_fast_assign(
		__entry->cmd_opcode = mrq->cmd->opcode;
		__entry->cmd_arg = mrq->cmd->arg;
		__entry->cmd_flags = mrq->cmd->flags;
		__entry->cmd_retries = mrq->cmd->retries;
		__entry->cmd_opcode = mrq->cmd ? mrq->cmd->opcode : 0;
		__entry->cmd_arg = mrq->cmd ? mrq->cmd->arg : 0;
		__entry->cmd_flags = mrq->cmd ? mrq->cmd->flags : 0;
		__entry->cmd_retries = mrq->cmd ? mrq->cmd->retries : 0;
		__entry->stop_opcode = mrq->stop ? mrq->stop->opcode : 0;
		__entry->stop_arg = mrq->stop ? mrq->stop->arg : 0;
		__entry->stop_flags = mrq->stop ? mrq->stop->flags : 0;
@@ -56,7 +58,9 @@ TRACE_EVENT(mmc_request_start,
		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
		__entry->blksz = mrq->data ? mrq->data->blksz : 0;
		__entry->blocks = mrq->data ? mrq->data->blocks : 0;
		__entry->blk_addr = mrq->data ? mrq->data->blk_addr : 0;
		__entry->data_flags = mrq->data ? mrq->data->flags : 0;
		__entry->tag = mrq->tag;
		__entry->can_retune = host->can_retune;
		__entry->doing_retune = host->doing_retune;
		__entry->retune_now = host->retune_now;
@@ -71,8 +75,8 @@ TRACE_EVENT(mmc_request_start,
		  "cmd_opcode=%u cmd_arg=0x%x cmd_flags=0x%x cmd_retries=%u "
		  "stop_opcode=%u stop_arg=0x%x stop_flags=0x%x stop_retries=%u "
		  "sbc_opcode=%u sbc_arg=0x%x sbc_flags=0x%x sbc_retires=%u "
		  "blocks=%u block_size=%u data_flags=0x%x "
		  "can_retune=%u doing_retune=%u retune_now=%u "
		  "blocks=%u block_size=%u blk_addr=%u data_flags=0x%x "
		  "tag=%d can_retune=%u doing_retune=%u retune_now=%u "
		  "need_retune=%d hold_retune=%d retune_period=%u",
		  __get_str(name), __entry->mrq,
		  __entry->cmd_opcode, __entry->cmd_arg,
@@ -81,7 +85,8 @@ TRACE_EVENT(mmc_request_start,
		  __entry->stop_flags, __entry->stop_retries,
		  __entry->sbc_opcode, __entry->sbc_arg,
		  __entry->sbc_flags, __entry->sbc_retries,
		  __entry->blocks, __entry->blksz, __entry->data_flags,
		  __entry->blocks, __entry->blk_addr,
		  __entry->blksz, __entry->data_flags, __entry->tag,
		  __entry->can_retune, __entry->doing_retune,
		  __entry->retune_now, __entry->need_retune,
		  __entry->hold_retune, __entry->retune_period)
@@ -108,6 +113,7 @@ TRACE_EVENT(mmc_request_done,
		__field(unsigned int, sbc_retries)
		__field(unsigned int, bytes_xfered)
		__field(int, data_err)
		__field(int, tag)
		__field(unsigned int, can_retune)
		__field(unsigned int, doing_retune)
		__field(unsigned int, retune_now)
@@ -119,10 +125,13 @@ TRACE_EVENT(mmc_request_done,
	),

	TP_fast_assign(
		__entry->cmd_opcode = mrq->cmd->opcode;
		__entry->cmd_err = mrq->cmd->error;
		memcpy(__entry->cmd_resp, mrq->cmd->resp, 4);
		__entry->cmd_retries = mrq->cmd->retries;
		__entry->cmd_opcode = mrq->cmd ? mrq->cmd->opcode : 0;
		__entry->cmd_err = mrq->cmd ? mrq->cmd->error : 0;
		__entry->cmd_resp[0] = mrq->cmd ? mrq->cmd->resp[0] : 0;
		__entry->cmd_resp[1] = mrq->cmd ? mrq->cmd->resp[1] : 0;
		__entry->cmd_resp[2] = mrq->cmd ? mrq->cmd->resp[2] : 0;
		__entry->cmd_resp[3] = mrq->cmd ? mrq->cmd->resp[3] : 0;
		__entry->cmd_retries = mrq->cmd ? mrq->cmd->retries : 0;
		__entry->stop_opcode = mrq->stop ? mrq->stop->opcode : 0;
		__entry->stop_err = mrq->stop ? mrq->stop->error : 0;
		__entry->stop_resp[0] = mrq->stop ? mrq->stop->resp[0] : 0;
@@ -139,6 +148,7 @@ TRACE_EVENT(mmc_request_done,
		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
		__entry->bytes_xfered = mrq->data ? mrq->data->bytes_xfered : 0;
		__entry->data_err = mrq->data ? mrq->data->error : 0;
		__entry->tag = mrq->tag;
		__entry->can_retune = host->can_retune;
		__entry->doing_retune = host->doing_retune;
		__entry->retune_now = host->retune_now;
@@ -154,7 +164,7 @@ TRACE_EVENT(mmc_request_done,
		  "cmd_retries=%u stop_opcode=%u stop_err=%d "
		  "stop_resp=0x%x 0x%x 0x%x 0x%x stop_retries=%u "
		  "sbc_opcode=%u sbc_err=%d sbc_resp=0x%x 0x%x 0x%x 0x%x "
		  "sbc_retries=%u bytes_xfered=%u data_err=%d "
		  "sbc_retries=%u bytes_xfered=%u data_err=%d tag=%d "
		  "can_retune=%u doing_retune=%u retune_now=%u need_retune=%d "
		  "hold_retune=%d retune_period=%u",
		  __get_str(name), __entry->mrq,
@@ -170,7 +180,7 @@ TRACE_EVENT(mmc_request_done,
		  __entry->sbc_resp[0], __entry->sbc_resp[1],
		  __entry->sbc_resp[2], __entry->sbc_resp[3],
		  __entry->sbc_retries,
		  __entry->bytes_xfered, __entry->data_err,
		  __entry->bytes_xfered, __entry->data_err, __entry->tag,
		  __entry->can_retune, __entry->doing_retune,
		  __entry->retune_now, __entry->need_retune,
		  __entry->hold_retune, __entry->retune_period)