Merge tag 'mtd/for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull mtd updates from Miquel Raynal:
 "MTD core:
   - Handle possible -EPROBE_DEFER from parse_mtd_partitions()
   - Constify buf in mtd_write_user_prot_reg()
   - Constify name param in mtd_bdi_init
   - Fix fall-through warnings for Clang
   - Get the Big MTD Lock out of mtdchar
   - Drop mtd_mutex usage from mtdchar_open()
   - Don't lock when recursively deleting partitions
   - Use module_mtd_blktrans() to register driver when relevant
   - Parse MTD as NVMEM cells
   - New OTP (one-time-programmable) erase ioctl (see the usage sketch after the commit list below)
   - Require write permissions for locking and badblock ioctls
   - physmap:
      - Fix error return code of physmap_flash_remove()
      - physmap-bt1-rom: Fix unintentional stack access
   - ofpart parser:
      - Support Linksys Northstar partitions
      - Make symbol 'bcm4908_partitions_quirks' static
      - Limit parsing of deprecated DT syntax
      - Support BCM4908 fixed partitions
   - Qcom parser:
      - Mark as incompatible with spi-nor 4k sectors
      - Fix error condition
      - Extend Qcom SMEM parser to SPI flash

  CFI:
   - Disable broken buffered writes for CFI chips with ID 0x2201
   - Address a Coverity report for unused value

  SPI NOR core:
   - Add OTP support
   - Fix module unload while an op is in progress
   - Add various cleanup patches
   - Add Michael and Pratyush as designated reviewers in MAINTAINERS

  SPI NOR controller drivers:
   - intel-spi:
      - Move platform data header to x86 subfolder

  NAND core:
   - Fix error handling in nand_prog_page_op() (x2)
   - Add a helper to retrieve the number of ECC bytes per step
   - Add a helper to retrieve the number of ECC steps
   - Let ECC engines advertise the exact number of steps
   - ECC Hamming:
      - Populate the public nsteps field
      - Use the public nsteps field
   - ECC BCH:
      - Populate the public nsteps field
      - Use the public nsteps field

  Raw NAND core:
   - Add support for secure regions in NAND memory
   - Try not to use the ECC private structures
   - Remove duplicate include in rawnand.h
   - BBT:
      - Skip bad blocks when searching for the BBT in NAND (APPLIED THEN REVERTED)

  Raw NAND controller drivers:
   - Qcom:
      - Convert bindings to YAML
      - Use dma_mapping_error() for error check
      - Add missing nand_cleanup() in error path
      - Return actual error code instead of -ENODEV
      - Update last code word register
      - Add helper to configure location register
      - Rename parameter name in macro
      - Add helper to check last code word
      - Convert nandc to chip in Read/Write helper
      - Update register macro name for 0x2c offset
   - GPMI:
      - Fix a double free in gpmi_nand_init
   - Rockchip:
      - Use flexible-array member instead of zero-length array
   - Atmel:
      - Update ecc_stats.corrected counter
   - MXC:
      - Remove unneeded of_match_ptr()
   - R852:
      - Replace spin_lock_irqsave with spin_lock in hard IRQ context
   - Brcmnand:
      - Move to polling in pio mode on oops write
      - Read/write oob during EDU transfer
      - Fix OOB R/W with Hamming ECC
   - FSMC:
      - Fix error code in fsmc_nand_probe()
   - OMAP:
      - Use ECC information from the generic structures

  SPI-NAND core:
   - Add missing MODULE_DEVICE_TABLE()

  SPI-NAND drivers:
   - gigadevice: Support GD5F1GQ5UExxG"

* tag 'mtd/for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (89 commits)
  Revert "mtd: rawnand: bbt: Skip bad blocks when searching for the BBT in NAND"
  mtd: core: Constify buf in mtd_write_user_prot_reg()
  Revert "mtd: spi-nor: macronix: Add support for mx25l51245g"
  mtd: spi-nor: core: Fix an issue of releasing resources during read/write
  mtd: cfi_cmdset_0002: remove redundant assignment to variable timeo
  mtd: cfi_cmdset_0002: Disable buffered writes for AMD chip 0x2201
  mtd: rawnand: qcom: Use dma_mapping_error() for error check
  mtd: rawnand: gpmi: Fix a double free in gpmi_nand_init
  mtd: rawnand: qcom: Add missing nand_cleanup() in error path
  mtd: rawnand: Add support for secure regions in NAND memory
  dt-bindings: mtd: Add a property to declare secure regions in NAND chips
  dt-bindings: mtd: Convert Qcom NANDc binding to YAML
  mtd: spi-nor: winbond: add OTP support to w25q32fw/jw
  mtd: spi-nor: implement OTP support for Winbond and similar flashes
  mtd: spi-nor: add OTP support
  mtd: spi-nor: swp: Improve code around spi_nor_check_lock_status_sr()
  mtd: spi-nor: Move Software Write Protection logic out of the core
  mtd: rawnand: bbt: Skip bad blocks when searching for the BBT in NAND
  include: linux: mtd: Remove duplicate include of nand.h
  mtd: parsers: ofpart: support Linksys Northstar partitions
  ...
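The series introduces a new OTPERASE ioctl (see the mtdchar and mtdcore hunks below) and moves OTPLOCK, MEMLOCK/MEMUNLOCK and MEMSETBADBLOCK into the group of ioctls that require a writable file descriptor. As a minimal userspace sketch of the new call, assuming an OTP-capable flash behind /dev/mtd0 (device node and region values are made up for illustration, and OTPERASE only exists on kernels carrying this series):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

int main(void)
{
	struct otp_info oinfo = { .start = 0, .length = 256 };	/* example region */
	int mode = MTD_OTP_USER;
	int fd = open("/dev/mtd0", O_RDWR);	/* OTPERASE now requires a writable fd */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Select the user OTP area first, as for the existing OTP ioctls */
	if (ioctl(fd, OTPSELECT, &mode) < 0)
		perror("OTPSELECT");
	else if (ioctl(fd, OTPERASE, &oinfo) < 0)	/* erase [start, start + length) */
		perror("OTPERASE");
	close(fd);
	return 0;
}

As with OTPLOCK, the file must be switched into user-OTP mode via OTPSELECT before the erase is accepted; otherwise the ioctl returns -EINVAL.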
Linus Torvalds, 2021-04-26 16:16:09 -07:00, commit 070a7252d2
71 changed files with 2153 additions and 941 deletions


@ -143,6 +143,13 @@ patternProperties:
Ready/Busy pins. Active state refers to the NAND ready state and Ready/Busy pins. Active state refers to the NAND ready state and
should be set to GPIOD_ACTIVE_HIGH unless the signal is inverted. should be set to GPIOD_ACTIVE_HIGH unless the signal is inverted.
secure-regions:
$ref: /schemas/types.yaml#/definitions/uint64-matrix
description:
Regions in the NAND chip which are protected using a secure element
like Trustzone. This property contains the start address and size of
the secure regions present.
required: required:
- reg - reg


@ -0,0 +1,74 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/partitions/linksys,ns-partitions.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Linksys Northstar partitioning
description: |
Linksys devices based on Broadcom Northstar architecture often use two
firmware partitions. One is used for regular booting, the other is treated as
fallback.
This binding allows defining all fixed partitions and marking those containing
firmware. System can use that information e.g. for booting or flashing
purposes.
maintainers:
- Rafał Miłecki <rafal@milecki.pl>
properties:
compatible:
const: linksys,ns-partitions
"#address-cells":
enum: [ 1, 2 ]
"#size-cells":
enum: [ 1, 2 ]
patternProperties:
"^partition@[0-9a-f]+$":
$ref: "partition.yaml#"
properties:
compatible:
items:
- const: linksys,ns-firmware
- const: brcm,trx
unevaluatedProperties: false
required:
- "#address-cells"
- "#size-cells"
additionalProperties: false
examples:
- |
partitions {
compatible = "linksys,ns-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "boot";
reg = <0x0 0x100000>;
read-only;
};
partition@100000 {
label = "nvram";
reg = <0x100000 0x100000>;
};
partition@200000 {
compatible = "linksys,ns-firmware", "brcm,trx";
reg = <0x200000 0xf00000>;
};
partition@1100000 {
compatible = "linksys,ns-firmware", "brcm,trx";
reg = <0x1100000 0xf00000>;
};
};


@ -0,0 +1,99 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/partitions/nvmem-cells.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Nvmem cells
description: |
Any partition containing the compatible "nvmem-cells" will register as a
nvmem provider.
Each direct subnodes represents a nvmem cell following the nvmem binding.
Nvmem binding to declare nvmem-cells can be found in:
Documentation/devicetree/bindings/nvmem/nvmem.yaml
maintainers:
- Ansuel Smith <ansuelsmth@gmail.com>
allOf:
- $ref: /schemas/nvmem/nvmem.yaml#
properties:
compatible:
const: nvmem-cells
required:
- compatible
additionalProperties: true
examples:
- |
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
/* ... */
};
art: art@1200000 {
compatible = "nvmem-cells";
reg = <0x1200000 0x0140000>;
label = "art";
read-only;
#address-cells = <1>;
#size-cells = <1>;
macaddr_gmac1: macaddr_gmac1@0 {
reg = <0x0 0x6>;
};
macaddr_gmac2: macaddr_gmac2@6 {
reg = <0x6 0x6>;
};
pre_cal_24g: pre_cal_24g@1000 {
reg = <0x1000 0x2f20>;
};
pre_cal_5g: pre_cal_5g@5000{
reg = <0x5000 0x2f20>;
};
};
- |
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "bootloader";
reg = <0x000000 0x100000>;
read-only;
};
firmware@100000 {
compatible = "brcm,trx";
label = "firmware";
reg = <0x100000 0xe00000>;
};
calibration@f00000 {
compatible = "nvmem-cells";
label = "calibration";
reg = <0xf00000 0x100000>;
ranges = <0 0xf00000 0x100000>;
#address-cells = <1>;
#size-cells = <1>;
wifi0@0 {
reg = <0x000000 0x080000>;
};
wifi1@80000 {
reg = <0x080000 0x080000>;
};
};
};


@ -0,0 +1,196 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/qcom,nandc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm NAND controller
maintainers:
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
properties:
compatible:
enum:
- qcom,ipq806x-nand
- qcom,ipq4019-nand
- qcom,ipq6018-nand
- qcom,ipq8074-nand
- qcom,sdx55-nand
reg:
maxItems: 1
clocks:
items:
- description: Core Clock
- description: Always ON Clock
clock-names:
items:
- const: core
- const: aon
"#address-cells": true
"#size-cells": true
patternProperties:
"^nand@[a-f0-9]$":
type: object
properties:
nand-bus-width:
const: 8
nand-ecc-strength:
enum: [1, 4, 8]
nand-ecc-step-size:
enum:
- 512
allOf:
- $ref: "nand-controller.yaml#"
- if:
properties:
compatible:
contains:
const: qcom,ipq806x-nand
then:
properties:
dmas:
items:
- description: rxtx DMA channel
dma-names:
items:
- const: rxtx
qcom,cmd-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM command type CRCI block instance number
specified for the NAND controller on the given platform
qcom,data-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM data type CRCI block instance number
specified for the NAND controller on the given platform
- if:
properties:
compatible:
contains:
enum:
- qcom,ipq4019-nand
- qcom,ipq6018-nand
- qcom,ipq8074-nand
- qcom,sdx55-nand
then:
properties:
dmas:
items:
- description: tx DMA channel
- description: rx DMA channel
- description: cmd DMA channel
dma-names:
items:
- const: tx
- const: rx
- const: cmd
required:
- compatible
- reg
- clocks
- clock-names
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-ipq806x.h>
nand-controller@1ac00000 {
compatible = "qcom,ipq806x-nand";
reg = <0x1ac00000 0x800>;
clocks = <&gcc EBI2_CLK>,
<&gcc EBI2_AON_CLK>;
clock-names = "core", "aon";
dmas = <&adm_dma 3>;
dma-names = "rxtx";
qcom,cmd-crci = <15>;
qcom,data-crci = <3>;
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
nand-ecc-strength = <4>;
nand-bus-width = <8>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "boot-nand";
reg = <0 0x58a0000>;
};
partition@58a0000 {
label = "fs-nand";
reg = <0x58a0000 0x4000000>;
};
};
};
};
#include <dt-bindings/clock/qcom,gcc-ipq4019.h>
nand-controller@79b0000 {
compatible = "qcom,ipq4019-nand";
reg = <0x79b0000 0x1000>;
clocks = <&gcc GCC_QPIC_CLK>,
<&gcc GCC_QPIC_AHB_CLK>;
clock-names = "core", "aon";
dmas = <&qpicbam 0>,
<&qpicbam 1>,
<&qpicbam 2>;
dma-names = "tx", "rx", "cmd";
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
nand-ecc-strength = <4>;
nand-bus-width = <8>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "boot-nand";
reg = <0 0x58a0000>;
};
partition@58a0000 {
label = "fs-nand";
reg = <0x58a0000 0x4000000>;
};
};
};
};
...


@ -1,142 +0,0 @@
* Qualcomm NAND controller
Required properties:
- compatible: must be one of the following:
* "qcom,ipq806x-nand" - for EBI2 NAND controller being used in IPQ806x
SoC and it uses ADM DMA
* "qcom,ipq4019-nand" - for QPIC NAND controller v1.4.0 being used in
IPQ4019 SoC and it uses BAM DMA
* "qcom,ipq6018-nand" - for QPIC NAND controller v1.5.0 being used in
IPQ6018 SoC and it uses BAM DMA
* "qcom,ipq8074-nand" - for QPIC NAND controller v1.5.0 being used in
IPQ8074 SoC and it uses BAM DMA
* "qcom,sdx55-nand" - for QPIC NAND controller v2.0.0 being used in
SDX55 SoC and it uses BAM DMA
- reg: MMIO address range
- clocks: must contain core clock and always on clock
- clock-names: must contain "core" for the core clock and "aon" for the
always on clock
EBI2 specific properties:
- dmas: DMA specifier, consisting of a phandle to the ADM DMA
controller node and the channel number to be used for
NAND. Refer to dma.txt and qcom_adm.txt for more details
- dma-names: must be "rxtx"
- qcom,cmd-crci: must contain the ADM command type CRCI block instance
number specified for the NAND controller on the given
platform
- qcom,data-crci: must contain the ADM data type CRCI block instance
number specified for the NAND controller on the given
platform
QPIC specific properties:
- dmas: DMA specifier, consisting of a phandle to the BAM DMA
and the channel number to be used for NAND. Refer to
dma.txt, qcom_bam_dma.txt for more details
- dma-names: must contain all 3 channel names : "tx", "rx", "cmd"
- #address-cells: <1> - subnodes give the chip-select number
- #size-cells: <0>
* NAND chip-select
Each controller may contain one or more subnodes to represent enabled
chip-selects which (may) contain NAND flash chips. Their properties are as
follows.
Required properties:
- reg: a single integer representing the chip-select
number (e.g., 0, 1, 2, etc.)
- #address-cells: see partition.txt
- #size-cells: see partition.txt
Optional properties:
- nand-bus-width: see nand-controller.yaml
- nand-ecc-strength: see nand-controller.yaml. If not specified, then ECC strength will
be used according to chip requirement and available
OOB size.
Each nandcs device node may optionally contain a 'partitions' sub-node, which
further contains sub-nodes describing the flash partition mapping. See
partition.txt for more detail.
Example:
nand-controller@1ac00000 {
compatible = "qcom,ipq806x-nand";
reg = <0x1ac00000 0x800>;
clocks = <&gcc EBI2_CLK>,
<&gcc EBI2_AON_CLK>;
clock-names = "core", "aon";
dmas = <&adm_dma 3>;
dma-names = "rxtx";
qcom,cmd-crci = <15>;
qcom,data-crci = <3>;
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
nand-ecc-strength = <4>;
nand-bus-width = <8>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "boot-nand";
reg = <0 0x58a0000>;
};
partition@58a0000 {
label = "fs-nand";
reg = <0x58a0000 0x4000000>;
};
};
};
};
nand-controller@79b0000 {
compatible = "qcom,ipq4019-nand";
reg = <0x79b0000 0x1000>;
clocks = <&gcc GCC_QPIC_CLK>,
<&gcc GCC_QPIC_AHB_CLK>;
clock-names = "core", "aon";
dmas = <&qpicbam 0>,
<&qpicbam 1>,
<&qpicbam 2>;
dma-names = "tx", "rx", "cmd";
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
nand-ecc-strength = <4>;
nand-bus-width = <8>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "boot-nand";
reg = <0 0x58a0000>;
};
partition@58a0000 {
label = "fs-nand";
reg = <0x58a0000 0x4000000>;
};
};
};
};


@ -20,9 +20,6 @@ description: |
storage device. storage device.
properties: properties:
$nodename:
pattern: "^(eeprom|efuse|nvram)(@.*|-[0-9a-f])*$"
"#address-cells": "#address-cells":
const: 1 const: 1


@ -17005,6 +17005,8 @@ F: arch/arm/mach-spear/
SPI NOR SUBSYSTEM SPI NOR SUBSYSTEM
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@microchip.com>
R: Michael Walle <michael@walle.cc>
R: Pratyush Yadav <p.yadav@ti.com>
L: linux-mtd@lists.infradead.org L: linux-mtd@lists.infradead.org
S: Maintained S: Maintained
W: http://www.linux-mtd.infradead.org/ W: http://www.linux-mtd.infradead.org/


@ -72,7 +72,8 @@ static int cfi_intelext_is_locked(struct mtd_info *mtd, loff_t ofs,
#ifdef CONFIG_MTD_OTP #ifdef CONFIG_MTD_OTP
static int cfi_intelext_read_fact_prot_reg (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_intelext_read_fact_prot_reg (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
static int cfi_intelext_read_user_prot_reg (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_intelext_read_user_prot_reg (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
static int cfi_intelext_write_user_prot_reg (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_intelext_write_user_prot_reg(struct mtd_info *, loff_t, size_t,
size_t *, const u_char *);
static int cfi_intelext_lock_user_prot_reg (struct mtd_info *, loff_t, size_t); static int cfi_intelext_lock_user_prot_reg (struct mtd_info *, loff_t, size_t);
static int cfi_intelext_get_fact_prot_info(struct mtd_info *, size_t, static int cfi_intelext_get_fact_prot_info(struct mtd_info *, size_t,
size_t *, struct otp_info *); size_t *, struct otp_info *);
@ -2447,10 +2448,10 @@ static int cfi_intelext_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
static int cfi_intelext_write_user_prot_reg(struct mtd_info *mtd, loff_t from, static int cfi_intelext_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
size_t len, size_t *retlen, size_t len, size_t *retlen,
u_char *buf) const u_char *buf)
{ {
return cfi_intelext_otp_walk(mtd, from, len, retlen, return cfi_intelext_otp_walk(mtd, from, len, retlen,
buf, do_otp_write, 1); (u_char *)buf, do_otp_write, 1);
} }
static int cfi_intelext_lock_user_prot_reg(struct mtd_info *mtd, static int cfi_intelext_lock_user_prot_reg(struct mtd_info *mtd,
@ -2549,6 +2550,7 @@ static int cfi_intelext_suspend(struct mtd_info *mtd)
anyway? The latter for now. */ anyway? The latter for now. */
printk(KERN_NOTICE "Flash device refused suspend due to active operation (state %d)\n", chip->state); printk(KERN_NOTICE "Flash device refused suspend due to active operation (state %d)\n", chip->state);
ret = -EAGAIN; ret = -EAGAIN;
break;
case FL_PM_SUSPENDED: case FL_PM_SUSPENDED:
break; break;
} }


@ -80,7 +80,7 @@ static int cfi_amdstd_read_fact_prot_reg(struct mtd_info *, loff_t, size_t,
static int cfi_amdstd_read_user_prot_reg(struct mtd_info *, loff_t, size_t, static int cfi_amdstd_read_user_prot_reg(struct mtd_info *, loff_t, size_t,
size_t *, u_char *); size_t *, u_char *);
static int cfi_amdstd_write_user_prot_reg(struct mtd_info *, loff_t, size_t, static int cfi_amdstd_write_user_prot_reg(struct mtd_info *, loff_t, size_t,
size_t *, u_char *); size_t *, const u_char *);
static int cfi_amdstd_lock_user_prot_reg(struct mtd_info *, loff_t, size_t); static int cfi_amdstd_lock_user_prot_reg(struct mtd_info *, loff_t, size_t);
static int cfi_amdstd_panic_write(struct mtd_info *mtd, loff_t to, size_t len, static int cfi_amdstd_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
@ -272,6 +272,10 @@ static void fixup_use_write_buffers(struct mtd_info *mtd)
{ {
struct map_info *map = mtd->priv; struct map_info *map = mtd->priv;
struct cfi_private *cfi = map->fldrv_priv; struct cfi_private *cfi = map->fldrv_priv;
if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x2201)
return;
if (cfi->cfiq->BufWriteTimeoutTyp) { if (cfi->cfiq->BufWriteTimeoutTyp) {
pr_debug("Using buffer write method\n"); pr_debug("Using buffer write method\n");
mtd->_write = cfi_amdstd_write_buffers; mtd->_write = cfi_amdstd_write_buffers;
@ -902,6 +906,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
/* Someone else might have been playing with it. */ /* Someone else might have been playing with it. */
goto retry; goto retry;
} }
return 0;
case FL_READY: case FL_READY:
case FL_CFI_QUERY: case FL_CFI_QUERY:
@ -1630,9 +1635,9 @@ static int cfi_amdstd_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
static int cfi_amdstd_write_user_prot_reg(struct mtd_info *mtd, loff_t from, static int cfi_amdstd_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
size_t len, size_t *retlen, size_t len, size_t *retlen,
u_char *buf) const u_char *buf)
{ {
return cfi_amdstd_otp_walk(mtd, from, len, retlen, buf, return cfi_amdstd_otp_walk(mtd, from, len, retlen, (u_char *)buf,
do_otp_write, 1); do_otp_write, 1);
} }
@ -1649,7 +1654,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
unsigned long adr, map_word datum, unsigned long adr, map_word datum,
int mode, struct cfi_private *cfi) int mode, struct cfi_private *cfi)
{ {
unsigned long timeo = jiffies + HZ; unsigned long timeo;
/* /*
* We use a 1ms + 1 jiffies generic timeout for writes (most devices * We use a 1ms + 1 jiffies generic timeout for writes (most devices
* have a max write time of a few hundreds usec). However, we should * have a max write time of a few hundreds usec). However, we should
@ -2994,6 +2999,7 @@ static int cfi_amdstd_suspend(struct mtd_info *mtd)
* as the whole point is that nobody can do anything * as the whole point is that nobody can do anything
* with the chip now anyway. * with the chip now anyway.
*/ */
break;
case FL_PM_SUSPENDED: case FL_PM_SUSPENDED:
break; break;


@ -1332,6 +1332,8 @@ static int cfi_staa_suspend(struct mtd_info *mtd)
* as the whole point is that nobody can do anything * as the whole point is that nobody can do anything
* with the chip now anyway. * with the chip now anyway.
*/ */
break;
case FL_PM_SUSPENDED: case FL_PM_SUSPENDED:
break; break;


@ -527,7 +527,7 @@ static int dataflash_read_user_otp(struct mtd_info *mtd,
} }
static int dataflash_write_user_otp(struct mtd_info *mtd, static int dataflash_write_user_otp(struct mtd_info *mtd,
loff_t from, size_t len, size_t *retlen, u_char *buf) loff_t from, size_t len, size_t *retlen, const u_char *buf)
{ {
struct spi_message m; struct spi_message m;
const size_t l = 4 + 64; const size_t l = 4 + 64;


@ -1056,19 +1056,7 @@ static struct mtd_blktrans_ops ftl_tr = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int __init init_ftl(void) module_mtd_blktrans(ftl_tr);
{
return register_mtd_blktrans(&ftl_tr);
}
static void __exit cleanup_ftl(void)
{
deregister_mtd_blktrans(&ftl_tr);
}
module_init(init_ftl);
module_exit(cleanup_ftl);
MODULE_LICENSE("Dual MPL/GPL"); MODULE_LICENSE("Dual MPL/GPL");
MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>"); MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
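The conversion above (and the identical inftl, mtdblock, mtdblock_ro and mtdswap conversions further down) drops the open-coded init/exit pair in favour of module_mtd_blktrans(). For orientation only, the helper is presumably a thin module_driver()-style wrapper; a sketch of the assumed definition (the real one lives in include/linux/mtd/blktrans.h in this series):

/* Assumed definition: register/deregister the given mtd_blktrans_ops at
 * module init/exit time, in the style of the module_driver() helpers. */
#define module_mtd_blktrans(__mtd_blktrans) \
	module_driver(__mtd_blktrans, register_mtd_blktrans, \
		      deregister_mtd_blktrans)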


@ -937,18 +937,7 @@ static struct mtd_blktrans_ops inftl_tr = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int __init init_inftl(void) module_mtd_blktrans(inftl_tr);
{
return register_mtd_blktrans(&inftl_tr);
}
static void __exit cleanup_inftl(void)
{
deregister_mtd_blktrans(&inftl_tr);
}
module_init(init_inftl);
module_exit(cleanup_inftl);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>, David Woodhouse <dwmw2@infradead.org>, Fabrice Bellard <fabrice.bellard@netgem.com> et al."); MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>, David Woodhouse <dwmw2@infradead.org>, Fabrice Bellard <fabrice.bellard@netgem.com> et al.");


@ -79,7 +79,7 @@ static void __xipram bt1_rom_map_copy_from(struct map_info *map,
if (shift) { if (shift) {
chunk = min_t(ssize_t, 4 - shift, len); chunk = min_t(ssize_t, 4 - shift, len);
data = readl_relaxed(src - shift); data = readl_relaxed(src - shift);
memcpy(to, &data + shift, chunk); memcpy(to, (char *)&data + shift, chunk);
src += chunk; src += chunk;
to += chunk; to += chunk;
len -= chunk; len -= chunk;


@ -69,8 +69,10 @@ static int physmap_flash_remove(struct platform_device *dev)
int i, err = 0; int i, err = 0;
info = platform_get_drvdata(dev); info = platform_get_drvdata(dev);
if (!info) if (!info) {
err = -EINVAL;
goto out; goto out;
}
if (info->cmtd) { if (info->cmtd) {
err = mtd_device_unregister(info->cmtd); err = mtd_device_unregister(info->cmtd);


@ -346,19 +346,7 @@ static struct mtd_blktrans_ops mtdblock_tr = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int __init init_mtdblock(void) module_mtd_blktrans(mtdblock_tr);
{
return register_mtd_blktrans(&mtdblock_tr);
}
static void __exit cleanup_mtdblock(void)
{
deregister_mtd_blktrans(&mtdblock_tr);
}
module_init(init_mtdblock);
module_exit(cleanup_mtdblock);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Nicolas Pitre <nico@fluxnic.net> et al."); MODULE_AUTHOR("Nicolas Pitre <nico@fluxnic.net> et al.");


@ -67,18 +67,7 @@ static struct mtd_blktrans_ops mtdblock_tr = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int __init mtdblock_init(void) module_mtd_blktrans(mtdblock_tr);
{
return register_mtd_blktrans(&mtdblock_tr);
}
static void __exit mtdblock_exit(void)
{
deregister_mtd_blktrans(&mtdblock_tr);
}
module_init(mtdblock_init);
module_exit(mtdblock_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>"); MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>");


@ -27,8 +27,6 @@
#include "mtdcore.h" #include "mtdcore.h"
static DEFINE_MUTEX(mtd_mutex);
/* /*
* Data structure to hold the pointer to the mtd device as well * Data structure to hold the pointer to the mtd device as well
* as mode information of various use cases. * as mode information of various use cases.
@ -58,13 +56,10 @@ static int mtdchar_open(struct inode *inode, struct file *file)
if ((file->f_mode & FMODE_WRITE) && (minor & 1)) if ((file->f_mode & FMODE_WRITE) && (minor & 1))
return -EACCES; return -EACCES;
mutex_lock(&mtd_mutex);
mtd = get_mtd_device(NULL, devnum); mtd = get_mtd_device(NULL, devnum);
if (IS_ERR(mtd)) { if (IS_ERR(mtd))
ret = PTR_ERR(mtd); return PTR_ERR(mtd);
goto out;
}
if (mtd->type == MTD_ABSENT) { if (mtd->type == MTD_ABSENT) {
ret = -ENODEV; ret = -ENODEV;
@ -84,13 +79,10 @@ static int mtdchar_open(struct inode *inode, struct file *file)
} }
mfi->mtd = mtd; mfi->mtd = mtd;
file->private_data = mfi; file->private_data = mfi;
mutex_unlock(&mtd_mutex);
return 0; return 0;
out1: out1:
put_mtd_device(mtd); put_mtd_device(mtd);
out:
mutex_unlock(&mtd_mutex);
return ret; return ret;
} /* mtdchar_open */ } /* mtdchar_open */
@ -651,16 +643,12 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
case MEMGETINFO: case MEMGETINFO:
case MEMREADOOB: case MEMREADOOB:
case MEMREADOOB64: case MEMREADOOB64:
case MEMLOCK:
case MEMUNLOCK:
case MEMISLOCKED: case MEMISLOCKED:
case MEMGETOOBSEL: case MEMGETOOBSEL:
case MEMGETBADBLOCK: case MEMGETBADBLOCK:
case MEMSETBADBLOCK:
case OTPSELECT: case OTPSELECT:
case OTPGETREGIONCOUNT: case OTPGETREGIONCOUNT:
case OTPGETREGIONINFO: case OTPGETREGIONINFO:
case OTPLOCK:
case ECCGETLAYOUT: case ECCGETLAYOUT:
case ECCGETSTATS: case ECCGETSTATS:
case MTDFILEMODE: case MTDFILEMODE:
@ -671,9 +659,14 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
/* "dangerous" commands */ /* "dangerous" commands */
case MEMERASE: case MEMERASE:
case MEMERASE64: case MEMERASE64:
case MEMLOCK:
case MEMUNLOCK:
case MEMSETBADBLOCK:
case MEMWRITEOOB: case MEMWRITEOOB:
case MEMWRITEOOB64: case MEMWRITEOOB64:
case MEMWRITE: case MEMWRITE:
case OTPLOCK:
case OTPERASE:
if (!(file->f_mode & FMODE_WRITE)) if (!(file->f_mode & FMODE_WRITE))
return -EPERM; return -EPERM;
break; break;
@ -938,6 +931,7 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
} }
case OTPLOCK: case OTPLOCK:
case OTPERASE:
{ {
struct otp_info oinfo; struct otp_info oinfo;
@ -945,7 +939,10 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
return -EINVAL; return -EINVAL;
if (copy_from_user(&oinfo, argp, sizeof(oinfo))) if (copy_from_user(&oinfo, argp, sizeof(oinfo)))
return -EFAULT; return -EFAULT;
if (cmd == OTPLOCK)
ret = mtd_lock_user_prot_reg(mtd, oinfo.start, oinfo.length); ret = mtd_lock_user_prot_reg(mtd, oinfo.start, oinfo.length);
else
ret = mtd_erase_user_prot_reg(mtd, oinfo.start, oinfo.length);
break; break;
} }
@ -991,6 +988,7 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
if (!mtd_has_oob(mtd)) if (!mtd_has_oob(mtd))
return -EOPNOTSUPP; return -EOPNOTSUPP;
mfi->mode = arg; mfi->mode = arg;
break;
case MTD_FILE_MODE_NORMAL: case MTD_FILE_MODE_NORMAL:
break; break;
@ -1026,11 +1024,14 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
static long mtdchar_unlocked_ioctl(struct file *file, u_int cmd, u_long arg) static long mtdchar_unlocked_ioctl(struct file *file, u_int cmd, u_long arg)
{ {
struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd;
struct mtd_info *master = mtd_get_master(mtd);
int ret; int ret;
mutex_lock(&mtd_mutex); mutex_lock(&master->master.chrdev_lock);
ret = mtdchar_ioctl(file, cmd, arg); ret = mtdchar_ioctl(file, cmd, arg);
mutex_unlock(&mtd_mutex); mutex_unlock(&master->master.chrdev_lock);
return ret; return ret;
} }
@ -1051,10 +1052,11 @@ static long mtdchar_compat_ioctl(struct file *file, unsigned int cmd,
{ {
struct mtd_file_info *mfi = file->private_data; struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd; struct mtd_info *mtd = mfi->mtd;
struct mtd_info *master = mtd_get_master(mtd);
void __user *argp = compat_ptr(arg); void __user *argp = compat_ptr(arg);
int ret = 0; int ret = 0;
mutex_lock(&mtd_mutex); mutex_lock(&master->master.chrdev_lock);
switch (cmd) { switch (cmd) {
case MEMWRITEOOB32: case MEMWRITEOOB32:
@ -1117,7 +1119,7 @@ static long mtdchar_compat_ioctl(struct file *file, unsigned int cmd,
ret = mtdchar_ioctl(file, cmd, (unsigned long)argp); ret = mtdchar_ioctl(file, cmd, (unsigned long)argp);
} }
mutex_unlock(&mtd_mutex); mutex_unlock(&master->master.chrdev_lock);
return ret; return ret;
} }


@ -531,6 +531,7 @@ static int mtd_nvmem_reg_read(void *priv, unsigned int offset,
static int mtd_nvmem_add(struct mtd_info *mtd) static int mtd_nvmem_add(struct mtd_info *mtd)
{ {
struct device_node *node = mtd_get_of_node(mtd);
struct nvmem_config config = {}; struct nvmem_config config = {};
config.id = -1; config.id = -1;
@ -543,7 +544,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd)
config.stride = 1; config.stride = 1;
config.read_only = true; config.read_only = true;
config.root_only = true; config.root_only = true;
config.no_of_node = true; config.no_of_node = !of_device_is_compatible(node, "nvmem-cells");
config.priv = mtd; config.priv = mtd;
mtd->nvmem = nvmem_register(&config); mtd->nvmem = nvmem_register(&config);
@ -773,6 +774,7 @@ static void mtd_set_dev_defaults(struct mtd_info *mtd)
INIT_LIST_HEAD(&mtd->partitions); INIT_LIST_HEAD(&mtd->partitions);
mutex_init(&mtd->master.partitions_lock); mutex_init(&mtd->master.partitions_lock);
mutex_init(&mtd->master.chrdev_lock);
} }
/** /**
@ -820,6 +822,9 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
/* Prefer parsed partitions over driver-provided fallback */ /* Prefer parsed partitions over driver-provided fallback */
ret = parse_mtd_partitions(mtd, types, parser_data); ret = parse_mtd_partitions(mtd, types, parser_data);
if (ret == -EPROBE_DEFER)
goto out;
if (ret > 0) if (ret > 0)
ret = 0; ret = 0;
else if (nr_parts) else if (nr_parts)
@ -1884,7 +1889,7 @@ int mtd_read_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len,
EXPORT_SYMBOL_GPL(mtd_read_user_prot_reg); EXPORT_SYMBOL_GPL(mtd_read_user_prot_reg);
int mtd_write_user_prot_reg(struct mtd_info *mtd, loff_t to, size_t len, int mtd_write_user_prot_reg(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, u_char *buf) size_t *retlen, const u_char *buf)
{ {
struct mtd_info *master = mtd_get_master(mtd); struct mtd_info *master = mtd_get_master(mtd);
int ret; int ret;
@ -1918,6 +1923,18 @@ int mtd_lock_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len)
} }
EXPORT_SYMBOL_GPL(mtd_lock_user_prot_reg); EXPORT_SYMBOL_GPL(mtd_lock_user_prot_reg);
int mtd_erase_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len)
{
struct mtd_info *master = mtd_get_master(mtd);
if (!master->_erase_user_prot_reg)
return -EOPNOTSUPP;
if (!len)
return 0;
return master->_erase_user_prot_reg(master, from, len);
}
EXPORT_SYMBOL_GPL(mtd_erase_user_prot_reg);
/* Chip-supported device locking */ /* Chip-supported device locking */
int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{ {
@ -2172,7 +2189,7 @@ static int mtd_proc_show(struct seq_file *m, void *v)
/*====================================================================*/ /*====================================================================*/
/* Init code */ /* Init code */
static struct backing_dev_info * __init mtd_bdi_init(char *name) static struct backing_dev_info * __init mtd_bdi_init(const char *name)
{ {
struct backing_dev_info *bdi; struct backing_dev_info *bdi;
int ret; int ret;


@ -331,7 +331,7 @@ static int __del_mtd_partitions(struct mtd_info *mtd)
list_for_each_entry_safe(child, next, &mtd->partitions, part.node) { list_for_each_entry_safe(child, next, &mtd->partitions, part.node) {
if (mtd_has_partitions(child)) if (mtd_has_partitions(child))
del_mtd_partitions(child); __del_mtd_partitions(child);
pr_info("Deleting %s MTD partition\n", child->name); pr_info("Deleting %s MTD partition\n", child->name);
ret = del_mtd_device(child); ret = del_mtd_device(child);


@ -1484,19 +1484,7 @@ static struct mtd_blktrans_ops mtdswap_ops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int __init mtdswap_modinit(void) module_mtd_blktrans(mtdswap_ops);
{
return register_mtd_blktrans(&mtdswap_ops);
}
static void __exit mtdswap_modexit(void)
{
deregister_mtd_blktrans(&mtdswap_ops);
}
module_init(mtdswap_modinit);
module_exit(mtdswap_modexit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jarkko Lavinen <jarkko.lavinen@nokia.com>"); MODULE_AUTHOR("Jarkko Lavinen <jarkko.lavinen@nokia.com>");


@ -236,7 +236,6 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
goto free_engine_conf; goto free_engine_conf;
engine_conf->code_size = code_size; engine_conf->code_size = code_size;
engine_conf->nsteps = nsteps;
engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
if (!engine_conf->calc_buf || !engine_conf->code_buf) { if (!engine_conf->calc_buf || !engine_conf->code_buf) {
@ -245,6 +244,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
} }
nand->ecc.ctx.priv = engine_conf; nand->ecc.ctx.priv = engine_conf;
nand->ecc.ctx.nsteps = nsteps;
nand->ecc.ctx.total = nsteps * code_size; nand->ecc.ctx.total = nsteps * code_size;
ret = nand_ecc_sw_bch_init(nand); ret = nand_ecc_sw_bch_init(nand);
@ -253,7 +253,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
/* Verify the layout validity */ /* Verify the layout validity */
if (mtd_ooblayout_count_eccbytes(mtd) != if (mtd_ooblayout_count_eccbytes(mtd) !=
engine_conf->nsteps * engine_conf->code_size) { nand->ecc.ctx.nsteps * engine_conf->code_size) {
pr_err("Invalid ECC layout\n"); pr_err("Invalid ECC layout\n");
ret = -EINVAL; ret = -EINVAL;
goto cleanup_bch_ctx; goto cleanup_bch_ctx;
@ -295,7 +295,7 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
struct mtd_info *mtd = nanddev_to_mtd(nand); struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size; int eccsize = nand->ecc.ctx.conf.step_size;
int eccbytes = engine_conf->code_size; int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps; int eccsteps = nand->ecc.ctx.nsteps;
int total = nand->ecc.ctx.total; int total = nand->ecc.ctx.total;
u8 *ecccalc = engine_conf->calc_buf; u8 *ecccalc = engine_conf->calc_buf;
const u8 *data; const u8 *data;
@ -333,7 +333,7 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
int eccsize = nand->ecc.ctx.conf.step_size; int eccsize = nand->ecc.ctx.conf.step_size;
int total = nand->ecc.ctx.total; int total = nand->ecc.ctx.total;
int eccbytes = engine_conf->code_size; int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps; int eccsteps = nand->ecc.ctx.nsteps;
u8 *ecccalc = engine_conf->calc_buf; u8 *ecccalc = engine_conf->calc_buf;
u8 *ecccode = engine_conf->code_buf; u8 *ecccode = engine_conf->code_buf;
unsigned int max_bitflips = 0; unsigned int max_bitflips = 0;
@ -365,7 +365,7 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]); nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]);
/* Finish a page read: compare and correct */ /* Finish a page read: compare and correct */
for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in; for (eccsteps = nand->ecc.ctx.nsteps, i = 0, data = req->databuf.in;
eccsteps; eccsteps;
eccsteps--, i += eccbytes, data += eccsize) { eccsteps--, i += eccbytes, data += eccsize) {
int stat = nand_ecc_sw_bch_correct(nand, data, int stat = nand_ecc_sw_bch_correct(nand, data,


@ -504,7 +504,6 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
goto free_engine_conf; goto free_engine_conf;
engine_conf->code_size = 3; engine_conf->code_size = 3;
engine_conf->nsteps = mtd->writesize / conf->step_size;
engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
if (!engine_conf->calc_buf || !engine_conf->code_buf) { if (!engine_conf->calc_buf || !engine_conf->code_buf) {
@ -513,7 +512,8 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
} }
nand->ecc.ctx.priv = engine_conf; nand->ecc.ctx.priv = engine_conf;
nand->ecc.ctx.total = engine_conf->nsteps * engine_conf->code_size; nand->ecc.ctx.nsteps = mtd->writesize / conf->step_size;
nand->ecc.ctx.total = nand->ecc.ctx.nsteps * engine_conf->code_size;
return 0; return 0;
@ -548,7 +548,7 @@ static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand,
struct mtd_info *mtd = nanddev_to_mtd(nand); struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size; int eccsize = nand->ecc.ctx.conf.step_size;
int eccbytes = engine_conf->code_size; int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps; int eccsteps = nand->ecc.ctx.nsteps;
int total = nand->ecc.ctx.total; int total = nand->ecc.ctx.total;
u8 *ecccalc = engine_conf->calc_buf; u8 *ecccalc = engine_conf->calc_buf;
const u8 *data; const u8 *data;
@ -586,7 +586,7 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
int eccsize = nand->ecc.ctx.conf.step_size; int eccsize = nand->ecc.ctx.conf.step_size;
int total = nand->ecc.ctx.total; int total = nand->ecc.ctx.total;
int eccbytes = engine_conf->code_size; int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps; int eccsteps = nand->ecc.ctx.nsteps;
u8 *ecccalc = engine_conf->calc_buf; u8 *ecccalc = engine_conf->calc_buf;
u8 *ecccode = engine_conf->code_buf; u8 *ecccode = engine_conf->code_buf;
unsigned int max_bitflips = 0; unsigned int max_bitflips = 0;
@ -618,7 +618,7 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]); nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]);
/* Finish a page read: compare and correct */ /* Finish a page read: compare and correct */
for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in; for (eccsteps = nand->ecc.ctx.nsteps, i = 0, data = req->databuf.in;
eccsteps; eccsteps;
eccsteps--, i += eccbytes, data += eccsize) { eccsteps--, i += eccbytes, data += eccsize) {
int stat = nand_ecc_sw_hamming_correct(nand, data, int stat = nand_ecc_sw_hamming_correct(nand, data,
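The Hamming and BCH hunks above stop relying on the engine-private nsteps copy and populate/use the public nand->ecc.ctx.nsteps field instead, which is what the new "number of ECC steps / ECC bytes per step" helpers mentioned in the pull description build on. A sketch of what those accessors plausibly look like (the names and their placement in include/linux/mtd/nand.h are assumptions, modelled on the existing nanddev_* helpers):

/* Assumed helper names: trivial accessors over the now-public context fields. */
static inline unsigned int nanddev_get_ecc_nsteps(struct nand_device *nand)
{
	return nand->ecc.ctx.nsteps;
}

static inline unsigned int nanddev_get_ecc_bytes_per_step(struct nand_device *nand)
{
	return nand->ecc.ctx.total / nand->ecc.ctx.nsteps;
}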


@ -3167,9 +3167,10 @@ static int onenand_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
* Write user OTP area. * Write user OTP area.
*/ */
static int onenand_write_user_prot_reg(struct mtd_info *mtd, loff_t from, static int onenand_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
size_t len, size_t *retlen, u_char *buf) size_t len, size_t *retlen, const u_char *buf)
{ {
return onenand_otp_walk(mtd, from, len, retlen, buf, do_otp_write, MTD_OTP_USER); return onenand_otp_walk(mtd, from, len, retlen, (u_char *)buf,
do_otp_write, MTD_OTP_USER);
} }
/** /**


@ -396,6 +396,7 @@ static int s3c_onenand_command(struct mtd_info *mtd, int cmd, loff_t addr,
case ONENAND_CMD_READOOB: case ONENAND_CMD_READOOB:
case ONENAND_CMD_BUFFERRAM: case ONENAND_CMD_BUFFERRAM:
ONENAND_SET_NEXT_BUFFERRAM(this); ONENAND_SET_NEXT_BUFFERRAM(this);
break;
default: default:
break; break;
} }


@ -883,10 +883,12 @@ static int atmel_nand_pmecc_correct_data(struct nand_chip *chip, void *buf,
NULL, 0, NULL, 0,
chip->ecc.strength); chip->ecc.strength);
if (ret >= 0) if (ret >= 0) {
mtd->ecc_stats.corrected += ret;
max_bitflips = max(ret, max_bitflips); max_bitflips = max(ret, max_bitflips);
else } else {
mtd->ecc_stats.failed++; mtd->ecc_stats.failed++;
}
databuf += chip->ecc.size; databuf += chip->ecc.size;
eccbuf += chip->ecc.bytes; eccbuf += chip->ecc.bytes;


@ -242,6 +242,9 @@ struct brcmnand_controller {
u32 edu_ext_addr; u32 edu_ext_addr;
u32 edu_cmd; u32 edu_cmd;
u32 edu_config; u32 edu_config;
int sas; /* spare area size, per flash cache */
int sector_size_1k;
u8 *oob;
/* flash_dma reg */ /* flash_dma reg */
const u16 *flash_dma_offsets; const u16 *flash_dma_offsets;
@ -249,7 +252,7 @@ struct brcmnand_controller {
dma_addr_t dma_pa; dma_addr_t dma_pa;
int (*dma_trans)(struct brcmnand_host *host, u64 addr, u32 *buf, int (*dma_trans)(struct brcmnand_host *host, u64 addr, u32 *buf,
u32 len, u8 dma_cmd); u8 *oob, u32 len, u8 dma_cmd);
/* in-memory cache of the FLASH_CACHE, used only for some commands */ /* in-memory cache of the FLASH_CACHE, used only for some commands */
u8 flash_cache[FC_BYTES]; u8 flash_cache[FC_BYTES];
@ -1479,6 +1482,23 @@ static irqreturn_t brcmnand_edu_irq(int irq, void *data)
edu_writel(ctrl, EDU_EXT_ADDR, ctrl->edu_ext_addr); edu_writel(ctrl, EDU_EXT_ADDR, ctrl->edu_ext_addr);
edu_readl(ctrl, EDU_EXT_ADDR); edu_readl(ctrl, EDU_EXT_ADDR);
if (ctrl->oob) {
if (ctrl->edu_cmd == EDU_CMD_READ) {
ctrl->oob += read_oob_from_regs(ctrl,
ctrl->edu_count + 1,
ctrl->oob, ctrl->sas,
ctrl->sector_size_1k);
} else {
brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS,
ctrl->edu_ext_addr);
brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
ctrl->oob += write_oob_to_regs(ctrl,
ctrl->edu_count,
ctrl->oob, ctrl->sas,
ctrl->sector_size_1k);
}
}
mb(); /* flush previous writes */ mb(); /* flush previous writes */
edu_writel(ctrl, EDU_CMD, ctrl->edu_cmd); edu_writel(ctrl, EDU_CMD, ctrl->edu_cmd);
edu_readl(ctrl, EDU_CMD); edu_readl(ctrl, EDU_CMD);
@ -1850,9 +1870,10 @@ static void brcmnand_write_buf(struct nand_chip *chip, const uint8_t *buf,
* Kick EDU engine * Kick EDU engine
*/ */
static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf, static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
u32 len, u8 cmd) u8 *oob, u32 len, u8 cmd)
{ {
struct brcmnand_controller *ctrl = host->ctrl; struct brcmnand_controller *ctrl = host->ctrl;
struct brcmnand_cfg *cfg = &host->hwcfg;
unsigned long timeo = msecs_to_jiffies(200); unsigned long timeo = msecs_to_jiffies(200);
int ret = 0; int ret = 0;
int dir = (cmd == CMD_PAGE_READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE); int dir = (cmd == CMD_PAGE_READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
@ -1860,6 +1881,9 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
unsigned int trans = len >> FC_SHIFT; unsigned int trans = len >> FC_SHIFT;
dma_addr_t pa; dma_addr_t pa;
dev_dbg(ctrl->dev, "EDU %s %p:%p\n", ((edu_cmd == EDU_CMD_READ) ?
"read" : "write"), buf, oob);
pa = dma_map_single(ctrl->dev, buf, len, dir); pa = dma_map_single(ctrl->dev, buf, len, dir);
if (dma_mapping_error(ctrl->dev, pa)) { if (dma_mapping_error(ctrl->dev, pa)) {
dev_err(ctrl->dev, "unable to map buffer for EDU DMA\n"); dev_err(ctrl->dev, "unable to map buffer for EDU DMA\n");
@ -1871,6 +1895,8 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
ctrl->edu_ext_addr = addr; ctrl->edu_ext_addr = addr;
ctrl->edu_cmd = edu_cmd; ctrl->edu_cmd = edu_cmd;
ctrl->edu_count = trans; ctrl->edu_count = trans;
ctrl->sas = cfg->spare_area_size;
ctrl->oob = oob;
edu_writel(ctrl, EDU_DRAM_ADDR, (u32)ctrl->edu_dram_addr); edu_writel(ctrl, EDU_DRAM_ADDR, (u32)ctrl->edu_dram_addr);
edu_readl(ctrl, EDU_DRAM_ADDR); edu_readl(ctrl, EDU_DRAM_ADDR);
@ -1879,6 +1905,16 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
edu_writel(ctrl, EDU_LENGTH, FC_BYTES); edu_writel(ctrl, EDU_LENGTH, FC_BYTES);
edu_readl(ctrl, EDU_LENGTH); edu_readl(ctrl, EDU_LENGTH);
if (ctrl->oob && (ctrl->edu_cmd == EDU_CMD_WRITE)) {
brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS,
ctrl->edu_ext_addr);
brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
ctrl->oob += write_oob_to_regs(ctrl,
1,
ctrl->oob, ctrl->sas,
ctrl->sector_size_1k);
}
/* Start edu engine */ /* Start edu engine */
mb(); /* flush previous writes */ mb(); /* flush previous writes */
edu_writel(ctrl, EDU_CMD, ctrl->edu_cmd); edu_writel(ctrl, EDU_CMD, ctrl->edu_cmd);
@ -1893,6 +1929,14 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
dma_unmap_single(ctrl->dev, pa, len, dir); dma_unmap_single(ctrl->dev, pa, len, dir);
/* read last subpage oob */
if (ctrl->oob && (ctrl->edu_cmd == EDU_CMD_READ)) {
ctrl->oob += read_oob_from_regs(ctrl,
1,
ctrl->oob, ctrl->sas,
ctrl->sector_size_1k);
}
/* for program page check NAND status */ /* for program page check NAND status */
if (((brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS) & if (((brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS) &
INTFC_FLASH_STATUS) & NAND_STATUS_FAIL) && INTFC_FLASH_STATUS) & NAND_STATUS_FAIL) &&
@ -2002,7 +2046,7 @@ static void brcmnand_dma_run(struct brcmnand_host *host, dma_addr_t desc)
} }
static int brcmnand_dma_trans(struct brcmnand_host *host, u64 addr, u32 *buf, static int brcmnand_dma_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
u32 len, u8 dma_cmd) u8 *oob, u32 len, u8 dma_cmd)
{ {
struct brcmnand_controller *ctrl = host->ctrl; struct brcmnand_controller *ctrl = host->ctrl;
dma_addr_t buf_pa; dma_addr_t buf_pa;
@ -2147,8 +2191,9 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
try_dmaread: try_dmaread:
brcmnand_clear_ecc_addr(ctrl); brcmnand_clear_ecc_addr(ctrl);
if (ctrl->dma_trans && !oob && flash_dma_buf_ok(buf)) { if (ctrl->dma_trans && (has_edu(ctrl) || !oob) &&
err = ctrl->dma_trans(host, addr, buf, flash_dma_buf_ok(buf)) {
err = ctrl->dma_trans(host, addr, buf, oob,
trans * FC_BYTES, trans * FC_BYTES,
CMD_PAGE_READ); CMD_PAGE_READ);
@ -2296,8 +2341,12 @@ static int brcmnand_write(struct mtd_info *mtd, struct nand_chip *chip,
for (i = 0; i < ctrl->max_oob; i += 4) for (i = 0; i < ctrl->max_oob; i += 4)
oob_reg_write(ctrl, i, 0xffffffff); oob_reg_write(ctrl, i, 0xffffffff);
if (use_dma(ctrl) && !oob && flash_dma_buf_ok(buf)) { if (mtd->oops_panic_write)
if (ctrl->dma_trans(host, addr, (u32 *)buf, mtd->writesize, /* switch to interrupt polling and PIO mode */
disable_ctrl_irqs(ctrl);
if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) {
if (ctrl->dma_trans(host, addr, (u32 *)buf, oob, mtd->writesize,
CMD_PROGRAM_PAGE)) CMD_PROGRAM_PAGE))
ret = -EIO; ret = -EIO;
@ -2688,6 +2737,12 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
ret = brcmstb_choose_ecc_layout(host); ret = brcmstb_choose_ecc_layout(host);
/* If OOB is written with ECC enabled it will cause ECC errors */
if (is_hamming_ecc(host->ctrl, &host->hwcfg)) {
chip->ecc.write_oob = brcmnand_write_oob_raw;
chip->ecc.read_oob = brcmnand_read_oob_raw;
}
return ret; return ret;
} }


@ -930,6 +930,7 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
"Using 4-bit SW BCH ECC scheme\n"); "Using 4-bit SW BCH ECC scheme\n");
break; break;
} }
break;
case NAND_ECC_ENGINE_TYPE_ON_DIE: case NAND_ECC_ENGINE_TYPE_ON_DIE:
break; break;
@ -1077,11 +1078,13 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
host->read_dma_chan = dma_request_channel(mask, filter, NULL); host->read_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->read_dma_chan) { if (!host->read_dma_chan) {
dev_err(&pdev->dev, "Unable to get read dma channel\n"); dev_err(&pdev->dev, "Unable to get read dma channel\n");
ret = -ENODEV;
goto disable_clk; goto disable_clk;
} }
host->write_dma_chan = dma_request_channel(mask, filter, NULL); host->write_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->write_dma_chan) { if (!host->write_dma_chan) {
dev_err(&pdev->dev, "Unable to get write dma channel\n"); dev_err(&pdev->dev, "Unable to get write dma channel\n");
ret = -ENODEV;
goto release_dma_read_chan; goto release_dma_read_chan;
} }
} }


@ -2449,7 +2449,7 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
this->bch_geometry.auxiliary_size = 128; this->bch_geometry.auxiliary_size = 128;
ret = gpmi_alloc_dma_buffer(this); ret = gpmi_alloc_dma_buffer(this);
if (ret) if (ret)
goto err_out; return ret;
nand_controller_init(&this->base); nand_controller_init(&this->base);
this->base.ops = &gpmi_nand_controller_ops; this->base.ops = &gpmi_nand_controller_ops;


@ -1849,7 +1849,7 @@ static int mxcnd_remove(struct platform_device *pdev)
static struct platform_driver mxcnd_driver = { static struct platform_driver mxcnd_driver = {
.driver = { .driver = {
.name = DRIVER_NAME, .name = DRIVER_NAME,
.of_match_table = of_match_ptr(mxcnd_dt_ids), .of_match_table = mxcnd_dt_ids,
}, },
.probe = mxcnd_probe, .probe = mxcnd_probe,
.remove = mxcnd_remove, .remove = mxcnd_remove,


@ -278,11 +278,48 @@ static int nand_block_bad(struct nand_chip *chip, loff_t ofs)
return 0; return 0;
} }
/**
* nand_region_is_secured() - Check if the region is secured
* @chip: NAND chip object
* @offset: Offset of the region to check
* @size: Size of the region to check
*
* Checks if the region is secured by comparing the offset and size with the
* list of secure regions obtained from DT. Returns true if the region is
* secured else false.
*/
static bool nand_region_is_secured(struct nand_chip *chip, loff_t offset, u64 size)
{
int i;
/* Skip touching the secure regions if present */
for (i = 0; i < chip->nr_secure_regions; i++) {
const struct nand_secure_region *region = &chip->secure_regions[i];
if (offset + size <= region->offset ||
offset >= region->offset + region->size)
continue;
pr_debug("%s: Region 0x%llx - 0x%llx is secured!",
__func__, offset, offset + size);
return true;
}
return false;
}
static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs) static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs)
{ {
struct mtd_info *mtd = nand_to_mtd(chip);
if (chip->options & NAND_NO_BBM_QUIRK) if (chip->options & NAND_NO_BBM_QUIRK)
return 0; return 0;
/* Check if the region is secured */
if (nand_region_is_secured(chip, ofs, mtd->erasesize))
return -EIO;
if (chip->legacy.block_bad) if (chip->legacy.block_bad)
return chip->legacy.block_bad(chip, ofs); return chip->legacy.block_bad(chip, ofs);
@ -397,6 +434,10 @@ static int nand_do_write_oob(struct nand_chip *chip, loff_t to,
return -EINVAL; return -EINVAL;
} }
/* Check if the region is secured */
if (nand_region_is_secured(chip, to, ops->ooblen))
return -EIO;
chipnr = (int)(to >> chip->chip_shift); chipnr = (int)(to >> chip->chip_shift);
/* /*
@ -1294,8 +1335,6 @@ static int nand_exec_prog_page_op(struct nand_chip *chip, unsigned int page,
}; };
struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs); struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
int naddrs = nand_fill_column_cycles(chip, addrs, offset_in_page); int naddrs = nand_fill_column_cycles(chip, addrs, offset_in_page);
int ret;
u8 status;
if (naddrs < 0) if (naddrs < 0)
return naddrs; return naddrs;
@ -1335,15 +1374,7 @@ static int nand_exec_prog_page_op(struct nand_chip *chip, unsigned int page,
op.ninstrs--; op.ninstrs--;
} }
ret = nand_exec_op(chip, &op); return nand_exec_op(chip, &op);
if (!prog || ret)
return ret;
ret = nand_status_op(chip, &status);
if (ret)
return ret;
return status;
} }
/** /**
@ -1449,7 +1480,8 @@ int nand_prog_page_op(struct nand_chip *chip, unsigned int page,
unsigned int len)
{
struct mtd_info *mtd = nand_to_mtd(chip);
-int status;
+u8 status;
+int ret;
if (!len || !buf)
return -EINVAL;
@ -1458,14 +1490,24 @@ int nand_prog_page_op(struct nand_chip *chip, unsigned int page,
return -EINVAL;
if (nand_has_exec_op(chip)) {
-status = nand_exec_prog_page_op(chip, page, offset_in_page, buf,
+ret = nand_exec_prog_page_op(chip, page, offset_in_page, buf,
len, true);
+if (ret)
+return ret;
+ret = nand_status_op(chip, &status);
+if (ret)
+return ret;
} else {
chip->legacy.cmdfunc(chip, NAND_CMD_SEQIN, offset_in_page,
page);
chip->legacy.write_buf(chip, buf, len);
chip->legacy.cmdfunc(chip, NAND_CMD_PAGEPROG, -1, -1);
-status = chip->legacy.waitfunc(chip);
+ret = chip->legacy.waitfunc(chip);
+if (ret < 0)
+return ret;
+status = ret;
}
if (status & NAND_STATUS_FAIL) if (status & NAND_STATUS_FAIL)
@ -3127,6 +3169,10 @@ static int nand_do_read_ops(struct nand_chip *chip, loff_t from,
int retry_mode = 0; int retry_mode = 0;
bool ecc_fail = false; bool ecc_fail = false;
/* Check if the region is secured */
if (nand_region_is_secured(chip, from, readlen))
return -EIO;
chipnr = (int)(from >> chip->chip_shift); chipnr = (int)(from >> chip->chip_shift);
nand_select_target(chip, chipnr); nand_select_target(chip, chipnr);
@ -3458,6 +3504,10 @@ static int nand_do_read_oob(struct nand_chip *chip, loff_t from,
pr_debug("%s: from = 0x%08Lx, len = %i\n", pr_debug("%s: from = 0x%08Lx, len = %i\n",
__func__, (unsigned long long)from, readlen); __func__, (unsigned long long)from, readlen);
/* Check if the region is secured */
if (nand_region_is_secured(chip, from, readlen))
return -EIO;
stats = mtd->ecc_stats; stats = mtd->ecc_stats;
len = mtd_oobavail(mtd, ops); len = mtd_oobavail(mtd, ops);
@ -3979,6 +4029,10 @@ static int nand_do_write_ops(struct nand_chip *chip, loff_t to,
return -EINVAL; return -EINVAL;
} }
/* Check if the region is secured */
if (nand_region_is_secured(chip, to, writelen))
return -EIO;
column = to & (mtd->writesize - 1); column = to & (mtd->writesize - 1);
chipnr = (int)(to >> chip->chip_shift); chipnr = (int)(to >> chip->chip_shift);
@ -4180,6 +4234,10 @@ int nand_erase_nand(struct nand_chip *chip, struct erase_info *instr,
if (check_offs_len(chip, instr->addr, instr->len)) if (check_offs_len(chip, instr->addr, instr->len))
return -EINVAL; return -EINVAL;
/* Check if the region is secured */
if (nand_region_is_secured(chip, instr->addr, instr->len))
return -EIO;
/* Grab the lock and see if the device is available */ /* Grab the lock and see if the device is available */
ret = nand_get_device(chip); ret = nand_get_device(chip);
if (ret) if (ret)
@ -4995,6 +5053,31 @@ static bool of_get_nand_on_flash_bbt(struct device_node *np)
return of_property_read_bool(np, "nand-on-flash-bbt"); return of_property_read_bool(np, "nand-on-flash-bbt");
} }
static int of_get_nand_secure_regions(struct nand_chip *chip)
{
struct device_node *dn = nand_get_flash_node(chip);
int nr_elem, i, j;
nr_elem = of_property_count_elems_of_size(dn, "secure-regions", sizeof(u64));
if (!nr_elem)
return 0;
chip->nr_secure_regions = nr_elem / 2;
chip->secure_regions = kcalloc(chip->nr_secure_regions, sizeof(*chip->secure_regions),
GFP_KERNEL);
if (!chip->secure_regions)
return -ENOMEM;
for (i = 0, j = 0; i < chip->nr_secure_regions; i++, j += 2) {
of_property_read_u64_index(dn, "secure-regions", j,
&chip->secure_regions[i].offset);
of_property_read_u64_index(dn, "secure-regions", j + 1,
&chip->secure_regions[i].size);
}
return 0;
}
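The "secure-regions" property is read as a flat list of u64 cells in <offset size> pairs, so nr_elem / 2 regions are allocated and indices j and j + 1 fill one entry. A standalone sketch of that pairing over a plain array (the cell values below are invented for illustration):

/* Sketch: turn a flat { offset, size, offset, size, ... } array into
 * region structs, mirroring the DT parsing above (illustrative only). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct secure_region { uint64_t offset; uint64_t size; };

int main(void)
{
        /* Example cells; a real "secure-regions" property comes from DT. */
        const uint64_t cells[] = { 0x0, 0x20000, 0x100000, 0x40000 };
        int nr_elem = sizeof(cells) / sizeof(cells[0]);
        int nr_regions = nr_elem / 2;
        struct secure_region *r = calloc(nr_regions, sizeof(*r));

        if (!r)
                return 1;

        for (int i = 0, j = 0; i < nr_regions; i++, j += 2) {
                r[i].offset = cells[j];
                r[i].size = cells[j + 1];
                printf("region %d: offset 0x%llx, size 0x%llx\n", i,
                       (unsigned long long)r[i].offset,
                       (unsigned long long)r[i].size);
        }

        free(r);
        return 0;
}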
static int rawnand_dt_init(struct nand_chip *chip) static int rawnand_dt_init(struct nand_chip *chip)
{ {
struct nand_device *nand = mtd_to_nanddev(nand_to_mtd(chip)); struct nand_device *nand = mtd_to_nanddev(nand_to_mtd(chip));
@ -5162,8 +5245,8 @@ int rawnand_sw_hamming_init(struct nand_chip *chip)
chip->ecc.size = base->ecc.ctx.conf.step_size;
chip->ecc.strength = base->ecc.ctx.conf.strength;
chip->ecc.total = base->ecc.ctx.total;
-chip->ecc.steps = engine_conf->nsteps;
-chip->ecc.bytes = engine_conf->code_size;
+chip->ecc.steps = nanddev_get_ecc_nsteps(base);
+chip->ecc.bytes = base->ecc.ctx.total / nanddev_get_ecc_nsteps(base);
return 0;
}
@ -5201,7 +5284,7 @@ EXPORT_SYMBOL(rawnand_sw_hamming_cleanup);
int rawnand_sw_bch_init(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
-struct nand_ecc_sw_bch_conf *engine_conf;
+const struct nand_ecc_props *ecc_conf = nanddev_get_ecc_conf(base);
int ret;
base->ecc.user_conf.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
@ -5213,13 +5296,11 @@ int rawnand_sw_bch_init(struct nand_chip *chip)
if (ret)
return ret;
-engine_conf = base->ecc.ctx.priv;
-chip->ecc.size = base->ecc.ctx.conf.step_size;
-chip->ecc.strength = base->ecc.ctx.conf.strength;
+chip->ecc.size = ecc_conf->step_size;
+chip->ecc.strength = ecc_conf->strength;
chip->ecc.total = base->ecc.ctx.total;
-chip->ecc.steps = engine_conf->nsteps;
-chip->ecc.bytes = engine_conf->code_size;
+chip->ecc.steps = nanddev_get_ecc_nsteps(base);
+chip->ecc.bytes = base->ecc.ctx.total / nanddev_get_ecc_nsteps(base);
return 0;
}
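With the private BCH context gone, the per-step byte count is derived from the generic fields: ecc.bytes is ecc.ctx.total divided by the advertised step count. A small worked example with illustrative numbers (4 KiB page, 512-byte steps, 13 ECC bytes per step, as is typical for 8-bit BCH):

/* Illustration of the nsteps/bytes bookkeeping above; example values only. */
#include <stdio.h>

int main(void)
{
        unsigned int writesize = 4096;  /* example page size */
        unsigned int step_size = 512;   /* conf.step_size */
        unsigned int code_size = 13;    /* ECC bytes per step */

        unsigned int nsteps = writesize / step_size;    /* 8   */
        unsigned int total = nsteps * code_size;        /* 104 */
        unsigned int bytes = total / nsteps;            /* back to 13 */

        printf("nsteps=%u total=%u bytes=%u\n", nsteps, total, bytes);
        return 0;
}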
@ -5953,6 +6034,16 @@ static int nand_scan_tail(struct nand_chip *chip)
goto err_free_interface_config; goto err_free_interface_config;
} }
/*
* Look for secure regions in the NAND chip. These regions are supposed
* to be protected by a secure element like Trustzone. So the read/write
* accesses to these regions will be blocked in the runtime by this
* driver.
*/
ret = of_get_nand_secure_regions(chip);
if (ret)
goto err_free_interface_config;
/* Check, if we should skip the bad block table scan */ /* Check, if we should skip the bad block table scan */
if (chip->options & NAND_SKIP_BBTSCAN) if (chip->options & NAND_SKIP_BBTSCAN)
return 0; return 0;
@ -5960,10 +6051,13 @@ static int nand_scan_tail(struct nand_chip *chip)
/* Build bad block table */
ret = nand_create_bbt(chip);
if (ret)
-goto err_free_interface_config;
+goto err_free_secure_regions;
return 0;
+err_free_secure_regions:
+kfree(chip->secure_regions);
err_free_interface_config:
kfree(chip->best_interface_config);
@ -6051,6 +6145,9 @@ void nand_cleanup(struct nand_chip *chip)
nanddev_cleanup(&chip->base); nanddev_cleanup(&chip->base);
/* Free secure regions data */
kfree(chip->secure_regions);
/* Free bad block table memory */ /* Free bad block table memory */
kfree(chip->bbt); kfree(chip->bbt);
kfree(chip->data_buf); kfree(chip->data_buf);


@ -1868,18 +1868,19 @@ static int omap_sw_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
-const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
+unsigned int nsteps = nanddev_get_ecc_nsteps(nand);
+unsigned int ecc_bytes = nanddev_get_ecc_bytes_per_step(nand);
int off = BADBLOCK_MARKER_LENGTH;
-if (section >= engine_conf->nsteps)
+if (section >= nsteps)
return -ERANGE;
/*
* When SW correction is employed, one OMAP specific marker byte is
* reserved after each ECC step.
*/
-oobregion->offset = off + (section * (engine_conf->code_size + 1));
-oobregion->length = engine_conf->code_size;
+oobregion->offset = off + (section * (ecc_bytes + 1));
+oobregion->length = ecc_bytes;
return 0;
}
@ -1888,7 +1889,8 @@ static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
-const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
+unsigned int nsteps = nanddev_get_ecc_nsteps(nand);
+unsigned int ecc_bytes = nanddev_get_ecc_bytes_per_step(nand);
int off = BADBLOCK_MARKER_LENGTH;
if (section)
@ -1898,7 +1900,7 @@ static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section,
* When SW correction is employed, one OMAP specific marker byte is
* reserved after each ECC step.
*/
-off += ((engine_conf->code_size + 1) * engine_conf->nsteps);
+off += ((ecc_bytes + 1) * nsteps);
if (off >= mtd->oobsize)
return -ERANGE;


@ -27,7 +27,7 @@
#define NAND_DEV0_CFG0 0x20
#define NAND_DEV0_CFG1 0x24
#define NAND_DEV0_ECC_CFG 0x28
-#define NAND_DEV1_ECC_CFG 0x2c
+#define NAND_AUTO_STATUS_EN 0x2c
#define NAND_DEV1_CFG0 0x30
#define NAND_DEV1_CFG1 0x34
#define NAND_READ_ID 0x40
@ -48,6 +48,10 @@
#define NAND_READ_LOCATION_1 0xf24 #define NAND_READ_LOCATION_1 0xf24
#define NAND_READ_LOCATION_2 0xf28 #define NAND_READ_LOCATION_2 0xf28
#define NAND_READ_LOCATION_3 0xf2c #define NAND_READ_LOCATION_3 0xf2c
#define NAND_READ_LOCATION_LAST_CW_0 0xf40
#define NAND_READ_LOCATION_LAST_CW_1 0xf44
#define NAND_READ_LOCATION_LAST_CW_2 0xf48
#define NAND_READ_LOCATION_LAST_CW_3 0xf4c
/* dummy register offsets, used by write_reg_dma */ /* dummy register offsets, used by write_reg_dma */
#define NAND_DEV_CMD1_RESTORE 0xdead #define NAND_DEV_CMD1_RESTORE 0xdead
@ -181,12 +185,17 @@
#define ECC_BCH_4BIT BIT(2)
#define ECC_BCH_8BIT BIT(3)
-#define nandc_set_read_loc(nandc, reg, offset, size, is_last) \
-nandc_set_reg(nandc, NAND_READ_LOCATION_##reg, \
-((offset) << READ_LOCATION_OFFSET) | \
-((size) << READ_LOCATION_SIZE) | \
-((is_last) << READ_LOCATION_LAST))
+#define nandc_set_read_loc_first(chip, reg, cw_offset, read_size, is_last_read_loc) \
+nandc_set_reg(chip, reg, \
+((cw_offset) << READ_LOCATION_OFFSET) | \
+((read_size) << READ_LOCATION_SIZE) | \
+((is_last_read_loc) << READ_LOCATION_LAST))
+#define nandc_set_read_loc_last(chip, reg, cw_offset, read_size, is_last_read_loc) \
+nandc_set_reg(chip, reg, \
+((cw_offset) << READ_LOCATION_OFFSET) | \
+((read_size) << READ_LOCATION_SIZE) | \
+((is_last_read_loc) << READ_LOCATION_LAST))
/*
* Returns the actual register address for all NAND_DEV_ registers
* (i.e. NAND_DEV_CMD0, NAND_DEV_CMD1, NAND_DEV_CMD2 and NAND_DEV_CMD_VLD)
@ -316,6 +325,10 @@ struct nandc_regs {
__le32 read_location1; __le32 read_location1;
__le32 read_location2; __le32 read_location2;
__le32 read_location3; __le32 read_location3;
__le32 read_location_last0;
__le32 read_location_last1;
__le32 read_location_last2;
__le32 read_location_last3;
__le32 erased_cw_detect_cfg_clr; __le32 erased_cw_detect_cfg_clr;
__le32 erased_cw_detect_cfg_set; __le32 erased_cw_detect_cfg_set;
@ -644,14 +657,23 @@ static __le32 *offset_to_nandc_reg(struct nandc_regs *regs, int offset)
return &regs->read_location2;
case NAND_READ_LOCATION_3:
return &regs->read_location3;
+case NAND_READ_LOCATION_LAST_CW_0:
+return &regs->read_location_last0;
+case NAND_READ_LOCATION_LAST_CW_1:
+return &regs->read_location_last1;
+case NAND_READ_LOCATION_LAST_CW_2:
+return &regs->read_location_last2;
+case NAND_READ_LOCATION_LAST_CW_3:
+return &regs->read_location_last3;
default:
return NULL;
}
}
-static void nandc_set_reg(struct qcom_nand_controller *nandc, int offset,
+static void nandc_set_reg(struct nand_chip *chip, int offset,
u32 val)
{
+struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
struct nandc_regs *regs = nandc->regs;
__le32 *reg;
@ -661,17 +683,43 @@ static void nandc_set_reg(struct qcom_nand_controller *nandc, int offset,
*reg = cpu_to_le32(val);
}
/* Helper to check the code word, whether it is last cw or not */
static bool qcom_nandc_is_last_cw(struct nand_ecc_ctrl *ecc, int cw)
{
return cw == (ecc->steps - 1);
}
/* helper to configure location register values */
static void nandc_set_read_loc(struct nand_chip *chip, int cw, int reg,
int cw_offset, int read_size, int is_last_read_loc)
{
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int reg_base = NAND_READ_LOCATION_0;
if (nandc->props->qpic_v2 && qcom_nandc_is_last_cw(ecc, cw))
reg_base = NAND_READ_LOCATION_LAST_CW_0;
reg_base += reg * 4;
if (nandc->props->qpic_v2 && qcom_nandc_is_last_cw(ecc, cw))
return nandc_set_read_loc_last(chip, reg_base, cw_offset,
read_size, is_last_read_loc);
else
return nandc_set_read_loc_first(chip, reg_base, cw_offset,
read_size, is_last_read_loc);
}
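On QPIC v2 controllers the last codeword must be programmed through the separate NAND_READ_LOCATION_LAST_CW_n bank, so the helper above selects the base register from the codeword index before adding reg * 4. A standalone sketch of that address arithmetic; the 0xf20 base for NAND_READ_LOCATION_0 is inferred from the neighbouring defines:

/* Sketch of the read-location register selection; only the arithmetic
 * is the point, the 0xf20 base is inferred rather than quoted. */
#include <stdbool.h>
#include <stdio.h>

#define NAND_READ_LOCATION_0            0xf20
#define NAND_READ_LOCATION_LAST_CW_0    0xf40

static unsigned int read_loc_reg(bool qpic_v2, int cw, int last_cw, int reg)
{
        unsigned int base = NAND_READ_LOCATION_0;

        /* QPIC v2 programs the last codeword through the LAST_CW bank */
        if (qpic_v2 && cw == last_cw)
                base = NAND_READ_LOCATION_LAST_CW_0;

        return base + reg * 4;  /* location registers are 4 bytes apart */
}

int main(void)
{
        printf("v1, cw 7/7, reg 1 -> 0x%x\n", read_loc_reg(false, 7, 7, 1)); /* 0xf24 */
        printf("v2, cw 7/7, reg 1 -> 0x%x\n", read_loc_reg(true, 7, 7, 1));  /* 0xf44 */
        printf("v2, cw 0/7, reg 1 -> 0x%x\n", read_loc_reg(true, 0, 7, 1));  /* 0xf24 */
        return 0;
}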
/* helper to configure address register values */
static void set_address(struct qcom_nand_host *host, u16 column, int page)
{
struct nand_chip *chip = &host->chip;
-struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
if (chip->options & NAND_BUSWIDTH_16)
column >>= 1;
-nandc_set_reg(nandc, NAND_ADDR0, page << 16 | column);
-nandc_set_reg(nandc, NAND_ADDR1, page >> 16 & 0xff);
+nandc_set_reg(chip, NAND_ADDR0, page << 16 | column);
+nandc_set_reg(chip, NAND_ADDR1, page >> 16 & 0xff);
}
/* /*
@ -680,11 +728,11 @@ static void set_address(struct qcom_nand_host *host, u16 column, int page)
*
* @num_cw: number of steps for the read/write operation
* @read: read or write operation
+* @cw : which code word
*/
-static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read)
+static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read, int cw)
{
struct nand_chip *chip = &host->chip;
-struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
u32 cmd, cfg0, cfg1, ecc_bch_cfg;
if (read) {
@ -710,17 +758,17 @@ static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read)
ecc_bch_cfg = 1 << ECC_CFG_ECC_DISABLE;
}
-nandc_set_reg(nandc, NAND_FLASH_CMD, cmd);
-nandc_set_reg(nandc, NAND_DEV0_CFG0, cfg0);
-nandc_set_reg(nandc, NAND_DEV0_CFG1, cfg1);
-nandc_set_reg(nandc, NAND_DEV0_ECC_CFG, ecc_bch_cfg);
-nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, host->ecc_buf_cfg);
-nandc_set_reg(nandc, NAND_FLASH_STATUS, host->clrflashstatus);
-nandc_set_reg(nandc, NAND_READ_STATUS, host->clrreadstatus);
-nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+nandc_set_reg(chip, NAND_FLASH_CMD, cmd);
+nandc_set_reg(chip, NAND_DEV0_CFG0, cfg0);
+nandc_set_reg(chip, NAND_DEV0_CFG1, cfg1);
+nandc_set_reg(chip, NAND_DEV0_ECC_CFG, ecc_bch_cfg);
+nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, host->ecc_buf_cfg);
+nandc_set_reg(chip, NAND_FLASH_STATUS, host->clrflashstatus);
+nandc_set_reg(chip, NAND_READ_STATUS, host->clrreadstatus);
+nandc_set_reg(chip, NAND_EXEC_CMD, 1);
if (read)
-nandc_set_read_loc(nandc, 0, 0, host->use_ecc ?
+nandc_set_read_loc(chip, cw, 0, 0, host->use_ecc ?
host->cw_data : host->cw_size, 1);
}
@ -1079,8 +1127,10 @@ static int write_data_dma(struct qcom_nand_controller *nandc, int reg_off,
* Helper to prepare DMA descriptors for configuring registers * Helper to prepare DMA descriptors for configuring registers
* before reading a NAND page. * before reading a NAND page.
*/ */
static void config_nand_page_read(struct qcom_nand_controller *nandc) static void config_nand_page_read(struct nand_chip *chip)
{ {
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
write_reg_dma(nandc, NAND_ADDR0, 2, 0); write_reg_dma(nandc, NAND_ADDR0, 2, 0);
write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0); write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0);
write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1, 0); write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1, 0);
@ -1094,11 +1144,18 @@ static void config_nand_page_read(struct qcom_nand_controller *nandc)
* before reading each codeword in NAND page. * before reading each codeword in NAND page.
*/ */
static void static void
config_nand_cw_read(struct qcom_nand_controller *nandc, bool use_ecc) config_nand_cw_read(struct nand_chip *chip, bool use_ecc, int cw)
{ {
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int reg = NAND_READ_LOCATION_0;
if (nandc->props->qpic_v2 && qcom_nandc_is_last_cw(ecc, cw))
reg = NAND_READ_LOCATION_LAST_CW_0;
if (nandc->props->is_bam) if (nandc->props->is_bam)
write_reg_dma(nandc, NAND_READ_LOCATION_0, 4, write_reg_dma(nandc, reg, 4, NAND_BAM_NEXT_SGL);
NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
@ -1117,19 +1174,21 @@ config_nand_cw_read(struct qcom_nand_controller *nandc, bool use_ecc)
* single codeword in page * single codeword in page
*/ */
static void static void
config_nand_single_cw_page_read(struct qcom_nand_controller *nandc, config_nand_single_cw_page_read(struct nand_chip *chip,
bool use_ecc) bool use_ecc, int cw)
{ {
config_nand_page_read(nandc); config_nand_page_read(chip);
config_nand_cw_read(nandc, use_ecc); config_nand_cw_read(chip, use_ecc, cw);
} }
/* /*
* Helper to prepare DMA descriptors used to configure registers needed for * Helper to prepare DMA descriptors used to configure registers needed for
* before writing a NAND page. * before writing a NAND page.
*/ */
static void config_nand_page_write(struct qcom_nand_controller *nandc) static void config_nand_page_write(struct nand_chip *chip)
{ {
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
write_reg_dma(nandc, NAND_ADDR0, 2, 0); write_reg_dma(nandc, NAND_ADDR0, 2, 0);
write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0); write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0);
write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1, write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1,
@ -1140,8 +1199,10 @@ static void config_nand_page_write(struct qcom_nand_controller *nandc)
* Helper to prepare DMA descriptors for configuring registers * Helper to prepare DMA descriptors for configuring registers
* before writing each codeword in NAND page. * before writing each codeword in NAND page.
*/ */
static void config_nand_cw_write(struct qcom_nand_controller *nandc) static void config_nand_cw_write(struct nand_chip *chip)
{ {
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
@ -1168,44 +1229,44 @@ static int nandc_param(struct qcom_nand_host *host)
* bytes to read onfi params * bytes to read onfi params
*/ */
if (nandc->props->qpic_v2) if (nandc->props->qpic_v2)
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ_ONFI_READ | nandc_set_reg(chip, NAND_FLASH_CMD, OP_PAGE_READ_ONFI_READ |
PAGE_ACC | LAST_PAGE); PAGE_ACC | LAST_PAGE);
else else
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | nandc_set_reg(chip, NAND_FLASH_CMD, OP_PAGE_READ |
PAGE_ACC | LAST_PAGE); PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, 0); nandc_set_reg(chip, NAND_ADDR0, 0);
nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(chip, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE nandc_set_reg(chip, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
| 512 << UD_SIZE_BYTES | 512 << UD_SIZE_BYTES
| 5 << NUM_ADDR_CYCLES | 5 << NUM_ADDR_CYCLES
| 0 << SPARE_SIZE_BYTES); | 0 << SPARE_SIZE_BYTES);
nandc_set_reg(nandc, NAND_DEV0_CFG1, 7 << NAND_RECOVERY_CYCLES nandc_set_reg(chip, NAND_DEV0_CFG1, 7 << NAND_RECOVERY_CYCLES
| 0 << CS_ACTIVE_BSY | 0 << CS_ACTIVE_BSY
| 17 << BAD_BLOCK_BYTE_NUM | 17 << BAD_BLOCK_BYTE_NUM
| 1 << BAD_BLOCK_IN_SPARE_AREA | 1 << BAD_BLOCK_IN_SPARE_AREA
| 2 << WR_RD_BSY_GAP | 2 << WR_RD_BSY_GAP
| 0 << WIDE_FLASH | 0 << WIDE_FLASH
| 1 << DEV0_CFG1_ECC_DISABLE); | 1 << DEV0_CFG1_ECC_DISABLE);
nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE); nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
/* configure CMD1 and VLD for ONFI param probing in QPIC v1 */ /* configure CMD1 and VLD for ONFI param probing in QPIC v1 */
if (!nandc->props->qpic_v2) { if (!nandc->props->qpic_v2) {
nandc_set_reg(nandc, NAND_DEV_CMD_VLD, nandc_set_reg(chip, NAND_DEV_CMD_VLD,
(nandc->vld & ~READ_START_VLD)); (nandc->vld & ~READ_START_VLD));
nandc_set_reg(nandc, NAND_DEV_CMD1, nandc_set_reg(chip, NAND_DEV_CMD1,
(nandc->cmd1 & ~(0xFF << READ_ADDR)) (nandc->cmd1 & ~(0xFF << READ_ADDR))
| NAND_CMD_PARAM << READ_ADDR); | NAND_CMD_PARAM << READ_ADDR);
} }
nandc_set_reg(nandc, NAND_EXEC_CMD, 1); nandc_set_reg(chip, NAND_EXEC_CMD, 1);
if (!nandc->props->qpic_v2) { if (!nandc->props->qpic_v2) {
nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1); nandc_set_reg(chip, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld); nandc_set_reg(chip, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
} }
nandc_set_read_loc(nandc, 0, 0, 512, 1); nandc_set_read_loc(chip, 0, 0, 0, 512, 1);
if (!nandc->props->qpic_v2) { if (!nandc->props->qpic_v2) {
write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0); write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
@ -1215,7 +1276,7 @@ static int nandc_param(struct qcom_nand_host *host)
nandc->buf_count = 512; nandc->buf_count = 512;
memset(nandc->data_buffer, 0xff, nandc->buf_count); memset(nandc->data_buffer, 0xff, nandc->buf_count);
config_nand_single_cw_page_read(nandc, false); config_nand_single_cw_page_read(chip, false, 0);
read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer,
nandc->buf_count, 0); nandc->buf_count, 0);
@ -1235,16 +1296,16 @@ static int erase_block(struct qcom_nand_host *host, int page_addr)
struct nand_chip *chip = &host->chip; struct nand_chip *chip = &host->chip;
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD, nandc_set_reg(chip, NAND_FLASH_CMD,
OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, page_addr); nandc_set_reg(chip, NAND_ADDR0, page_addr);
nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(chip, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0, nandc_set_reg(chip, NAND_DEV0_CFG0,
host->cfg0_raw & ~(7 << CW_PER_PAGE)); host->cfg0_raw & ~(7 << CW_PER_PAGE));
nandc_set_reg(nandc, NAND_DEV0_CFG1, host->cfg1_raw); nandc_set_reg(chip, NAND_DEV0_CFG1, host->cfg1_raw);
nandc_set_reg(nandc, NAND_EXEC_CMD, 1); nandc_set_reg(chip, NAND_EXEC_CMD, 1);
nandc_set_reg(nandc, NAND_FLASH_STATUS, host->clrflashstatus); nandc_set_reg(chip, NAND_FLASH_STATUS, host->clrflashstatus);
nandc_set_reg(nandc, NAND_READ_STATUS, host->clrreadstatus); nandc_set_reg(chip, NAND_READ_STATUS, host->clrreadstatus);
write_reg_dma(nandc, NAND_FLASH_CMD, 3, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_FLASH_CMD, 3, NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL);
@ -1267,12 +1328,12 @@ static int read_id(struct qcom_nand_host *host, int column)
if (column == -1) if (column == -1)
return 0; return 0;
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); nandc_set_reg(chip, NAND_FLASH_CMD, OP_FETCH_ID);
nandc_set_reg(nandc, NAND_ADDR0, column); nandc_set_reg(chip, NAND_ADDR0, column);
nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(chip, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, nandc_set_reg(chip, NAND_FLASH_CHIP_SELECT,
nandc->props->is_bam ? 0 : DM_EN); nandc->props->is_bam ? 0 : DM_EN);
nandc_set_reg(nandc, NAND_EXEC_CMD, 1); nandc_set_reg(chip, NAND_EXEC_CMD, 1);
write_reg_dma(nandc, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
@ -1288,8 +1349,8 @@ static int reset(struct qcom_nand_host *host)
struct nand_chip *chip = &host->chip; struct nand_chip *chip = &host->chip;
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); nandc_set_reg(chip, NAND_FLASH_CMD, OP_RESET_DEVICE);
nandc_set_reg(nandc, NAND_EXEC_CMD, 1); nandc_set_reg(chip, NAND_EXEC_CMD, 1);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
@ -1492,7 +1553,7 @@ static void qcom_nandc_command(struct nand_chip *chip, unsigned int command,
host->use_ecc = true; host->use_ecc = true;
set_address(host, 0, page_addr); set_address(host, 0, page_addr);
update_rw_regs(host, ecc->steps, true); update_rw_regs(host, ecc->steps, true, 0);
break; break;
case NAND_CMD_SEQIN: case NAND_CMD_SEQIN:
@ -1616,13 +1677,13 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
clear_bam_transaction(nandc); clear_bam_transaction(nandc);
set_address(host, host->cw_size * cw, page); set_address(host, host->cw_size * cw, page);
update_rw_regs(host, 1, true); update_rw_regs(host, 1, true, cw);
config_nand_page_read(nandc); config_nand_page_read(chip);
data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1);
oob_size1 = host->bbm_size; oob_size1 = host->bbm_size;
if (cw == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, cw)) {
data_size2 = ecc->size - data_size1 - data_size2 = ecc->size - data_size1 -
((ecc->steps - 1) * 4); ((ecc->steps - 1) * 4);
oob_size2 = (ecc->steps * 4) + host->ecc_bytes_hw + oob_size2 = (ecc->steps * 4) + host->ecc_bytes_hw +
@ -1633,19 +1694,19 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
} }
if (nandc->props->is_bam) { if (nandc->props->is_bam) {
nandc_set_read_loc(nandc, 0, read_loc, data_size1, 0); nandc_set_read_loc(chip, cw, 0, read_loc, data_size1, 0);
read_loc += data_size1; read_loc += data_size1;
nandc_set_read_loc(nandc, 1, read_loc, oob_size1, 0); nandc_set_read_loc(chip, cw, 1, read_loc, oob_size1, 0);
read_loc += oob_size1; read_loc += oob_size1;
nandc_set_read_loc(nandc, 2, read_loc, data_size2, 0); nandc_set_read_loc(chip, cw, 2, read_loc, data_size2, 0);
read_loc += data_size2; read_loc += data_size2;
nandc_set_read_loc(nandc, 3, read_loc, oob_size2, 1); nandc_set_read_loc(chip, cw, 3, read_loc, oob_size2, 1);
} }
config_nand_cw_read(nandc, false); config_nand_cw_read(chip, false, cw);
read_data_dma(nandc, reg_off, data_buf, data_size1, 0); read_data_dma(nandc, reg_off, data_buf, data_size1, 0);
reg_off += data_size1; reg_off += data_size1;
@ -1703,7 +1764,7 @@ check_for_erased_page(struct qcom_nand_host *host, u8 *data_buf,
} }
for_each_set_bit(cw, &uncorrectable_cws, ecc->steps) { for_each_set_bit(cw, &uncorrectable_cws, ecc->steps) {
if (cw == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, cw)) {
data_size = ecc->size - ((ecc->steps - 1) * 4); data_size = ecc->size - ((ecc->steps - 1) * 4);
oob_size = (ecc->steps * 4) + host->ecc_bytes_hw; oob_size = (ecc->steps * 4) + host->ecc_bytes_hw;
} else { } else {
@ -1763,7 +1824,7 @@ static int parse_read_errors(struct qcom_nand_host *host, u8 *data_buf,
u32 flash, buffer, erased_cw; u32 flash, buffer, erased_cw;
int data_len, oob_len; int data_len, oob_len;
if (i == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, i)) {
data_len = ecc->size - ((ecc->steps - 1) << 2); data_len = ecc->size - ((ecc->steps - 1) << 2);
oob_len = ecc->steps << 2; oob_len = ecc->steps << 2;
} else { } else {
@ -1856,13 +1917,13 @@ static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf,
u8 *data_buf_start = data_buf, *oob_buf_start = oob_buf; u8 *data_buf_start = data_buf, *oob_buf_start = oob_buf;
int i, ret; int i, ret;
config_nand_page_read(nandc); config_nand_page_read(chip);
/* queue cmd descs for each codeword */ /* queue cmd descs for each codeword */
for (i = 0; i < ecc->steps; i++) { for (i = 0; i < ecc->steps; i++) {
int data_size, oob_size; int data_size, oob_size;
if (i == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, i)) {
data_size = ecc->size - ((ecc->steps - 1) << 2); data_size = ecc->size - ((ecc->steps - 1) << 2);
oob_size = (ecc->steps << 2) + host->ecc_bytes_hw + oob_size = (ecc->steps << 2) + host->ecc_bytes_hw +
host->spare_bytes; host->spare_bytes;
@ -1873,18 +1934,18 @@ static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf,
if (nandc->props->is_bam) { if (nandc->props->is_bam) {
if (data_buf && oob_buf) { if (data_buf && oob_buf) {
nandc_set_read_loc(nandc, 0, 0, data_size, 0); nandc_set_read_loc(chip, i, 0, 0, data_size, 0);
nandc_set_read_loc(nandc, 1, data_size, nandc_set_read_loc(chip, i, 1, data_size,
oob_size, 1); oob_size, 1);
} else if (data_buf) { } else if (data_buf) {
nandc_set_read_loc(nandc, 0, 0, data_size, 1); nandc_set_read_loc(chip, i, 0, 0, data_size, 1);
} else { } else {
nandc_set_read_loc(nandc, 0, data_size, nandc_set_read_loc(chip, i, 0, data_size,
oob_size, 1); oob_size, 1);
} }
} }
config_nand_cw_read(nandc, true); config_nand_cw_read(chip, true, i);
if (data_buf) if (data_buf)
read_data_dma(nandc, FLASH_BUF_ACC, data_buf, read_data_dma(nandc, FLASH_BUF_ACC, data_buf,
@ -1944,9 +2005,9 @@ static int copy_last_cw(struct qcom_nand_host *host, int page)
memset(nandc->data_buffer, 0xff, size); memset(nandc->data_buffer, 0xff, size);
set_address(host, host->cw_size * (ecc->steps - 1), page); set_address(host, host->cw_size * (ecc->steps - 1), page);
update_rw_regs(host, 1, true); update_rw_regs(host, 1, true, ecc->steps - 1);
config_nand_single_cw_page_read(nandc, host->use_ecc); config_nand_single_cw_page_read(chip, host->use_ecc, ecc->steps - 1);
read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, size, 0); read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, size, 0);
@ -2011,7 +2072,7 @@ static int qcom_nandc_read_oob(struct nand_chip *chip, int page)
host->use_ecc = true; host->use_ecc = true;
set_address(host, 0, page); set_address(host, 0, page);
update_rw_regs(host, ecc->steps, true); update_rw_regs(host, ecc->steps, true, 0);
return read_page_ecc(host, NULL, chip->oob_poi, page); return read_page_ecc(host, NULL, chip->oob_poi, page);
} }
@ -2035,13 +2096,13 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const uint8_t *buf,
oob_buf = chip->oob_poi; oob_buf = chip->oob_poi;
host->use_ecc = true; host->use_ecc = true;
update_rw_regs(host, ecc->steps, false); update_rw_regs(host, ecc->steps, false, 0);
config_nand_page_write(nandc); config_nand_page_write(chip);
for (i = 0; i < ecc->steps; i++) { for (i = 0; i < ecc->steps; i++) {
int data_size, oob_size; int data_size, oob_size;
if (i == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, i)) {
data_size = ecc->size - ((ecc->steps - 1) << 2); data_size = ecc->size - ((ecc->steps - 1) << 2);
oob_size = (ecc->steps << 2) + host->ecc_bytes_hw + oob_size = (ecc->steps << 2) + host->ecc_bytes_hw +
host->spare_bytes; host->spare_bytes;
@ -2061,14 +2122,14 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const uint8_t *buf,
* itself. For the last codeword, we skip the bbm positions and * itself. For the last codeword, we skip the bbm positions and
* write to the free oob area. * write to the free oob area.
*/ */
if (i == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, i)) {
oob_buf += host->bbm_size; oob_buf += host->bbm_size;
write_data_dma(nandc, FLASH_BUF_ACC + data_size, write_data_dma(nandc, FLASH_BUF_ACC + data_size,
oob_buf, oob_size, 0); oob_buf, oob_size, 0);
} }
config_nand_cw_write(nandc); config_nand_cw_write(chip);
data_buf += data_size; data_buf += data_size;
oob_buf += oob_size; oob_buf += oob_size;
@ -2106,8 +2167,8 @@ static int qcom_nandc_write_page_raw(struct nand_chip *chip,
oob_buf = chip->oob_poi; oob_buf = chip->oob_poi;
host->use_ecc = false; host->use_ecc = false;
update_rw_regs(host, ecc->steps, false); update_rw_regs(host, ecc->steps, false, 0);
config_nand_page_write(nandc); config_nand_page_write(chip);
for (i = 0; i < ecc->steps; i++) { for (i = 0; i < ecc->steps; i++) {
int data_size1, data_size2, oob_size1, oob_size2; int data_size1, data_size2, oob_size1, oob_size2;
@ -2116,7 +2177,7 @@ static int qcom_nandc_write_page_raw(struct nand_chip *chip,
data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1);
oob_size1 = host->bbm_size; oob_size1 = host->bbm_size;
if (i == (ecc->steps - 1)) { if (qcom_nandc_is_last_cw(ecc, i)) {
data_size2 = ecc->size - data_size1 - data_size2 = ecc->size - data_size1 -
((ecc->steps - 1) << 2); ((ecc->steps - 1) << 2);
oob_size2 = (ecc->steps << 2) + host->ecc_bytes_hw + oob_size2 = (ecc->steps << 2) + host->ecc_bytes_hw +
@ -2144,7 +2205,7 @@ static int qcom_nandc_write_page_raw(struct nand_chip *chip,
write_data_dma(nandc, reg_off, oob_buf, oob_size2, 0); write_data_dma(nandc, reg_off, oob_buf, oob_size2, 0);
oob_buf += oob_size2; oob_buf += oob_size2;
config_nand_cw_write(nandc); config_nand_cw_write(chip);
} }
ret = submit_descs(nandc); ret = submit_descs(nandc);
@ -2189,12 +2250,12 @@ static int qcom_nandc_write_oob(struct nand_chip *chip, int page)
0, mtd->oobavail); 0, mtd->oobavail);
set_address(host, host->cw_size * (ecc->steps - 1), page); set_address(host, host->cw_size * (ecc->steps - 1), page);
update_rw_regs(host, 1, false); update_rw_regs(host, 1, false, 0);
config_nand_page_write(nandc); config_nand_page_write(chip);
write_data_dma(nandc, FLASH_BUF_ACC, write_data_dma(nandc, FLASH_BUF_ACC,
nandc->data_buffer, data_size + oob_size, 0); nandc->data_buffer, data_size + oob_size, 0);
config_nand_cw_write(nandc); config_nand_cw_write(chip);
ret = submit_descs(nandc); ret = submit_descs(nandc);
@ -2268,12 +2329,12 @@ static int qcom_nandc_block_markbad(struct nand_chip *chip, loff_t ofs)
/* prepare write */ /* prepare write */
host->use_ecc = false; host->use_ecc = false;
set_address(host, host->cw_size * (ecc->steps - 1), page); set_address(host, host->cw_size * (ecc->steps - 1), page);
update_rw_regs(host, 1, false); update_rw_regs(host, 1, false, ecc->steps - 1);
config_nand_page_write(nandc); config_nand_page_write(chip);
write_data_dma(nandc, FLASH_BUF_ACC, write_data_dma(nandc, FLASH_BUF_ACC,
nandc->data_buffer, host->cw_size, 0); nandc->data_buffer, host->cw_size, 0);
config_nand_cw_write(nandc); config_nand_cw_write(chip);
ret = submit_descs(nandc); ret = submit_descs(nandc);
@ -2882,6 +2943,7 @@ static int qcom_nand_host_init_and_register(struct qcom_nand_controller *nandc,
if (!nandc->bam_txn) { if (!nandc->bam_txn) {
dev_err(nandc->dev, dev_err(nandc->dev,
"failed to allocate bam transaction\n"); "failed to allocate bam transaction\n");
nand_cleanup(chip);
return -ENOMEM; return -ENOMEM;
} }
} }
@ -2898,7 +2960,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
struct device *dev = nandc->dev;
struct device_node *dn = dev->of_node, *child;
struct qcom_nand_host *host;
-int ret;
+int ret = -ENODEV;
for_each_available_child_of_node(dn, child) {
host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
@ -2916,10 +2978,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
list_add_tail(&host->node, &nandc->host_list);
}
-if (list_empty(&nandc->host_list))
-return -ENODEV;
-return 0;
+return ret;
}
/* parse custom DT properties here */ /* parse custom DT properties here */
@ -2992,7 +3051,7 @@ static int qcom_nandc_probe(struct platform_device *pdev)
nandc->base_dma = dma_map_resource(dev, res->start,
resource_size(res),
DMA_BIDIRECTIONAL, 0);
-if (!nandc->base_dma)
+if (dma_mapping_error(dev, nandc->base_dma))
return -ENXIO;
ret = qcom_nandc_alloc(nandc);


@ -724,10 +724,9 @@ static irqreturn_t r852_irq(int irq, void *data)
struct r852_device *dev = (struct r852_device *)data;
uint8_t card_status, dma_status;
-unsigned long flags;
irqreturn_t ret = IRQ_NONE;
-spin_lock_irqsave(&dev->irqlock, flags);
+spin_lock(&dev->irqlock);
/* handle card detection interrupts first */
card_status = r852_read_reg(dev, R852_CARD_IRQ_STA);
@ -813,7 +812,7 @@ static irqreturn_t r852_irq(int irq, void *data)
dbg("strange card status = %x", card_status);
out:
-spin_unlock_irqrestore(&dev->irqlock, flags);
+spin_unlock(&dev->irqlock);
return ret;
}


@ -159,7 +159,7 @@ struct rk_nfc_nand_chip {
u32 timing;
u8 nsels;
-u8 sels[0];
+u8 sels[];
/* Nothing after this field. */
};
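Switching from a zero-length array to a C99 flexible array member keeps the allocation pattern unchanged: the sels[] storage is still allocated together with the struct. A generic user-space sketch of that pattern (not the driver's actual allocation code; in-kernel users would normally size this with struct_size()):

/* Generic flexible-array-member allocation sketch (illustrative only). */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct nand_chip_sels {
        uint8_t nsels;
        uint8_t sels[];         /* flexible array member, was sels[0] */
};

int main(void)
{
        uint8_t nsels = 4;
        /* One allocation covers the struct plus its trailing array. */
        struct nand_chip_sels *c = malloc(sizeof(*c) + nsels * sizeof(c->sels[0]));

        if (!c)
                return 1;

        c->nsels = nsels;
        for (uint8_t i = 0; i < nsels; i++)
                c->sels[i] = i;

        printf("allocated %u chip selects\n", (unsigned)c->nsels);
        free(c);
        return 0;
}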


@ -531,6 +531,7 @@ static int stm32_fmc2_nfc_ham_correct(struct nand_chip *chip, u8 *dat,
switch (b % 4) { switch (b % 4) {
case 2: case 2:
bit_position += shifting; bit_position += shifting;
break;
case 1: case 1:
break; break;
default: default:
@ -546,6 +547,7 @@ static int stm32_fmc2_nfc_ham_correct(struct nand_chip *chip, u8 *dat,
switch (b % 4) { switch (b % 4) {
case 2: case 2:
byte_addr += shifting; byte_addr += shifting;
break;
case 1: case 1:
break; break;
default: default:


@ -1263,12 +1263,14 @@ static const struct spi_device_id spinand_ids[] = {
{ .name = "spi-nand" }, { .name = "spi-nand" },
{ /* sentinel */ }, { /* sentinel */ },
}; };
MODULE_DEVICE_TABLE(spi, spinand_ids);
#ifdef CONFIG_OF #ifdef CONFIG_OF
static const struct of_device_id spinand_of_ids[] = { static const struct of_device_id spinand_of_ids[] = {
{ .compatible = "spi-nand" }, { .compatible = "spi-nand" },
{ /* sentinel */ }, { /* sentinel */ },
}; };
MODULE_DEVICE_TABLE(of, spinand_of_ids);
#endif #endif
static struct spi_mem_driver spinand_drv = { static struct spi_mem_driver spinand_drv = {


@ -13,7 +13,10 @@
#define GD5FXGQ4XA_STATUS_ECC_1_7_BITFLIPS (1 << 4)
#define GD5FXGQ4XA_STATUS_ECC_8_BITFLIPS (3 << 4)
-#define GD5FXGQ4UEXXG_REG_STATUS2 0xf0
+#define GD5FXGQ5XE_STATUS_ECC_1_4_BITFLIPS (1 << 4)
+#define GD5FXGQ5XE_STATUS_ECC_4_BITFLIPS (3 << 4)
+#define GD5FXGQXXEXXG_REG_STATUS2 0xf0
#define GD5FXGQ4UXFXXG_STATUS_ECC_MASK (7 << 4)
#define GD5FXGQ4UXFXXG_STATUS_ECC_NO_BITFLIPS (0 << 4) #define GD5FXGQ4UXFXXG_STATUS_ECC_NO_BITFLIPS (0 << 4)
@ -102,7 +105,7 @@ static int gd5fxgq4xa_ecc_get_status(struct spinand_device *spinand,
return -EINVAL;
}
-static int gd5fxgq4_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
+static int gd5fxgqx_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
if (section)
@ -114,7 +117,7 @@ static int gd5fxgq4_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
return 0;
}
-static int gd5fxgq4_variant2_ooblayout_free(struct mtd_info *mtd, int section,
+static int gd5fxgqx_variant2_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
if (section)
@ -127,9 +130,10 @@ static int gd5fxgq4_variant2_ooblayout_free(struct mtd_info *mtd, int section,
return 0;
}
-static const struct mtd_ooblayout_ops gd5fxgq4_variant2_ooblayout = {
-.ecc = gd5fxgq4_variant2_ooblayout_ecc,
-.free = gd5fxgq4_variant2_ooblayout_free,
+/* Valid for Q4/Q5 and Q6 (untested) devices */
+static const struct mtd_ooblayout_ops gd5fxgqx_variant2_ooblayout = {
+.ecc = gd5fxgqx_variant2_ooblayout_ecc,
+.free = gd5fxgqx_variant2_ooblayout_free,
};
static int gd5fxgq4xc_ooblayout_256_ecc(struct mtd_info *mtd, int section, static int gd5fxgq4xc_ooblayout_256_ecc(struct mtd_info *mtd, int section,
@ -165,7 +169,7 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
u8 status)
{
u8 status2;
-struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQ4UEXXG_REG_STATUS2,
+struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
&status2);
int ret;
@ -203,6 +207,43 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
return -EINVAL; return -EINVAL;
} }
static int gd5fxgq5xexxg_ecc_get_status(struct spinand_device *spinand,
u8 status)
{
u8 status2;
struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
&status2);
int ret;
switch (status & STATUS_ECC_MASK) {
case STATUS_ECC_NO_BITFLIPS:
return 0;
case GD5FXGQ5XE_STATUS_ECC_1_4_BITFLIPS:
/*
* Read status2 register to determine a more fine grained
* bit error status
*/
ret = spi_mem_exec_op(spinand->spimem, &op);
if (ret)
return ret;
/*
* 1 ... 4 bits are flipped (and corrected)
*/
/* bits sorted this way (1...0): ECCSE1, ECCSE0 */
return ((status2 & STATUS_ECC_MASK) >> 4) + 1;
case STATUS_ECC_UNCOR_ERROR:
return -EBADMSG;
default:
break;
}
return -EINVAL;
}
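For the GD5F1GQ5UExxG the primary status only reports "1 to 4 bitflips", so the helper reads STATUS2 and maps its two ECCSE bits (bits 5:4) to an exact count, hence the "+ 1". A tiny decoding sketch of that mapping (illustrative only):

/* Decode the ECCSE field of the secondary status register as done above. */
#include <stdio.h>
#include <stdint.h>

#define STATUS_ECC_MASK (3 << 4)        /* bits 5:4 */

static unsigned int gd5f_q5_bitflips(uint8_t status2)
{
        return ((status2 & STATUS_ECC_MASK) >> 4) + 1;
}

int main(void)
{
        for (uint8_t eccse = 0; eccse < 4; eccse++)
                printf("ECCSE=%u -> %u bitflips corrected\n",
                       (unsigned)eccse, gd5f_q5_bitflips(eccse << 4));
        return 0;
}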
static int gd5fxgq4ufxxg_ecc_get_status(struct spinand_device *spinand, static int gd5fxgq4ufxxg_ecc_get_status(struct spinand_device *spinand,
u8 status) u8 status)
{ {
@ -282,7 +323,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
-SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
+SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
gd5fxgq4uexxg_ecc_get_status)),
SPINAND_INFO("GD5F1GQ4UFxxG", SPINAND_INFO("GD5F1GQ4UFxxG",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE, 0xb1, 0x48), SPINAND_ID(SPINAND_READID_METHOD_OPCODE, 0xb1, 0x48),
@ -292,8 +333,18 @@ static const struct spinand_info gigadevice_spinand_table[] = {
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
-SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
+SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
gd5fxgq4ufxxg_ecc_get_status)),
SPINAND_INFO("GD5F1GQ5UExxG",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x51),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(4, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
gd5fxgq5xexxg_ecc_get_status)),
}; };
static const struct spinand_manufacturer_ops gigadevice_spinand_manuf_ops = { static const struct spinand_manufacturer_ops gigadevice_spinand_manuf_ops = {


@ -797,18 +797,7 @@ static struct mtd_blktrans_ops nftl_tr = {
.owner = THIS_MODULE,
};
-static int __init init_nftl(void)
-{
-return register_mtd_blktrans(&nftl_tr);
-}
-static void __exit cleanup_nftl(void)
-{
-deregister_mtd_blktrans(&nftl_tr);
-}
-module_init(init_nftl);
-module_exit(cleanup_nftl);
+module_mtd_blktrans(nftl_tr);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>, Fabrice Bellard <fabrice.bellard@netgem.com> et al.");


@ -67,6 +67,25 @@ config MTD_OF_PARTS
flash memory node, as described in flash memory node, as described in
Documentation/devicetree/bindings/mtd/partition.txt. Documentation/devicetree/bindings/mtd/partition.txt.
config MTD_OF_PARTS_BCM4908
bool "BCM4908 partitioning support"
depends on MTD_OF_PARTS && (ARCH_BCM4908 || COMPILE_TEST)
default ARCH_BCM4908
help
This provides partitions parser for BCM4908 family devices
that can have multiple "firmware" partitions. It takes care of
finding currently used one and backup ones.
config MTD_OF_PARTS_LINKSYS_NS
bool "Linksys Northstar partitioning support"
depends on MTD_OF_PARTS && (ARCH_BCM_5301X || ARCH_BCM4908 || COMPILE_TEST)
default ARCH_BCM_5301X
help
This provides partitions parser for Linksys devices based on Broadcom
Northstar architecture. Linksys commonly uses fixed flash layout with
two "firmware" partitions. Currently used firmware has to be detected
using CFE environment variable.
config MTD_PARSER_IMAGETAG config MTD_PARSER_IMAGETAG
tristate "Parser for BCM963XX Image Tag format partitions" tristate "Parser for BCM963XX Image Tag format partitions"
depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST
@ -162,9 +181,8 @@ config MTD_REDBOOT_PARTS_READONLY
endif # MTD_REDBOOT_PARTS
config MTD_QCOMSMEM_PARTS
-tristate "Qualcomm SMEM NAND flash partition parser"
-depends on MTD_NAND_QCOM || COMPILE_TEST
+tristate "Qualcomm SMEM flash partition parser"
depends on QCOM_SMEM
help
This provides support for parsing partitions from Shared Memory (SMEM)
-for NAND flash on Qualcomm platforms.
+for NAND and SPI flash on Qualcomm platforms.


@ -4,6 +4,9 @@ obj-$(CONFIG_MTD_BCM47XX_PARTS) += bcm47xxpart.o
obj-$(CONFIG_MTD_BCM63XX_PARTS) += bcm63xxpart.o obj-$(CONFIG_MTD_BCM63XX_PARTS) += bcm63xxpart.o
obj-$(CONFIG_MTD_CMDLINE_PARTS) += cmdlinepart.o obj-$(CONFIG_MTD_CMDLINE_PARTS) += cmdlinepart.o
obj-$(CONFIG_MTD_OF_PARTS) += ofpart.o obj-$(CONFIG_MTD_OF_PARTS) += ofpart.o
ofpart-y += ofpart_core.o
ofpart-$(CONFIG_MTD_OF_PARTS_BCM4908) += ofpart_bcm4908.o
ofpart-$(CONFIG_MTD_OF_PARTS_LINKSYS_NS)+= ofpart_linksys_ns.o
obj-$(CONFIG_MTD_PARSER_IMAGETAG) += parser_imagetag.o obj-$(CONFIG_MTD_PARSER_IMAGETAG) += parser_imagetag.o
obj-$(CONFIG_MTD_AFS_PARTS) += afs.o obj-$(CONFIG_MTD_AFS_PARTS) += afs.o
obj-$(CONFIG_MTD_PARSER_TRX) += parser_trx.o obj-$(CONFIG_MTD_PARSER_TRX) += parser_trx.o


@ -0,0 +1,64 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2021 Rafał Miłecki <rafal@milecki.pl>
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/mtd/mtd.h>
#include <linux/slab.h>
#include <linux/mtd/partitions.h>
#include "ofpart_bcm4908.h"
#define BLPARAMS_FW_OFFSET "NAND_RFS_OFS"
static long long bcm4908_partitions_fw_offset(void)
{
struct device_node *root;
struct property *prop;
const char *s;
root = of_find_node_by_path("/");
if (!root)
return -ENOENT;
of_property_for_each_string(root, "brcm_blparms", prop, s) {
size_t len = strlen(BLPARAMS_FW_OFFSET);
unsigned long offset;
int err;
if (strncmp(s, BLPARAMS_FW_OFFSET, len) || s[len] != '=')
continue;
err = kstrtoul(s + len + 1, 0, &offset);
if (err) {
pr_err("failed to parse %s\n", s + len + 1);
return err;
}
return offset << 10;
}
return -ENOENT;
}
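The bootloader exposes its parameters as "NAME=value" strings in the brcm_blparms property; the helper looks for NAND_RFS_OFS and converts its value, given in KiB, to a byte offset (hence the << 10). A standalone sketch of that string handling with an invented parameter value:

/* Parse a "NAND_RFS_OFS=<KiB>" style string as the helper above does. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLPARAMS_FW_OFFSET "NAND_RFS_OFS"

static long long fw_offset_from_param(const char *s)
{
        size_t len = strlen(BLPARAMS_FW_OFFSET);
        unsigned long offset;
        char *end;

        if (strncmp(s, BLPARAMS_FW_OFFSET, len) || s[len] != '=')
                return -1;

        offset = strtoul(s + len + 1, &end, 0);
        if (end == s + len + 1)
                return -1;

        return (long long)offset << 10; /* value is in KiB */
}

int main(void)
{
        /* Example value only; real offsets come from the bootloader. */
        long long off = fw_offset_from_param("NAND_RFS_OFS=5120");

        printf("firmware offset: 0x%llx\n", (unsigned long long)off); /* 0x500000 */
        return 0;
}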
int bcm4908_partitions_post_parse(struct mtd_info *mtd, struct mtd_partition *parts, int nr_parts)
{
long long fw_offset;
int i;
fw_offset = bcm4908_partitions_fw_offset();
for (i = 0; i < nr_parts; i++) {
if (of_device_is_compatible(parts[i].of_node, "brcm,bcm4908-firmware")) {
if (fw_offset < 0 || parts[i].offset == fw_offset)
parts[i].name = "firmware";
else
parts[i].name = "backup";
}
}
return 0;
}


@ -0,0 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __BCM4908_PARTITIONS_H
#define __BCM4908_PARTITIONS_H
#ifdef CONFIG_MTD_OF_PARTS_BCM4908
int bcm4908_partitions_post_parse(struct mtd_info *mtd, struct mtd_partition *parts, int nr_parts);
#else
static inline int bcm4908_partitions_post_parse(struct mtd_info *mtd, struct mtd_partition *parts,
int nr_parts)
{
return -EOPNOTSUPP;
}
#endif
#endif


@ -16,6 +16,23 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/mtd/partitions.h> #include <linux/mtd/partitions.h>
#include "ofpart_bcm4908.h"
#include "ofpart_linksys_ns.h"
struct fixed_partitions_quirks {
int (*post_parse)(struct mtd_info *mtd, struct mtd_partition *parts, int nr_parts);
};
static struct fixed_partitions_quirks bcm4908_partitions_quirks = {
.post_parse = bcm4908_partitions_post_parse,
};
static struct fixed_partitions_quirks linksys_ns_partitions_quirks = {
.post_parse = linksys_ns_partitions_post_parse,
};
static const struct of_device_id parse_ofpart_match_table[];
static bool node_has_compatible(struct device_node *pp) static bool node_has_compatible(struct device_node *pp)
{ {
return of_get_property(pp, "compatible", NULL); return of_get_property(pp, "compatible", NULL);
@ -25,6 +42,8 @@ static int parse_fixed_partitions(struct mtd_info *master,
const struct mtd_partition **pparts,
struct mtd_part_parser_data *data)
{
+const struct fixed_partitions_quirks *quirks;
+const struct of_device_id *of_id;
struct mtd_partition *parts;
struct device_node *mtd_node;
struct device_node *ofpart_node;
@ -33,14 +52,13 @@ static int parse_fixed_partitions(struct mtd_info *master,
int nr_parts, i, ret = 0;
bool dedicated = true;
/* Pull of_node from the master device node */
mtd_node = mtd_get_of_node(master);
if (!mtd_node)
return 0;
ofpart_node = of_get_child_by_name(mtd_node, "partitions");
-if (!ofpart_node) {
+if (!ofpart_node && !master->parent) {
/*
* We might get here even when ofpart isn't used at all (e.g.,
* when using another parser), so don't be louder than
@ -50,11 +68,18 @@ static int parse_fixed_partitions(struct mtd_info *master,
master->name, mtd_node);
ofpart_node = mtd_node;
dedicated = false;
-} else if (!of_device_is_compatible(ofpart_node, "fixed-partitions")) {
+}
+if (!ofpart_node)
+return 0;
+of_id = of_match_node(parse_ofpart_match_table, ofpart_node);
+if (dedicated && !of_id) {
/* The 'partitions' subnode might be used by another parser */
return 0;
}
+quirks = of_id ? of_id->data : NULL;
/* First count the subnodes */
nr_parts = 0;
for_each_child_of_node(ofpart_node, pp) {
@ -126,6 +151,9 @@ static int parse_fixed_partitions(struct mtd_info *master,
if (!nr_parts) if (!nr_parts)
goto ofpart_none; goto ofpart_none;
if (quirks && quirks->post_parse)
quirks->post_parse(master, parts, nr_parts);
*pparts = parts; *pparts = parts;
return nr_parts; return nr_parts;
@ -140,7 +168,11 @@ ofpart_none:
} }
static const struct of_device_id parse_ofpart_match_table[] = { static const struct of_device_id parse_ofpart_match_table[] = {
/* Generic */
{ .compatible = "fixed-partitions" }, { .compatible = "fixed-partitions" },
/* Customized */
{ .compatible = "brcm,bcm4908-partitions", .data = &bcm4908_partitions_quirks, },
{ .compatible = "linksys,ns-partitions", .data = &linksys_ns_partitions_quirks, },
{}, {},
}; };
MODULE_DEVICE_TABLE(of, parse_ofpart_match_table); MODULE_DEVICE_TABLE(of, parse_ofpart_match_table);


@ -0,0 +1,50 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2021 Rafał Miłecki <rafal@milecki.pl>
*/
#include <linux/bcm47xx_nvram.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include "ofpart_linksys_ns.h"
#define NVRAM_BOOT_PART "bootpartition"
static int ofpart_linksys_ns_bootpartition(void)
{
char buf[4];
int bootpartition;
/* Check CFE environment variable */
if (bcm47xx_nvram_getenv(NVRAM_BOOT_PART, buf, sizeof(buf)) > 0) {
if (!kstrtoint(buf, 0, &bootpartition))
return bootpartition;
pr_warn("Failed to parse %s value \"%s\"\n", NVRAM_BOOT_PART,
buf);
} else {
pr_warn("Failed to get NVRAM \"%s\"\n", NVRAM_BOOT_PART);
}
return 0;
}
int linksys_ns_partitions_post_parse(struct mtd_info *mtd,
struct mtd_partition *parts,
int nr_parts)
{
int bootpartition = ofpart_linksys_ns_bootpartition();
int trx_idx = 0;
int i;
for (i = 0; i < nr_parts; i++) {
if (of_device_is_compatible(parts[i].of_node, "linksys,ns-firmware")) {
if (trx_idx++ == bootpartition)
parts[i].name = "firmware";
else
parts[i].name = "backup";
}
}
return 0;
}
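Linksys Northstar devices carry two interchangeable firmware partitions; the CFE bootpartition variable selects the active one, which gets named "firmware" while the other becomes "backup". A small sketch of that selection with made-up partition names and a hard-coded variable value standing in for the NVRAM lookup:

/* Pick firmware vs. backup the way the post-parse hook above does. */
#include <stdio.h>

int main(void)
{
        const char *parts[] = { "loader", "firmware-slot-a", "firmware-slot-b" };
        int is_firmware[] = { 0, 1, 1 };        /* "linksys,ns-firmware" nodes */
        int bootpartition = 1;                  /* would come from CFE NVRAM */
        int trx_idx = 0;

        for (int i = 0; i < 3; i++) {
                if (!is_firmware[i])
                        continue;
                if (trx_idx++ == bootpartition)
                        printf("%s -> firmware\n", parts[i]);
                else
                        printf("%s -> backup\n", parts[i]);
        }
        return 0;
}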


@ -0,0 +1,18 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __OFPART_LINKSYS_NS_H
#define __OFPART_LINKSYS_NS_H
#ifdef CONFIG_MTD_OF_PARTS_LINKSYS_NS
int linksys_ns_partitions_post_parse(struct mtd_info *mtd,
struct mtd_partition *parts,
int nr_parts);
#else
static inline int linksys_ns_partitions_post_parse(struct mtd_info *mtd,
struct mtd_partition *parts,
int nr_parts)
{
return -EOPNOTSUPP;
}
#endif
#endif


@ -65,6 +65,13 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
int ret, i, numparts; int ret, i, numparts;
char *name, *c; char *name, *c;
if (IS_ENABLED(CONFIG_MTD_SPI_NOR_USE_4K_SECTORS)
&& mtd->type == MTD_NORFLASH) {
pr_err("%s: SMEM partition parser is incompatible with 4K sectors\n",
mtd->name);
return -EINVAL;
}
pr_debug("Parsing partition table info from SMEM\n"); pr_debug("Parsing partition table info from SMEM\n");
ptable = qcom_smem_get(SMEM_APPS, SMEM_AARM_PARTITION_TABLE, &len); ptable = qcom_smem_get(SMEM_APPS, SMEM_AARM_PARTITION_TABLE, &len);
if (IS_ERR(ptable)) { if (IS_ERR(ptable)) {
@ -104,7 +111,7 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
* complete partition table
*/
ptable = qcom_smem_get(SMEM_APPS, SMEM_AARM_PARTITION_TABLE, &len);
-if (IS_ERR_OR_NULL(ptable)) {
+if (IS_ERR(ptable)) {
pr_err("Error reading partition table\n");
return PTR_ERR(ptable);
}


@ -794,18 +794,7 @@ static struct mtd_blktrans_ops rfd_ftl_tr = {
.owner = THIS_MODULE,
};
-static int __init init_rfd_ftl(void)
-{
-return register_mtd_blktrans(&rfd_ftl_tr);
-}
-static void __exit cleanup_rfd_ftl(void)
-{
-deregister_mtd_blktrans(&rfd_ftl_tr);
-}
-module_init(init_rfd_ftl);
-module_exit(cleanup_rfd_ftl);
+module_mtd_blktrans(rfd_ftl_tr);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sean Young <sean@mess.org>");


@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
-spi-nor-objs := core.o sfdp.o
+spi-nor-objs := core.o sfdp.o swp.o otp.o
spi-nor-objs += atmel.o
spi-nor-objs += catalyst.o
spi-nor-objs += eon.o


@ -15,7 +15,6 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
-#include <linux/platform_data/intel-spi.h>
#include "intel-spi.h"


@ -9,7 +9,7 @@
#ifndef INTEL_SPI_H
#define INTEL_SPI_H
-#include <linux/platform_data/intel-spi.h>
+#include <linux/platform_data/x86/intel-spi.h>
struct intel_spi;
struct resource;


@ -1034,7 +1034,7 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
*
* Return: 0 on success, -errno otherwise.
*/
-static int spi_nor_write_16bit_cr_and_check(struct spi_nor *nor, u8 cr)
+int spi_nor_write_16bit_cr_and_check(struct spi_nor *nor, u8 cr)
{
int ret;
u8 *sr_cr = nor->bouncebuf;
@ -1610,6 +1610,9 @@ static int spi_nor_erase_multi_sectors(struct spi_nor *nor, u64 addr, u32 len)
list_for_each_entry_safe(cmd, next, &erase_list, list) {
nor->erase_opcode = cmd->opcode;
while (cmd->count) {
+dev_vdbg(nor->dev, "erase_cmd->size = 0x%08x, erase_cmd->opcode = 0x%02x, erase_cmd->count = %u\n",
+cmd->size, cmd->opcode, cmd->count);
ret = spi_nor_write_enable(nor);
if (ret)
goto destroy_erase_cmd_list;
@ -1618,12 +1621,12 @@ static int spi_nor_erase_multi_sectors(struct spi_nor *nor, u64 addr, u32 len)
if (ret)
goto destroy_erase_cmd_list;
-addr += cmd->size;
-cmd->count--;
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto destroy_erase_cmd_list;
+addr += cmd->size;
+cmd->count--;
}
list_del(&cmd->list);
kfree(cmd);
@ -1704,12 +1707,12 @@ static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr)
if (ret)
goto erase_err;
-addr += mtd->erasesize;
-len -= mtd->erasesize;
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto erase_err;
+addr += mtd->erasesize;
+len -= mtd->erasesize;
}
/* erase multiple sectors */
@ -1727,376 +1730,6 @@ erase_err:
return ret;
}
static u8 spi_nor_get_sr_bp_mask(struct spi_nor *nor)
{
u8 mask = SR_BP2 | SR_BP1 | SR_BP0;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6)
return mask | SR_BP3_BIT6;
if (nor->flags & SNOR_F_HAS_4BIT_BP)
return mask | SR_BP3;
return mask;
}
static u8 spi_nor_get_sr_tb_mask(struct spi_nor *nor)
{
if (nor->flags & SNOR_F_HAS_SR_TB_BIT6)
return SR_TB_BIT6;
else
return SR_TB_BIT5;
}
static u64 spi_nor_get_min_prot_length_sr(struct spi_nor *nor)
{
unsigned int bp_slots, bp_slots_needed;
u8 mask = spi_nor_get_sr_bp_mask(nor);
/* Reserved one for "protect none" and one for "protect all". */
bp_slots = (1 << hweight8(mask)) - 2;
bp_slots_needed = ilog2(nor->info->n_sectors);
if (bp_slots_needed > bp_slots)
return nor->info->sector_size <<
(bp_slots_needed - bp_slots);
else
return nor->info->sector_size;
}
static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs,
uint64_t *len)
{
struct mtd_info *mtd = &nor->mtd;
u64 min_prot_len;
u8 mask = spi_nor_get_sr_bp_mask(nor);
u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
u8 bp, val = sr & mask;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3_BIT6)
val = (val & ~SR_BP3_BIT6) | SR_BP3;
bp = val >> SR_BP_SHIFT;
if (!bp) {
/* No protection */
*ofs = 0;
*len = 0;
return;
}
min_prot_len = spi_nor_get_min_prot_length_sr(nor);
*len = min_prot_len << (bp - 1);
if (*len > mtd->size)
*len = mtd->size;
if (nor->flags & SNOR_F_HAS_SR_TB && sr & tb_mask)
*ofs = 0;
else
*ofs = mtd->size - *len;
}
/*
* Return 1 if the entire region is locked (if @locked is true) or unlocked (if
* @locked is false); 0 otherwise
*/
static int spi_nor_check_lock_status_sr(struct spi_nor *nor, loff_t ofs,
uint64_t len, u8 sr, bool locked)
{
loff_t lock_offs;
uint64_t lock_len;
if (!len)
return 1;
spi_nor_get_locked_range_sr(nor, sr, &lock_offs, &lock_len);
if (locked)
/* Requested range is a sub-range of locked range */
return (ofs + len <= lock_offs + lock_len) && (ofs >= lock_offs);
else
/* Requested range does not overlap with locked range */
return (ofs >= lock_offs + lock_len) || (ofs + len <= lock_offs);
}
static int spi_nor_is_locked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len,
u8 sr)
{
return spi_nor_check_lock_status_sr(nor, ofs, len, sr, true);
}
static int spi_nor_is_unlocked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len,
u8 sr)
{
return spi_nor_check_lock_status_sr(nor, ofs, len, sr, false);
}
/*
* Lock a region of the flash. Compatible with ST Micro and similar flash.
* Supports the block protection bits BP{0,1,2}/BP{0,1,2,3} in the status
* register
* (SR). Does not support these features found in newer SR bitfields:
* - SEC: sector/block protect - only handle SEC=0 (block protect)
* - CMP: complement protect - only support CMP=0 (range is not complemented)
*
* Support for the following is provided conditionally for some flash:
* - TB: top/bottom protect
*
* Sample table portion for 8MB flash (Winbond w25q64fw):
*
* SEC | TB | BP2 | BP1 | BP0 | Prot Length | Protected Portion
* --------------------------------------------------------------------------
* X | X | 0 | 0 | 0 | NONE | NONE
* 0 | 0 | 0 | 0 | 1 | 128 KB | Upper 1/64
* 0 | 0 | 0 | 1 | 0 | 256 KB | Upper 1/32
* 0 | 0 | 0 | 1 | 1 | 512 KB | Upper 1/16
* 0 | 0 | 1 | 0 | 0 | 1 MB | Upper 1/8
* 0 | 0 | 1 | 0 | 1 | 2 MB | Upper 1/4
* 0 | 0 | 1 | 1 | 0 | 4 MB | Upper 1/2
* X | X | 1 | 1 | 1 | 8 MB | ALL
* ------|-------|-------|-------|-------|---------------|-------------------
* 0 | 1 | 0 | 0 | 1 | 128 KB | Lower 1/64
* 0 | 1 | 0 | 1 | 0 | 256 KB | Lower 1/32
* 0 | 1 | 0 | 1 | 1 | 512 KB | Lower 1/16
* 0 | 1 | 1 | 0 | 0 | 1 MB | Lower 1/8
* 0 | 1 | 1 | 0 | 1 | 2 MB | Lower 1/4
* 0 | 1 | 1 | 1 | 0 | 4 MB | Lower 1/2
*
* Returns negative on errors, 0 on success.
*/
static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
struct mtd_info *mtd = &nor->mtd;
u64 min_prot_len;
int ret, status_old, status_new;
u8 mask = spi_nor_get_sr_bp_mask(nor);
u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
u8 pow, val;
loff_t lock_len;
bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
bool use_top;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
status_old = nor->bouncebuf[0];
/* If nothing in our range is unlocked, we don't need to do anything */
if (spi_nor_is_locked_sr(nor, ofs, len, status_old))
return 0;
/* If anything below us is unlocked, we can't use 'bottom' protection */
if (!spi_nor_is_locked_sr(nor, 0, ofs, status_old))
can_be_bottom = false;
/* If anything above us is unlocked, we can't use 'top' protection */
if (!spi_nor_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len),
status_old))
can_be_top = false;
if (!can_be_bottom && !can_be_top)
return -EINVAL;
/* Prefer top, if both are valid */
use_top = can_be_top;
/* lock_len: length of region that should end up locked */
if (use_top)
lock_len = mtd->size - ofs;
else
lock_len = ofs + len;
if (lock_len == mtd->size) {
val = mask;
} else {
min_prot_len = spi_nor_get_min_prot_length_sr(nor);
pow = ilog2(lock_len) - ilog2(min_prot_len) + 1;
val = pow << SR_BP_SHIFT;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3)
val = (val & ~SR_BP3) | SR_BP3_BIT6;
if (val & ~mask)
return -EINVAL;
/* Don't "lock" with no region! */
if (!(val & mask))
return -EINVAL;
}
status_new = (status_old & ~mask & ~tb_mask) | val;
/* Disallow further writes if WP pin is asserted */
status_new |= SR_SRWD;
if (!use_top)
status_new |= tb_mask;
/* Don't bother if they're the same */
if (status_new == status_old)
return 0;
/* Only modify protection if it will not unlock other areas */
if ((status_new & mask) < (status_old & mask))
return -EINVAL;
return spi_nor_write_sr_and_check(nor, status_new);
}
/*
* Unlock a region of the flash. See spi_nor_sr_lock() for more info
*
* Returns negative on errors, 0 on success.
*/
static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
struct mtd_info *mtd = &nor->mtd;
u64 min_prot_len;
int ret, status_old, status_new;
u8 mask = spi_nor_get_sr_bp_mask(nor);
u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
u8 pow, val;
loff_t lock_len;
bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
bool use_top;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
status_old = nor->bouncebuf[0];
/* If nothing in our range is locked, we don't need to do anything */
if (spi_nor_is_unlocked_sr(nor, ofs, len, status_old))
return 0;
/* If anything below us is locked, we can't use 'top' protection */
if (!spi_nor_is_unlocked_sr(nor, 0, ofs, status_old))
can_be_top = false;
/* If anything above us is locked, we can't use 'bottom' protection */
if (!spi_nor_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len),
status_old))
can_be_bottom = false;
if (!can_be_bottom && !can_be_top)
return -EINVAL;
/* Prefer top, if both are valid */
use_top = can_be_top;
/* lock_len: length of region that should remain locked */
if (use_top)
lock_len = mtd->size - (ofs + len);
else
lock_len = ofs;
if (lock_len == 0) {
val = 0; /* fully unlocked */
} else {
min_prot_len = spi_nor_get_min_prot_length_sr(nor);
pow = ilog2(lock_len) - ilog2(min_prot_len) + 1;
val = pow << SR_BP_SHIFT;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3)
val = (val & ~SR_BP3) | SR_BP3_BIT6;
/* Some power-of-two sizes are not supported */
if (val & ~mask)
return -EINVAL;
}
status_new = (status_old & ~mask & ~tb_mask) | val;
/* Don't protect status register if we're fully unlocked */
if (lock_len == 0)
status_new &= ~SR_SRWD;
if (!use_top)
status_new |= tb_mask;
/* Don't bother if they're the same */
if (status_new == status_old)
return 0;
/* Only modify protection if it will not lock other areas */
if ((status_new & mask) > (status_old & mask))
return -EINVAL;
return spi_nor_write_sr_and_check(nor, status_new);
}
/*
* Check if a region of the flash is (completely) locked. See spi_nor_sr_lock()
* for more info.
*
* Returns 1 if entire region is locked, 0 if any portion is unlocked, and
* negative on errors.
*/
static int spi_nor_sr_is_locked(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
int ret;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
return spi_nor_is_locked_sr(nor, ofs, len, nor->bouncebuf[0]);
}
static const struct spi_nor_locking_ops spi_nor_sr_locking_ops = {
.lock = spi_nor_sr_lock,
.unlock = spi_nor_sr_unlock,
.is_locked = spi_nor_sr_is_locked,
};
static int spi_nor_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
int ret;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
ret = nor->params->locking_ops->lock(nor, ofs, len);
spi_nor_unlock_and_unprep(nor);
return ret;
}
static int spi_nor_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
int ret;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
ret = nor->params->locking_ops->unlock(nor, ofs, len);
spi_nor_unlock_and_unprep(nor);
return ret;
}
static int spi_nor_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
int ret;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
ret = nor->params->locking_ops->is_locked(nor, ofs, len);
spi_nor_unlock_and_unprep(nor);
return ret;
}
/**
* spi_nor_sr1_bit6_quad_enable() - Set the Quad Enable BIT(6) in the Status
* Register 1.
@ -2336,11 +1969,8 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
* If page_size is a power of two, the offset can be quickly
* calculated with an AND operation. On the other cases we
* need to do a modulus operation (more expensive).
-* Power of two numbers have only one bit set and we can use
-* the instruction hweight32 to detect if we need to do a
-* modulus (do_div()) or not.
*/
-if (hweight32(nor->page_size) == 1) {
+if (is_power_of_2(nor->page_size)) {
page_offset = addr & (nor->page_size - 1);
} else {
uint64_t aux = addr;
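
A quick standalone illustration of the masking trick described in the comment above (not part of the patch): for a power-of-two page size, the in-page offset is just the low bits of the address, so no division is needed.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t page_size = 256;	/* power of two */
	uint64_t addr = 0x12345;

	/* addr % page_size without a division: mask the low bits */
	uint32_t page_offset = addr & (page_size - 1);

	printf("page_offset = 0x%x\n", page_offset);	/* prints 0x45 */
	return 0;
}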
@ -2626,22 +2256,20 @@ void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,
int spi_nor_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_bfpt *bfpt)
{
int ret;
if (nor->manufacturer && nor->manufacturer->fixups &&
nor->manufacturer->fixups->post_bfpt) {
ret = nor->manufacturer->fixups->post_bfpt(nor, bfpt_header,
-bfpt, params);
+bfpt);
if (ret)
return ret;
}
if (nor->info->fixups && nor->info->fixups->post_bfpt)
-return nor->info->fixups->post_bfpt(nor, bfpt_header, bfpt,
-params);
+return nor->info->fixups->post_bfpt(nor, bfpt_header, bfpt);
return 0;
}
@ -2896,7 +2524,7 @@ static void spi_nor_sfdp_init_params(struct spi_nor *nor)
memcpy(&sfdp_params, nor->params, sizeof(sfdp_params));
-if (spi_nor_parse_sfdp(nor, nor->params)) {
+if (spi_nor_parse_sfdp(nor)) {
memcpy(nor->params, &sfdp_params, sizeof(*nor->params));
nor->addr_width = 0;
nor->flags &= ~SNOR_F_4B_OPCODES;
@ -2916,10 +2544,12 @@ static void spi_nor_info_init_params(struct spi_nor *nor)
struct device_node *np = spi_nor_get_flash_node(nor);
u8 i, erase_mask;
-/* Initialize legacy flash parameters and settings. */
+/* Initialize default flash parameters and settings. */
params->quad_enable = spi_nor_sr2_bit1_quad_enable;
params->set_4byte_addr_mode = spansion_set_4byte_addr_mode;
params->setup = spi_nor_default_setup;
+params->otp.org = &info->otp_org;
/* Default to 16-bit Write Status (01h) Command */
nor->flags |= SNOR_F_HAS_16BIT_SR;
@ -3048,7 +2678,7 @@ static void spi_nor_late_init_params(struct spi_nor *nor)
* the default ones.
*/
if (nor->flags & SNOR_F_HAS_LOCK && !nor->params->locking_ops)
-nor->params->locking_ops = &spi_nor_sr_locking_ops;
+spi_nor_init_default_locking_ops(nor);
}
/**
@ -3160,32 +2790,6 @@ static int spi_nor_quad_enable(struct spi_nor *nor)
return nor->params->quad_enable(nor);
}
/**
* spi_nor_try_unlock_all() - Tries to unlock the entire flash memory array.
* @nor: pointer to a 'struct spi_nor'.
*
* Some SPI NOR flashes are write protected by default after a power-on reset
* cycle, in order to avoid inadvertent writes during power-up. Backward
* compatibility imposes to unlock the entire flash memory array at power-up
* by default.
*
* Unprotecting the entire flash array will fail for boards which are hardware
* write-protected. Thus any errors are ignored.
*/
static void spi_nor_try_unlock_all(struct spi_nor *nor)
{
int ret;
if (!(nor->flags & SNOR_F_HAS_LOCK))
return;
dev_dbg(nor->dev, "Unprotecting entire flash array\n");
ret = spi_nor_unlock(&nor->mtd, 0, nor->params->size);
if (ret)
dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n");
}
static int spi_nor_init(struct spi_nor *nor)
{
int err;
@ -3301,6 +2905,37 @@ static void spi_nor_resume(struct mtd_info *mtd)
dev_err(dev, "resume() failed\n");
}
static int spi_nor_get_device(struct mtd_info *mtd)
{
struct mtd_info *master = mtd_get_master(mtd);
struct spi_nor *nor = mtd_to_spi_nor(master);
struct device *dev;
if (nor->spimem)
dev = nor->spimem->spi->controller->dev.parent;
else
dev = nor->dev;
if (!try_module_get(dev->driver->owner))
return -ENODEV;
return 0;
}
static void spi_nor_put_device(struct mtd_info *mtd)
{
struct mtd_info *master = mtd_get_master(mtd);
struct spi_nor *nor = mtd_to_spi_nor(master);
struct device *dev;
if (nor->spimem)
dev = nor->spimem->spi->controller->dev.parent;
else
dev = nor->dev;
module_put(dev->driver->owner);
}
void spi_nor_restore(struct spi_nor *nor)
{
/* restore the addressing mode */
@ -3495,12 +3130,8 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
mtd->_read = spi_nor_read;
mtd->_suspend = spi_nor_suspend;
mtd->_resume = spi_nor_resume;
-if (nor->params->locking_ops) {
-mtd->_lock = spi_nor_lock;
-mtd->_unlock = spi_nor_unlock;
-mtd->_is_locked = spi_nor_is_locked;
-}
+mtd->_get_device = spi_nor_get_device;
+mtd->_put_device = spi_nor_put_device;
if (info->flags & USE_FSR)
nor->flags |= SNOR_F_USE_FSR;
@ -3553,11 +3184,16 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
if (ret)
return ret;
+spi_nor_register_locking_ops(nor);
/* Send all the required SPI flash commands to initialize device */
ret = spi_nor_init(nor);
if (ret)
return ret;
+/* Configure OTP parameters and ops */
+spi_nor_otp_init(nor);
dev_info(dev, "%s (%lld Kbytes)\n", info->name,
(long long)mtd->size >> 10);


@ -187,6 +187,46 @@ struct spi_nor_locking_ops {
int (*is_locked)(struct spi_nor *nor, loff_t ofs, uint64_t len);
};
/**
* struct spi_nor_otp_organization - Structure to describe the SPI NOR OTP regions
* @len: size of one OTP region in bytes.
* @base: start address of the OTP area.
* @offset: offset between consecutive OTP regions if there are more
* than one.
* @n_regions: number of individual OTP regions.
*/
struct spi_nor_otp_organization {
size_t len;
loff_t base;
loff_t offset;
unsigned int n_regions;
};
/**
* struct spi_nor_otp_ops - SPI NOR OTP methods
* @read: read from the SPI NOR OTP area.
* @write: write to the SPI NOR OTP area.
* @lock: lock an OTP region.
* @is_locked: check if an OTP region of the SPI NOR is locked.
*/
struct spi_nor_otp_ops {
int (*read)(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf);
int (*write)(struct spi_nor *nor, loff_t addr, size_t len,
const u8 *buf);
int (*lock)(struct spi_nor *nor, unsigned int region);
int (*is_locked)(struct spi_nor *nor, unsigned int region);
};
/**
* struct spi_nor_otp - SPI NOR OTP grouping structure
* @org: OTP region organization
* @ops: OTP access ops
*/
struct spi_nor_otp {
const struct spi_nor_otp_organization *org;
const struct spi_nor_otp_ops *ops;
};
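
The split between spi_nor_otp_ops and spi_nor_otp_organization lets a manufacturer driver plug in whichever access method its parts use. A hedged sketch of such a wiring (the winbond_otp_ops name here is illustrative; it relies on the SECR/SR2 helpers declared further down in this header):

/* Illustrative only: a vendor driver could back the generic OTP ops with
 * the SECR read/write and SR2 lock helpers exported by otp.c.
 */
static const struct spi_nor_otp_ops winbond_otp_ops = {
	.read = spi_nor_otp_read_secr,
	.write = spi_nor_otp_write_secr,
	.lock = spi_nor_otp_lock_sr2,
	.is_locked = spi_nor_otp_is_locked_sr2,
};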
/**
* struct spi_nor_flash_parameter - SPI NOR flash parameters and settings.
* Includes legacy flash parameters and settings that can be overwritten
@ -208,6 +248,7 @@ struct spi_nor_locking_ops {
* higher index in the array, the higher priority.
* @erase_map: the erase map parsed from the SFDP Sector Map Parameter
* Table.
+* @otp_info: describes the OTP regions.
* @octal_dtr_enable: enables SPI NOR octal DTR mode.
* @quad_enable: enables SPI NOR quad mode.
* @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode.
@ -219,6 +260,7 @@ struct spi_nor_locking_ops {
* e.g. different opcodes, specific address calculation,
* page size, etc.
* @locking_ops: SPI NOR locking methods.
+* @otp: SPI NOR OTP methods.
*/
struct spi_nor_flash_parameter {
u64 size;
@ -232,6 +274,7 @@ struct spi_nor_flash_parameter {
struct spi_nor_pp_command page_programs[SNOR_CMD_PP_MAX];
struct spi_nor_erase_map erase_map;
+struct spi_nor_otp otp;
int (*octal_dtr_enable)(struct spi_nor *nor, bool enable);
int (*quad_enable)(struct spi_nor *nor);
@ -261,8 +304,7 @@ struct spi_nor_fixups {
void (*default_init)(struct spi_nor *nor);
int (*post_bfpt)(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params);
+const struct sfdp_bfpt *bfpt);
void (*post_sfdp)(struct spi_nor *nor);
};
@ -339,6 +381,8 @@ struct flash_info {
* power-up in a write-protected state.
*/
+const struct spi_nor_otp_organization otp_org;
/* Part specific fixup hooks. */
const struct spi_nor_fixups *fixups;
};
@ -393,6 +437,14 @@ struct flash_info {
.addr_width = 3, \
.flags = SPI_NOR_NO_FR | SPI_NOR_XSR_RDY,
#define OTP_INFO(_len, _n_regions, _base, _offset) \
.otp_org = { \
.len = (_len), \
.base = (_base), \
.offset = (_offset), \
.n_regions = (_n_regions), \
},
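
A flash_info entry can then describe its OTP area by appending OTP_INFO() to its INFO() initializer; an illustrative (not taken from this series) entry for a part with three 256-byte security registers at 0x1000, spaced 0x1000 apart:

/* Illustrative flash_info fragment (name and JEDEC ID are placeholders). */
{ "w25qxx-example", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K)
	OTP_INFO(256, 3, 0x1000, 0x1000) },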
/**
* struct spi_nor_manufacturer - SPI NOR manufacturer object
* @name: manufacturer name
@ -444,6 +496,7 @@ int spi_nor_read_sr(struct spi_nor *nor, u8 *sr);
int spi_nor_read_cr(struct spi_nor *nor, u8 *cr);
int spi_nor_write_sr(struct spi_nor *nor, const u8 *sr, size_t len);
int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1);
+int spi_nor_write_16bit_cr_and_check(struct spi_nor *nor, u8 cr);
int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr);
ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len,
@ -451,6 +504,12 @@ ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len,
ssize_t spi_nor_write_data(struct spi_nor *nor, loff_t to, size_t len,
const u8 *buf);
+int spi_nor_otp_read_secr(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf);
+int spi_nor_otp_write_secr(struct spi_nor *nor, loff_t addr, size_t len,
+const u8 *buf);
+int spi_nor_otp_lock_sr2(struct spi_nor *nor, unsigned int region);
+int spi_nor_otp_is_locked_sr2(struct spi_nor *nor, unsigned int region);
int spi_nor_hwcaps_read2cmd(u32 hwcaps);
u8 spi_nor_convert_3to4_read(u8 opcode);
void spi_nor_set_read_settings(struct spi_nor_read_command *read,
@ -470,8 +529,12 @@ void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,
int spi_nor_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params);
+const struct sfdp_bfpt *bfpt);
+void spi_nor_init_default_locking_ops(struct spi_nor *nor);
+void spi_nor_try_unlock_all(struct spi_nor *nor);
+void spi_nor_register_locking_ops(struct spi_nor *nor);
+void spi_nor_otp_init(struct spi_nor *nor);
static struct spi_nor __maybe_unused *mtd_to_spi_nor(struct mtd_info *mtd)
{


@ -11,8 +11,7 @@
static int
is25lp256_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_bfpt *bfpt)
{
/*
* IS25LP256 supports 4B opcodes, but the BFPT advertises a


@ -11,8 +11,7 @@
static int
mx25l25635_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_bfpt *bfpt)
{
/*
* MX25L25635F supports 4B opcodes but MX25L25635E does not.
@ -73,9 +72,6 @@ static const struct flash_info macronix_parts[] = {
SECT_4K | SPI_NOR_DUAL_READ |
SPI_NOR_QUAD_READ) },
{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
-{ "mx25l51245g", INFO(0xc2201a, 0, 64 * 1024, 1024,
-SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
-SPI_NOR_4B_OPCODES) },
{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },

drivers/mtd/spi-nor/otp.c (new file, 377 lines)

@ -0,0 +1,377 @@
// SPDX-License-Identifier: GPL-2.0
/*
* OTP support for SPI NOR flashes
*
* Copyright (C) 2021 Michael Walle <michael@walle.cc>
*/
#include <linux/log2.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/spi-nor.h>
#include "core.h"
#define spi_nor_otp_region_len(nor) ((nor)->params->otp.org->len)
#define spi_nor_otp_n_regions(nor) ((nor)->params->otp.org->n_regions)
/**
* spi_nor_otp_read_secr() - read OTP data
* @nor: pointer to 'struct spi_nor'
* @from: offset to read from
* @len: number of bytes to read
* @buf: pointer to dst buffer
*
* Read OTP data from one region by using the SPINOR_OP_RSECR commands. This
* method is used on GigaDevice and Winbond flashes.
*
* Return: number of bytes read successfully, -errno otherwise
*/
int spi_nor_otp_read_secr(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf)
{
u8 addr_width, read_opcode, read_dummy;
struct spi_mem_dirmap_desc *rdesc;
enum spi_nor_protocol read_proto;
int ret;
read_opcode = nor->read_opcode;
addr_width = nor->addr_width;
read_dummy = nor->read_dummy;
read_proto = nor->read_proto;
rdesc = nor->dirmap.rdesc;
nor->read_opcode = SPINOR_OP_RSECR;
nor->addr_width = 3;
nor->read_dummy = 8;
nor->read_proto = SNOR_PROTO_1_1_1;
nor->dirmap.rdesc = NULL;
ret = spi_nor_read_data(nor, addr, len, buf);
nor->read_opcode = read_opcode;
nor->addr_width = addr_width;
nor->read_dummy = read_dummy;
nor->read_proto = read_proto;
nor->dirmap.rdesc = rdesc;
return ret;
}
/**
* spi_nor_otp_write_secr() - write OTP data
* @nor: pointer to 'struct spi_nor'
* @to: offset to write to
* @len: number of bytes to write
* @buf: pointer to src buffer
*
* Write OTP data to one region by using the SPINOR_OP_PSECR commands. This
* method is used on GigaDevice and Winbond flashes.
*
* Please note, the write must not span multiple OTP regions.
*
* Return: number of bytes written successfully, -errno otherwise
*/
int spi_nor_otp_write_secr(struct spi_nor *nor, loff_t addr, size_t len,
const u8 *buf)
{
enum spi_nor_protocol write_proto;
struct spi_mem_dirmap_desc *wdesc;
u8 addr_width, program_opcode;
int ret, written;
program_opcode = nor->program_opcode;
addr_width = nor->addr_width;
write_proto = nor->write_proto;
wdesc = nor->dirmap.wdesc;
nor->program_opcode = SPINOR_OP_PSECR;
nor->addr_width = 3;
nor->write_proto = SNOR_PROTO_1_1_1;
nor->dirmap.wdesc = NULL;
/*
* We only support a write to one single page. For now all winbond
* flashes only have one page per OTP region.
*/
ret = spi_nor_write_enable(nor);
if (ret)
goto out;
written = spi_nor_write_data(nor, addr, len, buf);
if (written < 0)
goto out;
ret = spi_nor_wait_till_ready(nor);
out:
nor->program_opcode = program_opcode;
nor->addr_width = addr_width;
nor->write_proto = write_proto;
nor->dirmap.wdesc = wdesc;
return ret ?: written;
}
static int spi_nor_otp_lock_bit_cr(unsigned int region)
{
static const int lock_bits[] = { SR2_LB1, SR2_LB2, SR2_LB3 };
if (region >= ARRAY_SIZE(lock_bits))
return -EINVAL;
return lock_bits[region];
}
/**
* spi_nor_otp_lock_sr2() - lock the OTP region
* @nor: pointer to 'struct spi_nor'
* @region: OTP region
*
* Lock the OTP region by writing the status register-2. This method is used on
* GigaDevice and Winbond flashes.
*
* Return: 0 on success, -errno otherwise.
*/
int spi_nor_otp_lock_sr2(struct spi_nor *nor, unsigned int region)
{
u8 *cr = nor->bouncebuf;
int ret, lock_bit;
lock_bit = spi_nor_otp_lock_bit_cr(region);
if (lock_bit < 0)
return lock_bit;
ret = spi_nor_read_cr(nor, cr);
if (ret)
return ret;
/* no need to write the register if region is already locked */
if (cr[0] & lock_bit)
return 0;
cr[0] |= lock_bit;
return spi_nor_write_16bit_cr_and_check(nor, cr[0]);
}
/**
* spi_nor_otp_is_locked_sr2() - get the OTP region lock status
* @nor: pointer to 'struct spi_nor'
* @region: OTP region
*
* Retrieve the OTP region lock bit by reading the status register-2. This
* method is used on GigaDevice and Winbond flashes.
*
* Return: 0 on success, -errno otherwise.
*/
int spi_nor_otp_is_locked_sr2(struct spi_nor *nor, unsigned int region)
{
u8 *cr = nor->bouncebuf;
int ret, lock_bit;
lock_bit = spi_nor_otp_lock_bit_cr(region);
if (lock_bit < 0)
return lock_bit;
ret = spi_nor_read_cr(nor, cr);
if (ret)
return ret;
return cr[0] & lock_bit;
}
static loff_t spi_nor_otp_region_start(const struct spi_nor *nor, unsigned int region)
{
const struct spi_nor_otp_organization *org = nor->params->otp.org;
return org->base + region * org->offset;
}
static size_t spi_nor_otp_size(struct spi_nor *nor)
{
return spi_nor_otp_n_regions(nor) * spi_nor_otp_region_len(nor);
}
/* Translate the file offsets from and to OTP regions. */
static loff_t spi_nor_otp_region_to_offset(struct spi_nor *nor, unsigned int region)
{
return region * spi_nor_otp_region_len(nor);
}
static unsigned int spi_nor_otp_offset_to_region(struct spi_nor *nor, loff_t ofs)
{
return div64_u64(ofs, spi_nor_otp_region_len(nor));
}
static int spi_nor_mtd_otp_info(struct mtd_info *mtd, size_t len,
size_t *retlen, struct otp_info *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
unsigned int n_regions = spi_nor_otp_n_regions(nor);
unsigned int i;
int ret, locked;
if (len < n_regions * sizeof(*buf))
return -ENOSPC;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
for (i = 0; i < n_regions; i++) {
buf->start = spi_nor_otp_region_to_offset(nor, i);
buf->length = spi_nor_otp_region_len(nor);
locked = ops->is_locked(nor, i);
if (locked < 0) {
ret = locked;
goto out;
}
buf->locked = !!locked;
buf++;
}
*retlen = n_regions * sizeof(*buf);
out:
spi_nor_unlock_and_unprep(nor);
return ret;
}
static int spi_nor_mtd_otp_read_write(struct mtd_info *mtd, loff_t ofs,
size_t total_len, size_t *retlen,
const u8 *buf, bool is_write)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
const size_t rlen = spi_nor_otp_region_len(nor);
loff_t rstart, rofs;
unsigned int region;
size_t len;
int ret;
if (ofs < 0 || ofs >= spi_nor_otp_size(nor))
return 0;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
/* don't access beyond the end */
total_len = min_t(size_t, total_len, spi_nor_otp_size(nor) - ofs);
*retlen = 0;
while (total_len) {
/*
* The OTP regions are mapped into a contiguous area starting
* at 0 as expected by the MTD layer. This will map the MTD
* file offsets to the address of an OTP region as used in the
* actual SPI commands.
*/
region = spi_nor_otp_offset_to_region(nor, ofs);
rstart = spi_nor_otp_region_start(nor, region);
/*
* The size of a OTP region is expected to be a power of two,
* thus we can just mask the lower bits and get the offset into
* a region.
*/
rofs = ofs & (rlen - 1);
/* don't access beyond one OTP region */
len = min_t(size_t, total_len, rlen - rofs);
if (is_write)
ret = ops->write(nor, rstart + rofs, len, buf);
else
ret = ops->read(nor, rstart + rofs, len, (u8 *)buf);
if (ret == 0)
ret = -EIO;
if (ret < 0)
goto out;
*retlen += ret;
ofs += ret;
buf += ret;
total_len -= ret;
}
ret = 0;
out:
spi_nor_unlock_and_unprep(nor);
return ret;
}
static int spi_nor_mtd_otp_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u8 *buf)
{
return spi_nor_mtd_otp_read_write(mtd, from, len, retlen, buf, false);
}
static int spi_nor_mtd_otp_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u8 *buf)
{
return spi_nor_mtd_otp_read_write(mtd, to, len, retlen, buf, true);
}
static int spi_nor_mtd_otp_lock(struct mtd_info *mtd, loff_t from, size_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
const size_t rlen = spi_nor_otp_region_len(nor);
unsigned int region;
int ret;
if (from < 0 || (from + len) > spi_nor_otp_size(nor))
return -EINVAL;
/* the user has to explicitly ask for whole regions */
if (!IS_ALIGNED(len, rlen) || !IS_ALIGNED(from, rlen))
return -EINVAL;
ret = spi_nor_lock_and_prep(nor);
if (ret)
return ret;
while (len) {
region = spi_nor_otp_offset_to_region(nor, from);
ret = ops->lock(nor, region);
if (ret)
goto out;
len -= rlen;
from += rlen;
}
out:
spi_nor_unlock_and_unprep(nor);
return ret;
}
void spi_nor_otp_init(struct spi_nor *nor)
{
struct mtd_info *mtd = &nor->mtd;
if (!nor->params->otp.ops)
return;
if (WARN_ON(!is_power_of_2(spi_nor_otp_region_len(nor))))
return;
/*
* We only support user_prot callbacks (yet).
*
* Some SPI NOR flashes like Macronix ones can be ordered in two
* different variants. One with a factory locked OTP area and one where
* it is left to the user to write to it. The factory locked OTP is
* usually preprogrammed with an "electrical serial number". We don't
* support these for now.
*/
mtd->_get_user_prot_info = spi_nor_mtd_otp_info;
mtd->_read_user_prot_reg = spi_nor_mtd_otp_read;
mtd->_write_user_prot_reg = spi_nor_mtd_otp_write;
mtd->_lock_user_prot_reg = spi_nor_mtd_otp_lock;
}


@ -405,8 +405,6 @@ static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)
* @nor: pointer to a 'struct spi_nor'
* @bfpt_header: pointer to the 'struct sfdp_parameter_header' describing
* the Basic Flash Parameter Table length and version
-* @params: pointer to the 'struct spi_nor_flash_parameter' to be
-* filled
*
* The Basic Flash Parameter Table is the main and only mandatory table as
* defined by the SFDP (JESD216) specification.
@ -431,9 +429,9 @@ static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_bfpt(struct spi_nor *nor,
-const struct sfdp_parameter_header *bfpt_header,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_parameter_header *bfpt_header)
{
+struct spi_nor_flash_parameter *params = nor->params;
struct spi_nor_erase_map *map = &params->erase_map;
struct spi_nor_erase_type *erase_type = map->erase_type;
struct sfdp_bfpt bfpt;
@ -552,8 +550,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
/* Stop here if not JESD216 rev A or later. */
if (bfpt_header->length == BFPT_DWORD_MAX_JESD216)
-return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt,
-params);
+return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt);
/* Page size: this field specifies 'N' so the page size = 2^N bytes. */
val = bfpt.dwords[BFPT_DWORD(11)];
@ -614,8 +611,8 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
/* Stop here if not JESD216 rev C or later. */
if (bfpt_header->length == BFPT_DWORD_MAX_JESD216B)
-return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt,
-params);
+return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt);
/* 8D-8D-8D command extension. */
switch (bfpt.dwords[BFPT_DWORD(18)] & BFPT_DWORD18_CMD_EXT_MASK) {
case BFPT_DWORD18_CMD_EXT_REP:
@ -635,7 +632,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
return -EOPNOTSUPP;
}
-return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt, params);
+return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt);
}
/**
@ -800,18 +797,14 @@ spi_nor_region_check_overlay(struct spi_nor_erase_region *region,
/**
* spi_nor_init_non_uniform_erase_map() - initialize the non-uniform erase map
* @nor: pointer to a 'struct spi_nor'
-* @params: pointer to a duplicate 'struct spi_nor_flash_parameter' that is
-* used for storing SFDP parsed data
* @smpt: pointer to the sector map parameter table
*
* Return: 0 on success, -errno otherwise.
*/
-static int
-spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
-struct spi_nor_flash_parameter *params,
+static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
const u32 *smpt)
{
-struct spi_nor_erase_map *map = &params->erase_map;
+struct spi_nor_erase_map *map = &nor->params->erase_map;
struct spi_nor_erase_type *erase = map->erase_type;
struct spi_nor_erase_region *region;
u64 offset;
@ -889,8 +882,6 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
* spi_nor_parse_smpt() - parse Sector Map Parameter Table
* @nor: pointer to a 'struct spi_nor'
* @smpt_header: sector map parameter table header
-* @params: pointer to a duplicate 'struct spi_nor_flash_parameter'
-* that is used for storing SFDP parsed data
*
* This table is optional, but when available, we parse it to identify the
* location and size of sectors within the main data array of the flash memory
@ -899,8 +890,7 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_smpt(struct spi_nor *nor,
-const struct sfdp_parameter_header *smpt_header,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_parameter_header *smpt_header)
{
const u32 *sector_map;
u32 *smpt;
@ -928,11 +918,11 @@ static int spi_nor_parse_smpt(struct spi_nor *nor,
goto out;
}
-ret = spi_nor_init_non_uniform_erase_map(nor, params, sector_map);
+ret = spi_nor_init_non_uniform_erase_map(nor, sector_map);
if (ret)
goto out;
-spi_nor_regions_sort_erase_types(&params->erase_map);
+spi_nor_regions_sort_erase_types(&nor->params->erase_map);
/* fall through */
out:
kfree(smpt);
@ -944,13 +934,11 @@ out:
* @nor: pointer to a 'struct spi_nor'.
* @param_header: pointer to the 'struct sfdp_parameter_header' describing
* the 4-Byte Address Instruction Table length and version.
-* @params: pointer to the 'struct spi_nor_flash_parameter' to be.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_4bait(struct spi_nor *nor,
-const struct sfdp_parameter_header *param_header,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_parameter_header *param_header)
{
static const struct sfdp_4bait reads[] = {
{ SNOR_HWCAPS_READ, BIT(0) },
@ -974,6 +962,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
{ 0u /* not used */, BIT(11) },
{ 0u /* not used */, BIT(12) },
};
+struct spi_nor_flash_parameter *params = nor->params;
struct spi_nor_pp_command *params_pp = params->page_programs;
struct spi_nor_erase_map *map = &params->erase_map;
struct spi_nor_erase_type *erase_type = map->erase_type;
@ -1130,13 +1119,11 @@ out:
* @nor: pointer to a 'struct spi_nor'
* @profile1_header: pointer to the 'struct sfdp_parameter_header' describing
* the Profile 1.0 Table length and version.
-* @params: pointer to the 'struct spi_nor_flash_parameter' to be.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_profile1(struct spi_nor *nor,
-const struct sfdp_parameter_header *profile1_header,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_parameter_header *profile1_header)
{
u32 *dwords, addr;
size_t len;
@ -1160,14 +1147,14 @@ static int spi_nor_parse_profile1(struct spi_nor *nor,
/* Set the Read Status Register dummy cycles and dummy address bytes. */
if (dwords[0] & PROFILE1_DWORD1_RDSR_DUMMY)
-params->rdsr_dummy = 8;
+nor->params->rdsr_dummy = 8;
else
-params->rdsr_dummy = 4;
+nor->params->rdsr_dummy = 4;
if (dwords[0] & PROFILE1_DWORD1_RDSR_ADDR_BYTES)
-params->rdsr_addr_nbytes = 4;
+nor->params->rdsr_addr_nbytes = 4;
else
-params->rdsr_addr_nbytes = 0;
+nor->params->rdsr_addr_nbytes = 0;
/*
* We don't know what speed the controller is running at. Find the
@ -1193,7 +1180,7 @@ static int spi_nor_parse_profile1(struct spi_nor *nor,
dummy = round_up(dummy, 2);
/* Update the fast read settings. */
-spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_8_8_8_DTR],
+spi_nor_set_read_settings(&nor->params->reads[SNOR_CMD_READ_8_8_8_DTR],
0, dummy, opcode,
SNOR_PROTO_8_8_8_DTR);
@ -1210,13 +1197,11 @@ out:
* @nor: pointer to a 'struct spi_nor'
* @sccr_header: pointer to the 'struct sfdp_parameter_header' describing
* the SCCR Map table length and version.
-* @params: pointer to the 'struct spi_nor_flash_parameter' to be.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_sccr(struct spi_nor *nor,
-const struct sfdp_parameter_header *sccr_header,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_parameter_header *sccr_header)
{
u32 *dwords, addr;
size_t len;
@ -1245,8 +1230,6 @@ out:
/**
* spi_nor_parse_sfdp() - parse the Serial Flash Discoverable Parameters.
* @nor: pointer to a 'struct spi_nor'
-* @params: pointer to the 'struct spi_nor_flash_parameter' to be
-* filled
*
* The Serial Flash Discoverable Parameters are described by the JEDEC JESD216
* specification. This is a standard which tends to supported by almost all
@ -1256,8 +1239,7 @@ out:
*
* Return: 0 on success, -errno otherwise.
*/
-int spi_nor_parse_sfdp(struct spi_nor *nor,
-struct spi_nor_flash_parameter *params)
+int spi_nor_parse_sfdp(struct spi_nor *nor)
{
const struct sfdp_parameter_header *param_header, *bfpt_header;
struct sfdp_parameter_header *param_headers = NULL;
@ -1326,7 +1308,7 @@ int spi_nor_parse_sfdp(struct spi_nor *nor,
bfpt_header = param_header;
}
-err = spi_nor_parse_bfpt(nor, bfpt_header, params);
+err = spi_nor_parse_bfpt(nor, bfpt_header);
if (err)
goto exit;
@ -1336,19 +1318,19 @@ int spi_nor_parse_sfdp(struct spi_nor *nor,
switch (SFDP_PARAM_HEADER_ID(param_header)) {
case SFDP_SECTOR_MAP_ID:
-err = spi_nor_parse_smpt(nor, param_header, params);
+err = spi_nor_parse_smpt(nor, param_header);
break;
case SFDP_4BAIT_ID:
-err = spi_nor_parse_4bait(nor, param_header, params);
+err = spi_nor_parse_4bait(nor, param_header);
break;
case SFDP_PROFILE1_ID:
-err = spi_nor_parse_profile1(nor, param_header, params);
+err = spi_nor_parse_profile1(nor, param_header);
break;
case SFDP_SCCR_MAP_ID:
-err = spi_nor_parse_sccr(nor, param_header, params);
+err = spi_nor_parse_sccr(nor, param_header);
break;
default:


@ -107,7 +107,6 @@ struct sfdp_parameter_header {
u8 id_msb;
};
-int spi_nor_parse_sfdp(struct spi_nor *nor,
-struct spi_nor_flash_parameter *params);
+int spi_nor_parse_sfdp(struct spi_nor *nor);
#endif /* __LINUX_MTD_SFDP_H */


@ -142,8 +142,7 @@ static void s28hs512t_post_sfdp_fixup(struct spi_nor *nor)
static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_bfpt *bfpt)
{
/*
* The BFPT table advertises a 512B page size but the page size is
@ -162,9 +161,9 @@ static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor,
return ret;
if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ)
-params->page_size = 512;
+nor->params->page_size = 512;
else
-params->page_size = 256;
+nor->params->page_size = 256;
return 0;
}
@ -178,8 +177,7 @@ static struct spi_nor_fixups s28hs512t_fixups = {
static int
s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
-const struct sfdp_bfpt *bfpt,
-struct spi_nor_flash_parameter *params)
+const struct sfdp_bfpt *bfpt)
{
/*
* The S25FS-S chip family reports 512-byte pages in BFPT but
@ -187,7 +185,7 @@ s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
* of 256 bytes. Overwrite the page size advertised by BFPT
* to get the writes working.
*/
-params->page_size = 256;
+nor->params->page_size = 256;
return 0;
}

drivers/mtd/spi-nor/swp.c (new file, 427 lines)

@ -0,0 +1,427 @@
// SPDX-License-Identifier: GPL-2.0
/*
* SPI NOR Software Write Protection logic.
*
* Copyright (C) 2005, Intec Automation Inc.
* Copyright (C) 2014, Freescale Semiconductor, Inc.
*/
#include <linux/mtd/mtd.h>
#include <linux/mtd/spi-nor.h>
#include "core.h"
static u8 spi_nor_get_sr_bp_mask(struct spi_nor *nor)
{
u8 mask = SR_BP2 | SR_BP1 | SR_BP0;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6)
return mask | SR_BP3_BIT6;
if (nor->flags & SNOR_F_HAS_4BIT_BP)
return mask | SR_BP3;
return mask;
}
static u8 spi_nor_get_sr_tb_mask(struct spi_nor *nor)
{
if (nor->flags & SNOR_F_HAS_SR_TB_BIT6)
return SR_TB_BIT6;
else
return SR_TB_BIT5;
}
static u64 spi_nor_get_min_prot_length_sr(struct spi_nor *nor)
{
unsigned int bp_slots, bp_slots_needed;
u8 mask = spi_nor_get_sr_bp_mask(nor);
/* Reserved one for "protect none" and one for "protect all". */
bp_slots = (1 << hweight8(mask)) - 2;
bp_slots_needed = ilog2(nor->info->n_sectors);
if (bp_slots_needed > bp_slots)
return nor->info->sector_size <<
(bp_slots_needed - bp_slots);
else
return nor->info->sector_size;
}
static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs,
uint64_t *len)
{
struct mtd_info *mtd = &nor->mtd;
u64 min_prot_len;
u8 mask = spi_nor_get_sr_bp_mask(nor);
u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
u8 bp, val = sr & mask;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3_BIT6)
val = (val & ~SR_BP3_BIT6) | SR_BP3;
bp = val >> SR_BP_SHIFT;
if (!bp) {
/* No protection */
*ofs = 0;
*len = 0;
return;
}
min_prot_len = spi_nor_get_min_prot_length_sr(nor);
*len = min_prot_len << (bp - 1);
if (*len > mtd->size)
*len = mtd->size;
if (nor->flags & SNOR_F_HAS_SR_TB && sr & tb_mask)
*ofs = 0;
else
*ofs = mtd->size - *len;
}
/*
* Return true if the entire region is locked (if @locked is true) or unlocked
* (if @locked is false); false otherwise.
*/
static bool spi_nor_check_lock_status_sr(struct spi_nor *nor, loff_t ofs,
uint64_t len, u8 sr, bool locked)
{
loff_t lock_offs, lock_offs_max, offs_max;
uint64_t lock_len;
if (!len)
return true;
spi_nor_get_locked_range_sr(nor, sr, &lock_offs, &lock_len);
lock_offs_max = lock_offs + lock_len;
offs_max = ofs + len;
if (locked)
/* Requested range is a sub-range of locked range */
return (offs_max <= lock_offs_max) && (ofs >= lock_offs);
else
/* Requested range does not overlap with locked range */
return (ofs >= lock_offs_max) || (offs_max <= lock_offs);
}
static bool spi_nor_is_locked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len,
u8 sr)
{
return spi_nor_check_lock_status_sr(nor, ofs, len, sr, true);
}
static bool spi_nor_is_unlocked_sr(struct spi_nor *nor, loff_t ofs,
uint64_t len, u8 sr)
{
return spi_nor_check_lock_status_sr(nor, ofs, len, sr, false);
}
/*
* Lock a region of the flash. Compatible with ST Micro and similar flash.
* Supports the block protection bits BP{0,1,2}/BP{0,1,2,3} in the status
* register
* (SR). Does not support these features found in newer SR bitfields:
* - SEC: sector/block protect - only handle SEC=0 (block protect)
* - CMP: complement protect - only support CMP=0 (range is not complemented)
*
* Support for the following is provided conditionally for some flash:
* - TB: top/bottom protect
*
* Sample table portion for 8MB flash (Winbond w25q64fw):
*
* SEC | TB | BP2 | BP1 | BP0 | Prot Length | Protected Portion
* --------------------------------------------------------------------------
* X | X | 0 | 0 | 0 | NONE | NONE
* 0 | 0 | 0 | 0 | 1 | 128 KB | Upper 1/64
* 0 | 0 | 0 | 1 | 0 | 256 KB | Upper 1/32
* 0 | 0 | 0 | 1 | 1 | 512 KB | Upper 1/16
* 0 | 0 | 1 | 0 | 0 | 1 MB | Upper 1/8
* 0 | 0 | 1 | 0 | 1 | 2 MB | Upper 1/4
* 0 | 0 | 1 | 1 | 0 | 4 MB | Upper 1/2
* X | X | 1 | 1 | 1 | 8 MB | ALL
* ------|-------|-------|-------|-------|---------------|-------------------
* 0 | 1 | 0 | 0 | 1 | 128 KB | Lower 1/64
* 0 | 1 | 0 | 1 | 0 | 256 KB | Lower 1/32
* 0 | 1 | 0 | 1 | 1 | 512 KB | Lower 1/16
* 0 | 1 | 1 | 0 | 0 | 1 MB | Lower 1/8
* 0 | 1 | 1 | 0 | 1 | 2 MB | Lower 1/4
* 0 | 1 | 1 | 1 | 0 | 4 MB | Lower 1/2
*
* Returns negative on errors, 0 on success.
*/
static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
struct mtd_info *mtd = &nor->mtd;
u64 min_prot_len;
int ret, status_old, status_new;
u8 mask = spi_nor_get_sr_bp_mask(nor);
u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
u8 pow, val;
loff_t lock_len;
bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
bool use_top;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
status_old = nor->bouncebuf[0];
/* If nothing in our range is unlocked, we don't need to do anything */
if (spi_nor_is_locked_sr(nor, ofs, len, status_old))
return 0;
/* If anything below us is unlocked, we can't use 'bottom' protection */
if (!spi_nor_is_locked_sr(nor, 0, ofs, status_old))
can_be_bottom = false;
/* If anything above us is unlocked, we can't use 'top' protection */
if (!spi_nor_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len),
status_old))
can_be_top = false;
if (!can_be_bottom && !can_be_top)
return -EINVAL;
/* Prefer top, if both are valid */
use_top = can_be_top;
/* lock_len: length of region that should end up locked */
if (use_top)
lock_len = mtd->size - ofs;
else
lock_len = ofs + len;
if (lock_len == mtd->size) {
val = mask;
} else {
min_prot_len = spi_nor_get_min_prot_length_sr(nor);
pow = ilog2(lock_len) - ilog2(min_prot_len) + 1;
val = pow << SR_BP_SHIFT;
if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3)
val = (val & ~SR_BP3) | SR_BP3_BIT6;
if (val & ~mask)
return -EINVAL;
/* Don't "lock" with no region! */
if (!(val & mask))
return -EINVAL;
}
status_new = (status_old & ~mask & ~tb_mask) | val;
/* Disallow further writes if WP pin is asserted */
status_new |= SR_SRWD;
if (!use_top)
status_new |= tb_mask;
/* Don't bother if they're the same */
if (status_new == status_old)
return 0;
/* Only modify protection if it will not unlock other areas */
if ((status_new & mask) < (status_old & mask))
return -EINVAL;
return spi_nor_write_sr_and_check(nor, status_new);
}
/*
 * Unlock a region of the flash. See spi_nor_sr_lock() for more info
 *
 * Returns negative on errors, 0 on success.
 */
static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
        struct mtd_info *mtd = &nor->mtd;
        u64 min_prot_len;
        int ret, status_old, status_new;
        u8 mask = spi_nor_get_sr_bp_mask(nor);
        u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
        u8 pow, val;
        loff_t lock_len;
        bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
        bool use_top;

        ret = spi_nor_read_sr(nor, nor->bouncebuf);
        if (ret)
                return ret;

        status_old = nor->bouncebuf[0];

        /* If nothing in our range is locked, we don't need to do anything */
        if (spi_nor_is_unlocked_sr(nor, ofs, len, status_old))
                return 0;

        /* If anything below us is locked, we can't use 'top' protection */
        if (!spi_nor_is_unlocked_sr(nor, 0, ofs, status_old))
                can_be_top = false;

        /* If anything above us is locked, we can't use 'bottom' protection */
        if (!spi_nor_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len),
                                    status_old))
                can_be_bottom = false;

        if (!can_be_bottom && !can_be_top)
                return -EINVAL;

        /* Prefer top, if both are valid */
        use_top = can_be_top;

        /* lock_len: length of region that should remain locked */
        if (use_top)
                lock_len = mtd->size - (ofs + len);
        else
                lock_len = ofs;

        if (lock_len == 0) {
                val = 0; /* fully unlocked */
        } else {
                min_prot_len = spi_nor_get_min_prot_length_sr(nor);
                pow = ilog2(lock_len) - ilog2(min_prot_len) + 1;
                val = pow << SR_BP_SHIFT;

                if (nor->flags & SNOR_F_HAS_SR_BP3_BIT6 && val & SR_BP3)
                        val = (val & ~SR_BP3) | SR_BP3_BIT6;

                /* Some power-of-two sizes are not supported */
                if (val & ~mask)
                        return -EINVAL;
        }

        status_new = (status_old & ~mask & ~tb_mask) | val;

        /* Don't protect status register if we're fully unlocked */
        if (lock_len == 0)
                status_new &= ~SR_SRWD;

        if (!use_top)
                status_new |= tb_mask;

        /* Don't bother if they're the same */
        if (status_new == status_old)
                return 0;

        /* Only modify protection if it will not lock other areas */
        if ((status_new & mask) > (status_old & mask))
                return -EINVAL;

        return spi_nor_write_sr_and_check(nor, status_new);
}
/*
 * Check if a region of the flash is (completely) locked. See spi_nor_sr_lock()
 * for more info.
 *
 * Returns 1 if entire region is locked, 0 if any portion is unlocked, and
 * negative on errors.
 */
static int spi_nor_sr_is_locked(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
        int ret;

        ret = spi_nor_read_sr(nor, nor->bouncebuf);
        if (ret)
                return ret;

        return spi_nor_is_locked_sr(nor, ofs, len, nor->bouncebuf[0]);
}
static const struct spi_nor_locking_ops spi_nor_sr_locking_ops = {
        .lock = spi_nor_sr_lock,
        .unlock = spi_nor_sr_unlock,
        .is_locked = spi_nor_sr_is_locked,
};

void spi_nor_init_default_locking_ops(struct spi_nor *nor)
{
        nor->params->locking_ops = &spi_nor_sr_locking_ops;
}
static int spi_nor_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
        struct spi_nor *nor = mtd_to_spi_nor(mtd);
        int ret;

        ret = spi_nor_lock_and_prep(nor);
        if (ret)
                return ret;

        ret = nor->params->locking_ops->lock(nor, ofs, len);

        spi_nor_unlock_and_unprep(nor);
        return ret;
}

static int spi_nor_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
        struct spi_nor *nor = mtd_to_spi_nor(mtd);
        int ret;

        ret = spi_nor_lock_and_prep(nor);
        if (ret)
                return ret;

        ret = nor->params->locking_ops->unlock(nor, ofs, len);

        spi_nor_unlock_and_unprep(nor);
        return ret;
}

static int spi_nor_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
        struct spi_nor *nor = mtd_to_spi_nor(mtd);
        int ret;

        ret = spi_nor_lock_and_prep(nor);
        if (ret)
                return ret;

        ret = nor->params->locking_ops->is_locked(nor, ofs, len);

        spi_nor_unlock_and_unprep(nor);
        return ret;
}
/**
 * spi_nor_try_unlock_all() - Tries to unlock the entire flash memory array.
 * @nor: pointer to a 'struct spi_nor'.
 *
 * Some SPI NOR flashes are write protected by default after a power-on reset
 * cycle, in order to avoid inadvertent writes during power-up. Backward
 * compatibility requires the entire flash memory array to be unlocked at
 * power-up by default.
 *
 * Unprotecting the entire flash array will fail for boards which are hardware
 * write-protected. Thus any errors are ignored.
 */
void spi_nor_try_unlock_all(struct spi_nor *nor)
{
        int ret;

        if (!(nor->flags & SNOR_F_HAS_LOCK))
                return;

        dev_dbg(nor->dev, "Unprotecting entire flash array\n");

        ret = spi_nor_unlock(&nor->mtd, 0, nor->params->size);
        if (ret)
                dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n");
}
void spi_nor_register_locking_ops(struct spi_nor *nor)
{
        struct mtd_info *mtd = &nor->mtd;

        if (!nor->params->locking_ops)
                return;

        mtd->_lock = spi_nor_lock;
        mtd->_unlock = spi_nor_unlock;
        mtd->_is_locked = spi_nor_is_locked;
}
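
The three mtd wrappers above are what back the MEMLOCK, MEMUNLOCK and MEMISLOCKED ioctls on the MTD character device. As a concrete instance of the BP arithmetic in spi_nor_sr_lock(): locking 2 MiB on an assumed 8 MiB part whose smallest protected area is 128 KiB gives pow = ilog2(2 MiB) - ilog2(128 KiB) + 1 = 5, that is BP2..BP0 = 0b101, the "2 MB" row of the table above. Below is a minimal, hedged user-space sketch of driving these hooks; /dev/mtd0 and the 64 KiB range are illustrative assumptions, not values taken from this code.

/*
 * Hedged sketch: exercising mtd->_lock/_unlock/_is_locked from user space
 * through the MTD character device. The device node and range are assumed.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main(void)
{
        struct erase_info_user ei = { .start = 0, .length = 64 * 1024 };
        int fd, ret;

        fd = open("/dev/mtd0", O_RDWR);        /* locking ioctls need write access */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        if (ioctl(fd, MEMLOCK, &ei))
                perror("MEMLOCK");

        ret = ioctl(fd, MEMISLOCKED, &ei);     /* 1: fully locked, 0: not, -1: error */
        printf("MEMISLOCKED returned %d\n", ret);

        if (ioctl(fd, MEMUNLOCK, &ei))
                perror("MEMUNLOCK");

        close(fd);
        return 0;
}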

View File

@@ -11,8 +11,7 @@
 static int
 w25q256_post_bfpt_fixups(struct spi_nor *nor,
                          const struct sfdp_parameter_header *bfpt_header,
-                         const struct sfdp_bfpt *bfpt,
-                         struct spi_nor_flash_parameter *params)
+                         const struct sfdp_bfpt *bfpt)
 {
         /*
          * W25Q256JV supports 4B opcodes but W25Q256FV does not.
@@ -55,14 +54,18 @@ static const struct flash_info winbond_parts[] = {
         { "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) },
         { "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64,
                            SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
-                           SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) },
+                           SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
+          OTP_INFO(256, 3, 0x1000, 0x1000)
+        },
         { "w25q32jv", INFO(0xef7016, 0, 64 * 1024, 64,
                            SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
                            SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
         },
         { "w25q32jwm", INFO(0xef8016, 0, 64 * 1024, 64,
                             SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
-                            SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) },
+                            SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
+          OTP_INFO(256, 3, 0x1000, 0x1000) },
         { "w25q64jwm", INFO(0xef8017, 0, 64 * 1024, 128,
                             SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
                             SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) },
@@ -97,6 +100,8 @@ static const struct flash_info winbond_parts[] = {
                             SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
         { "w25m512jv", INFO(0xef7119, 0, 64 * 1024, 1024,
                             SECT_4K | SPI_NOR_QUAD_READ | SPI_NOR_DUAL_READ) },
+        { "w25q512jvq", INFO(0xef4020, 0, 64 * 1024, 1024,
+                             SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 };
 
 /**
@@ -131,9 +136,18 @@ static int winbond_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
         return spi_nor_write_disable(nor);
 }
 
+static const struct spi_nor_otp_ops winbond_otp_ops = {
+        .read = spi_nor_otp_read_secr,
+        .write = spi_nor_otp_write_secr,
+        .lock = spi_nor_otp_lock_sr2,
+        .is_locked = spi_nor_otp_is_locked_sr2,
+};
+
 static void winbond_default_init(struct spi_nor *nor)
 {
         nor->params->set_4byte_addr_mode = winbond_set_4byte_addr_mode;
+
+        if (nor->params->otp.org->n_regions)
+                nor->params->otp.ops = &winbond_otp_ops;
 }
 
 static const struct spi_nor_fixups winbond_fixups = {

View File

@@ -8,7 +8,7 @@
 #ifndef LPC_ICH_H
 #define LPC_ICH_H
 
-#include <linux/platform_data/intel-spi.h>
+#include <linux/platform_data/x86/intel-spi.h>
 
 /* GPIO resources */
 #define ICH_RES_GPIO    0

View File

@@ -77,5 +77,16 @@ extern int add_mtd_blktrans_dev(struct mtd_blktrans_dev *dev);
 extern int del_mtd_blktrans_dev(struct mtd_blktrans_dev *dev);
 extern int mtd_blktrans_cease_background(struct mtd_blktrans_dev *dev);
 
+/**
+ * module_mtd_blktrans() - Helper macro for registering a mtd blktrans driver
+ * @__mtd_blktrans: mtd_blktrans_ops struct
+ *
+ * Helper macro for mtd blktrans drivers which do not do anything special in
+ * module init/exit. This eliminates a lot of boilerplate. Each module may only
+ * use this macro once, and calling it replaces module_init() and module_exit()
+ */
+#define module_mtd_blktrans(__mtd_blktrans) \
+        module_driver(__mtd_blktrans, register_mtd_blktrans, \
+                      deregister_mtd_blktrans)
+
 #endif /* __MTD_TRANS_H__ */
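
Since the kernel-doc above only describes the macro, here is a hedged sketch of a minimal read-only blktrans driver registered with it. Every foo_* identifier is hypothetical; only module_mtd_blktrans(), the ops structure and the add/del helpers come from this header.

/* Hedged sketch: skeleton mtd blktrans driver using module_mtd_blktrans(). */
#include <linux/module.h>
#include <linux/mtd/blktrans.h>
#include <linux/mtd/mtd.h>
#include <linux/slab.h>

static int foo_readsect(struct mtd_blktrans_dev *dev, unsigned long block,
                        char *buf)
{
        size_t retlen;

        /* 512-byte sectors, matching .blksize below */
        if (mtd_read(dev->mtd, block << 9, 512, &retlen, buf))
                return 1;
        return 0;
}

static void foo_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd)
{
        struct mtd_blktrans_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

        if (!dev)
                return;

        dev->mtd = mtd;
        dev->devnum = mtd->index;
        dev->size = mtd->size >> 9;
        dev->tr = tr;
        dev->readonly = 1;      /* no writesect: read-only transport */

        if (add_mtd_blktrans_dev(dev))
                kfree(dev);
}

static void foo_remove_dev(struct mtd_blktrans_dev *dev)
{
        del_mtd_blktrans_dev(dev);
}

static struct mtd_blktrans_ops foo_tr = {
        .name           = "foo",
        .major          = 0,            /* dynamic major */
        .part_bits      = 0,
        .blksize        = 512,
        .readsect       = foo_readsect,
        .add_mtd        = foo_add_mtd,
        .remove_dev     = foo_remove_dev,
        .owner          = THIS_MODULE,
};

/* Replaces the old module_init()/module_exit() boilerplate. */
module_mtd_blktrans(foo_tr);

MODULE_LICENSE("GPL");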

View File

@@ -229,6 +229,7 @@ struct mtd_part {
  */
 struct mtd_master {
         struct mutex partitions_lock;
+        struct mutex chrdev_lock;
         unsigned int suspended : 1;
 };
@@ -333,9 +334,12 @@ struct mtd_info {
         int (*_read_user_prot_reg) (struct mtd_info *mtd, loff_t from,
                                     size_t len, size_t *retlen, u_char *buf);
         int (*_write_user_prot_reg) (struct mtd_info *mtd, loff_t to,
-                                     size_t len, size_t *retlen, u_char *buf);
+                                     size_t len, size_t *retlen,
+                                     const u_char *buf);
         int (*_lock_user_prot_reg) (struct mtd_info *mtd, loff_t from,
                                     size_t len);
+        int (*_erase_user_prot_reg) (struct mtd_info *mtd, loff_t from,
+                                     size_t len);
         int (*_writev) (struct mtd_info *mtd, const struct kvec *vecs,
                         unsigned long count, loff_t to, size_t *retlen);
         void (*_sync) (struct mtd_info *mtd);
@@ -515,8 +519,9 @@ int mtd_get_user_prot_info(struct mtd_info *mtd, size_t len, size_t *retlen,
 int mtd_read_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len,
                            size_t *retlen, u_char *buf);
 int mtd_write_user_prot_reg(struct mtd_info *mtd, loff_t to, size_t len,
-                            size_t *retlen, u_char *buf);
+                            size_t *retlen, const u_char *buf);
 int mtd_lock_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len);
+int mtd_erase_user_prot_reg(struct mtd_info *mtd, loff_t from, size_t len);
 
 int mtd_writev(struct mtd_info *mtd, const struct kvec *vecs,
                unsigned long count, loff_t to, size_t *retlen);
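
As an illustration of the new hook, a hedged sketch of a hypothetical driver wiring it up follows. Only the _erase_user_prot_reg prototype comes from the change above; every foo_* name is illustrative.

/* Hedged sketch: a hypothetical MTD driver providing the new OTP erase hook. */
#include <linux/mtd/mtd.h>

static int foo_erase_user_prot_reg(struct mtd_info *mtd, loff_t from,
                                   size_t len)
{
        /* Issue the device-specific OTP/security-register erase here. */
        return 0;
}

static void foo_setup_otp(struct mtd_info *mtd)
{
        /* Reached via mtd_erase_user_prot_reg(), e.g. from the OTPERASE ioctl. */
        mtd->_erase_user_prot_reg = foo_erase_user_prot_reg;
}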

View File

@@ -16,7 +16,6 @@
  * @req_ctx: Save request context and tweak the original request to fit the
  *           engine needs
  * @code_size: Number of bytes needed to store a code (one code per step)
- * @nsteps: Number of steps
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
  * @bch: BCH control structure
@@ -26,7 +25,6 @@
 struct nand_ecc_sw_bch_conf {
         struct nand_ecc_req_tweak_ctx req_ctx;
         unsigned int code_size;
-        unsigned int nsteps;
         u8 *calc_buf;
         u8 *code_buf;
         struct bch_control *bch;
struct bch_control *bch; struct bch_control *bch;

View File

@@ -17,7 +17,6 @@
  * @req_ctx: Save request context and tweak the original request to fit the
  *           engine needs
  * @code_size: Number of bytes needed to store a code (one code per step)
- * @nsteps: Number of steps
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
  * @sm_order: Smart Media special ordering
@@ -25,7 +24,6 @@
 struct nand_ecc_sw_hamming_conf {
         struct nand_ecc_req_tweak_ctx req_ctx;
         unsigned int code_size;
-        unsigned int nsteps;
         u8 *calc_buf;
         u8 *code_buf;
         unsigned int sm_order;
unsigned int sm_order; unsigned int sm_order;

View File

@@ -231,12 +231,14 @@ struct nand_ops {
 /**
  * struct nand_ecc_context - Context for the ECC engine
  * @conf: basic ECC engine parameters
+ * @nsteps: number of ECC steps
  * @total: total number of bytes used for storing ECC codes, this is used by
  *         generic OOB layouts
  * @priv: ECC engine driver private data
  */
 struct nand_ecc_context {
         struct nand_ecc_props conf;
+        unsigned int nsteps;
         unsigned int total;
         void *priv;
 };
@@ -585,6 +587,26 @@ nanddev_get_ecc_conf(struct nand_device *nand)
         return &nand->ecc.ctx.conf;
 }
 
+/**
+ * nanddev_get_ecc_nsteps() - Extract the number of ECC steps
+ * @nand: NAND device
+ */
+static inline unsigned int
+nanddev_get_ecc_nsteps(struct nand_device *nand)
+{
+        return nand->ecc.ctx.nsteps;
+}
+
+/**
+ * nanddev_get_ecc_bytes_per_step() - Extract the number of ECC bytes per step
+ * @nand: NAND device
+ */
+static inline unsigned int
+nanddev_get_ecc_bytes_per_step(struct nand_device *nand)
+{
+        return nand->ecc.ctx.total / nand->ecc.ctx.nsteps;
+}
+
 /**
  * nanddev_get_ecc_requirements() - Extract the ECC requirements from a NAND
  *                                  device
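
As a usage note for the two helpers added above, here is a hedged sketch of how a controller driver might cache the values when sizing its buffers; every foo_* name is hypothetical, only the helpers themselves come from the patch.

/* Hedged sketch: consuming the new ECC step/bytes-per-step helpers. */
#include <linux/mtd/nand.h>

struct foo_nfc_config {
        unsigned int ecc_steps;
        unsigned int ecc_bytes_per_step;
        unsigned int ecc_total;
};

static void foo_nfc_setup_ecc(struct nand_device *nand,
                              struct foo_nfc_config *cfg)
{
        cfg->ecc_steps = nanddev_get_ecc_nsteps(nand);
        cfg->ecc_bytes_per_step = nanddev_get_ecc_bytes_per_step(nand);

        /* Total ECC bytes per page, as used by the generic OOB layouts. */
        cfg->ecc_total = cfg->ecc_steps * cfg->ecc_bytes_per_step;
}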

View File

@@ -18,7 +18,6 @@
 #include <linux/mtd/flashchip.h>
 #include <linux/mtd/bbm.h>
 #include <linux/mtd/jedec.h>
-#include <linux/mtd/nand.h>
 #include <linux/mtd/onfi.h>
 #include <linux/mutex.h>
 #include <linux/of.h>
@@ -1036,6 +1035,16 @@ struct nand_manufacturer {
         void *priv;
 };
 
+/**
+ * struct nand_secure_region - NAND secure region structure
+ * @offset: Offset of the start of the secure region
+ * @size: Size of the secure region
+ */
+struct nand_secure_region {
+        u64 offset;
+        u64 size;
+};
+
 /**
  * struct nand_chip - NAND Private Flash Chip Data
  * @base: Inherit from the generic NAND device
@@ -1086,6 +1095,8 @@ struct nand_manufacturer {
  *                NAND Controller drivers should not modify this value, but they're
  *                allowed to read it.
  * @read_retries: The number of read retry modes supported
+ * @secure_regions: Structure containing the secure regions info
+ * @nr_secure_regions: Number of secure regions
  * @controller: The hardware controller structure which is shared among multiple
  *              independent devices
  * @ecc: The ECC controller structure
@@ -1135,6 +1146,8 @@ struct nand_chip {
         unsigned int suspended : 1;
         int cur_cs;
         int read_retries;
+        struct nand_secure_region *secure_regions;
+        u8 nr_secure_regions;
 
         /* Externals */
         struct nand_controller *controller;
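
To illustrate what the new fields are for, here is a hedged sketch of an overlap check against the secure regions array. The helper name and placement are illustrative and not the core's actual implementation; only the struct fields come from the change above.

/* Hedged sketch: reject accesses that overlap a declared secure region. */
#include <linux/mtd/rawnand.h>

static bool foo_range_is_secured(struct nand_chip *chip, u64 offset, u64 size)
{
        int i;

        for (i = 0; i < chip->nr_secure_regions; i++) {
                const struct nand_secure_region *r = &chip->secure_regions[i];

                /* Any overlap with a secure region makes the access invalid. */
                if (offset < r->offset + r->size && offset + size > r->offset)
                        return true;
        }

        return false;
}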

View File

@@ -107,6 +107,11 @@
 #define SPINOR_OP_RD_EVCR       0x65    /* Read EVCR register */
 #define SPINOR_OP_WD_EVCR       0x61    /* Write EVCR register */
 
+/* Used for GigaDevices and Winbond flashes. */
+#define SPINOR_OP_ESECR         0x44    /* Erase Security registers */
+#define SPINOR_OP_PSECR         0x42    /* Program Security registers */
+#define SPINOR_OP_RSECR         0x48    /* Read Security registers */
+
 /* Status Register bits. */
 #define SR_WIP                  BIT(0)  /* Write in progress */
 #define SR_WEL                  BIT(1)  /* Write enable latch */
@@ -138,6 +143,9 @@
 /* Status Register 2 bits. */
 #define SR2_QUAD_EN_BIT1        BIT(1)
+#define SR2_LB1                 BIT(3)  /* Security Register Lock Bit 1 */
+#define SR2_LB2                 BIT(4)  /* Security Register Lock Bit 2 */
+#define SR2_LB3                 BIT(5)  /* Security Register Lock Bit 3 */
 #define SR2_QUAD_EN_BIT7        BIT(7)
 
 /* Supported SPI protocols */

View File

@@ -205,6 +205,8 @@ struct otp_info {
  *              without OOB, e.g., NOR flash.
  */
 #define MEMWRITE                _IOWR('M', 24, struct mtd_write_req)
+/* Erase a given range of user data (must be in mode %MTD_FILE_MODE_OTP_USER) */
+#define OTPERASE                _IOW('M', 25, struct otp_info)
 
 /*
  * Obsolete legacy interface. Keep it in order not to break userspace
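
To show how the new ioctl is meant to be driven, a hedged user-space sketch follows. The device node and the 256-byte range are illustrative assumptions; OTPSELECT and struct otp_info are the pre-existing OTP interface from this same header, and OTPERASE is the definition added above.

/* Hedged sketch: erasing part of the user OTP area via the new OTPERASE ioctl. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main(void)
{
        struct otp_info erase = { .start = 0, .length = 256 };
        int mode = MTD_OTP_USER;
        int fd = open("/dev/mtd0", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* OTPERASE is only valid while the file is in MTD_FILE_MODE_OTP_USER. */
        if (ioctl(fd, OTPSELECT, &mode))
                perror("OTPSELECT");
        else if (ioctl(fd, OTPERASE, &erase))
                perror("OTPERASE");

        close(fd);
        return 0;
}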