net: mvneta: bm: add support for hardware buffer management
Buffer manager (BM) is a dedicated hardware unit that can be used by all
ethernet ports of the Armada XP and 38x SoCs. It offloads the CPU on the
RX path by sparing DRAM accesses when refilling buffer pools, by filling
descriptor ring data in hardware, and by improving memory utilization
thanks to HW arbitration that picks 'short' pools for small packets.
Tests performed with an A388 SoC working as a network bridge between two
packet generators showed an increase of maximum processed 64B packets by
~20k (~555k packets with BM enabled vs ~535k packets without BM). Also,
when pushing 1500B packets at line rate, the CPU load decreased from
around 25% without BM to 20% with BM.
BM comprises up to 4 buffer pointer (BP) rings kept in DRAM, which are
called external BP pools (BPPE). Allocating and releasing buffer
pointers to/from a BPPE is performed indirectly, by write/read accesses
to a dedicated internal SRAM where the internal BP pools (BPPI) are
placed. The BM hardware controls the status of the BPPE automatically,
as well as assigning proper buffers to RX descriptors. For more details
please refer to the Functional Specification of the Armada XP or 38x SoC.
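The indirect BPPE/BPPI access described above boils down to a single
MMIO write (to release a buffer pointer) or read (to obtain one) at a
per-pool offset inside the SRAM window. A minimal sketch of what the
accessors added in mvneta_bm.h do, assuming a struct mvneta_bm *priv and
a struct mvneta_bm_pool *bm_pool as defined below:

	/* Release a buffer pointer to the pool: one relaxed MMIO write into
	 * the BPPI window; the BM hardware updates the BPPE in DRAM itself.
	 */
	writel_relaxed(buf_phys_addr, priv->bppi_virt_addr +
		       (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));

	/* Obtain a buffer pointer from the same pool: one relaxed MMIO read. */
	buf_phys_addr = readl_relaxed(priv->bppi_virt_addr +
				      (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));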
In order to support this separate hardware block, common to all ports, a
new driver ('mvneta_bm') has to be implemented. It provides the
initialization sequence for the address space, clocks, registers, SRAM
and empty pool structures, and also obtains optional configuration from
DT (please refer to the device tree binding documentation). mvneta_bm
also exposes the necessary API to the mvneta driver, as well as a
dedicated structure with BM information (bm_priv), whose presence is
used as a flag indicating that a port uses BM. It has to be ensured that
the mvneta_bm probe is executed before the ones in the ports' driver. In
case BM is not used or its probe fails, mvneta falls back to software
buffer management.
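On the port side, the 'presence of bm_priv acts as a flag' rule reduces
to a NULL check. A simplified sketch of the fallback (struct mvneta_port
with its bm_priv/pool_long fields and the software-refill helper are
named here only for illustration; the real RX refill path is more
involved):

	/* Refill one RX buffer: prefer the hardware BM, otherwise fall
	 * back to software buffer management.
	 */
	static int mvneta_rx_refill_one(struct mvneta_port *pp)
	{
		if (pp->bm_priv)
			return mvneta_bm_pool_refill(pp->bm_priv, pp->pool_long);

		return mvneta_sw_refill(pp);	/* hypothetical SW fallback */
	}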
The sequence executed in the mvneta_probe function is modified in order
to have access to the needed resources before the port's possible BM
initialization is done. According to the port-pool mapping provided by
DT, the appropriate registers are configured and the buffer pools are
filled. The RX path is modified accordingly. Because the hardware allows
a wide variety of configuration options, the following assumptions are
made (a usage sketch follows the list):
* using the BM mechanisms can be selectively enabled/disabled per port,
  based on the DT configuration
* 'long' pool's single buffer size is tied to port's MTU
* using a 'long' pool is obligatory for each port and it cannot be shared
* using 'short' pool for smaller packets is optional
* one 'short' pool can be shared among all ports
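Under these assumptions, attaching a port to its pools comes down to two
calls into the new mvneta_bm API. A hedged sketch (the pool IDs come
from the DT port-pool mapping; MVNETA_RX_PKT_SIZE() and the pp/dev names
belong to the port driver and are only illustrative):

	/* Obligatory, non-shared 'long' pool, buffer size derived from MTU. */
	pp->pool_long = mvneta_bm_pool_use(pp->bm_priv, long_pool_id,
					   MVNETA_BM_LONG, pp->id,
					   MVNETA_RX_PKT_SIZE(dev->mtu));

	/* Optional 'short' pool for small packets, may be shared by all ports. */
	pp->pool_short = mvneta_bm_pool_use(pp->bm_priv, short_pool_id,
					    MVNETA_BM_SHORT, pp->id,
					    MVNETA_BM_SHORT_PKT_SIZE);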
This commit enables hardware buffer management operation in cooperation
with the existing mvneta driver. New device tree binding documentation
is added and the mvneta binding is updated accordingly.
[gregory.clement@free-electrons.com: removed the suspend/resume part]
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

/*
 * Driver for Marvell NETA network controller Buffer Manager.
 *
 * Copyright (C) 2015 Marvell
 *
 * Marcin Wojtas <mw@semihalf.com>
 *
 * This file is licensed under the terms of the GNU General Public
 * License version 2. This program is licensed "as is" without any
 * warranty of any kind, whether express or implied.
 */

#ifndef _MVNETA_BM_H_
#define _MVNETA_BM_H_

/* BM Configuration Register */
#define MVNETA_BM_CONFIG_REG			0x0
#define   MVNETA_BM_STATUS_MASK			0x30
#define   MVNETA_BM_ACTIVE_MASK			BIT(4)
#define   MVNETA_BM_MAX_IN_BURST_SIZE_MASK	0x60000
#define   MVNETA_BM_MAX_IN_BURST_SIZE_16BP	BIT(18)
#define   MVNETA_BM_EMPTY_LIMIT_MASK		BIT(19)

/* BM Activation Register */
#define MVNETA_BM_COMMAND_REG			0x4
#define   MVNETA_BM_START_MASK			BIT(0)
#define   MVNETA_BM_STOP_MASK			BIT(1)
#define   MVNETA_BM_PAUSE_MASK			BIT(2)

/* BM Xbar interface Register */
#define MVNETA_BM_XBAR_01_REG			0x8
#define MVNETA_BM_XBAR_23_REG			0xc
#define MVNETA_BM_XBAR_POOL_REG(pool)		\
		(((pool) < 2) ? MVNETA_BM_XBAR_01_REG : MVNETA_BM_XBAR_23_REG)
#define   MVNETA_BM_TARGET_ID_OFFS(pool)	(((pool) & 1) ? 16 : 0)
#define   MVNETA_BM_TARGET_ID_MASK(pool)	\
		(0xf << MVNETA_BM_TARGET_ID_OFFS(pool))
#define   MVNETA_BM_TARGET_ID_VAL(pool, id)	\
		((id) << MVNETA_BM_TARGET_ID_OFFS(pool))
#define   MVNETA_BM_XBAR_ATTR_OFFS(pool)	(((pool) & 1) ? 20 : 4)
#define   MVNETA_BM_XBAR_ATTR_MASK(pool)	\
		(0xff << MVNETA_BM_XBAR_ATTR_OFFS(pool))
#define   MVNETA_BM_XBAR_ATTR_VAL(pool, attr)	\
		((attr) << MVNETA_BM_XBAR_ATTR_OFFS(pool))

/* Address of External Buffer Pointers Pool Register */
#define MVNETA_BM_POOL_BASE_REG(pool)		(0x10 + ((pool) << 4))
#define   MVNETA_BM_POOL_ENABLE_MASK		BIT(0)

/* External Buffer Pointers Pool RD pointer Register */
#define MVNETA_BM_POOL_READ_PTR_REG(pool)	(0x14 + ((pool) << 4))
#define   MVNETA_BM_POOL_SET_READ_PTR_MASK	0xfffc
#define   MVNETA_BM_POOL_GET_READ_PTR_OFFS	16
#define   MVNETA_BM_POOL_GET_READ_PTR_MASK	0xfffc0000

/* External Buffer Pointers Pool WR pointer */
#define MVNETA_BM_POOL_WRITE_PTR_REG(pool)	(0x18 + ((pool) << 4))
#define   MVNETA_BM_POOL_SET_WRITE_PTR_OFFS	0
#define   MVNETA_BM_POOL_SET_WRITE_PTR_MASK	0xfffc
#define   MVNETA_BM_POOL_GET_WRITE_PTR_OFFS	16
#define   MVNETA_BM_POOL_GET_WRITE_PTR_MASK	0xfffc0000

/* External Buffer Pointers Pool Size Register */
#define MVNETA_BM_POOL_SIZE_REG(pool)		(0x1c + ((pool) << 4))
#define   MVNETA_BM_POOL_SIZE_MASK		0x3fff

/* BM Interrupt Cause Register */
#define MVNETA_BM_INTR_CAUSE_REG		(0x50)

/* BM interrupt Mask Register */
#define MVNETA_BM_INTR_MASK_REG			(0x54)

/* Other definitions */
#define MVNETA_BM_SHORT_PKT_SIZE		256
#define MVNETA_BM_POOLS_NUM			4
#define MVNETA_BM_POOL_CAP_MIN			128
#define MVNETA_BM_POOL_CAP_DEF			2048
#define MVNETA_BM_POOL_CAP_MAX			\
		(16 * 1024 - MVNETA_BM_POOL_CAP_ALIGN)
#define MVNETA_BM_POOL_CAP_ALIGN		32
#define MVNETA_BM_POOL_PTR_ALIGN		32

#define MVNETA_BM_POOL_ACCESS_OFFS		8

#define MVNETA_BM_BPPI_SIZE			0x100000

#define MVNETA_RX_BUF_SIZE(pkt_size)	((pkt_size) + NET_SKB_PAD)

enum mvneta_bm_type {
	MVNETA_BM_FREE,
	MVNETA_BM_LONG,
	MVNETA_BM_SHORT
};

struct mvneta_bm {
	void __iomem *reg_base;
	struct clk *clk;
	struct platform_device *pdev;

	struct gen_pool *bppi_pool;
	/* BPPI virtual base address */
	void __iomem *bppi_virt_addr;
	/* BPPI physical base address */
	dma_addr_t bppi_phys_addr;

	/* BM pools */
	struct mvneta_bm_pool *bm_pools;
};

struct mvneta_bm_pool {
	struct hwbm_pool hwbm_pool;
	/* Pool number in the range 0-3 */
	u8 id;
	enum mvneta_bm_type type;

	/* Packet size */
	int pkt_size;
	/* Size of the buffer accessed through DMA */
	u32 buf_size;

	/* BPPE virtual base address */
	u32 *virt_addr;
	/* BPPE physical base address */
	dma_addr_t phys_addr;

	/* Ports using BM pool */
	u8 port_map;

	struct mvneta_bm *priv;
};

/* Declarations and definitions */
void *mvneta_frag_alloc(unsigned int frag_size);
void mvneta_frag_free(unsigned int frag_size, void *data);

#if IS_ENABLED(CONFIG_MVNETA_BM)
void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
			    struct mvneta_bm_pool *bm_pool, u8 port_map);
void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
			 u8 port_map);
int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf);
int mvneta_bm_pool_refill(struct mvneta_bm *priv,
			  struct mvneta_bm_pool *bm_pool);
struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
					  enum mvneta_bm_type type, u8 port_id,
					  int pkt_size);

static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
					 struct mvneta_bm_pool *bm_pool,
					 dma_addr_t buf_phys_addr)
{
	writel_relaxed(buf_phys_addr, priv->bppi_virt_addr +
		       (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));
}

static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
					struct mvneta_bm_pool *bm_pool)
{
	return readl_relaxed(priv->bppi_virt_addr +
			     (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));
}
#else
/* Stubs used when CONFIG_MVNETA_BM is disabled; static inline so this
 * header can be safely included from multiple translation units.
 */
static inline void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
					  struct mvneta_bm_pool *bm_pool,
					  u8 port_map) {}
static inline void mvneta_bm_bufs_free(struct mvneta_bm *priv,
				       struct mvneta_bm_pool *bm_pool,
				       u8 port_map) {}
static inline int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf)
{ return 0; }
static inline int mvneta_bm_pool_refill(struct mvneta_bm *priv,
					struct mvneta_bm_pool *bm_pool)
{ return 0; }
static inline struct mvneta_bm_pool *
mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
		   enum mvneta_bm_type type, u8 port_id, int pkt_size)
{ return NULL; }

static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
					 struct mvneta_bm_pool *bm_pool,
					 dma_addr_t buf_phys_addr) {}

static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
					struct mvneta_bm_pool *bm_pool)
{ return 0; }
#endif /* CONFIG_MVNETA_BM */

#endif