It's possible to set ring params while interfaces are down. When an
interface comes up, it uses the configured number of descriptors to
fill the RX queue, and later changes rely on a re-split to create the
RX pools. Usually this re-split can happen after the PHY is up, but it
can also be needed before that, so allow it to happen when the number
of RX descriptors is set while interfaces are down. Also, since this
no longer depends on the interface state, move it to the cpdma layer,
where it belongs.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In cases where a DMA-mapped packet needs to be sent, as with the XDP
page pool, a "mapped" submit can be used. This patch adds a DMA-mapped
submit based on the regular one, as sketched below.
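A minimal usage sketch, assuming the new helper mirrors the regular
submit as cpdma_chan_submit_mapped() and that the caller did the DMA
mapping; the surrounding identifiers here are illustrative:

    /* "dma" was already mapped by the caller (e.g. a page-pool page
     * backing an XDP frame), so the cpdma layer must not map it again. */
    int ret;

    ret = cpdma_chan_submit_mapped(txch, token, dma, len, directed);
    if (ret)
            /* no descriptors available: caller keeps buffer ownership */
            page_pool_recycle_direct(pool, page);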
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
While a data pass is suspended, reuse of RX descriptors can be
disabled using the channel state and lock from the cpdma layer. For
this, submission to a channel has to be rejected under the lock
whenever its state is not "active", which is what this patch does. The
same submit path is used to fill the RX channel in ndo_open, when the
channel is idle, so add an "idled" submit routine that allows
descriptors to be prepared for the channel. All this simplifies the
code, helps avoid using the dormant mode, and sends packets only to
active channels, avoiding a potential race in later changes. Also add
a missing sync barrier after stopping the TX queues, analogous to the
ones in other places.
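A sketch of the submit-path check this enables; CPDMA_STATE_ACTIVE and
the per-channel lock already exist in the cpdma layer, the rest is
abbreviated:

    spin_lock_irqsave(&chan->lock, flags);
    if (chan->state != CPDMA_STATE_ACTIVE) {
            /* channel stopped or idle: refuse new descriptors so a
             * suspended data pass cannot reuse RX descriptors */
            spin_unlock_irqrestore(&chan->lock, flags);
            return -EINVAL;
    }
    /* ... queue the descriptor to the channel ... */
    spin_unlock_irqrestore(&chan->lock, flags);

The "idled" variant skips only the state check, so ndo_open can
prepare descriptors for a channel before it is started.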
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use dma_addr_t for desc_mem_phys and desc_hw_addr to avoid type
conversions.
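The affected fields, for reference (previously plain u32):

    /* in struct cpdma_params (other members omitted): */
    dma_addr_t desc_mem_phys;
    dma_addr_t desc_hw_addr;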
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace textual license with SPDX-License-Identifier.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
In VLAN_AWARE mode, CPSW can insert a VLAN header encapsulation word on
Host port 0 egress (RX) before the packet data if the RX_VLAN_ENCAP bit
is set in the CPSW_CONTROL register. The VLAN header encapsulation word
has the following format:
HDR_PKT_Priority bits 31-29 - Header Packet VLAN prio (highest prio: 7)
HDR_PKT_CFI      bit  28    - Header Packet VLAN CFI bit
HDR_PKT_Vid      bits 27-16 - Header Packet VLAN ID
PKT_Type         bits 9-8   - Packet Type. Indicates whether the packet
is VLAN-tagged, priority-tagged, or non-tagged:
00: VLAN-tagged packet
01: Reserved
10: Priority-tagged packet
11: Non-tagged packet
This feature can be used to implement RX VLAN offload in the case of
VLAN-tagged packets and to insert a VLAN tag when a non-tagged packet
was received on a port with PVID set. As per the documentation, CPSW
never modifies packet data on Host egress (RX); as a result, without
this feature enabled, the Host port cannot properly receive packets
which entered the switch non-tagged through an external port with PVID
set (when a non-tagged packet is forwarded from an external port with
PVID set to another external port, the packet is VLAN-tagged properly).
Implementation details:
- on RX, the driver checks the CPDMA status bit RX_VLAN_ENCAP BIT(19)
in the CPPI descriptor to identify when the VLAN header encapsulation
word is present;
- if PKT_Type = 0x01 or 0x02, ignore the VLAN header encapsulation
word and pass the packet as is;
- if HDR_PKT_Vid = 0, ignore the VLAN header encapsulation word and
pass the packet as is;
- in dual mac mode, traffic is separated between ports using default
port VLANs, which are not visible to the Host and so should not be
reported. Hence, check for default port VLANs in dual mac mode and
ignore the VLAN header encapsulation word;
- otherwise, fill the SKB with VLAN info using __vlan_hwaccel_put_tag();
- if PKT_Type = 0x00 (VLAN-tagged), strip the VLAN header from the SKB.
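A condensed sketch of that RX-path logic; the shifts follow the bit
layout quoted above, while the function and helper names are
illustrative, not the driver's exact ones:

    static void cpsw_rx_vlan_encap(struct sk_buff *skb)
    {
            u32 encap = *(u32 *)skb->data;    /* encapsulation word */
            u16 vid = (encap >> 16) & 0xfff;  /* HDR_PKT_Vid, bits 27-16 */
            u8 prio = (encap >> 29) & 0x7;    /* HDR_PKT_Priority, 31-29 */
            u8 pkt_type = (encap >> 8) & 0x3; /* PKT_Type, bits 9-8 */

            skb_pull(skb, 4);                 /* drop the encap word */

            /* reserved/priority-tagged or VID 0: pass the packet as is
             * (the dual mac default-VID check is omitted here) */
            if (pkt_type == 0x01 || pkt_type == 0x02 || !vid)
                    return;

            __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
                                   (prio << VLAN_PRIO_SHIFT) | vid);
            if (pkt_type == 0x00)
                    strip_vlan_header(skb);   /* VLAN-tagged on wire */
    }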
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The CPDMA uses one pool of descriptors for both RX and TX, which by
default is split between all channels proportionally, depending on the
total number of CPDMA channels and the number of TX and RX channels. As
a result, more descriptors are consumed by the TX path if there are
more TX channels, and there is currently no way to dedicate more
descriptors to the RX path.
So, add the ability to re-split the CPDMA pool of descriptors between
the RX and TX paths via the ethtool '-G' command, which allows
configuring and fixing the number of descriptors used by the RX and TX
paths; these are then split between the RX/TX channels proportionally,
depending on the RX/TX channel count and weight. The ethtool '-G'
command accepts only the number of RX entries; the rest of the
descriptors are assigned to TX automatically.
Command:
ethtool -G <devname> rx <number of descriptors>
defaults and limitations:
- minimum number of RX descriptors is 10% of the total number of
descriptors in the CPDMA pool
- maximum number of RX descriptors is 90% of the total number of
descriptors in the CPDMA pool
- by default, descriptors are split equally between the RX and TX paths
- any value passed in the "tx" parameter is ignored
Usage:
# ethtool -g eth0
Pre-set maximums:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 0
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
# ethtool -G eth0 rx 7372
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 0
Current hardware settings:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 820
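A sketch of the .set_ringparam handler implied by the limits above
(identifiers illustrative; descs_num stands for the total CPDMA pool
size):

    static int cpsw_set_ringparam(struct net_device *ndev,
                                  struct ethtool_ringparam *ering)
    {
            struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
            u32 total = cpsw->descs_num;

            /* rx_mini/rx_jumbo/tx values are ignored, only rx counts */
            if (ering->rx_pending < total / 10 ||       /* 10% minimum */
                ering->rx_pending > total - total / 10) /* 90% maximum */
                    return -EINVAL;

            /* fix the RX share; TX automatically gets the remainder */
            return cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
    }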
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The CPSW CPDMA can process buffer descriptors placed either in internal
CPPI RAM or in DDR. This patch adds support in CPSW and CPDMA for a
descs_pool_size module parameter, which defines the total number of
CPDMA CPPI descriptors to be used for both ingress and egress packet
processing:
- the memory size required for the CPDMA descriptor pool is calculated
based on the number of descriptors specified by the user in
descs_pool_size and the CPDMA descriptor size, and is allocated from
coherent memory (CMA area);
- the CPDMA descriptor pool is allocated in DDR if the pool memory size
exceeds the internal CPPI RAM, and in internal CPPI RAM otherwise;
- if descs_pool_size is not specified in DT, the default value of 256
is used, which allows the CPDMA descriptor pool to be placed in the
internal CPPI RAM (the current default behaviour);
- CPDMA ignores descs_pool_size if descs_pool_size = 0, for backward
compatibility with davinci_emac.
descs_pool_size is a boot-time setting and can't be changed once
CPSW/CPDMA is initialized.
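A sketch of the sizing and placement rule (names illustrative):

    static void *cpdma_pool_mem(struct device *dev, int descs_pool_size,
                                size_t cppi_ram_size, dma_addr_t *phys)
    {
            size_t pool_size = descs_pool_size * sizeof(struct cpdma_desc);

            /* the default of 256 descriptors still fits the internal
             * CPPI RAM, preserving the old behaviour */
            if (pool_size <= cppi_ram_size)
                    return NULL;    /* caller keeps using CPPI RAM */

            /* too big for CPPI RAM: allocate from coherent DDR (CMA) */
            return dma_alloc_coherent(dev, pool_size, phys, GFP_KERNEL);
    }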
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The cpdma has 8 rate-limited TX channels. This patch adds the ability
for the cpdma driver to use the 8 TX h/w shapers. If at least one
channel is not rate limited, it must have a higher number; this is
because rate-limited channels have to have higher priority than
non-rate-limited channels. The channel priority is already assigned in
low-to-high direction, so when a new channel is added with ethtool and
doesn't have a rate yet, it cannot affect the rate-limited channels.
This can be useful for TSN streams, and generally whenever h/w
rate-limited channels are needed.
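A sketch of that ordering rule, rejecting a rate on a channel while a
lower-numbered (higher-priority) TX channel is still unlimited
(identifiers illustrative):

    int i;

    for (i = 0; i < chan->chan_num; i++)
            if (is_tx_chan(ctlr->channels[i]) && !ctlr->channels[i]->rate)
                    return -EINVAL; /* unlimited channels must sit above */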
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The weight of a channel is needed to split descriptors between
channels. The weight can depend on the maximum rate of a channel, the
maximum rate of an interface, or other factors. The channel weight is
expressed as a percentage and is independent for RX and TX channels.
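A sketch of the resulting split (illustrative): each channel's share of
the per-direction descriptor pool is simply its weight applied to that
pool:

    /* weight is a percentage of the rx or tx descriptor budget;
     * channels without a weight share the remainder equally */
    static int cpdma_chan_split(int dir_desc_num, int ch_weight)
    {
            return dir_desc_num * ch_weight / 100;
    }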
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Keep the driver internals in the C file. Currently, drivers don't need
to know whether a channel is RX or TX, except for the create function.
So correct the "channel create" function, and keep all channel struct
macros for internal use only.
Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The cpsw h/w supports up to 8 TX and 8 RX channels. This patch adds
multi-queue support to the driver only; shaper configuration will be
added with a separate patch series. The default shaper mode, as
before, is priority mode, but with corrected priority order: 0 is the
highest priority, 7 the lowest.
The poll function handles all unprocessed channels, until all of them
are freed, beginning with the highest-priority channel.
In dual_emac mode the channels are shared between the two network
devices, as with the single-queue default mode.
The statistics for every channel can be read with:
$ ethtool -S ethX
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
TX channels share the same pool of descriptors. Thus one channel can
block another if the pool is emptied by one of them. But the shaper
should decide which channel is allowed to send packets. To avoid such
interference between channels, let every channel have its own piece of
the pool, as sketched below.
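A sketch of the per-channel cap this introduces (fields as in the
cpdma channel struct; error handling abbreviated):

    /* refuse to queue once this channel holds its piece of the
     * shared pool, so it can never starve the other channels */
    if (chan->count >= chan->desc_num)
            return -ENOMEM;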
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Such a big dump of register values is hardly useful on a production
system.
Another downside of the now-removed functions is that calling
emac_dump_regs resulted in at least 87 calls to dev_info while holding
a spinlock with IRQs off, which is a big source of latency.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no reason for the rx_descs property, because the
davinci_cpdma driver splits the pool of descriptors equally between TX
and RX channels. That is, if the number of descriptors is 256, 128 of
them are for the RX channels. While receiving, a descriptor is freed
back to the pool and then allocated again with a new skb. And if
"rx_descs" is set to 64 in DT, then 128 - 64 = 64 descriptors always
sit in the pool and cannot be used, for TX for instance. That is not
correct resource usage; it is better to set it to half of the pool, so
the RX half can be used in full. This has no impact on performance, as
the "redundant" descriptors were unused anyway.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The gfp_mask argument of cpdma_chan_submit() is not used and is always
set to GFP_KERNEL, even in atomic sections. This patch drops it since
it is unused.
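For reference, the resulting signature (as used elsewhere in this
series):

    int cpdma_chan_submit(struct cpdma_chan *chan, void *token,
                          void *data, int len, int directed);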
Acked-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
CPDMA interrupts are not properly acknowledged, which leads to an
interrupt storm; only cpdma interrupt 0 is acknowledged in the Davinci
CPDMA driver. Change the cpdma_ctlr_eoi API to acknowledge interrupts
1 and 2, which are used for RX and TX respectively.
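A sketch of the resulting acknowledgment, with the EOI vector values
from the description above (the macro names are illustrative):

    #define CPDMA_EOI_RX 1
    #define CPDMA_EOI_TX 2

    cpdma_ctlr_eoi(ctlr, CPDMA_EOI_RX);   /* after rx interrupt handling */
    cpdma_ctlr_eoi(ctlr, CPDMA_EOI_TX);   /* after tx interrupt handling */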
Reported-by: Pantelis Antoniou <panto@antoniou-consulting.com>
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Introduce a parameter to pass the port number for directed packets
in cpdma_chan_submit
* Add a source port detection macro based on the DMA descriptor status
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When there is heavy transmission traffic in the CPDMA, RX descriptor
memory is also utilized as TX descriptor memory; the driver then loses
all RX descriptors and stops working.
This patch adds a boundary between TX and RX descriptors in BD RAM,
dividing the descriptor memory to ensure that during heavy
transmission TX doesn't use RX descriptors.
This was already done for the davinci_emac driver; since CPSW and
davinci_emac share the same CPDMA, move the boundary separation from
the Davinci EMAC driver to the CPDMA driver. It was originally
introduced in the following commit:
commit 86d8c07ff2
Author: Sascha Hauer <s.hauer@pengutronix.de>
Date: Tue Jan 3 05:27:47 2012 +0000
net/davinci: do not use all descriptors for tx packets
The driver uses a shared pool for both rx and tx descriptors.
During open it queues fixed number of 128 descriptors for receive
packets. For each received packet it tries to queue another
descriptor. If this fails the descriptor is lost for rx.
The driver has no limitation on the number of tx descriptors to use,
so it can happen during an nmap / ping -f attack that the driver
allocates all descriptors for tx and loses all rx descriptors. The
driver stops working then.
To fix this, limit the number of tx descriptors used to half of the
descriptors available; the rx path uses the other half.
Tested on a custom board using nmap / ping -f to the board from
two different hosts.
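A sketch of the boundary this patch moves into the CPDMA driver
(identifiers illustrative): the shared pool is divided so the TX side
can never drain the RX side:

    pool->num_desc = pool_size / sizeof(struct cpdma_desc);
    /* boundary: half of the descriptors for rx, half for tx */
    ctlr->num_rx_desc = pool->num_desc / 2;
    ctlr->num_tx_desc = pool->num_desc - ctlr->num_rx_desc;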
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>