Merge branch 'sxgbe'

Byungho An says:

====================
This is the 14th posting of the SAMSUNG SXGBE driver.

Changes since v1:
- changed name of driver to SXGbE as per Ben's comment
- squashed Joe's neatening of many things in the original patches

Changes since v2:
- updated and split binding document as per Mark's comment
- cleaned up code as per Joe's comment
- removed unused fields and cleaned up code as per Francois's comment
- removed module parameters as per Dave's comment
- moved driver directory to samsung/sxgbe/

Changes since v3:
- fixed missing blank lines after declarations as per Dave's comment
- cleaned up code as per Joe's comment
- removed reference of net_device.{irq, base_addr} as per Francois's comment

Changes since v4:
- updated binding document and DT related function as per Mark's comment

Changes since v5:
- updated binding document and DT related function as per Florian's comment
- fixed typo and shortened code as per Joe's comment

Changes since v6:
- updated TSO related functions as per Rayagond's comment
- updated binding document as per Mark's comment
- removed WoL patch from this patch set

Changes since v7:
- updated TSO related functions as per Rayagond's comment

Changes since v8:
- removed select and depends statement from vendor sub-section as per
  Dave's comment

Changes since v9:
- removed adv-add-map, force-sf-dma-mode and force-thresh-dma-mode from
  binding document as per Mark's comment

Changes since v10:
- cleaned up code as per Francois's comment

Changes since v11:
- cleaned up mdio read/write code as per Francois's comment
- changed irq acquisition error path as per Francois's comment
- updated mdio and platform related code as per Tomasz's comment
- cleaned up DMA related code as per Vince's comment

Changes since v12:
- fixed typo

Changes since v13:
- cleaned up error path code for irqs as per Francois's comment
- removed unsupported ethtool functions as per Ben's comment
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Author: David S. Miller <davem@davemloft.net>
Date:   2014-03-27 13:07:45 -04:00
commit 1dbe136938
24 changed files with 6513 additions and 0 deletions


@@ -0,0 +1,52 @@
* Samsung 10G Ethernet driver (SXGBE)
Required properties:
- compatible: Should be "samsung,sxgbe-v2.0a"
- reg: Address and length of the register set for the device
- interrupt-parent: Should be the phandle for the interrupt controller
that services interrupts for this device
- interrupts: Should contain the SXGBE interrupts
These interrupts are ordered with the fixed interrupt first, followed by the
variable transmit DMA interrupts, receive DMA interrupts and the lpi interrupt.
index 0 - the fixed common interrupt of SXGBE; it is always
available.
index 1 to 25 - 8 variable transmit interrupts, 16 variable receive interrupts
and 1 optional lpi interrupt.
- phy-mode: String, operation mode of the PHY interface.
Supported values are: "sgmii", "xgmii".
- samsung,pbl: Integer, Programmable Burst Length.
Supported values are 1, 2, 4, 8, 16, or 32.
- samsung,burst-map: Integer, programs the possible bursts supported by sxgbe.
This is an integer and represents the allowable DMA bursts when fixed burst
is used. The allowable range is 0x01-0x3F. When this field is set, fixed burst
is enabled. When a fixed length is needed for burst mode, it can be set within
the allowable range.
Optional properties:
- mac-address: 6 bytes, mac address
- max-frame-size: Maximum Transfer Unit (IEEE defined MTU), rather
than the maximum frame size.
Example:
aliases {
ethernet0 = <&sxgbe0>;
};
sxgbe0: ethernet@1a040000 {
compatible = "samsung,sxgbe-v2.0a";
reg = <0 0x1a040000 0 0x10000>;
interrupt-parent = <&gic>;
interrupts = <0 209 4>, <0 185 4>, <0 186 4>, <0 187 4>,
<0 188 4>, <0 189 4>, <0 190 4>, <0 191 4>,
<0 192 4>, <0 193 4>, <0 194 4>, <0 195 4>,
<0 196 4>, <0 197 4>, <0 198 4>, <0 199 4>,
<0 200 4>, <0 201 4>, <0 202 4>, <0 203 4>,
<0 204 4>, <0 205 4>, <0 206 4>, <0 207 4>,
<0 208 4>, <0 210 4>;
samsung,pbl = <0x08>;
samsung,burst-map = <0x20>;
mac-address = [ 00 11 22 33 44 55 ]; /* Filled in by U-Boot */
max-frame-size = <9000>;
phy-mode = "xgmii";
};
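
Note: as a rough sketch of how a platform probe could consume the two Samsung-specific properties above using the generic OF helpers. This is illustrative only; the driver's actual parsing lives in sxgbe_platform.c, which is not shown in this excerpt, and the structure and field names below are assumptions, not the driver's own.

#include <linux/of.h>
#include <linux/platform_device.h>

/* Hypothetical configuration struct -- field names are illustrative only. */
struct example_sxgbe_dt_cfg {
	u32 pbl;
	u32 burst_map;
};

static int example_parse_sxgbe_dt(struct platform_device *pdev,
				  struct example_sxgbe_dt_cfg *cfg)
{
	struct device_node *np = pdev->dev.of_node;

	/* Both properties are required by the binding above. */
	if (of_property_read_u32(np, "samsung,pbl", &cfg->pbl))
		return -EINVAL;
	if (of_property_read_u32(np, "samsung,burst-map", &cfg->burst_map))
		return -EINVAL;

	return 0;
}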


@@ -7550,6 +7550,15 @@ S: Supported
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/clk/samsung/
SAMSUNG SXGBE DRIVERS
M: Byungho An <bh74.an@samsung.com>
M: Girish K S <ks.giri@samsung.com>
M: Siva Reddy Kallam <siva.kallam@samsung.com>
M: Vipul Pandya <vipul.pandya@samsung.com>
S: Supported
L: netdev@vger.kernel.org
F: drivers/net/ethernet/samsung/sxgbe/
SERIAL DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org


@@ -150,6 +150,7 @@ config S6GMAC
To compile this driver as a module, choose M here. The module
will be called s6gmac.
source "drivers/net/ethernet/samsung/Kconfig"
source "drivers/net/ethernet/seeq/Kconfig"
source "drivers/net/ethernet/silan/Kconfig"
source "drivers/net/ethernet/sis/Kconfig"


@@ -61,6 +61,7 @@ obj-$(CONFIG_NET_VENDOR_REALTEK) += realtek/
obj-$(CONFIG_SH_ETH) += renesas/
obj-$(CONFIG_NET_VENDOR_RDC) += rdc/
obj-$(CONFIG_S6GMAC) += s6gmac.o
obj-$(CONFIG_NET_VENDOR_SAMSUNG) += samsung/
obj-$(CONFIG_NET_VENDOR_SEEQ) += seeq/
obj-$(CONFIG_NET_VENDOR_SILAN) += silan/
obj-$(CONFIG_NET_VENDOR_SIS) += sis/


@@ -0,0 +1,16 @@
#
# Samsung Ethernet device configuration
#
config NET_VENDOR_SAMSUNG
bool "Samsung Ethernet device"
default y
---help---
This is the driver for the SXGBE 10G Ethernet IP block found on Samsung
platforms.
if NET_VENDOR_SAMSUNG
source "drivers/net/ethernet/samsung/sxgbe/Kconfig"
endif # NET_VENDOR_SAMSUNG


@@ -0,0 +1,5 @@
#
# Makefile for the Samsung Ethernet device drivers.
#
obj-$(CONFIG_SXGBE_ETH) += sxgbe/


@@ -0,0 +1,9 @@
config SXGBE_ETH
tristate "Samsung 10G/2.5G/1G SXGBE Ethernet driver"
depends on HAS_IOMEM && HAS_DMA
select PHYLIB
select CRC32
select PTP_1588_CLOCK
---help---
This is the driver for the SXGBE 10G Ethernet IP block found on Samsung
platforms.


@@ -0,0 +1,4 @@
obj-$(CONFIG_SXGBE_ETH) += samsung-sxgbe.o
samsung-sxgbe-objs:= sxgbe_platform.o sxgbe_main.o sxgbe_desc.o \
sxgbe_dma.o sxgbe_core.o sxgbe_mtl.o sxgbe_mdio.o \
sxgbe_ethtool.o sxgbe_xpcs.o $(samsung-sxgbe-y)


@@ -0,0 +1,535 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_COMMON_H__
#define __SXGBE_COMMON_H__
/* forward references */
struct sxgbe_desc_ops;
struct sxgbe_dma_ops;
struct sxgbe_mtl_ops;
#define SXGBE_RESOURCE_NAME "sam_sxgbeeth"
#define DRV_MODULE_VERSION "November_2013"
/* MAX HW feature words */
#define SXGBE_HW_WORDS 3
#define SXGBE_RX_COE_NONE 0
/* CSR Frequency Access Defines */
#define SXGBE_CSR_F_150M 150000000
#define SXGBE_CSR_F_250M 250000000
#define SXGBE_CSR_F_300M 300000000
#define SXGBE_CSR_F_350M 350000000
#define SXGBE_CSR_F_400M 400000000
#define SXGBE_CSR_F_500M 500000000
/* pause time */
#define SXGBE_PAUSE_TIME 0x200
/* tx queues */
#define SXGBE_TX_QUEUES 8
#define SXGBE_RX_QUEUES 16
/* Calculated based on how much time it takes to fill 256KB of Rx memory
* at 10Gb speed with a 156MHz clock rate, set a little less than
* the actual value.
*/
#define SXGBE_MAX_DMA_RIWT 0x70
#define SXGBE_MIN_DMA_RIWT 0x01
/* Tx coalesce parameters */
#define SXGBE_COAL_TX_TIMER 40000
#define SXGBE_MAX_COAL_TX_TICK 100000
#define SXGBE_TX_MAX_FRAMES 512
#define SXGBE_TX_FRAMES 128
/* SXGBE TX FIFO is 8K, Rx FIFO is 16K */
#define BUF_SIZE_16KiB 16384
#define BUF_SIZE_8KiB 8192
#define BUF_SIZE_4KiB 4096
#define BUF_SIZE_2KiB 2048
#define SXGBE_DEFAULT_LIT_LS 0x3E8
#define SXGBE_DEFAULT_TWT_LS 0x0
/* Flow Control defines */
#define SXGBE_FLOW_OFF 0
#define SXGBE_FLOW_RX 1
#define SXGBE_FLOW_TX 2
#define SXGBE_FLOW_AUTO (SXGBE_FLOW_TX | SXGBE_FLOW_RX)
#define SF_DMA_MODE 1 /* DMA STORE-AND-FORWARD Operation Mode */
/* errors */
#define RX_GMII_ERR 0x01
#define RX_WATCHDOG_ERR 0x02
#define RX_CRC_ERR 0x03
#define RX_GAINT_ERR 0x04
#define RX_IP_HDR_ERR 0x05
#define RX_PAYLOAD_ERR 0x06
#define RX_OVERFLOW_ERR 0x07
/* pkt type */
#define RX_LEN_PKT 0x00
#define RX_MACCTL_PKT 0x01
#define RX_DCBCTL_PKT 0x02
#define RX_ARP_PKT 0x03
#define RX_OAM_PKT 0x04
#define RX_UNTAG_PKT 0x05
#define RX_OTHER_PKT 0x07
#define RX_SVLAN_PKT 0x08
#define RX_CVLAN_PKT 0x09
#define RX_DVLAN_OCVLAN_ICVLAN_PKT 0x0A
#define RX_DVLAN_OSVLAN_ISVLAN_PKT 0x0B
#define RX_DVLAN_OSVLAN_ICVLAN_PKT 0x0C
#define RX_DVLAN_OCVLAN_ISVLAN_PKT 0x0D
#define RX_NOT_IP_PKT 0x00
#define RX_IPV4_TCP_PKT 0x01
#define RX_IPV4_UDP_PKT 0x02
#define RX_IPV4_ICMP_PKT 0x03
#define RX_IPV4_UNKNOWN_PKT 0x07
#define RX_IPV6_TCP_PKT 0x09
#define RX_IPV6_UDP_PKT 0x0A
#define RX_IPV6_ICMP_PKT 0x0B
#define RX_IPV6_UNKNOWN_PKT 0x0F
#define RX_NO_PTP 0x00
#define RX_PTP_SYNC 0x01
#define RX_PTP_FOLLOW_UP 0x02
#define RX_PTP_DELAY_REQ 0x03
#define RX_PTP_DELAY_RESP 0x04
#define RX_PTP_PDELAY_REQ 0x05
#define RX_PTP_PDELAY_RESP 0x06
#define RX_PTP_PDELAY_FOLLOW_UP 0x07
#define RX_PTP_ANNOUNCE 0x08
#define RX_PTP_MGMT 0x09
#define RX_PTP_SIGNAL 0x0A
#define RX_PTP_RESV_MSG 0x0F
/* EEE-LPI mode flags */
#define TX_ENTRY_LPI_MODE 0x10
#define TX_EXIT_LPI_MODE 0x20
#define RX_ENTRY_LPI_MODE 0x40
#define RX_EXIT_LPI_MODE 0x80
/* EEE-LPI Interrupt status flag */
#define LPI_INT_STATUS BIT(5)
/* EEE-LPI Default timer values */
#define LPI_LINK_STATUS_TIMER 0x3E8
#define LPI_MAC_WAIT_TIMER 0x00
/* EEE-LPI Control and status definitions */
#define LPI_CTRL_STATUS_TXA BIT(19)
#define LPI_CTRL_STATUS_PLSDIS BIT(18)
#define LPI_CTRL_STATUS_PLS BIT(17)
#define LPI_CTRL_STATUS_LPIEN BIT(16)
#define LPI_CTRL_STATUS_TXRSTP BIT(11)
#define LPI_CTRL_STATUS_RXRSTP BIT(10)
#define LPI_CTRL_STATUS_RLPIST BIT(9)
#define LPI_CTRL_STATUS_TLPIST BIT(8)
#define LPI_CTRL_STATUS_RLPIEX BIT(3)
#define LPI_CTRL_STATUS_RLPIEN BIT(2)
#define LPI_CTRL_STATUS_TLPIEX BIT(1)
#define LPI_CTRL_STATUS_TLPIEN BIT(0)
enum dma_irq_status {
tx_hard_error = BIT(0),
tx_bump_tc = BIT(1),
handle_tx = BIT(2),
rx_hard_error = BIT(3),
rx_bump_tc = BIT(4),
handle_rx = BIT(5),
};
#define NETIF_F_HW_VLAN_ALL (NETIF_F_HW_VLAN_CTAG_RX | \
NETIF_F_HW_VLAN_STAG_RX | \
NETIF_F_HW_VLAN_CTAG_TX | \
NETIF_F_HW_VLAN_STAG_TX | \
NETIF_F_HW_VLAN_CTAG_FILTER | \
NETIF_F_HW_VLAN_STAG_FILTER)
/* MMC control defines */
#define SXGBE_MMC_CTRL_CNT_FRZ 0x00000008
/* SXGBE HW ADDR regs */
#define SXGBE_ADDR_HIGH(reg) (((reg > 15) ? 0x00000800 : 0x00000040) + \
(reg * 8))
#define SXGBE_ADDR_LOW(reg) (((reg > 15) ? 0x00000804 : 0x00000044) + \
(reg * 8))
#define SXGBE_MAX_PERFECT_ADDRESSES 32 /* Maximum unicast perfect filtering */
#define SXGBE_FRAME_FILTER 0x00000004 /* Frame Filter */
/* SXGBE Frame Filter defines */
#define SXGBE_FRAME_FILTER_PR 0x00000001 /* Promiscuous Mode */
#define SXGBE_FRAME_FILTER_HUC 0x00000002 /* Hash Unicast */
#define SXGBE_FRAME_FILTER_HMC 0x00000004 /* Hash Multicast */
#define SXGBE_FRAME_FILTER_DAIF 0x00000008 /* DA Inverse Filtering */
#define SXGBE_FRAME_FILTER_PM 0x00000010 /* Pass all multicast */
#define SXGBE_FRAME_FILTER_DBF 0x00000020 /* Disable Broadcast frames */
#define SXGBE_FRAME_FILTER_SAIF 0x00000100 /* Inverse Filtering */
#define SXGBE_FRAME_FILTER_SAF 0x00000200 /* Source Address Filter */
#define SXGBE_FRAME_FILTER_HPF 0x00000400 /* Hash or perfect Filter */
#define SXGBE_FRAME_FILTER_RA 0x80000000 /* Receive all mode */
#define SXGBE_HASH_TABLE_SIZE 64
#define SXGBE_HASH_HIGH 0x00000008 /* Multicast Hash Table High */
#define SXGBE_HASH_LOW 0x0000000c /* Multicast Hash Table Low */
#define SXGBE_HI_REG_AE 0x80000000
/* Minimum and maximum MTU */
#define MIN_MTU 68
#define MAX_MTU 9000
#define SXGBE_FOR_EACH_QUEUE(max_queues, queue_num) \
for (queue_num = 0; queue_num < max_queues; queue_num++)
#define DRV_VERSION "1.0.0"
#define SXGBE_MAX_RX_CHANNELS 16
#define SXGBE_MAX_TX_CHANNELS 16
#define START_MAC_REG_OFFSET 0x0000
#define MAX_MAC_REG_OFFSET 0x0DFC
#define START_MTL_REG_OFFSET 0x1000
#define MAX_MTL_REG_OFFSET 0x18FC
#define START_DMA_REG_OFFSET 0x3000
#define MAX_DMA_REG_OFFSET 0x38FC
#define REG_SPACE_SIZE 0x2000
/* sxgbe statistics counters */
struct sxgbe_extra_stats {
/* TX/RX IRQ events */
unsigned long tx_underflow_irq;
unsigned long tx_process_stopped_irq;
unsigned long tx_ctxt_desc_err;
unsigned long tx_threshold;
unsigned long rx_threshold;
unsigned long tx_pkt_n;
unsigned long rx_pkt_n;
unsigned long normal_irq_n;
unsigned long tx_normal_irq_n;
unsigned long rx_normal_irq_n;
unsigned long napi_poll;
unsigned long tx_clean;
unsigned long tx_reset_ic_bit;
unsigned long rx_process_stopped_irq;
unsigned long rx_underflow_irq;
/* Bus access errors */
unsigned long fatal_bus_error_irq;
unsigned long tx_read_transfer_err;
unsigned long tx_write_transfer_err;
unsigned long tx_desc_access_err;
unsigned long tx_buffer_access_err;
unsigned long tx_data_transfer_err;
unsigned long rx_read_transfer_err;
unsigned long rx_write_transfer_err;
unsigned long rx_desc_access_err;
unsigned long rx_buffer_access_err;
unsigned long rx_data_transfer_err;
/* EEE-LPI stats */
unsigned long tx_lpi_entry_n;
unsigned long tx_lpi_exit_n;
unsigned long rx_lpi_entry_n;
unsigned long rx_lpi_exit_n;
unsigned long eee_wakeup_error_n;
/* RX specific */
/* L2 error */
unsigned long rx_code_gmii_err;
unsigned long rx_watchdog_err;
unsigned long rx_crc_err;
unsigned long rx_gaint_pkt_err;
unsigned long ip_hdr_err;
unsigned long ip_payload_err;
unsigned long overflow_error;
/* L2 Pkt type */
unsigned long len_pkt;
unsigned long mac_ctl_pkt;
unsigned long dcb_ctl_pkt;
unsigned long arp_pkt;
unsigned long oam_pkt;
unsigned long untag_okt;
unsigned long other_pkt;
unsigned long svlan_tag_pkt;
unsigned long cvlan_tag_pkt;
unsigned long dvlan_ocvlan_icvlan_pkt;
unsigned long dvlan_osvlan_isvlan_pkt;
unsigned long dvlan_osvlan_icvlan_pkt;
unsigned long dvan_ocvlan_icvlan_pkt;
/* L3/L4 Pkt type */
unsigned long not_ip_pkt;
unsigned long ip4_tcp_pkt;
unsigned long ip4_udp_pkt;
unsigned long ip4_icmp_pkt;
unsigned long ip4_unknown_pkt;
unsigned long ip6_tcp_pkt;
unsigned long ip6_udp_pkt;
unsigned long ip6_icmp_pkt;
unsigned long ip6_unknown_pkt;
/* Filter specific */
unsigned long vlan_filter_match;
unsigned long sa_filter_fail;
unsigned long da_filter_fail;
unsigned long hash_filter_pass;
unsigned long l3_filter_match;
unsigned long l4_filter_match;
/* RX context specific */
unsigned long timestamp_dropped;
unsigned long rx_msg_type_no_ptp;
unsigned long rx_ptp_type_sync;
unsigned long rx_ptp_type_follow_up;
unsigned long rx_ptp_type_delay_req;
unsigned long rx_ptp_type_delay_resp;
unsigned long rx_ptp_type_pdelay_req;
unsigned long rx_ptp_type_pdelay_resp;
unsigned long rx_ptp_type_pdelay_follow_up;
unsigned long rx_ptp_announce;
unsigned long rx_ptp_mgmt;
unsigned long rx_ptp_signal;
unsigned long rx_ptp_resv_msg_type;
};
struct mac_link {
int port;
int duplex;
int speed;
};
struct mii_regs {
unsigned int addr; /* MII Address */
unsigned int data; /* MII Data */
};
struct sxgbe_core_ops {
/* MAC core initialization */
void (*core_init)(void __iomem *ioaddr);
/* Dump MAC registers */
void (*dump_regs)(void __iomem *ioaddr);
/* Handle extra events on specific interrupts hw dependent */
int (*host_irq_status)(void __iomem *ioaddr,
struct sxgbe_extra_stats *x);
/* Set power management mode (e.g. magic frame) */
void (*pmt)(void __iomem *ioaddr, unsigned long mode);
/* Set/Get Unicast MAC addresses */
void (*set_umac_addr)(void __iomem *ioaddr, unsigned char *addr,
unsigned int reg_n);
void (*get_umac_addr)(void __iomem *ioaddr, unsigned char *addr,
unsigned int reg_n);
void (*enable_rx)(void __iomem *ioaddr, bool enable);
void (*enable_tx)(void __iomem *ioaddr, bool enable);
/* controller version specific operations */
int (*get_controller_version)(void __iomem *ioaddr);
/* If supported then get the optional core features */
unsigned int (*get_hw_feature)(void __iomem *ioaddr,
unsigned char feature_index);
/* adjust SXGBE speed */
void (*set_speed)(void __iomem *ioaddr, unsigned char speed);
/* EEE-LPI specific operations */
void (*set_eee_mode)(void __iomem *ioaddr);
void (*reset_eee_mode)(void __iomem *ioaddr);
void (*set_eee_timer)(void __iomem *ioaddr, const int ls,
const int tw);
void (*set_eee_pls)(void __iomem *ioaddr, const int link);
/* Enable disable checksum offload operations */
void (*enable_rx_csum)(void __iomem *ioaddr);
void (*disable_rx_csum)(void __iomem *ioaddr);
};
const struct sxgbe_core_ops *sxgbe_get_core_ops(void);
struct sxgbe_ops {
const struct sxgbe_core_ops *mac;
const struct sxgbe_desc_ops *desc;
const struct sxgbe_dma_ops *dma;
const struct sxgbe_mtl_ops *mtl;
struct mii_regs mii; /* MII register Addresses */
struct mac_link link;
unsigned int ctrl_uid;
unsigned int ctrl_id;
};
/* SXGBE private data structures */
struct sxgbe_tx_queue {
unsigned int irq_no;
struct sxgbe_priv_data *priv_ptr;
struct sxgbe_tx_norm_desc *dma_tx;
dma_addr_t dma_tx_phy;
dma_addr_t *tx_skbuff_dma;
struct sk_buff **tx_skbuff;
struct timer_list txtimer;
spinlock_t tx_lock; /* lock for tx queues */
unsigned int cur_tx;
unsigned int dirty_tx;
u32 tx_count_frames;
u32 tx_coal_frames;
u32 tx_coal_timer;
int hwts_tx_en;
u16 prev_mss;
u8 queue_no;
};
struct sxgbe_rx_queue {
struct sxgbe_priv_data *priv_ptr;
struct sxgbe_rx_norm_desc *dma_rx;
struct sk_buff **rx_skbuff;
unsigned int cur_rx;
unsigned int dirty_rx;
unsigned int irq_no;
u32 rx_riwt;
dma_addr_t *rx_skbuff_dma;
dma_addr_t dma_rx_phy;
u8 queue_no;
};
/* SXGBE HW capabilities */
struct sxgbe_hw_features {
/****** CAP [0] *******/
unsigned int pmt_remote_wake_up;
unsigned int pmt_magic_frame;
/* IEEE 1588-2008 */
unsigned int atime_stamp;
unsigned int eee;
unsigned int tx_csum_offload;
unsigned int rx_csum_offload;
unsigned int multi_macaddr;
unsigned int tstamp_srcselect;
unsigned int sa_vlan_insert;
/****** CAP [1] *******/
unsigned int rxfifo_size;
unsigned int txfifo_size;
unsigned int atstmap_hword;
unsigned int dcb_enable;
unsigned int splithead_enable;
unsigned int tcpseg_offload;
unsigned int debug_mem;
unsigned int rss_enable;
unsigned int hash_tsize;
unsigned int l3l4_filer_size;
/* This value is in bytes, as
* mentioned in the HW features
* section of the SXGBE data book
*/
unsigned int rx_mtl_qsize;
unsigned int tx_mtl_qsize;
/****** CAP [2] *******/
/* TX and RX number of channels */
unsigned int rx_mtl_queues;
unsigned int tx_mtl_queues;
unsigned int rx_dma_channels;
unsigned int tx_dma_channels;
unsigned int pps_output_count;
unsigned int aux_input_count;
};
struct sxgbe_priv_data {
/* DMA descriptors */
struct sxgbe_tx_queue *txq[SXGBE_TX_QUEUES];
struct sxgbe_rx_queue *rxq[SXGBE_RX_QUEUES];
u8 cur_rx_qnum;
unsigned int dma_tx_size;
unsigned int dma_rx_size;
unsigned int dma_buf_sz;
u32 rx_riwt;
struct napi_struct napi;
void __iomem *ioaddr;
struct net_device *dev;
struct device *device;
struct sxgbe_ops *hw; /* sxgbe specific ops */
int no_csum_insertion;
int irq;
int rxcsum_insertion;
spinlock_t stats_lock; /* lock for tx/rx statistics */
struct phy_device *phydev;
int oldlink;
int speed;
int oldduplex;
struct mii_bus *mii;
int mii_irq[PHY_MAX_ADDR];
u8 rx_pause;
u8 tx_pause;
struct sxgbe_extra_stats xstats;
struct sxgbe_plat_data *plat;
struct sxgbe_hw_features hw_cap;
u32 msg_enable;
struct clk *sxgbe_clk;
int clk_csr;
unsigned int mode;
unsigned int default_addend;
/* advanced time stamp support */
u32 adv_ts;
int use_riwt;
struct ptp_clock *ptp_clock;
/* tc control */
int tx_tc;
int rx_tc;
/* EEE-LPI specific members */
struct timer_list eee_ctrl_timer;
bool tx_path_in_lpi_mode;
int lpi_irq;
int eee_enabled;
int eee_active;
int tx_lpi_timer;
};
/* Function prototypes */
struct sxgbe_priv_data *sxgbe_drv_probe(struct device *device,
struct sxgbe_plat_data *plat_dat,
void __iomem *addr);
int sxgbe_drv_remove(struct net_device *ndev);
void sxgbe_set_ethtool_ops(struct net_device *netdev);
int sxgbe_mdio_unregister(struct net_device *ndev);
int sxgbe_mdio_register(struct net_device *ndev);
int sxgbe_register_platform(void);
void sxgbe_unregister_platform(void);
#ifdef CONFIG_PM
int sxgbe_suspend(struct net_device *ndev);
int sxgbe_resume(struct net_device *ndev);
int sxgbe_freeze(struct net_device *ndev);
int sxgbe_restore(struct net_device *ndev);
#endif /* CONFIG_PM */
const struct sxgbe_mtl_ops *sxgbe_get_mtl_ops(void);
void sxgbe_disable_eee_mode(struct sxgbe_priv_data * const priv);
bool sxgbe_eee_init(struct sxgbe_priv_data * const priv);
#endif /* __SXGBE_COMMON_H__ */
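
The header above mostly declares the per-block ops tables and the private data that ties them together. Below is a minimal sketch of how those pieces are meant to be wired up at probe time; it is illustrative only (the driver's actual wiring is in sxgbe_main.c, outside this excerpt) and assumes sxgbe_common.h, sxgbe_desc.h and sxgbe_dma.h are included.

static int example_init_hw_ops(struct sxgbe_priv_data *priv)
{
	priv->hw = devm_kzalloc(priv->device, sizeof(*priv->hw), GFP_KERNEL);
	if (!priv->hw)
		return -ENOMEM;

	/* Hook up the per-block ops tables declared above. */
	priv->hw->mac  = sxgbe_get_core_ops();
	priv->hw->desc = sxgbe_get_desc_ops();
	priv->hw->dma  = sxgbe_get_dma_ops();
	priv->hw->mtl  = sxgbe_get_mtl_ops();

	/* Everything else then goes through the tables, e.g.: */
	priv->hw->mac->core_init(priv->ioaddr);

	return 0;
}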


@@ -0,0 +1,262 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/export.h>
#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/phy.h>
#include "sxgbe_common.h"
#include "sxgbe_reg.h"
/* MAC core initialization */
static void sxgbe_core_init(void __iomem *ioaddr)
{
u32 regval;
/* TX configuration */
regval = readl(ioaddr + SXGBE_CORE_TX_CONFIG_REG);
/* Other configurable parameters IFP, IPG, ISR, ISM
* need to be set if needed
*/
regval |= SXGBE_TX_JABBER_DISABLE;
writel(regval, ioaddr + SXGBE_CORE_TX_CONFIG_REG);
/* RX configuration */
regval = readl(ioaddr + SXGBE_CORE_RX_CONFIG_REG);
/* Other configurable parameters CST, SPEN, USP, GPSLCE
* WD, LM, S2KP, HDSMS, GPSL, ELEN, ARPEN need to be
* set if needed
*/
regval |= SXGBE_RX_JUMBPKT_ENABLE | SXGBE_RX_ACS_ENABLE;
writel(regval, ioaddr + SXGBE_CORE_RX_CONFIG_REG);
}
/* Dump MAC registers */
static void sxgbe_core_dump_regs(void __iomem *ioaddr)
{
}
static int sxgbe_get_lpi_status(void __iomem *ioaddr, const u32 irq_status)
{
int status = 0;
int lpi_status;
/* Reading this register shall clear all the LPI status bits */
lpi_status = readl(ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
if (lpi_status & LPI_CTRL_STATUS_TLPIEN)
status |= TX_ENTRY_LPI_MODE;
if (lpi_status & LPI_CTRL_STATUS_TLPIEX)
status |= TX_EXIT_LPI_MODE;
if (lpi_status & LPI_CTRL_STATUS_RLPIEN)
status |= RX_ENTRY_LPI_MODE;
if (lpi_status & LPI_CTRL_STATUS_RLPIEX)
status |= RX_EXIT_LPI_MODE;
return status;
}
/* Handle extra events on specific interrupts hw dependent */
static int sxgbe_core_host_irq_status(void __iomem *ioaddr,
struct sxgbe_extra_stats *x)
{
int irq_status, status = 0;
irq_status = readl(ioaddr + SXGBE_CORE_INT_STATUS_REG);
if (unlikely(irq_status & LPI_INT_STATUS))
status |= sxgbe_get_lpi_status(ioaddr, irq_status);
return status;
}
/* Set power management mode (e.g. magic frame) */
static void sxgbe_core_pmt(void __iomem *ioaddr, unsigned long mode)
{
}
/* Set/Get Unicast MAC addresses */
static void sxgbe_core_set_umac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int reg_n)
{
u32 high_word, low_word;
high_word = (addr[5] << 8) | (addr[4]);
low_word = ((addr[3] << 24) | (addr[2] << 16) |
(addr[1] << 8) | (addr[0]));
writel(high_word, ioaddr + SXGBE_CORE_ADD_HIGHOFFSET(reg_n));
writel(low_word, ioaddr + SXGBE_CORE_ADD_LOWOFFSET(reg_n));
}
static void sxgbe_core_get_umac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int reg_n)
{
u32 high_word, low_word;
high_word = readl(ioaddr + SXGBE_CORE_ADD_HIGHOFFSET(reg_n));
low_word = readl(ioaddr + SXGBE_CORE_ADD_LOWOFFSET(reg_n));
/* extract and assign address */
addr[5] = (high_word & 0x0000FF00) >> 8;
addr[4] = (high_word & 0x000000FF);
addr[3] = (low_word & 0xFF000000) >> 24;
addr[2] = (low_word & 0x00FF0000) >> 16;
addr[1] = (low_word & 0x0000FF00) >> 8;
addr[0] = (low_word & 0x000000FF);
}
static void sxgbe_enable_tx(void __iomem *ioaddr, bool enable)
{
u32 tx_config;
tx_config = readl(ioaddr + SXGBE_CORE_TX_CONFIG_REG);
tx_config &= ~SXGBE_TX_ENABLE;
if (enable)
tx_config |= SXGBE_TX_ENABLE;
writel(tx_config, ioaddr + SXGBE_CORE_TX_CONFIG_REG);
}
static void sxgbe_enable_rx(void __iomem *ioaddr, bool enable)
{
u32 rx_config;
rx_config = readl(ioaddr + SXGBE_CORE_RX_CONFIG_REG);
rx_config &= ~SXGBE_RX_ENABLE;
if (enable)
rx_config |= SXGBE_RX_ENABLE;
writel(rx_config, ioaddr + SXGBE_CORE_RX_CONFIG_REG);
}
static int sxgbe_get_controller_version(void __iomem *ioaddr)
{
return readl(ioaddr + SXGBE_CORE_VERSION_REG);
}
/* If supported then get the optional core features */
static unsigned int sxgbe_get_hw_feature(void __iomem *ioaddr,
unsigned char feature_index)
{
return readl(ioaddr + (SXGBE_CORE_HW_FEA_REG(feature_index)));
}
static void sxgbe_core_set_speed(void __iomem *ioaddr, unsigned char speed)
{
u32 tx_cfg = readl(ioaddr + SXGBE_CORE_TX_CONFIG_REG);
/* clear the speed bits */
tx_cfg &= ~0x60000000;
tx_cfg |= (speed << SXGBE_SPEED_LSHIFT);
/* set the speed */
writel(tx_cfg, ioaddr + SXGBE_CORE_TX_CONFIG_REG);
}
static void sxgbe_set_eee_mode(void __iomem *ioaddr)
{
u32 ctrl;
/* Enable the LPI mode for transmit path with Tx automate bit set.
* When Tx Automate bit is set, MAC internally handles the entry
* to LPI mode after all outstanding and pending packets are
* transmitted.
*/
ctrl = readl(ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
ctrl |= LPI_CTRL_STATUS_LPIEN | LPI_CTRL_STATUS_TXA;
writel(ctrl, ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
}
static void sxgbe_reset_eee_mode(void __iomem *ioaddr)
{
u32 ctrl;
ctrl = readl(ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
ctrl &= ~(LPI_CTRL_STATUS_LPIEN | LPI_CTRL_STATUS_TXA);
writel(ctrl, ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
}
static void sxgbe_set_eee_pls(void __iomem *ioaddr, const int link)
{
u32 ctrl;
ctrl = readl(ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
/* If the PHY link status is UP then set PLS */
if (link)
ctrl |= LPI_CTRL_STATUS_PLS;
else
ctrl &= ~LPI_CTRL_STATUS_PLS;
writel(ctrl, ioaddr + SXGBE_CORE_LPI_CTRL_STATUS);
}
static void sxgbe_set_eee_timer(void __iomem *ioaddr,
const int ls, const int tw)
{
int value = ((tw & 0xffff)) | ((ls & 0x7ff) << 16);
/* Program the timers in the LPI timer control register:
* LS: minimum time (ms) for which the link
* status from PHY should be ok before transmitting
* the LPI pattern.
* TW: minimum time (us) for which the core waits
* after it has stopped transmitting the LPI pattern.
*/
writel(value, ioaddr + SXGBE_CORE_LPI_TIMER_CTRL);
}
static void sxgbe_enable_rx_csum(void __iomem *ioaddr)
{
u32 ctrl;
ctrl = readl(ioaddr + SXGBE_CORE_RX_CONFIG_REG);
ctrl |= SXGBE_RX_CSUMOFFLOAD_ENABLE;
writel(ctrl, ioaddr + SXGBE_CORE_RX_CONFIG_REG);
}
static void sxgbe_disable_rx_csum(void __iomem *ioaddr)
{
u32 ctrl;
ctrl = readl(ioaddr + SXGBE_CORE_RX_CONFIG_REG);
ctrl &= ~SXGBE_RX_CSUMOFFLOAD_ENABLE;
writel(ctrl, ioaddr + SXGBE_CORE_RX_CONFIG_REG);
}
const struct sxgbe_core_ops core_ops = {
.core_init = sxgbe_core_init,
.dump_regs = sxgbe_core_dump_regs,
.host_irq_status = sxgbe_core_host_irq_status,
.pmt = sxgbe_core_pmt,
.set_umac_addr = sxgbe_core_set_umac_addr,
.get_umac_addr = sxgbe_core_get_umac_addr,
.enable_rx = sxgbe_enable_rx,
.enable_tx = sxgbe_enable_tx,
.get_controller_version = sxgbe_get_controller_version,
.get_hw_feature = sxgbe_get_hw_feature,
.set_speed = sxgbe_core_set_speed,
.set_eee_mode = sxgbe_set_eee_mode,
.reset_eee_mode = sxgbe_reset_eee_mode,
.set_eee_timer = sxgbe_set_eee_timer,
.set_eee_pls = sxgbe_set_eee_pls,
.enable_rx_csum = sxgbe_enable_rx_csum,
.disable_rx_csum = sxgbe_disable_rx_csum,
};
const struct sxgbe_core_ops *sxgbe_get_core_ops(void)
{
return &core_ops;
}
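
A quick worked example of the byte packing done by sxgbe_core_set_umac_addr() above, using the MAC address from the device tree example earlier in this patch set. Purely illustrative; the helper below is not part of the driver.

static void example_umac_packing(void)
{
	/* addr = 00:11:22:33:44:55, i.e. addr[0] = 0x00 ... addr[5] = 0x55 */
	u8 addr[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	u32 high_word = (addr[5] << 8) | addr[4];	/* 0x00005544 */
	u32 low_word = (addr[3] << 24) | (addr[2] << 16) |
		       (addr[1] << 8) | addr[0];	/* 0x33221100 */

	/* high_word/low_word are what sxgbe_core_set_umac_addr() writes to
	 * SXGBE_CORE_ADD_HIGHOFFSET(n)/SXGBE_CORE_ADD_LOWOFFSET(n), and
	 * sxgbe_core_get_umac_addr() reverses exactly this layout.
	 */
	(void)high_word;
	(void)low_word;
}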


@@ -0,0 +1,515 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/bitops.h>
#include <linux/export.h>
#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/phy.h>
#include "sxgbe_common.h"
#include "sxgbe_dma.h"
#include "sxgbe_desc.h"
/* DMA TX descriptor ring initialization */
static void sxgbe_init_tx_desc(struct sxgbe_tx_norm_desc *p)
{
p->tdes23.tx_rd_des23.own_bit = 0;
}
static void sxgbe_tx_desc_enable_tse(struct sxgbe_tx_norm_desc *p, u8 is_tse,
u32 total_hdr_len, u32 tcp_hdr_len,
u32 tcp_payload_len)
{
p->tdes23.tx_rd_des23.tse_bit = is_tse;
p->tdes23.tx_rd_des23.buf1_size = total_hdr_len;
p->tdes23.tx_rd_des23.tcp_hdr_len = tcp_hdr_len / 4;
p->tdes23.tx_rd_des23.tx_pkt_len.tcp_payload_len = tcp_payload_len;
}
/* Assign buffer lengths for descriptor */
static void sxgbe_prepare_tx_desc(struct sxgbe_tx_norm_desc *p, u8 is_fd,
int buf1_len, int pkt_len, int cksum)
{
p->tdes23.tx_rd_des23.first_desc = is_fd;
p->tdes23.tx_rd_des23.buf1_size = buf1_len;
p->tdes23.tx_rd_des23.tx_pkt_len.cksum_pktlen.total_pkt_len = pkt_len;
if (cksum)
p->tdes23.tx_rd_des23.tx_pkt_len.cksum_pktlen.cksum_ctl = cic_full;
}
/* Set VLAN control information */
static void sxgbe_tx_vlanctl_desc(struct sxgbe_tx_norm_desc *p, int vlan_ctl)
{
p->tdes23.tx_rd_des23.vlan_tag_ctl = vlan_ctl;
}
/* Set the owner of Normal descriptor */
static void sxgbe_set_tx_owner(struct sxgbe_tx_norm_desc *p)
{
p->tdes23.tx_rd_des23.own_bit = 1;
}
/* Get the owner of Normal descriptor */
static int sxgbe_get_tx_owner(struct sxgbe_tx_norm_desc *p)
{
return p->tdes23.tx_rd_des23.own_bit;
}
/* Invoked by the xmit function to close the tx descriptor */
static void sxgbe_close_tx_desc(struct sxgbe_tx_norm_desc *p)
{
p->tdes23.tx_rd_des23.last_desc = 1;
p->tdes23.tx_rd_des23.int_on_com = 1;
}
/* Clean the tx descriptor as soon as the tx irq is received */
static void sxgbe_release_tx_desc(struct sxgbe_tx_norm_desc *p)
{
memset(p, 0, sizeof(*p));
}
/* Clear interrupt on tx frame completion. When this bit is
* set an interrupt happens as soon as the frame is transmitted
*/
static void sxgbe_clear_tx_ic(struct sxgbe_tx_norm_desc *p)
{
p->tdes23.tx_rd_des23.int_on_com = 0;
}
/* Last tx segment reports the transmit status */
static int sxgbe_get_tx_ls(struct sxgbe_tx_norm_desc *p)
{
return p->tdes23.tx_rd_des23.last_desc;
}
/* Get the buffer size from the descriptor */
static int sxgbe_get_tx_len(struct sxgbe_tx_norm_desc *p)
{
return p->tdes23.tx_rd_des23.buf1_size;
}
/* Set tx timestamp enable bit */
static void sxgbe_tx_enable_tstamp(struct sxgbe_tx_norm_desc *p)
{
p->tdes23.tx_rd_des23.timestmp_enable = 1;
}
/* get tx timestamp status */
static int sxgbe_get_tx_timestamp_status(struct sxgbe_tx_norm_desc *p)
{
return p->tdes23.tx_rd_des23.timestmp_enable;
}
/* TX Context Descriptor Specific */
static void sxgbe_tx_ctxt_desc_set_ctxt(struct sxgbe_tx_ctxt_desc *p)
{
p->ctxt_bit = 1;
}
/* Set the owner of TX context descriptor */
static void sxgbe_tx_ctxt_desc_set_owner(struct sxgbe_tx_ctxt_desc *p)
{
p->own_bit = 1;
}
/* Get the owner of TX context descriptor */
static int sxgbe_tx_ctxt_desc_get_owner(struct sxgbe_tx_ctxt_desc *p)
{
return p->own_bit;
}
/* Set TX mss in TX context Descriptor */
static void sxgbe_tx_ctxt_desc_set_mss(struct sxgbe_tx_ctxt_desc *p, u16 mss)
{
p->maxseg_size = mss;
}
/* Get TX mss from TX context Descriptor */
static int sxgbe_tx_ctxt_desc_get_mss(struct sxgbe_tx_ctxt_desc *p)
{
return p->maxseg_size;
}
/* Set TX tcmssv in TX context Descriptor */
static void sxgbe_tx_ctxt_desc_set_tcmssv(struct sxgbe_tx_ctxt_desc *p)
{
p->tcmssv = 1;
}
/* Reset TX ostc in TX context Descriptor */
static void sxgbe_tx_ctxt_desc_reset_ostc(struct sxgbe_tx_ctxt_desc *p)
{
p->ostc = 0;
}
/* Set IVLAN information */
static void sxgbe_tx_ctxt_desc_set_ivlantag(struct sxgbe_tx_ctxt_desc *p,
int is_ivlanvalid, int ivlan_tag,
int ivlan_ctl)
{
if (is_ivlanvalid) {
p->ivlan_tag_valid = is_ivlanvalid;
p->ivlan_tag = ivlan_tag;
p->ivlan_tag_ctl = ivlan_ctl;
}
}
/* Return IVLAN Tag */
static int sxgbe_tx_ctxt_desc_get_ivlantag(struct sxgbe_tx_ctxt_desc *p)
{
return p->ivlan_tag;
}
/* Set VLAN Tag */
static void sxgbe_tx_ctxt_desc_set_vlantag(struct sxgbe_tx_ctxt_desc *p,
int is_vlanvalid, int vlan_tag)
{
if (is_vlanvalid) {
p->vltag_valid = is_vlanvalid;
p->vlan_tag = vlan_tag;
}
}
/* Return VLAN Tag */
static int sxgbe_tx_ctxt_desc_get_vlantag(struct sxgbe_tx_ctxt_desc *p)
{
return p->vlan_tag;
}
/* Set Time stamp */
static void sxgbe_tx_ctxt_desc_set_tstamp(struct sxgbe_tx_ctxt_desc *p,
u8 ostc_enable, u64 tstamp)
{
if (ostc_enable) {
p->ostc = ostc_enable;
p->tstamp_lo = (u32) tstamp;
p->tstamp_hi = (u32) (tstamp>>32);
}
}
/* Close TX context descriptor */
static void sxgbe_tx_ctxt_desc_close(struct sxgbe_tx_ctxt_desc *p)
{
p->own_bit = 1;
}
/* WB status of context descriptor */
static int sxgbe_tx_ctxt_desc_get_cde(struct sxgbe_tx_ctxt_desc *p)
{
return p->ctxt_desc_err;
}
/* DMA RX descriptor ring initialization */
static void sxgbe_init_rx_desc(struct sxgbe_rx_norm_desc *p, int disable_rx_ic,
int mode, int end)
{
p->rdes23.rx_rd_des23.own_bit = 1;
if (disable_rx_ic)
p->rdes23.rx_rd_des23.int_on_com = disable_rx_ic;
}
/* Get RX own bit */
static int sxgbe_get_rx_owner(struct sxgbe_rx_norm_desc *p)
{
return p->rdes23.rx_rd_des23.own_bit;
}
/* Set RX own bit */
static void sxgbe_set_rx_owner(struct sxgbe_rx_norm_desc *p)
{
p->rdes23.rx_rd_des23.own_bit = 1;
}
/* Get the receive frame size */
static int sxgbe_get_rx_frame_len(struct sxgbe_rx_norm_desc *p)
{
return p->rdes23.rx_wb_des23.pkt_len;
}
/* Return first Descriptor status */
static int sxgbe_get_rx_fd_status(struct sxgbe_rx_norm_desc *p)
{
return p->rdes23.rx_wb_des23.first_desc;
}
/* Return Last Descriptor status */
static int sxgbe_get_rx_ld_status(struct sxgbe_rx_norm_desc *p)
{
return p->rdes23.rx_wb_des23.last_desc;
}
/* Return the RX status looking at the WB fields */
static int sxgbe_rx_wbstatus(struct sxgbe_rx_norm_desc *p,
struct sxgbe_extra_stats *x, int *checksum)
{
int status = 0;
*checksum = CHECKSUM_UNNECESSARY;
if (p->rdes23.rx_wb_des23.err_summary) {
switch (p->rdes23.rx_wb_des23.err_l2_type) {
case RX_GMII_ERR:
status = -EINVAL;
x->rx_code_gmii_err++;
break;
case RX_WATCHDOG_ERR:
status = -EINVAL;
x->rx_watchdog_err++;
break;
case RX_CRC_ERR:
status = -EINVAL;
x->rx_crc_err++;
break;
case RX_GAINT_ERR:
status = -EINVAL;
x->rx_gaint_pkt_err++;
break;
case RX_IP_HDR_ERR:
*checksum = CHECKSUM_NONE;
x->ip_hdr_err++;
break;
case RX_PAYLOAD_ERR:
*checksum = CHECKSUM_NONE;
x->ip_payload_err++;
break;
case RX_OVERFLOW_ERR:
status = -EINVAL;
x->overflow_error++;
break;
default:
pr_err("Invalid Error type\n");
break;
}
} else {
switch (p->rdes23.rx_wb_des23.err_l2_type) {
case RX_LEN_PKT:
x->len_pkt++;
break;
case RX_MACCTL_PKT:
x->mac_ctl_pkt++;
break;
case RX_DCBCTL_PKT:
x->dcb_ctl_pkt++;
break;
case RX_ARP_PKT:
x->arp_pkt++;
break;
case RX_OAM_PKT:
x->oam_pkt++;
break;
case RX_UNTAG_PKT:
x->untag_okt++;
break;
case RX_OTHER_PKT:
x->other_pkt++;
break;
case RX_SVLAN_PKT:
x->svlan_tag_pkt++;
break;
case RX_CVLAN_PKT:
x->cvlan_tag_pkt++;
break;
case RX_DVLAN_OCVLAN_ICVLAN_PKT:
x->dvlan_ocvlan_icvlan_pkt++;
break;
case RX_DVLAN_OSVLAN_ISVLAN_PKT:
x->dvlan_osvlan_isvlan_pkt++;
break;
case RX_DVLAN_OSVLAN_ICVLAN_PKT:
x->dvlan_osvlan_icvlan_pkt++;
break;
case RX_DVLAN_OCVLAN_ISVLAN_PKT:
x->dvan_ocvlan_icvlan_pkt++;
break;
default:
pr_err("Invalid L2 Packet type\n");
break;
}
}
/* L3/L4 Pkt type */
switch (p->rdes23.rx_wb_des23.layer34_pkt_type) {
case RX_NOT_IP_PKT:
x->not_ip_pkt++;
break;
case RX_IPV4_TCP_PKT:
x->ip4_tcp_pkt++;
break;
case RX_IPV4_UDP_PKT:
x->ip4_udp_pkt++;
break;
case RX_IPV4_ICMP_PKT:
x->ip4_icmp_pkt++;
break;
case RX_IPV4_UNKNOWN_PKT:
x->ip4_unknown_pkt++;
break;
case RX_IPV6_TCP_PKT:
x->ip6_tcp_pkt++;
break;
case RX_IPV6_UDP_PKT:
x->ip6_udp_pkt++;
break;
case RX_IPV6_ICMP_PKT:
x->ip6_icmp_pkt++;
break;
case RX_IPV6_UNKNOWN_PKT:
x->ip6_unknown_pkt++;
break;
default:
pr_err("Invalid L3/L4 Packet type\n");
break;
}
/* Filter */
if (p->rdes23.rx_wb_des23.vlan_filter_match)
x->vlan_filter_match++;
if (p->rdes23.rx_wb_des23.sa_filter_fail) {
status = -EINVAL;
x->sa_filter_fail++;
}
if (p->rdes23.rx_wb_des23.da_filter_fail) {
status = -EINVAL;
x->da_filter_fail++;
}
if (p->rdes23.rx_wb_des23.hash_filter_pass)
x->hash_filter_pass++;
if (p->rdes23.rx_wb_des23.l3_filter_match)
x->l3_filter_match++;
if (p->rdes23.rx_wb_des23.l4_filter_match)
x->l4_filter_match++;
return status;
}
/* Get own bit of context descriptor */
static int sxgbe_get_rx_ctxt_owner(struct sxgbe_rx_ctxt_desc *p)
{
return p->own_bit;
}
/* Set own bit for context descriptor */
static void sxgbe_set_ctxt_rx_owner(struct sxgbe_rx_ctxt_desc *p)
{
p->own_bit = 1;
}
/* Return the reception status looking at Context control information */
static void sxgbe_rx_ctxt_wbstatus(struct sxgbe_rx_ctxt_desc *p,
struct sxgbe_extra_stats *x)
{
if (p->tstamp_dropped)
x->timestamp_dropped++;
/* ptp */
if (p->ptp_msgtype == RX_NO_PTP)
x->rx_msg_type_no_ptp++;
else if (p->ptp_msgtype == RX_PTP_SYNC)
x->rx_ptp_type_sync++;
else if (p->ptp_msgtype == RX_PTP_FOLLOW_UP)
x->rx_ptp_type_follow_up++;
else if (p->ptp_msgtype == RX_PTP_DELAY_REQ)
x->rx_ptp_type_delay_req++;
else if (p->ptp_msgtype == RX_PTP_DELAY_RESP)
x->rx_ptp_type_delay_resp++;
else if (p->ptp_msgtype == RX_PTP_PDELAY_REQ)
x->rx_ptp_type_pdelay_req++;
else if (p->ptp_msgtype == RX_PTP_PDELAY_RESP)
x->rx_ptp_type_pdelay_resp++;
else if (p->ptp_msgtype == RX_PTP_PDELAY_FOLLOW_UP)
x->rx_ptp_type_pdelay_follow_up++;
else if (p->ptp_msgtype == RX_PTP_ANNOUNCE)
x->rx_ptp_announce++;
else if (p->ptp_msgtype == RX_PTP_MGMT)
x->rx_ptp_mgmt++;
else if (p->ptp_msgtype == RX_PTP_SIGNAL)
x->rx_ptp_signal++;
else if (p->ptp_msgtype == RX_PTP_RESV_MSG)
x->rx_ptp_resv_msg_type++;
}
/* Get rx timestamp status */
static int sxgbe_get_rx_ctxt_tstamp_status(struct sxgbe_rx_ctxt_desc *p)
{
if ((p->tstamp_hi == 0xffffffff) && (p->tstamp_lo == 0xffffffff)) {
pr_err("Time stamp corrupted\n");
return 0;
}
return p->tstamp_available;
}
static u64 sxgbe_get_rx_timestamp(struct sxgbe_rx_ctxt_desc *p)
{
u64 ns;
ns = p->tstamp_lo;
ns |= ((u64)p->tstamp_hi) << 32;
return ns;
}
static const struct sxgbe_desc_ops desc_ops = {
.init_tx_desc = sxgbe_init_tx_desc,
.tx_desc_enable_tse = sxgbe_tx_desc_enable_tse,
.prepare_tx_desc = sxgbe_prepare_tx_desc,
.tx_vlanctl_desc = sxgbe_tx_vlanctl_desc,
.set_tx_owner = sxgbe_set_tx_owner,
.get_tx_owner = sxgbe_get_tx_owner,
.close_tx_desc = sxgbe_close_tx_desc,
.release_tx_desc = sxgbe_release_tx_desc,
.clear_tx_ic = sxgbe_clear_tx_ic,
.get_tx_ls = sxgbe_get_tx_ls,
.get_tx_len = sxgbe_get_tx_len,
.tx_enable_tstamp = sxgbe_tx_enable_tstamp,
.get_tx_timestamp_status = sxgbe_get_tx_timestamp_status,
.tx_ctxt_desc_set_ctxt = sxgbe_tx_ctxt_desc_set_ctxt,
.tx_ctxt_desc_set_owner = sxgbe_tx_ctxt_desc_set_owner,
.get_tx_ctxt_owner = sxgbe_tx_ctxt_desc_get_owner,
.tx_ctxt_desc_set_mss = sxgbe_tx_ctxt_desc_set_mss,
.tx_ctxt_desc_get_mss = sxgbe_tx_ctxt_desc_get_mss,
.tx_ctxt_desc_set_tcmssv = sxgbe_tx_ctxt_desc_set_tcmssv,
.tx_ctxt_desc_reset_ostc = sxgbe_tx_ctxt_desc_reset_ostc,
.tx_ctxt_desc_set_ivlantag = sxgbe_tx_ctxt_desc_set_ivlantag,
.tx_ctxt_desc_get_ivlantag = sxgbe_tx_ctxt_desc_get_ivlantag,
.tx_ctxt_desc_set_vlantag = sxgbe_tx_ctxt_desc_set_vlantag,
.tx_ctxt_desc_get_vlantag = sxgbe_tx_ctxt_desc_get_vlantag,
.tx_ctxt_set_tstamp = sxgbe_tx_ctxt_desc_set_tstamp,
.close_tx_ctxt_desc = sxgbe_tx_ctxt_desc_close,
.get_tx_ctxt_cde = sxgbe_tx_ctxt_desc_get_cde,
.init_rx_desc = sxgbe_init_rx_desc,
.get_rx_owner = sxgbe_get_rx_owner,
.set_rx_owner = sxgbe_set_rx_owner,
.get_rx_frame_len = sxgbe_get_rx_frame_len,
.get_rx_fd_status = sxgbe_get_rx_fd_status,
.get_rx_ld_status = sxgbe_get_rx_ld_status,
.rx_wbstatus = sxgbe_rx_wbstatus,
.get_rx_ctxt_owner = sxgbe_get_rx_ctxt_owner,
.set_rx_ctxt_owner = sxgbe_set_ctxt_rx_owner,
.rx_ctxt_wbstatus = sxgbe_rx_ctxt_wbstatus,
.get_rx_ctxt_tstamp_status = sxgbe_get_rx_ctxt_tstamp_status,
.get_timestamp = sxgbe_get_rx_timestamp,
};
const struct sxgbe_desc_ops *sxgbe_get_desc_ops(void)
{
return &desc_ops;
}
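
As an illustration of how the descriptor ops above are intended to be driven on the transmit side, here is a deliberately simplified single-buffer, non-TSO sequence. The driver's real xmit path in sxgbe_main.c additionally programs the buffer address, handles ring wrap, TSO and interrupt coalescing, none of which is shown; the helper below is a sketch, not driver code.

static void example_fill_one_tx_desc(const struct sxgbe_desc_ops *ops,
				     struct sxgbe_tx_norm_desc *p,
				     int len, int cksum)
{
	ops->init_tx_desc(p);				/* clear the own bit */
	ops->prepare_tx_desc(p, 1, len, len, cksum);	/* first desc, lengths */
	ops->close_tx_desc(p);				/* last desc + IRQ on completion */
	ops->set_tx_owner(p);				/* hand the descriptor to the DMA */
}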


@@ -0,0 +1,298 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_DESC_H__
#define __SXGBE_DESC_H__
#define SXGBE_DESC_SIZE_BYTES 16
/* forward declaration */
struct sxgbe_extra_stats;
/* Transmit checksum insertion control */
enum tdes_csum_insertion {
cic_disabled = 0, /* Checksum Insertion Control */
cic_only_ip = 1, /* Only IP header */
/* IP header but pseudoheader is not calculated */
cic_no_pseudoheader = 2,
cic_full = 3, /* IP header and pseudoheader */
};
struct sxgbe_tx_norm_desc {
u64 tdes01; /* buf1 address */
union {
/* TX Read-Format Desc 2,3 */
struct {
/* TDES2 */
u32 buf1_size:14;
u32 vlan_tag_ctl:2;
u32 buf2_size:14;
u32 timestmp_enable:1;
u32 int_on_com:1;
/* TDES3 */
union {
u32 tcp_payload_len:18;
struct {
u32 total_pkt_len:15;
u32 reserved1:1;
u32 cksum_ctl:2;
} cksum_pktlen;
} tx_pkt_len;
u32 tse_bit:1;
u32 tcp_hdr_len:4;
u32 sa_insert_ctl:3;
u32 crc_pad_ctl:2;
u32 last_desc:1;
u32 first_desc:1;
u32 ctxt_bit:1;
u32 own_bit:1;
} tx_rd_des23;
/* tx write back Desc 2,3 */
struct {
/* WB TES2 */
u32 reserved1;
/* WB TES3 */
u32 reserved2:31;
u32 own_bit:1;
} tx_wb_des23;
} tdes23;
};
struct sxgbe_rx_norm_desc {
union {
u32 rdes0; /* buf1 address */
struct {
u32 out_vlan_tag:16;
u32 in_vlan_tag:16;
} wb_rx_des0;
} rd_wb_des0;
union {
u32 rdes1; /* buf2 address or buf1[63:32] */
u32 rss_hash; /* Write-back RX */
} rd_wb_des1;
union {
/* RX Read format Desc 2,3 */
struct{
/* RDES2 */
u32 buf2_addr;
/* RDES3 */
u32 buf2_hi_addr:30;
u32 int_on_com:1;
u32 own_bit:1;
} rx_rd_des23;
/* RX write back */
struct{
/* WB RDES2 */
u32 hdr_len:10;
u32 rdes2_reserved:2;
u32 elrd_val:1;
u32 iovt_sel:1;
u32 res_pkt:1;
u32 vlan_filter_match:1;
u32 sa_filter_fail:1;
u32 da_filter_fail:1;
u32 hash_filter_pass:1;
u32 macaddr_filter_match:8;
u32 l3_filter_match:1;
u32 l4_filter_match:1;
u32 l34_filter_num:3;
/* WB RDES3 */
u32 pkt_len:14;
u32 rdes3_reserved:1;
u32 err_summary:1;
u32 err_l2_type:4;
u32 layer34_pkt_type:4;
u32 no_coagulation_pkt:1;
u32 in_seq_pkt:1;
u32 rss_valid:1;
u32 context_des_avail:1;
u32 last_desc:1;
u32 first_desc:1;
u32 recv_context_desc:1;
u32 own_bit:1;
} rx_wb_des23;
} rdes23;
};
/* Context descriptor structure */
struct sxgbe_tx_ctxt_desc {
u32 tstamp_lo;
u32 tstamp_hi;
u32 maxseg_size:15;
u32 reserved1:1;
u32 ivlan_tag:16;
u32 vlan_tag:16;
u32 vltag_valid:1;
u32 ivlan_tag_valid:1;
u32 ivlan_tag_ctl:2;
u32 reserved2:3;
u32 ctxt_desc_err:1;
u32 reserved3:2;
u32 ostc:1;
u32 tcmssv:1;
u32 reserved4:2;
u32 ctxt_bit:1;
u32 own_bit:1;
};
struct sxgbe_rx_ctxt_desc {
u32 tstamp_lo;
u32 tstamp_hi;
u32 reserved1;
u32 ptp_msgtype:4;
u32 tstamp_available:1;
u32 ptp_rsp_err:1;
u32 tstamp_dropped:1;
u32 reserved2:23;
u32 rx_ctxt_desc:1;
u32 own_bit:1;
};
struct sxgbe_desc_ops {
/* DMA TX descriptor ring initialization */
void (*init_tx_desc)(struct sxgbe_tx_norm_desc *p);
/* Invoked by the xmit function to prepare the tx descriptor */
void (*tx_desc_enable_tse)(struct sxgbe_tx_norm_desc *p, u8 is_tse,
u32 total_hdr_len, u32 tcp_hdr_len,
u32 tcp_payload_len);
/* Assign buffer lengths for descriptor */
void (*prepare_tx_desc)(struct sxgbe_tx_norm_desc *p, u8 is_fd,
int buf1_len, int pkt_len, int cksum);
/* Set VLAN control information */
void (*tx_vlanctl_desc)(struct sxgbe_tx_norm_desc *p, int vlan_ctl);
/* Set the owner of the descriptor */
void (*set_tx_owner)(struct sxgbe_tx_norm_desc *p);
/* Get the owner of the descriptor */
int (*get_tx_owner)(struct sxgbe_tx_norm_desc *p);
/* Invoked by the xmit function to close the tx descriptor */
void (*close_tx_desc)(struct sxgbe_tx_norm_desc *p);
/* Clean the tx descriptor as soon as the tx irq is received */
void (*release_tx_desc)(struct sxgbe_tx_norm_desc *p);
/* Clear interrupt on tx frame completion. When this bit is
* set an interrupt happens as soon as the frame is transmitted
*/
void (*clear_tx_ic)(struct sxgbe_tx_norm_desc *p);
/* Last tx segment reports the transmit status */
int (*get_tx_ls)(struct sxgbe_tx_norm_desc *p);
/* Get the buffer size from the descriptor */
int (*get_tx_len)(struct sxgbe_tx_norm_desc *p);
/* Set tx timestamp enable bit */
void (*tx_enable_tstamp)(struct sxgbe_tx_norm_desc *p);
/* get tx timestamp status */
int (*get_tx_timestamp_status)(struct sxgbe_tx_norm_desc *p);
/* TX Context Descriptor Specific */
void (*tx_ctxt_desc_set_ctxt)(struct sxgbe_tx_ctxt_desc *p);
/* Set the owner of the TX context descriptor */
void (*tx_ctxt_desc_set_owner)(struct sxgbe_tx_ctxt_desc *p);
/* Get the owner of the TX context descriptor */
int (*get_tx_ctxt_owner)(struct sxgbe_tx_ctxt_desc *p);
/* Set TX mss */
void (*tx_ctxt_desc_set_mss)(struct sxgbe_tx_ctxt_desc *p, u16 mss);
/* Get TX mss */
int (*tx_ctxt_desc_get_mss)(struct sxgbe_tx_ctxt_desc *p);
/* Set TX tcmssv */
void (*tx_ctxt_desc_set_tcmssv)(struct sxgbe_tx_ctxt_desc *p);
/* Reset TX ostc */
void (*tx_ctxt_desc_reset_ostc)(struct sxgbe_tx_ctxt_desc *p);
/* Set IVLAN information */
void (*tx_ctxt_desc_set_ivlantag)(struct sxgbe_tx_ctxt_desc *p,
int is_ivlanvalid, int ivlan_tag,
int ivlan_ctl);
/* Return IVLAN Tag */
int (*tx_ctxt_desc_get_ivlantag)(struct sxgbe_tx_ctxt_desc *p);
/* Set VLAN Tag */
void (*tx_ctxt_desc_set_vlantag)(struct sxgbe_tx_ctxt_desc *p,
int is_vlanvalid, int vlan_tag);
/* Return VLAN Tag */
int (*tx_ctxt_desc_get_vlantag)(struct sxgbe_tx_ctxt_desc *p);
/* Set Time stamp */
void (*tx_ctxt_set_tstamp)(struct sxgbe_tx_ctxt_desc *p,
u8 ostc_enable, u64 tstamp);
/* Close TX context descriptor */
void (*close_tx_ctxt_desc)(struct sxgbe_tx_ctxt_desc *p);
/* WB status of context descriptor */
int (*get_tx_ctxt_cde)(struct sxgbe_tx_ctxt_desc *p);
/* DMA RX descriptor ring initialization */
void (*init_rx_desc)(struct sxgbe_rx_norm_desc *p, int disable_rx_ic,
int mode, int end);
/* Get own bit */
int (*get_rx_owner)(struct sxgbe_rx_norm_desc *p);
/* Set own bit */
void (*set_rx_owner)(struct sxgbe_rx_norm_desc *p);
/* Get the receive frame size */
int (*get_rx_frame_len)(struct sxgbe_rx_norm_desc *p);
/* Return first Descriptor status */
int (*get_rx_fd_status)(struct sxgbe_rx_norm_desc *p);
/* Return Last Descriptor status */
int (*get_rx_ld_status)(struct sxgbe_rx_norm_desc *p);
/* Return the reception status looking at the write-back fields */
int (*rx_wbstatus)(struct sxgbe_rx_norm_desc *p,
struct sxgbe_extra_stats *x, int *checksum);
/* Get own bit */
int (*get_rx_ctxt_owner)(struct sxgbe_rx_ctxt_desc *p);
/* Set own bit */
void (*set_rx_ctxt_owner)(struct sxgbe_rx_ctxt_desc *p);
/* Return the reception status looking at Context control information */
void (*rx_ctxt_wbstatus)(struct sxgbe_rx_ctxt_desc *p,
struct sxgbe_extra_stats *x);
/* Get rx timestamp status */
int (*get_rx_ctxt_tstamp_status)(struct sxgbe_rx_ctxt_desc *p);
/* Get timestamp value for rx, need to check this */
u64 (*get_timestamp)(struct sxgbe_rx_ctxt_desc *p);
};
const struct sxgbe_desc_ops *sxgbe_get_desc_ops(void);
#endif /* __SXGBE_DESC_H__ */
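
And the matching receive-side sketch, again simplified and illustrative only: buffer refill, skb handling and timestamping are left out, and the real RX clean loop lives in sxgbe_main.c, which is not part of this excerpt.

/* Returns the frame length, or a negative value if the descriptor is
 * still owned by the DMA or reported an L2 error in its write-back.
 */
static int example_check_rx_desc(const struct sxgbe_desc_ops *ops,
				 struct sxgbe_rx_norm_desc *p,
				 struct sxgbe_extra_stats *x)
{
	int checksum;

	if (ops->get_rx_owner(p))
		return -EBUSY;

	if (ops->rx_wbstatus(p, x, &checksum) < 0)
		return -EIO;

	/* checksum is CHECKSUM_UNNECESSARY or CHECKSUM_NONE here and
	 * would normally end up in skb->ip_summed.
	 */
	return ops->get_rx_frame_len(p);
}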


@@ -0,0 +1,382 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/phy.h>
#include "sxgbe_common.h"
#include "sxgbe_dma.h"
#include "sxgbe_reg.h"
#include "sxgbe_desc.h"
/* DMA core initialization */
static int sxgbe_dma_init(void __iomem *ioaddr, int fix_burst, int burst_map)
{
int retry_count = 10;
u32 reg_val;
/* reset the DMA */
writel(SXGBE_DMA_SOFT_RESET, ioaddr + SXGBE_DMA_MODE_REG);
while (retry_count--) {
if (!(readl(ioaddr + SXGBE_DMA_MODE_REG) &
SXGBE_DMA_SOFT_RESET))
break;
mdelay(10);
}
if (retry_count < 0)
return -EBUSY;
reg_val = readl(ioaddr + SXGBE_DMA_SYSBUS_MODE_REG);
/* if fix_burst = 0, Set UNDEF = 1 of DMA_Sys_Mode Register.
* if fix_burst = 1, Set UNDEF = 0 of DMA_Sys_Mode Register.
* burst_map is bitmap for BLEN[4, 8, 16, 32, 64, 128 and 256].
* Set burst_map irrespective of fix_burst value.
*/
if (!fix_burst)
reg_val |= SXGBE_DMA_AXI_UNDEF_BURST;
/* write burst len map */
reg_val |= (burst_map << SXGBE_DMA_BLENMAP_LSHIFT);
writel(reg_val, ioaddr + SXGBE_DMA_SYSBUS_MODE_REG);
return 0;
}
static void sxgbe_dma_channel_init(void __iomem *ioaddr, int cha_num,
int fix_burst, int pbl, dma_addr_t dma_tx,
dma_addr_t dma_rx, int t_rsize, int r_rsize)
{
u32 reg_val;
dma_addr_t dma_addr;
reg_val = readl(ioaddr + SXGBE_DMA_CHA_CTL_REG(cha_num));
/* set the pbl */
if (fix_burst) {
reg_val |= SXGBE_DMA_PBL_X8MODE;
writel(reg_val, ioaddr + SXGBE_DMA_CHA_CTL_REG(cha_num));
/* program the TX pbl */
reg_val = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cha_num));
reg_val |= (pbl << SXGBE_DMA_TXPBL_LSHIFT);
writel(reg_val, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cha_num));
/* program the RX pbl */
reg_val = readl(ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cha_num));
reg_val |= (pbl << SXGBE_DMA_RXPBL_LSHIFT);
writel(reg_val, ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cha_num));
}
/* program desc registers */
writel(upper_32_bits(dma_tx),
ioaddr + SXGBE_DMA_CHA_TXDESC_HADD_REG(cha_num));
writel(lower_32_bits(dma_tx),
ioaddr + SXGBE_DMA_CHA_TXDESC_LADD_REG(cha_num));
writel(upper_32_bits(dma_rx),
ioaddr + SXGBE_DMA_CHA_RXDESC_HADD_REG(cha_num));
writel(lower_32_bits(dma_rx),
ioaddr + SXGBE_DMA_CHA_RXDESC_LADD_REG(cha_num));
/* program tail pointers */
/* assumption: upper 32 bits are constant and
* same as TX/RX desc list
*/
dma_addr = dma_tx + ((t_rsize - 1) * SXGBE_DESC_SIZE_BYTES);
writel(lower_32_bits(dma_addr),
ioaddr + SXGBE_DMA_CHA_TXDESC_TAILPTR_REG(cha_num));
dma_addr = dma_rx + ((r_rsize - 1) * SXGBE_DESC_SIZE_BYTES);
writel(lower_32_bits(dma_addr),
ioaddr + SXGBE_DMA_CHA_RXDESC_LADD_REG(cha_num));
/* program the ring sizes */
writel(t_rsize - 1, ioaddr + SXGBE_DMA_CHA_TXDESC_RINGLEN_REG(cha_num));
writel(r_rsize - 1, ioaddr + SXGBE_DMA_CHA_RXDESC_RINGLEN_REG(cha_num));
/* Enable TX/RX interrupts */
writel(SXGBE_DMA_ENA_INT,
ioaddr + SXGBE_DMA_CHA_INT_ENABLE_REG(cha_num));
}
static void sxgbe_enable_dma_transmission(void __iomem *ioaddr, int cha_num)
{
u32 tx_config;
tx_config = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cha_num));
tx_config |= SXGBE_TX_START_DMA;
writel(tx_config, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cha_num));
}
static void sxgbe_enable_dma_irq(void __iomem *ioaddr, int dma_cnum)
{
/* Enable TX/RX interrupts */
writel(SXGBE_DMA_ENA_INT,
ioaddr + SXGBE_DMA_CHA_INT_ENABLE_REG(dma_cnum));
}
static void sxgbe_disable_dma_irq(void __iomem *ioaddr, int dma_cnum)
{
/* Disable TX/RX interrupts */
writel(0, ioaddr + SXGBE_DMA_CHA_INT_ENABLE_REG(dma_cnum));
}
static void sxgbe_dma_start_tx(void __iomem *ioaddr, int tchannels)
{
int cnum;
u32 tx_ctl_reg;
for (cnum = 0; cnum < tchannels; cnum++) {
tx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cnum));
tx_ctl_reg |= SXGBE_TX_ENABLE;
writel(tx_ctl_reg,
ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cnum));
}
}
static void sxgbe_dma_start_tx_queue(void __iomem *ioaddr, int dma_cnum)
{
u32 tx_ctl_reg;
tx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(dma_cnum));
tx_ctl_reg |= SXGBE_TX_ENABLE;
writel(tx_ctl_reg, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(dma_cnum));
}
static void sxgbe_dma_stop_tx_queue(void __iomem *ioaddr, int dma_cnum)
{
u32 tx_ctl_reg;
tx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(dma_cnum));
tx_ctl_reg &= ~(SXGBE_TX_ENABLE);
writel(tx_ctl_reg, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(dma_cnum));
}
static void sxgbe_dma_stop_tx(void __iomem *ioaddr, int tchannels)
{
int cnum;
u32 tx_ctl_reg;
for (cnum = 0; cnum < tchannels; cnum++) {
tx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cnum));
tx_ctl_reg &= ~(SXGBE_TX_ENABLE);
writel(tx_ctl_reg, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(cnum));
}
}
static void sxgbe_dma_start_rx(void __iomem *ioaddr, int rchannels)
{
int cnum;
u32 rx_ctl_reg;
for (cnum = 0; cnum < rchannels; cnum++) {
rx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cnum));
rx_ctl_reg |= SXGBE_RX_ENABLE;
writel(rx_ctl_reg,
ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cnum));
}
}
static void sxgbe_dma_stop_rx(void __iomem *ioaddr, int rchannels)
{
int cnum;
u32 rx_ctl_reg;
for (cnum = 0; cnum < rchannels; cnum++) {
rx_ctl_reg = readl(ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cnum));
rx_ctl_reg &= ~(SXGBE_RX_ENABLE);
writel(rx_ctl_reg, ioaddr + SXGBE_DMA_CHA_RXCTL_REG(cnum));
}
}
static int sxgbe_tx_dma_int_status(void __iomem *ioaddr, int channel_no,
struct sxgbe_extra_stats *x)
{
u32 int_status = readl(ioaddr + SXGBE_DMA_CHA_STATUS_REG(channel_no));
u32 clear_val = 0;
u32 ret_val = 0;
/* TX Normal Interrupt Summary */
if (likely(int_status & SXGBE_DMA_INT_STATUS_NIS)) {
x->normal_irq_n++;
if (int_status & SXGBE_DMA_INT_STATUS_TI) {
ret_val |= handle_tx;
x->tx_normal_irq_n++;
clear_val |= SXGBE_DMA_INT_STATUS_TI;
}
if (int_status & SXGBE_DMA_INT_STATUS_TBU) {
x->tx_underflow_irq++;
ret_val |= tx_bump_tc;
clear_val |= SXGBE_DMA_INT_STATUS_TBU;
}
} else if (unlikely(int_status & SXGBE_DMA_INT_STATUS_AIS)) {
/* TX Abnormal Interrupt Summary */
if (int_status & SXGBE_DMA_INT_STATUS_TPS) {
ret_val |= tx_hard_error;
clear_val |= SXGBE_DMA_INT_STATUS_TPS;
x->tx_process_stopped_irq++;
}
if (int_status & SXGBE_DMA_INT_STATUS_FBE) {
ret_val |= tx_hard_error;
x->fatal_bus_error_irq++;
/* Assumption: FBE bit is the combination of
* all the bus access errors and is cleared when
* the respective error bits are cleared
*/
/* check for actual cause */
if (int_status & SXGBE_DMA_INT_STATUS_TEB0) {
x->tx_read_transfer_err++;
clear_val |= SXGBE_DMA_INT_STATUS_TEB0;
} else {
x->tx_write_transfer_err++;
}
if (int_status & SXGBE_DMA_INT_STATUS_TEB1) {
x->tx_desc_access_err++;
clear_val |= SXGBE_DMA_INT_STATUS_TEB1;
} else {
x->tx_buffer_access_err++;
}
if (int_status & SXGBE_DMA_INT_STATUS_TEB2) {
x->tx_data_transfer_err++;
clear_val |= SXGBE_DMA_INT_STATUS_TEB2;
}
}
/* context descriptor error */
if (int_status & SXGBE_DMA_INT_STATUS_CTXTERR) {
x->tx_ctxt_desc_err++;
clear_val |= SXGBE_DMA_INT_STATUS_CTXTERR;
}
}
/* clear the served bits */
writel(clear_val, ioaddr + SXGBE_DMA_CHA_STATUS_REG(channel_no));
return ret_val;
}
static int sxgbe_rx_dma_int_status(void __iomem *ioaddr, int channel_no,
struct sxgbe_extra_stats *x)
{
u32 int_status = readl(ioaddr + SXGBE_DMA_CHA_STATUS_REG(channel_no));
u32 clear_val = 0;
u32 ret_val = 0;
/* RX Normal Interrupt Summary */
if (likely(int_status & SXGBE_DMA_INT_STATUS_NIS)) {
x->normal_irq_n++;
if (int_status & SXGBE_DMA_INT_STATUS_RI) {
ret_val |= handle_rx;
x->rx_normal_irq_n++;
clear_val |= SXGBE_DMA_INT_STATUS_RI;
}
} else if (unlikely(int_status & SXGBE_DMA_INT_STATUS_AIS)) {
/* RX Abnormal Interrupt Summary */
if (int_status & SXGBE_DMA_INT_STATUS_RBU) {
ret_val |= rx_bump_tc;
clear_val |= SXGBE_DMA_INT_STATUS_RBU;
x->rx_underflow_irq++;
}
if (int_status & SXGBE_DMA_INT_STATUS_RPS) {
ret_val |= rx_hard_error;
clear_val |= SXGBE_DMA_INT_STATUS_RPS;
x->rx_process_stopped_irq++;
}
if (int_status & SXGBE_DMA_INT_STATUS_FBE) {
ret_val |= rx_hard_error;
x->fatal_bus_error_irq++;
/* Assumption: FBE bit is the combination of
* all the bus access errors and is cleared when
* the respective error bits are cleared
*/
/* check for actual cause */
if (int_status & SXGBE_DMA_INT_STATUS_REB0) {
x->rx_read_transfer_err++;
clear_val |= SXGBE_DMA_INT_STATUS_REB0;
} else {
x->rx_write_transfer_err++;
}
if (int_status & SXGBE_DMA_INT_STATUS_REB1) {
x->rx_desc_access_err++;
clear_val |= SXGBE_DMA_INT_STATUS_REB1;
} else {
x->rx_buffer_access_err++;
}
if (int_status & SXGBE_DMA_INT_STATUS_REB2) {
x->rx_data_transfer_err++;
clear_val |= SXGBE_DMA_INT_STATUS_REB2;
}
}
}
/* clear the served bits */
writel(clear_val, ioaddr + SXGBE_DMA_CHA_STATUS_REG(channel_no));
return ret_val;
}
/* Program the HW RX Watchdog */
static void sxgbe_dma_rx_watchdog(void __iomem *ioaddr, u32 riwt)
{
u32 que_num;
SXGBE_FOR_EACH_QUEUE(SXGBE_RX_QUEUES, que_num) {
writel(riwt,
ioaddr + SXGBE_DMA_CHA_INT_RXWATCHTMR_REG(que_num));
}
}
static void sxgbe_enable_tso(void __iomem *ioaddr, u8 chan_num)
{
u32 ctrl;
ctrl = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(chan_num));
ctrl |= SXGBE_DMA_CHA_TXCTL_TSE_ENABLE;
writel(ctrl, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(chan_num));
}
static const struct sxgbe_dma_ops sxgbe_dma_ops = {
.init = sxgbe_dma_init,
.cha_init = sxgbe_dma_channel_init,
.enable_dma_transmission = sxgbe_enable_dma_transmission,
.enable_dma_irq = sxgbe_enable_dma_irq,
.disable_dma_irq = sxgbe_disable_dma_irq,
.start_tx = sxgbe_dma_start_tx,
.start_tx_queue = sxgbe_dma_start_tx_queue,
.stop_tx = sxgbe_dma_stop_tx,
.stop_tx_queue = sxgbe_dma_stop_tx_queue,
.start_rx = sxgbe_dma_start_rx,
.stop_rx = sxgbe_dma_stop_rx,
.tx_dma_int_status = sxgbe_tx_dma_int_status,
.rx_dma_int_status = sxgbe_rx_dma_int_status,
.rx_watchdog = sxgbe_dma_rx_watchdog,
.enable_tso = sxgbe_enable_tso,
};
const struct sxgbe_dma_ops *sxgbe_get_dma_ops(void)
{
return &sxgbe_dma_ops;
}
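/* Callers are expected to reach these helpers through the ops table, e.g.
* priv->hw->dma->rx_watchdog(priv->ioaddr, priv->rx_riwt) as done in the
* ethtool coalesce path; the hook-up itself lives elsewhere in the driver.
*/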

View File

@ -0,0 +1,50 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_DMA_H__
#define __SXGBE_DMA_H__
/* forward declaration */
struct sxgbe_extra_stats;
#define SXGBE_DMA_BLENMAP_LSHIFT 1
#define SXGBE_DMA_TXPBL_LSHIFT 16
#define SXGBE_DMA_RXPBL_LSHIFT 16
#define DEFAULT_DMA_PBL 8
struct sxgbe_dma_ops {
/* DMA core initialization */
int (*init)(void __iomem *ioaddr, int fix_burst, int burst_map);
void (*cha_init)(void __iomem *ioaddr, int cha_num, int fix_burst,
int pbl, dma_addr_t dma_tx, dma_addr_t dma_rx,
int t_rsize, int r_rsize);
void (*enable_dma_transmission)(void __iomem *ioaddr, int dma_cnum);
void (*enable_dma_irq)(void __iomem *ioaddr, int dma_cnum);
void (*disable_dma_irq)(void __iomem *ioaddr, int dma_cnum);
void (*start_tx)(void __iomem *ioaddr, int tchannels);
void (*start_tx_queue)(void __iomem *ioaddr, int dma_cnum);
void (*stop_tx)(void __iomem *ioaddr, int tchannels);
void (*stop_tx_queue)(void __iomem *ioaddr, int dma_cnum);
void (*start_rx)(void __iomem *ioaddr, int rchannels);
void (*stop_rx)(void __iomem *ioaddr, int rchannels);
int (*tx_dma_int_status)(void __iomem *ioaddr, int channel_no,
struct sxgbe_extra_stats *x);
int (*rx_dma_int_status)(void __iomem *ioaddr, int channel_no,
struct sxgbe_extra_stats *x);
/* Program the HW RX Watchdog */
void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt);
/* Enable TSO for each DMA channel */
void (*enable_tso)(void __iomem *ioaddr, u8 chan_num);
};
const struct sxgbe_dma_ops *sxgbe_get_dma_ops(void);
#endif /* __SXGBE_DMA_H__ */

View File

@ -0,0 +1,524 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/net_tstamp.h>
#include <linux/phy.h>
#include <linux/ptp_clock_kernel.h>
#include "sxgbe_common.h"
#include "sxgbe_reg.h"
#include "sxgbe_dma.h"
struct sxgbe_stats {
char stat_string[ETH_GSTRING_LEN];
int sizeof_stat;
int stat_offset;
};
#define SXGBE_STAT(m) \
{ \
#m, \
FIELD_SIZEOF(struct sxgbe_extra_stats, m), \
offsetof(struct sxgbe_priv_data, xstats.m) \
}
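/* For illustration: SXGBE_STAT(rx_crc_err) yields an entry carrying the
* string "rx_crc_err", the size of the xstats.rx_crc_err field and its
* byte offset inside struct sxgbe_priv_data; sxgbe_get_ethtool_stats()
* below walks these offsets to fill the u64 array handed to ethtool.
*/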
static const struct sxgbe_stats sxgbe_gstrings_stats[] = {
/* TX/RX IRQ events */
SXGBE_STAT(tx_process_stopped_irq),
SXGBE_STAT(tx_ctxt_desc_err),
SXGBE_STAT(tx_threshold),
SXGBE_STAT(rx_threshold),
SXGBE_STAT(tx_pkt_n),
SXGBE_STAT(rx_pkt_n),
SXGBE_STAT(normal_irq_n),
SXGBE_STAT(tx_normal_irq_n),
SXGBE_STAT(rx_normal_irq_n),
SXGBE_STAT(napi_poll),
SXGBE_STAT(tx_clean),
SXGBE_STAT(tx_reset_ic_bit),
SXGBE_STAT(rx_process_stopped_irq),
SXGBE_STAT(rx_underflow_irq),
/* Bus access errors */
SXGBE_STAT(fatal_bus_error_irq),
SXGBE_STAT(tx_read_transfer_err),
SXGBE_STAT(tx_write_transfer_err),
SXGBE_STAT(tx_desc_access_err),
SXGBE_STAT(tx_buffer_access_err),
SXGBE_STAT(tx_data_transfer_err),
SXGBE_STAT(rx_read_transfer_err),
SXGBE_STAT(rx_write_transfer_err),
SXGBE_STAT(rx_desc_access_err),
SXGBE_STAT(rx_buffer_access_err),
SXGBE_STAT(rx_data_transfer_err),
/* EEE-LPI stats */
SXGBE_STAT(tx_lpi_entry_n),
SXGBE_STAT(tx_lpi_exit_n),
SXGBE_STAT(rx_lpi_entry_n),
SXGBE_STAT(rx_lpi_exit_n),
SXGBE_STAT(eee_wakeup_error_n),
/* RX specific */
/* L2 error */
SXGBE_STAT(rx_code_gmii_err),
SXGBE_STAT(rx_watchdog_err),
SXGBE_STAT(rx_crc_err),
SXGBE_STAT(rx_gaint_pkt_err),
SXGBE_STAT(ip_hdr_err),
SXGBE_STAT(ip_payload_err),
SXGBE_STAT(overflow_error),
/* L2 Pkt type */
SXGBE_STAT(len_pkt),
SXGBE_STAT(mac_ctl_pkt),
SXGBE_STAT(dcb_ctl_pkt),
SXGBE_STAT(arp_pkt),
SXGBE_STAT(oam_pkt),
SXGBE_STAT(untag_okt),
SXGBE_STAT(other_pkt),
SXGBE_STAT(svlan_tag_pkt),
SXGBE_STAT(cvlan_tag_pkt),
SXGBE_STAT(dvlan_ocvlan_icvlan_pkt),
SXGBE_STAT(dvlan_osvlan_isvlan_pkt),
SXGBE_STAT(dvlan_osvlan_icvlan_pkt),
SXGBE_STAT(dvan_ocvlan_icvlan_pkt),
/* L3/L4 Pkt type */
SXGBE_STAT(not_ip_pkt),
SXGBE_STAT(ip4_tcp_pkt),
SXGBE_STAT(ip4_udp_pkt),
SXGBE_STAT(ip4_icmp_pkt),
SXGBE_STAT(ip4_unknown_pkt),
SXGBE_STAT(ip6_tcp_pkt),
SXGBE_STAT(ip6_udp_pkt),
SXGBE_STAT(ip6_icmp_pkt),
SXGBE_STAT(ip6_unknown_pkt),
/* Filter specific */
SXGBE_STAT(vlan_filter_match),
SXGBE_STAT(sa_filter_fail),
SXGBE_STAT(da_filter_fail),
SXGBE_STAT(hash_filter_pass),
SXGBE_STAT(l3_filter_match),
SXGBE_STAT(l4_filter_match),
/* RX context specific */
SXGBE_STAT(timestamp_dropped),
SXGBE_STAT(rx_msg_type_no_ptp),
SXGBE_STAT(rx_ptp_type_sync),
SXGBE_STAT(rx_ptp_type_follow_up),
SXGBE_STAT(rx_ptp_type_delay_req),
SXGBE_STAT(rx_ptp_type_delay_resp),
SXGBE_STAT(rx_ptp_type_pdelay_req),
SXGBE_STAT(rx_ptp_type_pdelay_resp),
SXGBE_STAT(rx_ptp_type_pdelay_follow_up),
SXGBE_STAT(rx_ptp_announce),
SXGBE_STAT(rx_ptp_mgmt),
SXGBE_STAT(rx_ptp_signal),
SXGBE_STAT(rx_ptp_resv_msg_type),
};
#define SXGBE_STATS_LEN ARRAY_SIZE(sxgbe_gstrings_stats)
static int sxgbe_get_eee(struct net_device *dev,
struct ethtool_eee *edata)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
if (!priv->hw_cap.eee)
return -EOPNOTSUPP;
edata->eee_enabled = priv->eee_enabled;
edata->eee_active = priv->eee_active;
edata->tx_lpi_timer = priv->tx_lpi_timer;
return phy_ethtool_get_eee(priv->phydev, edata);
}
static int sxgbe_set_eee(struct net_device *dev,
struct ethtool_eee *edata)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
priv->eee_enabled = edata->eee_enabled;
if (!priv->eee_enabled) {
sxgbe_disable_eee_mode(priv);
} else {
/* We are asked to enable EEE, but it is safer to verify
* it by invoking the eee_init function; in case of
* failure it will return an error.
*/
priv->eee_enabled = sxgbe_eee_init(priv);
if (!priv->eee_enabled)
return -EOPNOTSUPP;
/* Do not change tx_lpi_timer in case of failure */
priv->tx_lpi_timer = edata->tx_lpi_timer;
}
return phy_ethtool_set_eee(priv->phydev, edata);
}
static void sxgbe_getdrvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
strlcpy(info->driver, KBUILD_MODNAME, sizeof(info->driver));
strlcpy(info->version, DRV_VERSION, sizeof(info->version));
}
static int sxgbe_getsettings(struct net_device *dev,
struct ethtool_cmd *cmd)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
if (priv->phydev)
return phy_ethtool_gset(priv->phydev, cmd);
return -EOPNOTSUPP;
}
static int sxgbe_setsettings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
if (priv->phydev)
return phy_ethtool_sset(priv->phydev, cmd);
return -EOPNOTSUPP;
}
static u32 sxgbe_getmsglevel(struct net_device *dev)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
return priv->msg_enable;
}
static void sxgbe_setmsglevel(struct net_device *dev, u32 level)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
priv->msg_enable = level;
}
static void sxgbe_get_strings(struct net_device *dev, u32 stringset, u8 *data)
{
int i;
u8 *p = data;
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < SXGBE_STATS_LEN; i++) {
memcpy(p, sxgbe_gstrings_stats[i].stat_string,
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
break;
default:
WARN_ON(1);
break;
}
}
static int sxgbe_get_sset_count(struct net_device *netdev, int sset)
{
int len;
switch (sset) {
case ETH_SS_STATS:
len = SXGBE_STATS_LEN;
return len;
default:
return -EINVAL;
}
}
static void sxgbe_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *dummy, u64 *data)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
int i;
char *p;
if (priv->eee_enabled) {
int val = phy_get_eee_err(priv->phydev);
if (val)
priv->xstats.eee_wakeup_error_n = val;
}
for (i = 0; i < SXGBE_STATS_LEN; i++) {
p = (char *)priv + sxgbe_gstrings_stats[i].stat_offset;
data[i] = (sxgbe_gstrings_stats[i].sizeof_stat == sizeof(u64))
? (*(u64 *)p) : (*(u32 *)p);
}
}
static void sxgbe_get_channels(struct net_device *dev,
struct ethtool_channels *channel)
{
channel->max_rx = SXGBE_MAX_RX_CHANNELS;
channel->max_tx = SXGBE_MAX_TX_CHANNELS;
channel->rx_count = SXGBE_RX_QUEUES;
channel->tx_count = SXGBE_TX_QUEUES;
}
static u32 sxgbe_riwt2usec(u32 riwt, struct sxgbe_priv_data *priv)
{
unsigned long clk = clk_get_rate(priv->sxgbe_clk);
if (!clk)
return 0;
return (riwt * 256) / (clk / 1000000);
}
static u32 sxgbe_usec2riwt(u32 usec, struct sxgbe_priv_data *priv)
{
unsigned long clk = clk_get_rate(priv->sxgbe_clk);
if (!clk)
return 0;
return (usec * (clk / 1000000)) / 256;
}
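/* Illustrative numbers only, assuming a 250 MHz sxgbe_clk: one RIWT unit
* is 256 clock cycles (~1.02 us), so sxgbe_usec2riwt(100, priv) gives
* (100 * 250) / 256 = 97 and sxgbe_riwt2usec(97, priv) maps back to
* (97 * 256) / 250 = 99 us; the round trip loses a little precision to
* integer division.
*/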
static int sxgbe_get_coalesce(struct net_device *dev,
struct ethtool_coalesce *ec)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
if (priv->use_riwt)
ec->rx_coalesce_usecs = sxgbe_riwt2usec(priv->rx_riwt, priv);
return 0;
}
static int sxgbe_set_coalesce(struct net_device *dev,
struct ethtool_coalesce *ec)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
unsigned int rx_riwt;
if (!ec->rx_coalesce_usecs)
return -EINVAL;
rx_riwt = sxgbe_usec2riwt(ec->rx_coalesce_usecs, priv);
if ((rx_riwt > SXGBE_MAX_DMA_RIWT) || (rx_riwt < SXGBE_MIN_DMA_RIWT))
return -EINVAL;
else if (!priv->use_riwt)
return -EOPNOTSUPP;
priv->rx_riwt = rx_riwt;
priv->hw->dma->rx_watchdog(priv->ioaddr, priv->rx_riwt);
return 0;
}
static int sxgbe_get_rss_hash_opts(struct sxgbe_priv_data *priv,
struct ethtool_rxnfc *cmd)
{
cmd->data = 0;
/* Report default options for RSS on sxgbe */
switch (cmd->flow_type) {
case TCP_V4_FLOW:
case UDP_V4_FLOW:
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
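/* fall through - TCP/UDP flows are also hashed on the IP addresses */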
case SCTP_V4_FLOW:
case AH_ESP_V4_FLOW:
case AH_V4_FLOW:
case ESP_V4_FLOW:
case IPV4_FLOW:
cmd->data |= RXH_IP_SRC | RXH_IP_DST;
break;
case TCP_V6_FLOW:
case UDP_V6_FLOW:
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
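/* fall through - TCP/UDP flows are also hashed on the IP addresses */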
case SCTP_V6_FLOW:
case AH_ESP_V6_FLOW:
case AH_V6_FLOW:
case ESP_V6_FLOW:
case IPV6_FLOW:
cmd->data |= RXH_IP_SRC | RXH_IP_DST;
break;
default:
return -EINVAL;
}
return 0;
}
static int sxgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
u32 *rule_locs)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
int ret = -EOPNOTSUPP;
switch (cmd->cmd) {
case ETHTOOL_GRXFH:
ret = sxgbe_get_rss_hash_opts(priv, cmd);
break;
default:
break;
}
return ret;
}
static int sxgbe_set_rss_hash_opt(struct sxgbe_priv_data *priv,
struct ethtool_rxnfc *cmd)
{
u32 reg_val = 0;
/* RSS does not support anything other than hashing
* to queues on src and dst IPs and ports
*/
if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST |
RXH_L4_B_0_1 | RXH_L4_B_2_3))
return -EINVAL;
switch (cmd->flow_type) {
case TCP_V4_FLOW:
case TCP_V6_FLOW:
if (!(cmd->data & RXH_IP_SRC) ||
!(cmd->data & RXH_IP_DST) ||
!(cmd->data & RXH_L4_B_0_1) ||
!(cmd->data & RXH_L4_B_2_3))
return -EINVAL;
reg_val = SXGBE_CORE_RSS_CTL_TCP4TE;
break;
case UDP_V4_FLOW:
case UDP_V6_FLOW:
if (!(cmd->data & RXH_IP_SRC) ||
!(cmd->data & RXH_IP_DST) ||
!(cmd->data & RXH_L4_B_0_1) ||
!(cmd->data & RXH_L4_B_2_3))
return -EINVAL;
reg_val = SXGBE_CORE_RSS_CTL_UDP4TE;
break;
case SCTP_V4_FLOW:
case AH_ESP_V4_FLOW:
case AH_V4_FLOW:
case ESP_V4_FLOW:
case AH_ESP_V6_FLOW:
case AH_V6_FLOW:
case ESP_V6_FLOW:
case SCTP_V6_FLOW:
case IPV4_FLOW:
case IPV6_FLOW:
if (!(cmd->data & RXH_IP_SRC) ||
!(cmd->data & RXH_IP_DST) ||
(cmd->data & RXH_L4_B_0_1) ||
(cmd->data & RXH_L4_B_2_3))
return -EINVAL;
reg_val = SXGBE_CORE_RSS_CTL_IP2TE;
break;
default:
return -EINVAL;
}
/* Read SXGBE RSS control register and update */
reg_val |= readl(priv->ioaddr + SXGBE_CORE_RSS_CTL_REG);
writel(reg_val, priv->ioaddr + SXGBE_CORE_RSS_CTL_REG);
readl(priv->ioaddr + SXGBE_CORE_RSS_CTL_REG);
return 0;
}
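/* Usage example (assuming the standard ethtool CLI): a command such as
* "ethtool -N ethX rx-flow-hash tcp4 sdfn" reaches this function with
* RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3 set and thus
* turns on SXGBE_CORE_RSS_CTL_TCP4TE in the RSS control register.
*/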
static int sxgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
int ret = -EOPNOTSUPP;
switch (cmd->cmd) {
case ETHTOOL_SRXFH:
ret = sxgbe_set_rss_hash_opt(priv, cmd);
break;
default:
break;
}
return ret;
}
static void sxgbe_get_regs(struct net_device *dev,
struct ethtool_regs *regs, void *space)
{
struct sxgbe_priv_data *priv = netdev_priv(dev);
u32 *reg_space = (u32 *)space;
int reg_offset;
int reg_ix = 0;
void __iomem *ioaddr = priv->ioaddr;
memset(reg_space, 0x0, REG_SPACE_SIZE);
/* MAC registers */
for (reg_offset = START_MAC_REG_OFFSET;
reg_offset <= MAX_MAC_REG_OFFSET; reg_offset += 4) {
reg_space[reg_ix] = readl(ioaddr + reg_offset);
reg_ix++;
}
/* MTL registers */
for (reg_offset = START_MTL_REG_OFFSET;
reg_offset <= MAX_MTL_REG_OFFSET; reg_offset += 4) {
reg_space[reg_ix] = readl(ioaddr + reg_offset);
reg_ix++;
}
/* DMA registers */
for (reg_offset = START_DMA_REG_OFFSET;
reg_offset <= MAX_DMA_REG_OFFSET; reg_offset += 4) {
reg_space[reg_ix] = readl(ioaddr + reg_offset);
reg_ix++;
}
BUG_ON(reg_ix * 4 > REG_SPACE_SIZE);
}
static int sxgbe_get_regs_len(struct net_device *dev)
{
return REG_SPACE_SIZE;
}
static const struct ethtool_ops sxgbe_ethtool_ops = {
.get_drvinfo = sxgbe_getdrvinfo,
.get_settings = sxgbe_getsettings,
.set_settings = sxgbe_setsettings,
.get_msglevel = sxgbe_getmsglevel,
.set_msglevel = sxgbe_setmsglevel,
.get_link = ethtool_op_get_link,
.get_strings = sxgbe_get_strings,
.get_ethtool_stats = sxgbe_get_ethtool_stats,
.get_sset_count = sxgbe_get_sset_count,
.get_channels = sxgbe_get_channels,
.get_coalesce = sxgbe_get_coalesce,
.set_coalesce = sxgbe_set_coalesce,
.get_rxnfc = sxgbe_get_rxnfc,
.set_rxnfc = sxgbe_set_rxnfc,
.get_regs = sxgbe_get_regs,
.get_regs_len = sxgbe_get_regs_len,
.get_eee = sxgbe_get_eee,
.set_eee = sxgbe_set_eee,
};
void sxgbe_set_ethtool_ops(struct net_device *netdev)
{
SET_ETHTOOL_OPS(netdev, &sxgbe_ethtool_ops);
}

File diff suppressed because it is too large

View File

@ -0,0 +1,251 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/io.h>
#include <linux/mii.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <linux/phy.h>
#include <linux/slab.h>
#include <linux/sxgbe_platform.h>
#include "sxgbe_common.h"
#include "sxgbe_reg.h"
#define SXGBE_SMA_WRITE_CMD 0x01 /* write command */
#define SXGBE_SMA_PREAD_CMD 0x02 /* post-read increment address */
#define SXGBE_SMA_READ_CMD 0x03 /* read command */
#define SXGBE_SMA_SKIP_ADDRFRM 0x00040000 /* skip the address frame */
#define SXGBE_MII_BUSY 0x00800000 /* mii busy */
static int sxgbe_mdio_busy_wait(void __iomem *ioaddr, unsigned int mii_data)
{
unsigned long fin_time = jiffies + 3 * HZ; /* 3 seconds */
while (!time_after(jiffies, fin_time)) {
if (!(readl(ioaddr + mii_data) & SXGBE_MII_BUSY))
return 0;
cpu_relax();
}
return -EBUSY;
}
static void sxgbe_mdio_ctrl_data(struct sxgbe_priv_data *sp, u32 cmd,
u16 phydata)
{
u32 reg = phydata;
reg |= (cmd << 16) | SXGBE_SMA_SKIP_ADDRFRM |
((sp->clk_csr & 0x7) << 19) | SXGBE_MII_BUSY;
writel(reg, sp->ioaddr + sp->hw->mii.data);
}
static void sxgbe_mdio_c45(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
int phyreg, u16 phydata)
{
u32 reg;
/* set mdio address register */
reg = ((phyreg >> 16) & 0x1f) << 21;
reg |= (phyaddr << 16) | (phyreg & 0xffff);
writel(reg, sp->ioaddr + sp->hw->mii.addr);
sxgbe_mdio_ctrl_data(sp, cmd, phydata);
}
static void sxgbe_mdio_c22(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
int phyreg, u16 phydata)
{
u32 reg;
writel(1 << phyaddr, sp->ioaddr + SXGBE_MDIO_CLAUSE22_PORT_REG);
/* set mdio address register */
reg = (phyaddr << 16) | (phyreg & 0x1f);
writel(reg, sp->ioaddr + sp->hw->mii.addr);
sxgbe_mdio_ctrl_data(sp, cmd, phydata);
}
static int sxgbe_mdio_access(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
int phyreg, u16 phydata)
{
const struct mii_regs *mii = &sp->hw->mii;
int rc;
rc = sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
if (rc < 0)
return rc;
if (phyreg & MII_ADDR_C45) {
sxgbe_mdio_c45(sp, cmd, phyaddr, phyreg, phydata);
} else {
/* Ports 0-3 only support C22. */
if (phyaddr >= 4)
return -ENODEV;
sxgbe_mdio_c22(sp, cmd, phyaddr, phyreg, phydata);
}
return sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
}
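/* Note (generic phylib convention, not sxgbe specific): a C45 phyreg is
* expected to arrive encoded as MII_ADDR_C45 | (devad << 16) | regnum,
* which is why sxgbe_mdio_c45() takes the device address from bits 16-20
* and the register number from the low 16 bits.
*/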
/**
* sxgbe_mdio_read
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: register address within the PHY
* Description: this function is used for C45 and C22 MDIO read
*/
static int sxgbe_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
int rc;
rc = sxgbe_mdio_access(priv, SXGBE_SMA_READ_CMD, phyaddr, phyreg, 0);
if (rc < 0)
return rc;
return readl(priv->ioaddr + priv->hw->mii.data) & 0xffff;
}
/**
* sxgbe_mdio_write
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: register address within the PHY
* @phydata: data to be written into phy register
* Description: this function is used for C45 and C22 MDIO write
*/
static int sxgbe_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
u16 phydata)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
return sxgbe_mdio_access(priv, SXGBE_SMA_WRITE_CMD, phyaddr, phyreg,
phydata);
}
int sxgbe_mdio_register(struct net_device *ndev)
{
struct mii_bus *mdio_bus;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
struct sxgbe_mdio_bus_data *mdio_data = priv->plat->mdio_bus_data;
int err, phy_addr;
int *irqlist;
bool act;
/* allocate the new mdio bus */
mdio_bus = mdiobus_alloc();
if (!mdio_bus) {
netdev_err(ndev, "%s: mii bus allocation failed\n", __func__);
return -ENOMEM;
}
if (mdio_data->irqs)
irqlist = mdio_data->irqs;
else
irqlist = priv->mii_irq;
/* assign mii bus fields */
mdio_bus->name = "samsxgbe";
mdio_bus->read = &sxgbe_mdio_read;
mdio_bus->write = &sxgbe_mdio_write;
snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%x",
mdio_bus->name, priv->plat->bus_id);
mdio_bus->priv = ndev;
mdio_bus->phy_mask = mdio_data->phy_mask;
mdio_bus->parent = priv->device;
/* register with kernel subsystem */
err = mdiobus_register(mdio_bus);
if (err != 0) {
netdev_err(ndev, "mdiobus register failed\n");
goto mdiobus_err;
}
for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) {
struct phy_device *phy = mdio_bus->phy_map[phy_addr];
if (phy) {
char irq_num[4];
char *irq_str;
/* If an IRQ was provided to be assigned after
* the bus probe, do it here.
*/
if ((mdio_data->irqs == NULL) &&
(mdio_data->probed_phy_irq > 0)) {
irqlist[phy_addr] = mdio_data->probed_phy_irq;
phy->irq = mdio_data->probed_phy_irq;
}
/* If we're going to bind the MAC to this PHY bus,
* and no PHY number was provided to the MAC,
* use the one probed here.
*/
if (priv->plat->phy_addr == -1)
priv->plat->phy_addr = phy_addr;
act = (priv->plat->phy_addr == phy_addr);
switch (phy->irq) {
case PHY_POLL:
irq_str = "POLL";
break;
case PHY_IGNORE_INTERRUPT:
irq_str = "IGNORE";
break;
default:
sprintf(irq_num, "%d", phy->irq);
irq_str = irq_num;
break;
}
netdev_info(ndev, "PHY ID %08x at %d IRQ %s (%s)%s\n",
phy->phy_id, phy_addr, irq_str,
dev_name(&phy->dev), act ? " active" : "");
}
}
if (!err) {
netdev_err(ndev, "PHY not found\n");
mdiobus_unregister(mdio_bus);
mdiobus_free(mdio_bus);
goto mdiobus_err;
}
priv->mii = mdio_bus;
return 0;
mdiobus_err:
mdiobus_free(mdio_bus);
return err;
}
int sxgbe_mdio_unregister(struct net_device *ndev)
{
struct sxgbe_priv_data *priv = netdev_priv(ndev);
if (!priv->mii)
return 0;
mdiobus_unregister(priv->mii);
priv->mii->priv = NULL;
mdiobus_free(priv->mii);
priv->mii = NULL;
return 0;
}

View File

@ -0,0 +1,254 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/io.h>
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/jiffies.h>
#include "sxgbe_mtl.h"
#include "sxgbe_reg.h"
static void sxgbe_mtl_init(void __iomem *ioaddr, unsigned int etsalg,
unsigned int raa)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_OP_MODE_REG);
reg_val &= ETS_RST;
/* ETS Algorithm */
switch (etsalg & SXGBE_MTL_OPMODE_ESTMASK) {
case ETS_WRR:
reg_val &= ETS_WRR;
break;
case ETS_WFQ:
reg_val |= ETS_WFQ;
break;
case ETS_DWRR:
reg_val |= ETS_DWRR;
break;
}
writel(reg_val, ioaddr + SXGBE_MTL_OP_MODE_REG);
switch (raa & SXGBE_MTL_OPMODE_RAAMASK) {
case RAA_SP:
reg_val &= RAA_SP;
break;
case RAA_WSP:
reg_val |= RAA_WSP;
break;
}
writel(reg_val, ioaddr + SXGBE_MTL_OP_MODE_REG);
}
/* For Dynamic DMA channel mapping for Rx queue */
static void sxgbe_mtl_dma_dm_rxqueue(void __iomem *ioaddr)
{
writel(RX_QUEUE_DYNAMIC, ioaddr + SXGBE_MTL_RXQ_DMAMAP0_REG);
writel(RX_QUEUE_DYNAMIC, ioaddr + SXGBE_MTL_RXQ_DMAMAP1_REG);
writel(RX_QUEUE_DYNAMIC, ioaddr + SXGBE_MTL_RXQ_DMAMAP2_REG);
}
static void sxgbe_mtl_set_txfifosize(void __iomem *ioaddr, int queue_num,
int queue_fifo)
{
u32 fifo_bits, reg_val;
/* 0 means 256 bytes */
fifo_bits = (queue_fifo / SXGBE_MTL_TX_FIFO_DIV) - 1;
reg_val = readl(ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
reg_val |= (fifo_bits << SXGBE_MTL_FIFO_LSHIFT);
writel(reg_val, ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_set_rxfifosize(void __iomem *ioaddr, int queue_num,
int queue_fifo)
{
u32 fifo_bits, reg_val;
/* 0 means 256 bytes */
fifo_bits = (queue_fifo / SXGBE_MTL_RX_FIFO_DIV) - 1;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val |= (fifo_bits << SXGBE_MTL_FIFO_LSHIFT);
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
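/* Sizing example (illustrative): a 4 KB per-queue RX FIFO gives
* fifo_bits = 4096 / SXGBE_MTL_RX_FIFO_DIV - 1 = 15, programmed into the
* queue operation mode register from bit SXGBE_MTL_FIFO_LSHIFT (16).
*/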
static void sxgbe_mtl_enable_txqueue(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
reg_val |= SXGBE_MTL_ENABLE_QUEUE;
writel(reg_val, ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_disable_txqueue(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
reg_val &= ~SXGBE_MTL_ENABLE_QUEUE;
writel(reg_val, ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fc_active(void __iomem *ioaddr, int queue_num,
int threshold)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val &= ~(SXGBE_MTL_FCMASK << RX_FC_ACTIVE);
reg_val |= (threshold << RX_FC_ACTIVE);
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fc_enable(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val |= SXGBE_MTL_ENABLE_FC;
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fc_deactive(void __iomem *ioaddr, int queue_num,
int threshold)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val &= ~(SXGBE_MTL_FCMASK << RX_FC_DEACTIVE);
reg_val |= (threshold << RX_FC_DEACTIVE);
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
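/* For example, mtl_fc_active(ioaddr, 0, MTL_FC_FULL_16K) clears the
* 3-bit field at RX_FC_ACTIVE (bit 8) and writes 0x6 there, i.e. flow
* control should be asserted once queue 0's RX FIFO reaches the 16 KB
* mark (threshold encodings are listed in sxgbe_mtl.h).
*/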
static void sxgbe_mtl_fep_enable(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val |= SXGBE_MTL_RXQ_OP_FEP;
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fep_disable(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val &= ~(SXGBE_MTL_RXQ_OP_FEP);
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fup_enable(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val |= SXGBE_MTL_RXQ_OP_FUP;
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_mtl_fup_disable(void __iomem *ioaddr, int queue_num)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
reg_val &= ~(SXGBE_MTL_RXQ_OP_FUP);
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static void sxgbe_set_tx_mtl_mode(void __iomem *ioaddr, int queue_num,
int tx_mode)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
/* TX specific MTL mode settings */
if (tx_mode == SXGBE_MTL_SFMODE) {
reg_val |= SXGBE_MTL_SFMODE;
} else {
/* set the TTC values */
if (tx_mode <= 64)
reg_val |= MTL_CONTROL_TTC_64;
else if (tx_mode <= 96)
reg_val |= MTL_CONTROL_TTC_96;
else if (tx_mode <= 128)
reg_val |= MTL_CONTROL_TTC_128;
else if (tx_mode <= 192)
reg_val |= MTL_CONTROL_TTC_192;
else if (tx_mode <= 256)
reg_val |= MTL_CONTROL_TTC_256;
else if (tx_mode <= 384)
reg_val |= MTL_CONTROL_TTC_384;
else
reg_val |= MTL_CONTROL_TTC_512;
}
/* write into TXQ operation register */
writel(reg_val, ioaddr + SXGBE_MTL_TXQ_OPMODE_REG(queue_num));
}
static void sxgbe_set_rx_mtl_mode(void __iomem *ioaddr, int queue_num,
int rx_mode)
{
u32 reg_val;
reg_val = readl(ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
/* RX specific MTL mode settings */
if (rx_mode == SXGBE_RX_MTL_SFMODE) {
reg_val |= SXGBE_RX_MTL_SFMODE;
} else {
if (rx_mode <= 64)
reg_val |= MTL_CONTROL_RTC_64;
else if (rx_mode <= 96)
reg_val |= MTL_CONTROL_RTC_96;
else if (rx_mode <= 128)
reg_val |= MTL_CONTROL_RTC_128;
}
/* write into RXQ operation register */
writel(reg_val, ioaddr + SXGBE_MTL_RXQ_OPMODE_REG(queue_num));
}
static const struct sxgbe_mtl_ops mtl_ops = {
.mtl_set_txfifosize = sxgbe_mtl_set_txfifosize,
.mtl_set_rxfifosize = sxgbe_mtl_set_rxfifosize,
.mtl_enable_txqueue = sxgbe_mtl_enable_txqueue,
.mtl_disable_txqueue = sxgbe_mtl_disable_txqueue,
.mtl_dynamic_dma_rxqueue = sxgbe_mtl_dma_dm_rxqueue,
.set_tx_mtl_mode = sxgbe_set_tx_mtl_mode,
.set_rx_mtl_mode = sxgbe_set_rx_mtl_mode,
.mtl_init = sxgbe_mtl_init,
.mtl_fc_active = sxgbe_mtl_fc_active,
.mtl_fc_deactive = sxgbe_mtl_fc_deactive,
.mtl_fc_enable = sxgbe_mtl_fc_enable,
.mtl_fep_enable = sxgbe_mtl_fep_enable,
.mtl_fep_disable = sxgbe_mtl_fep_disable,
.mtl_fup_enable = sxgbe_mtl_fup_enable,
.mtl_fup_disable = sxgbe_mtl_fup_disable
};
const struct sxgbe_mtl_ops *sxgbe_get_mtl_ops(void)
{
return &mtl_ops;
}

View File

@ -0,0 +1,104 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_MTL_H__
#define __SXGBE_MTL_H__
#define SXGBE_MTL_OPMODE_ESTMASK 0x3
#define SXGBE_MTL_OPMODE_RAAMASK 0x1
#define SXGBE_MTL_FCMASK 0x7
#define SXGBE_MTL_TX_FIFO_DIV 256
#define SXGBE_MTL_RX_FIFO_DIV 256
#define SXGBE_MTL_RXQ_OP_FEP BIT(4)
#define SXGBE_MTL_RXQ_OP_FUP BIT(3)
#define SXGBE_MTL_ENABLE_FC 0x80
#define ETS_WRR 0xFFFFFF9F
#define ETS_RST 0xFFFFFF9F
#define ETS_WFQ 0x00000020
#define ETS_DWRR 0x00000040
#define RAA_SP 0xFFFFFFFB
#define RAA_WSP 0x00000004
#define RX_QUEUE_DYNAMIC 0x80808080
#define RX_FC_ACTIVE 8
#define RX_FC_DEACTIVE 13
enum ttc_control {
MTL_CONTROL_TTC_64 = 0x00000000,
MTL_CONTROL_TTC_96 = 0x00000020,
MTL_CONTROL_TTC_128 = 0x00000030,
MTL_CONTROL_TTC_192 = 0x00000040,
MTL_CONTROL_TTC_256 = 0x00000050,
MTL_CONTROL_TTC_384 = 0x00000060,
MTL_CONTROL_TTC_512 = 0x00000070,
};
enum rtc_control {
MTL_CONTROL_RTC_64 = 0x00000000,
MTL_CONTROL_RTC_96 = 0x00000002,
MTL_CONTROL_RTC_128 = 0x00000003,
};
enum flow_control_th {
MTL_FC_FULL_1K = 0x00000000,
MTL_FC_FULL_2K = 0x00000001,
MTL_FC_FULL_4K = 0x00000002,
MTL_FC_FULL_5K = 0x00000003,
MTL_FC_FULL_6K = 0x00000004,
MTL_FC_FULL_8K = 0x00000005,
MTL_FC_FULL_16K = 0x00000006,
MTL_FC_FULL_24K = 0x00000007,
};
struct sxgbe_mtl_ops {
void (*mtl_init)(void __iomem *ioaddr, unsigned int etsalg,
unsigned int raa);
void (*mtl_set_txfifosize)(void __iomem *ioaddr, int queue_num,
int mtl_fifo);
void (*mtl_set_rxfifosize)(void __iomem *ioaddr, int queue_num,
int queue_fifo);
void (*mtl_enable_txqueue)(void __iomem *ioaddr, int queue_num);
void (*mtl_disable_txqueue)(void __iomem *ioaddr, int queue_num);
void (*set_tx_mtl_mode)(void __iomem *ioaddr, int queue_num,
int tx_mode);
void (*set_rx_mtl_mode)(void __iomem *ioaddr, int queue_num,
int rx_mode);
void (*mtl_dynamic_dma_rxqueue)(void __iomem *ioaddr);
void (*mtl_fc_active)(void __iomem *ioaddr, int queue_num,
int threshold);
void (*mtl_fc_deactive)(void __iomem *ioaddr, int queue_num,
int threshold);
void (*mtl_fc_enable)(void __iomem *ioaddr, int queue_num);
void (*mtl_fep_enable)(void __iomem *ioaddr, int queue_num);
void (*mtl_fep_disable)(void __iomem *ioaddr, int queue_num);
void (*mtl_fup_enable)(void __iomem *ioaddr, int queue_num);
void (*mtl_fup_disable)(void __iomem *ioaddr, int queue_num);
};
const struct sxgbe_mtl_ops *sxgbe_get_mtl_ops(void);
#endif /* __SXGBE_MTL_H__ */

View File

@ -0,0 +1,259 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/etherdevice.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_net.h>
#include <linux/phy.h>
#include <linux/platform_device.h>
#include <linux/sxgbe_platform.h>
#include "sxgbe_common.h"
#include "sxgbe_reg.h"
#ifdef CONFIG_OF
static int sxgbe_probe_config_dt(struct platform_device *pdev,
struct sxgbe_plat_data *plat,
const char **mac)
{
struct device_node *np = pdev->dev.of_node;
struct sxgbe_dma_cfg *dma_cfg;
if (!np)
return -ENODEV;
*mac = of_get_mac_address(np);
plat->interface = of_get_phy_mode(np);
plat->bus_id = of_alias_get_id(np, "ethernet");
if (plat->bus_id < 0)
plat->bus_id = 0;
plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
sizeof(*plat->mdio_bus_data),
GFP_KERNEL);
dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg), GFP_KERNEL);
if (!dma_cfg)
return -ENOMEM;
plat->dma_cfg = dma_cfg;
of_property_read_u32(np, "samsung,pbl", &dma_cfg->pbl);
if (of_property_read_u32(np, "samsung,burst-map", &dma_cfg->burst_map) == 0)
dma_cfg->fixed_burst = true;
return 0;
}
#else
static int sxgbe_probe_config_dt(struct platform_device *pdev,
struct sxgbe_plat_data *plat,
const char **mac)
{
return -ENOSYS;
}
#endif /* CONFIG_OF */
/**
* sxgbe_platform_probe
* @pdev: platform device pointer
* Description: platform_device probe function. It allocates
* the necessary resources and invokes the main driver probe
* to init the net device, register the mdio bus etc.
*/
static int sxgbe_platform_probe(struct platform_device *pdev)
{
int ret;
int i, chan;
struct resource *res;
struct device *dev = &pdev->dev;
void __iomem *addr;
struct sxgbe_priv_data *priv = NULL;
struct sxgbe_plat_data *plat_dat = NULL;
const char *mac = NULL;
struct net_device *ndev = platform_get_drvdata(pdev);
struct device_node *node = dev->of_node;
/* Get memory resource */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
goto err_out;
addr = devm_ioremap_resource(dev, res);
if (IS_ERR(addr))
return PTR_ERR(addr);
if (pdev->dev.of_node) {
plat_dat = devm_kzalloc(&pdev->dev,
sizeof(struct sxgbe_plat_data),
GFP_KERNEL);
if (!plat_dat)
return -ENOMEM;
ret = sxgbe_probe_config_dt(pdev, plat_dat, &mac);
if (ret) {
pr_err("%s: main dt probe failed\n", __func__);
return ret;
}
}
priv = sxgbe_drv_probe(&(pdev->dev), plat_dat, addr);
if (!priv) {
pr_err("%s: main driver probe failed\n", __func__);
goto err_out;
}
/* Get MAC address if available (DT); priv is only valid after the
* probe above, so the copy must not happen earlier.
*/
if (mac)
ether_addr_copy(priv->dev->dev_addr, mac);
/* Get the SXGBE common INT information */
priv->irq = irq_of_parse_and_map(node, 0);
if (priv->irq <= 0) {
dev_err(dev, "sxgbe common irq parsing failed\n");
goto err_drv_remove;
}
/* Get the TX/RX IRQ numbers */
for (i = 0, chan = 1; i < SXGBE_TX_QUEUES; i++) {
priv->txq[i]->irq_no = irq_of_parse_and_map(node, chan++);
if (priv->txq[i]->irq_no <= 0) {
dev_err(dev, "sxgbe tx irq parsing failed\n");
goto err_tx_irq_unmap;
}
}
for (i = 0; i < SXGBE_RX_QUEUES; i++) {
priv->rxq[i]->irq_no = irq_of_parse_and_map(node, chan++);
if (priv->rxq[i]->irq_no <= 0) {
dev_err(dev, "sxgbe rx irq parsing failed\n");
goto err_rx_irq_unmap;
}
}
priv->lpi_irq = irq_of_parse_and_map(node, chan);
if (priv->lpi_irq <= 0) {
dev_err(dev, "sxgbe lpi irq parsing failed\n");
goto err_rx_irq_unmap;
}
platform_set_drvdata(pdev, priv->dev);
pr_debug("platform driver registration completed\n");
return 0;
err_rx_irq_unmap:
while (i--)
irq_dispose_mapping(priv->rxq[i]->irq_no);
i = SXGBE_TX_QUEUES;
err_tx_irq_unmap:
while (i--)
irq_dispose_mapping(priv->txq[i]->irq_no);
irq_dispose_mapping(priv->irq);
err_drv_remove:
sxgbe_drv_remove(ndev);
err_out:
return -ENODEV;
}
/**
* sxgbe_platform_remove
* @pdev: platform device pointer
* Description: this function calls the main driver remove path to free
* the net resources and then releases the platform resources (e.g. mem).
*/
static int sxgbe_platform_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
int ret = sxgbe_drv_remove(ndev);
return ret;
}
#ifdef CONFIG_PM
static int sxgbe_platform_suspend(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
return sxgbe_suspend(ndev);
}
static int sxgbe_platform_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
return sxgbe_resume(ndev);
}
int sxgbe_platform_freeze(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
return sxgbe_freeze(ndev);
}
int sxgbe_platform_restore(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
return sxgbe_restore(ndev);
}
static const struct dev_pm_ops sxgbe_platform_pm_ops = {
.suspend = sxgbe_platform_suspend,
.resume = sxgbe_platform_resume,
.freeze = sxgbe_platform_freeze,
.thaw = sxgbe_platform_restore,
.restore = sxgbe_platform_restore,
};
#else
static const struct dev_pm_ops sxgbe_platform_pm_ops;
#endif /* CONFIG_PM */
static const struct of_device_id sxgbe_dt_ids[] = {
{ .compatible = "samsung,sxgbe-v2.0a"},
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, sxgbe_dt_ids);
struct platform_driver sxgbe_platform_driver = {
.probe = sxgbe_platform_probe,
.remove = sxgbe_platform_remove,
.driver = {
.name = SXGBE_RESOURCE_NAME,
.owner = THIS_MODULE,
.pm = &sxgbe_platform_pm_ops,
.of_match_table = of_match_ptr(sxgbe_dt_ids),
},
};
int sxgbe_register_platform(void)
{
int err;
err = platform_driver_register(&sxgbe_platform_driver);
if (err)
pr_err("failed to register the platform driver\n");
return err;
}
void sxgbe_unregister_platform(void)
{
platform_driver_unregister(&sxgbe_platform_driver);
}

View File

@ -0,0 +1,488 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_REGMAP_H__
#define __SXGBE_REGMAP_H__
/* SXGBE MAC Registers */
#define SXGBE_CORE_TX_CONFIG_REG 0x0000
#define SXGBE_CORE_RX_CONFIG_REG 0x0004
#define SXGBE_CORE_PKT_FILTER_REG 0x0008
#define SXGBE_CORE_WATCHDOG_TIMEOUT_REG 0x000C
#define SXGBE_CORE_HASH_TABLE_REG0 0x0010
#define SXGBE_CORE_HASH_TABLE_REG1 0x0014
#define SXGBE_CORE_HASH_TABLE_REG2 0x0018
#define SXGBE_CORE_HASH_TABLE_REG3 0x001C
#define SXGBE_CORE_HASH_TABLE_REG4 0x0020
#define SXGBE_CORE_HASH_TABLE_REG5 0x0024
#define SXGBE_CORE_HASH_TABLE_REG6 0x0028
#define SXGBE_CORE_HASH_TABLE_REG7 0x002C
/* EEE-LPI Registers */
#define SXGBE_CORE_LPI_CTRL_STATUS 0x00D0
#define SXGBE_CORE_LPI_TIMER_CTRL 0x00D4
/* VLAN Specific Registers */
#define SXGBE_CORE_VLAN_TAG_REG 0x0050
#define SXGBE_CORE_VLAN_HASHTAB_REG 0x0058
#define SXGBE_CORE_VLAN_INSCTL_REG 0x0060
#define SXGBE_CORE_VLAN_INNERCTL_REG 0x0064
#define SXGBE_CORE_RX_ETHTYPE_MATCH_REG 0x006C
/* Flow Control Registers */
#define SXGBE_CORE_TX_Q0_FLOWCTL_REG 0x0070
#define SXGBE_CORE_TX_Q1_FLOWCTL_REG 0x0074
#define SXGBE_CORE_TX_Q2_FLOWCTL_REG 0x0078
#define SXGBE_CORE_TX_Q3_FLOWCTL_REG 0x007C
#define SXGBE_CORE_TX_Q4_FLOWCTL_REG 0x0080
#define SXGBE_CORE_TX_Q5_FLOWCTL_REG 0x0084
#define SXGBE_CORE_TX_Q6_FLOWCTL_REG 0x0088
#define SXGBE_CORE_TX_Q7_FLOWCTL_REG 0x008C
#define SXGBE_CORE_RX_FLOWCTL_REG 0x0090
#define SXGBE_CORE_RX_CTL0_REG 0x00A0
#define SXGBE_CORE_RX_CTL1_REG 0x00A4
#define SXGBE_CORE_RX_CTL2_REG 0x00A8
#define SXGBE_CORE_RX_CTL3_REG 0x00AC
/* Interrupt Registers */
#define SXGBE_CORE_INT_STATUS_REG 0x00B0
#define SXGBE_CORE_INT_ENABLE_REG 0x00B4
#define SXGBE_CORE_RXTX_ERR_STATUS_REG 0x00B8
#define SXGBE_CORE_PMT_CTL_STATUS_REG 0x00C0
#define SXGBE_CORE_RWK_PKT_FILTER_REG 0x00C4
#define SXGBE_CORE_VERSION_REG 0x0110
#define SXGBE_CORE_DEBUG_REG 0x0114
#define SXGBE_CORE_HW_FEA_REG(index) (0x011C + index * 4)
/* SMA(MDIO) module registers */
#define SXGBE_MDIO_SCMD_ADD_REG 0x0200
#define SXGBE_MDIO_SCMD_DATA_REG 0x0204
#define SXGBE_MDIO_CCMD_WADD_REG 0x0208
#define SXGBE_MDIO_CCMD_WDATA_REG 0x020C
#define SXGBE_MDIO_CSCAN_PORT_REG 0x0210
#define SXGBE_MDIO_INT_STATUS_REG 0x0214
#define SXGBE_MDIO_INT_ENABLE_REG 0x0218
#define SXGBE_MDIO_PORT_CONDCON_REG 0x021C
#define SXGBE_MDIO_CLAUSE22_PORT_REG 0x0220
/* port specific, addr = 0-3 */
#define SXGBE_MDIO_DEV_BASE_REG 0x0230
#define SXGBE_MDIO_PORT_DEV_REG(addr) \
(SXGBE_MDIO_DEV_BASE_REG + (0x10 * addr) + 0x0)
#define SXGBE_MDIO_PORT_LSTATUS_REG(addr) \
(SXGBE_MDIO_DEV_BASE_REG + (0x10 * addr) + 0x4)
#define SXGBE_MDIO_PORT_ALIVE_REG(addr) \
(SXGBE_MDIO_DEV_BASE_REG + (0x10 * addr) + 0x8)
#define SXGBE_CORE_GPIO_CTL_REG 0x0278
#define SXGBE_CORE_GPIO_STATUS_REG 0x027C
/* Address registers for filtering */
#define SXGBE_CORE_ADD_BASE_REG 0x0300
/* addr = 0-31 */
#define SXGBE_CORE_ADD_HIGHOFFSET(addr) \
(SXGBE_CORE_ADD_BASE_REG + (0x8 * addr) + 0x0)
#define SXGBE_CORE_ADD_LOWOFFSET(addr) \
(SXGBE_CORE_ADD_BASE_REG + (0x8 * addr) + 0x4)
/* SXGBE MMC registers */
#define SXGBE_MMC_CTL_REG 0x0800
#define SXGBE_MMC_RXINT_STATUS_REG 0x0804
#define SXGBE_MMC_TXINT_STATUS_REG 0x0808
#define SXGBE_MMC_RXINT_ENABLE_REG 0x080C
#define SXGBE_MMC_TXINT_ENABLE_REG 0x0810
/* TX specific counters */
#define SXGBE_MMC_TXOCTETHI_GBCNT_REG 0x0814
#define SXGBE_MMC_TXOCTETLO_GBCNT_REG 0x0818
#define SXGBE_MMC_TXFRAMELO_GBCNT_REG 0x081C
#define SXGBE_MMC_TXFRAMEHI_GBCNT_REG 0x0820
#define SXGBE_MMC_TXBROADLO_GCNT_REG 0x0824
#define SXGBE_MMC_TXBROADHI_GCNT_REG 0x0828
#define SXGBE_MMC_TXMULTILO_GCNT_REG 0x082C
#define SXGBE_MMC_TXMULTIHI_GCNT_REG 0x0830
#define SXGBE_MMC_TX64LO_GBCNT_REG 0x0834
#define SXGBE_MMC_TX64HI_GBCNT_REG 0x0838
#define SXGBE_MMC_TX65TO127LO_GBCNT_REG 0x083C
#define SXGBE_MMC_TX65TO127HI_GBCNT_REG 0x0840
#define SXGBE_MMC_TX128TO255LO_GBCNT_REG 0x0844
#define SXGBE_MMC_TX128TO255HI_GBCNT_REG 0x0848
#define SXGBE_MMC_TX256TO511LO_GBCNT_REG 0x084C
#define SXGBE_MMC_TX256TO511HI_GBCNT_REG 0x0850
#define SXGBE_MMC_TX512TO1023LO_GBCNT_REG 0x0854
#define SXGBE_MMC_TX512TO1023HI_GBCNT_REG 0x0858
#define SXGBE_MMC_TX1023TOMAXLO_GBCNT_REG 0x085C
#define SXGBE_MMC_TX1023TOMAXHI_GBCNT_REG 0x0860
#define SXGBE_MMC_TXUNICASTLO_GBCNT_REG 0x0864
#define SXGBE_MMC_TXUNICASTHI_GBCNT_REG 0x0868
#define SXGBE_MMC_TXMULTILO_GBCNT_REG 0x086C
#define SXGBE_MMC_TXMULTIHI_GBCNT_REG 0x0870
#define SXGBE_MMC_TXBROADLO_GBCNT_REG 0x0874
#define SXGBE_MMC_TXBROADHI_GBCNT_REG 0x0878
#define SXGBE_MMC_TXUFLWLO_GBCNT_REG 0x087C
#define SXGBE_MMC_TXUFLWHI_GBCNT_REG 0x0880
#define SXGBE_MMC_TXOCTETLO_GCNT_REG 0x0884
#define SXGBE_MMC_TXOCTETHI_GCNT_REG 0x0888
#define SXGBE_MMC_TXFRAMELO_GCNT_REG 0x088C
#define SXGBE_MMC_TXFRAMEHI_GCNT_REG 0x0890
#define SXGBE_MMC_TXPAUSELO_CNT_REG 0x0894
#define SXGBE_MMC_TXPAUSEHI_CNT_REG 0x0898
#define SXGBE_MMC_TXVLANLO_GCNT_REG 0x089C
#define SXGBE_MMC_TXVLANHI_GCNT_REG 0x08A0
/* RX specific counters */
#define SXGBE_MMC_RXFRAMELO_GBCNT_REG 0x0900
#define SXGBE_MMC_RXFRAMEHI_GBCNT_REG 0x0904
#define SXGBE_MMC_RXOCTETLO_GBCNT_REG 0x0908
#define SXGBE_MMC_RXOCTETHI_GBCNT_REG 0x090C
#define SXGBE_MMC_RXOCTETLO_GCNT_REG 0x0910
#define SXGBE_MMC_RXOCTETHI_GCNT_REG 0x0914
#define SXGBE_MMC_RXBROADLO_GCNT_REG 0x0918
#define SXGBE_MMC_RXBROADHI_GCNT_REG 0x091C
#define SXGBE_MMC_RXMULTILO_GCNT_REG 0x0920
#define SXGBE_MMC_RXMULTIHI_GCNT_REG 0x0924
#define SXGBE_MMC_RXCRCERRLO_REG 0x0928
#define SXGBE_MMC_RXCRCERRHI_REG 0x092C
#define SXGBE_MMC_RXSHORT64BFRAME_ERR_REG 0x0930
#define SXGBE_MMC_RXJABBERERR_REG 0x0934
#define SXGBE_MMC_RXSHORT64BFRAME_COR_REG 0x0938
#define SXGBE_MMC_RXOVERMAXFRAME_COR_REG 0x093C
#define SXGBE_MMC_RX64LO_GBCNT_REG 0x0940
#define SXGBE_MMC_RX64HI_GBCNT_REG 0x0944
#define SXGBE_MMC_RX65TO127LO_GBCNT_REG 0x0948
#define SXGBE_MMC_RX65TO127HI_GBCNT_REG 0x094C
#define SXGBE_MMC_RX128TO255LO_GBCNT_REG 0x0950
#define SXGBE_MMC_RX128TO255HI_GBCNT_REG 0x0954
#define SXGBE_MMC_RX256TO511LO_GBCNT_REG 0x0958
#define SXGBE_MMC_RX256TO511HI_GBCNT_REG 0x095C
#define SXGBE_MMC_RX512TO1023LO_GBCNT_REG 0x0960
#define SXGBE_MMC_RX512TO1023HI_GBCNT_REG 0x0964
#define SXGBE_MMC_RX1023TOMAXLO_GBCNT_REG 0x0968
#define SXGBE_MMC_RX1023TOMAXHI_GBCNT_REG 0x096C
#define SXGBE_MMC_RXUNICASTLO_GCNT_REG 0x0970
#define SXGBE_MMC_RXUNICASTHI_GCNT_REG 0x0974
#define SXGBE_MMC_RXLENERRLO_REG 0x0978
#define SXGBE_MMC_RXLENERRHI_REG 0x097C
#define SXGBE_MMC_RXOUTOFRANGETYPELO_REG 0x0980
#define SXGBE_MMC_RXOUTOFRANGETYPEHI_REG 0x0984
#define SXGBE_MMC_RXPAUSELO_CNT_REG 0x0988
#define SXGBE_MMC_RXPAUSEHI_CNT_REG 0x098C
#define SXGBE_MMC_RXFIFOOVERFLOWLO_GBCNT_REG 0x0990
#define SXGBE_MMC_RXFIFOOVERFLOWHI_GBCNT_REG 0x0994
#define SXGBE_MMC_RXVLANLO_GBCNT_REG 0x0998
#define SXGBE_MMC_RXVLANHI_GBCNT_REG 0x099C
#define SXGBE_MMC_RXWATCHDOG_ERR_REG 0x09A0
/* L3/L4 function registers */
#define SXGBE_CORE_L34_ADDCTL_REG 0x0C00
#define SXGBE_CORE_L34_DATA_REG 0x0C04
/* ARP registers */
#define SXGBE_CORE_ARP_ADD_REG 0x0C10
/* RSS registers */
#define SXGBE_CORE_RSS_CTL_REG 0x0C80
#define SXGBE_CORE_RSS_ADD_REG 0x0C88
#define SXGBE_CORE_RSS_DATA_REG 0x0C8C
/* RSS control register bits */
#define SXGBE_CORE_RSS_CTL_UDP4TE BIT(3)
#define SXGBE_CORE_RSS_CTL_TCP4TE BIT(2)
#define SXGBE_CORE_RSS_CTL_IP2TE BIT(1)
#define SXGBE_CORE_RSS_CTL_RSSE BIT(0)
/* IEEE 1588 registers */
#define SXGBE_CORE_TSTAMP_CTL_REG 0x0D00
#define SXGBE_CORE_SUBSEC_INC_REG 0x0D04
#define SXGBE_CORE_SYSTIME_SEC_REG 0x0D0C
#define SXGBE_CORE_SYSTIME_NSEC_REG 0x0D10
#define SXGBE_CORE_SYSTIME_SECUP_REG 0x0D14
#define SXGBE_CORE_TSTAMP_ADD_REG 0x0D18
#define SXGBE_CORE_SYSTIME_HWORD_REG 0x0D1C
#define SXGBE_CORE_TSTAMP_STATUS_REG 0x0D20
#define SXGBE_CORE_TXTIME_STATUSNSEC_REG 0x0D30
#define SXGBE_CORE_TXTIME_STATUSSEC_REG 0x0D34
/* Auxiliary registers */
#define SXGBE_CORE_AUX_CTL_REG 0x0D40
#define SXGBE_CORE_AUX_TSTAMP_NSEC_REG 0x0D48
#define SXGBE_CORE_AUX_TSTAMP_SEC_REG 0x0D4C
#define SXGBE_CORE_AUX_TSTAMP_INGCOR_REG 0x0D50
#define SXGBE_CORE_AUX_TSTAMP_ENGCOR_REG 0x0D54
#define SXGBE_CORE_AUX_TSTAMP_INGCOR_NSEC_REG 0x0D58
#define SXGBE_CORE_AUX_TSTAMP_INGCOR_SUBNSEC_REG 0x0D5C
#define SXGBE_CORE_AUX_TSTAMP_ENGCOR_NSEC_REG 0x0D60
#define SXGBE_CORE_AUX_TSTAMP_ENGCOR_SUBNSEC_REG 0x0D64
/* PPS registers */
#define SXGBE_CORE_PPS_CTL_REG 0x0D70
#define SXGBE_CORE_PPS_BASE 0x0D80
/* addr = 0 - 3 */
#define SXGBE_CORE_PPS_TTIME_SEC_REG(addr) \
(SXGBE_CORE_PPS_BASE + (0x10 * addr) + 0x0)
#define SXGBE_CORE_PPS_TTIME_NSEC_REG(addr) \
(SXGBE_CORE_PPS_BASE + (0x10 * addr) + 0x4)
#define SXGBE_CORE_PPS_INTERVAL_REG(addr) \
(SXGBE_CORE_PPS_BASE + (0x10 * addr) + 0x8)
#define SXGBE_CORE_PPS_WIDTH_REG(addr) \
(SXGBE_CORE_PPS_BASE + (0x10 * addr) + 0xC)
#define SXGBE_CORE_PTO_CTL_REG 0x0DC0
#define SXGBE_CORE_SRCPORT_ITY0_REG 0x0DC4
#define SXGBE_CORE_SRCPORT_ITY1_REG 0x0DC8
#define SXGBE_CORE_SRCPORT_ITY2_REG 0x0DCC
#define SXGBE_CORE_LOGMSG_LEVEL_REG 0x0DD0
/* SXGBE MTL Registers */
#define SXGBE_MTL_BASE_REG 0x1000
#define SXGBE_MTL_OP_MODE_REG (SXGBE_MTL_BASE_REG + 0x0000)
#define SXGBE_MTL_DEBUG_CTL_REG (SXGBE_MTL_BASE_REG + 0x0008)
#define SXGBE_MTL_DEBUG_STATUS_REG (SXGBE_MTL_BASE_REG + 0x000C)
#define SXGBE_MTL_FIFO_DEBUGDATA_REG (SXGBE_MTL_BASE_REG + 0x0010)
#define SXGBE_MTL_INT_STATUS_REG (SXGBE_MTL_BASE_REG + 0x0020)
#define SXGBE_MTL_RXQ_DMAMAP0_REG (SXGBE_MTL_BASE_REG + 0x0030)
#define SXGBE_MTL_RXQ_DMAMAP1_REG (SXGBE_MTL_BASE_REG + 0x0034)
#define SXGBE_MTL_RXQ_DMAMAP2_REG (SXGBE_MTL_BASE_REG + 0x0038)
#define SXGBE_MTL_TX_PRTYMAP0_REG (SXGBE_MTL_BASE_REG + 0x0040)
#define SXGBE_MTL_TX_PRTYMAP1_REG (SXGBE_MTL_BASE_REG + 0x0044)
/* TC/Queue registers, qnum=0-15 */
#define SXGBE_MTL_TC_TXBASE_REG (SXGBE_MTL_BASE_REG + 0x0100)
#define SXGBE_MTL_TXQ_OPMODE_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x00)
#define SXGBE_MTL_SFMODE BIT(1)
#define SXGBE_MTL_FIFO_LSHIFT 16
#define SXGBE_MTL_ENABLE_QUEUE 0x00000008
#define SXGBE_MTL_TXQ_UNDERFLOW_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x04)
#define SXGBE_MTL_TXQ_DEBUG_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x08)
#define SXGBE_MTL_TXQ_ETSCTL_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x10)
#define SXGBE_MTL_TXQ_ETSSTATUS_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x14)
#define SXGBE_MTL_TXQ_QUANTWEIGHT_REG(qnum) \
(SXGBE_MTL_TC_TXBASE_REG + (qnum * 0x80) + 0x18)
#define SXGBE_MTL_TC_RXBASE_REG 0x1140
#define SXGBE_RX_MTL_SFMODE BIT(5)
#define SXGBE_MTL_RXQ_OPMODE_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x00)
#define SXGBE_MTL_RXQ_MISPKTOVERFLOW_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x04)
#define SXGBE_MTL_RXQ_DEBUG_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x08)
#define SXGBE_MTL_RXQ_CTL_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x0C)
#define SXGBE_MTL_RXQ_INTENABLE_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x30)
#define SXGBE_MTL_RXQ_INTSTATUS_REG(qnum) \
(SXGBE_MTL_TC_RXBASE_REG + (qnum * 0x80) + 0x34)
/* SXGBE DMA Registers */
#define SXGBE_DMA_BASE_REG 0x3000
#define SXGBE_DMA_MODE_REG (SXGBE_DMA_BASE_REG + 0x0000)
#define SXGBE_DMA_SOFT_RESET BIT(0)
#define SXGBE_DMA_SYSBUS_MODE_REG (SXGBE_DMA_BASE_REG + 0x0004)
#define SXGBE_DMA_AXI_UNDEF_BURST BIT(0)
#define SXGBE_DMA_ENHACE_ADDR_MODE BIT(11)
#define SXGBE_DMA_INT_STATUS_REG (SXGBE_DMA_BASE_REG + 0x0008)
#define SXGBE_DMA_AXI_ARCACHECTL_REG (SXGBE_DMA_BASE_REG + 0x0010)
#define SXGBE_DMA_AXI_AWCACHECTL_REG (SXGBE_DMA_BASE_REG + 0x0018)
#define SXGBE_DMA_DEBUG_STATUS0_REG (SXGBE_DMA_BASE_REG + 0x0020)
#define SXGBE_DMA_DEBUG_STATUS1_REG (SXGBE_DMA_BASE_REG + 0x0024)
#define SXGBE_DMA_DEBUG_STATUS2_REG (SXGBE_DMA_BASE_REG + 0x0028)
#define SXGBE_DMA_DEBUG_STATUS3_REG (SXGBE_DMA_BASE_REG + 0x002C)
#define SXGBE_DMA_DEBUG_STATUS4_REG (SXGBE_DMA_BASE_REG + 0x0030)
#define SXGBE_DMA_DEBUG_STATUS5_REG (SXGBE_DMA_BASE_REG + 0x0034)
/* Channel Registers, cha_num = 0-15 */
#define SXGBE_DMA_CHA_BASE_REG \
(SXGBE_DMA_BASE_REG + 0x0100)
#define SXGBE_DMA_CHA_CTL_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x00)
#define SXGBE_DMA_PBL_X8MODE BIT(16)
#define SXGBE_DMA_CHA_TXCTL_TSE_ENABLE BIT(12)
#define SXGBE_DMA_CHA_TXCTL_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x04)
#define SXGBE_DMA_CHA_RXCTL_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x08)
#define SXGBE_DMA_CHA_TXDESC_HADD_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x10)
#define SXGBE_DMA_CHA_TXDESC_LADD_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x14)
#define SXGBE_DMA_CHA_RXDESC_HADD_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x18)
#define SXGBE_DMA_CHA_RXDESC_LADD_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x1C)
#define SXGBE_DMA_CHA_TXDESC_TAILPTR_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x24)
#define SXGBE_DMA_CHA_RXDESC_TAILPTR_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x2C)
#define SXGBE_DMA_CHA_TXDESC_RINGLEN_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x30)
#define SXGBE_DMA_CHA_RXDESC_RINGLEN_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x34)
#define SXGBE_DMA_CHA_INT_ENABLE_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x38)
#define SXGBE_DMA_CHA_INT_RXWATCHTMR_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x3C)
#define SXGBE_DMA_CHA_TXDESC_CURADDLO_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x44)
#define SXGBE_DMA_CHA_RXDESC_CURADDLO_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x4C)
#define SXGBE_DMA_CHA_CURTXBUF_ADDHI_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x50)
#define SXGBE_DMA_CHA_CURTXBUF_ADDLO_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x54)
#define SXGBE_DMA_CHA_CURRXBUF_ADDHI_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x58)
#define SXGBE_DMA_CHA_CURRXBUF_ADDLO_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x5C)
#define SXGBE_DMA_CHA_STATUS_REG(cha_num) \
(SXGBE_DMA_CHA_BASE_REG + (cha_num * 0x80) + 0x60)
/* TX DMA control register specific */
#define SXGBE_TX_START_DMA BIT(0)
/* sxgbe tx configuration register bitfields */
#define SXGBE_SPEED_10G 0x0
#define SXGBE_SPEED_2_5G 0x1
#define SXGBE_SPEED_1G 0x2
#define SXGBE_SPEED_LSHIFT 29
#define SXGBE_TX_ENABLE BIT(0)
#define SXGBE_TX_DISDIC_ALGO BIT(1)
#define SXGBE_TX_JABBER_DISABLE BIT(16)
/* sxgbe rx configuration register bitfields */
#define SXGBE_RX_ENABLE BIT(0)
#define SXGBE_RX_ACS_ENABLE BIT(1)
#define SXGBE_RX_WATCHDOG_DISABLE BIT(7)
#define SXGBE_RX_JUMBPKT_ENABLE BIT(8)
#define SXGBE_RX_CSUMOFFLOAD_ENABLE BIT(9)
#define SXGBE_RX_LOOPBACK_ENABLE BIT(10)
#define SXGBE_RX_ARPOFFLOAD_ENABLE BIT(31)
/* sxgbe vlan Tag Register bitfields */
#define SXGBE_VLAN_SVLAN_ENABLE BIT(18)
#define SXGBE_VLAN_DOUBLEVLAN_ENABLE BIT(26)
#define SXGBE_VLAN_INNERVLAN_ENABLE BIT(27)
/* XMAC VLAN Tag Inclusion Register (0x0060) bitfields.
* The fields below also apply to the Inner VLAN Tag
* Inclusion Register (0x0064).
*/
enum vlan_tag_ctl_tx {
VLAN_TAG_TX_NOP,
VLAN_TAG_TX_DEL,
VLAN_TAG_TX_INSERT,
VLAN_TAG_TX_REPLACE
};
#define SXGBE_VLAN_PRTY_CTL BIT(18)
#define SXGBE_VLAN_CSVL_CTL BIT(19)
/* SXGBE TX Q Flow Control Register bitfields */
#define SXGBE_TX_FLOW_CTL_FCB BIT(0)
#define SXGBE_TX_FLOW_CTL_TFB BIT(1)
/* SXGBE RX Q Flow Control Register bitfields */
#define SXGBE_RX_FLOW_CTL_ENABLE BIT(0)
#define SXGBE_RX_UNICAST_DETECT BIT(1)
#define SXGBE_RX_PRTYFLOW_CTL_ENABLE BIT(8)
/* sxgbe rx Q control0 register bitfields */
#define SXGBE_RX_Q_ENABLE 0x2
/* SXGBE hardware features bitfield specific */
/* Capability Register 0 */
#define SXGBE_HW_FEAT_GMII(cap) ((cap & 0x00000002) >> 1)
#define SXGBE_HW_FEAT_VLAN_HASH_FILTER(cap) ((cap & 0x00000010) >> 4)
#define SXGBE_HW_FEAT_SMA(cap) ((cap & 0x00000020) >> 5)
#define SXGBE_HW_FEAT_PMT_TEMOTE_WOP(cap) ((cap & 0x00000040) >> 6)
#define SXGBE_HW_FEAT_PMT_MAGIC_PKT(cap) ((cap & 0x00000080) >> 7)
#define SXGBE_HW_FEAT_RMON(cap) ((cap & 0x00000100) >> 8)
#define SXGBE_HW_FEAT_ARP_OFFLOAD(cap) ((cap & 0x00000200) >> 9)
#define SXGBE_HW_FEAT_IEEE1500_2008(cap) ((cap & 0x00001000) >> 12)
#define SXGBE_HW_FEAT_EEE(cap) ((cap & 0x00002000) >> 13)
#define SXGBE_HW_FEAT_TX_CSUM_OFFLOAD(cap) ((cap & 0x00004000) >> 14)
#define SXGBE_HW_FEAT_RX_CSUM_OFFLOAD(cap) ((cap & 0x00010000) >> 16)
#define SXGBE_HW_FEAT_MACADDR_COUNT(cap) ((cap & 0x007C0000) >> 18)
#define SXGBE_HW_FEAT_TSTMAP_SRC(cap) ((cap & 0x06000000) >> 25)
#define SXGBE_HW_FEAT_SRCADDR_VLAN(cap) ((cap & 0x08000000) >> 27)
/* Capability Register 1 */
#define SXGBE_HW_FEAT_RX_FIFO_SIZE(cap) ((cap & 0x0000001F))
#define SXGBE_HW_FEAT_TX_FIFO_SIZE(cap) ((cap & 0x000007C0) >> 6)
#define SXGBE_HW_FEAT_IEEE1588_HWORD(cap) ((cap & 0x00002000) >> 13)
#define SXGBE_HW_FEAT_DCB(cap) ((cap & 0x00010000) >> 16)
#define SXGBE_HW_FEAT_SPLIT_HDR(cap) ((cap & 0x00020000) >> 17)
#define SXGBE_HW_FEAT_TSO(cap) ((cap & 0x00040000) >> 18)
#define SXGBE_HW_FEAT_DEBUG_MEM_IFACE(cap) ((cap & 0x00080000) >> 19)
#define SXGBE_HW_FEAT_RSS(cap) ((cap & 0x00100000) >> 20)
#define SXGBE_HW_FEAT_HASH_TABLE_SIZE(cap) ((cap & 0x03000000) >> 24)
#define SXGBE_HW_FEAT_L3L4_FILTER_NUM(cap) ((cap & 0x78000000) >> 27)
/* Capability Register 2 */
#define SXGBE_HW_FEAT_RX_MTL_QUEUES(cap) ((cap & 0x0000000F))
#define SXGBE_HW_FEAT_TX_MTL_QUEUES(cap) ((cap & 0x000003C0) >> 6)
#define SXGBE_HW_FEAT_RX_DMA_CHANNELS(cap) ((cap & 0x0000F000) >> 12)
#define SXGBE_HW_FEAT_TX_DMA_CHANNELS(cap) ((cap & 0x003C0000) >> 18)
#define SXGBE_HW_FEAT_PPS_OUTPUTS(cap) ((cap & 0x07000000) >> 24)
#define SXGBE_HW_FEAT_AUX_SNAPSHOTS(cap) ((cap & 0x70000000) >> 28)
/* DMA channel interrupt enable specific */
/* DMA Normal interrupt */
#define SXGBE_DMA_INT_ENA_NIE BIT(16) /* Normal Summary */
#define SXGBE_DMA_INT_ENA_TIE BIT(0) /* Transmit Interrupt */
#define SXGBE_DMA_INT_ENA_TUE BIT(2) /* Transmit Buffer Unavailable */
#define SXGBE_DMA_INT_ENA_RIE BIT(6) /* Receive Interrupt */
#define SXGBE_DMA_INT_NORMAL \
(SXGBE_DMA_INT_ENA_NIE | SXGBE_DMA_INT_ENA_RIE | \
SXGBE_DMA_INT_ENA_TIE | SXGBE_DMA_INT_ENA_TUE)
/* DMA Abnormal interrupt */
#define SXGBE_DMA_INT_ENA_AIE BIT(15) /* Abnormal Summary */
#define SXGBE_DMA_INT_ENA_TSE BIT(1) /* Transmit Stopped */
#define SXGBE_DMA_INT_ENA_RUE BIT(7) /* Receive Buffer Unavailable */
#define SXGBE_DMA_INT_ENA_RSE BIT(8) /* Receive Stopped */
#define SXGBE_DMA_INT_ENA_FBE BIT(12) /* Fatal Bus Error */
#define SXGBE_DMA_INT_ENA_CDEE BIT(13) /* Context Descriptor Error */
#define SXGBE_DMA_INT_ABNORMAL \
(SXGBE_DMA_INT_ENA_AIE | SXGBE_DMA_INT_ENA_TSE | \
SXGBE_DMA_INT_ENA_RUE | SXGBE_DMA_INT_ENA_RSE | \
SXGBE_DMA_INT_ENA_FBE | SXGBE_DMA_INT_ENA_CDEE)
#define SXGBE_DMA_ENA_INT (SXGBE_DMA_INT_NORMAL | SXGBE_DMA_INT_ABNORMAL)
/* DMA channel interrupt status specific */
#define SXGBE_DMA_INT_STATUS_REB2 BIT(21)
#define SXGBE_DMA_INT_STATUS_REB1 BIT(20)
#define SXGBE_DMA_INT_STATUS_REB0 BIT(19)
#define SXGBE_DMA_INT_STATUS_TEB2 BIT(18)
#define SXGBE_DMA_INT_STATUS_TEB1 BIT(17)
#define SXGBE_DMA_INT_STATUS_TEB0 BIT(16)
#define SXGBE_DMA_INT_STATUS_NIS BIT(15)
#define SXGBE_DMA_INT_STATUS_AIS BIT(14)
#define SXGBE_DMA_INT_STATUS_CTXTERR BIT(13)
#define SXGBE_DMA_INT_STATUS_FBE BIT(12)
#define SXGBE_DMA_INT_STATUS_RPS BIT(8)
#define SXGBE_DMA_INT_STATUS_RBU BIT(7)
#define SXGBE_DMA_INT_STATUS_RI BIT(6)
#define SXGBE_DMA_INT_STATUS_TBU BIT(2)
#define SXGBE_DMA_INT_STATUS_TPS BIT(1)
#define SXGBE_DMA_INT_STATUS_TI BIT(0)
#endif /* __SXGBE_REGMAP_H__ */

View File

@ -0,0 +1,91 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/phy.h>
#include "sxgbe_common.h"
#include "sxgbe_xpcs.h"
static int sxgbe_xpcs_read(struct net_device *ndev, unsigned int reg)
{
u32 value;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
value = readl(priv->ioaddr + XPCS_OFFSET + reg);
return value;
}
static int sxgbe_xpcs_write(struct net_device *ndev, int reg, int data)
{
struct sxgbe_priv_data *priv = netdev_priv(ndev);
writel(data, priv->ioaddr + XPCS_OFFSET + reg);
return 0;
}

int sxgbe_xpcs_init(struct net_device *ndev)
{
	u32 value;

	value = sxgbe_xpcs_read(ndev, SR_PCS_MMD_CONTROL1);

	/* 10G XAUI mode */
	sxgbe_xpcs_write(ndev, SR_PCS_CONTROL2, XPCS_TYPE_SEL_X);
	sxgbe_xpcs_write(ndev, VR_PCS_MMD_XAUI_MODE_CONTROL, XPCS_XAUI_MODE);
	sxgbe_xpcs_write(ndev, VR_PCS_MMD_XAUI_MODE_CONTROL, value | BIT(13));
	sxgbe_xpcs_write(ndev, SR_PCS_MMD_CONTROL1, value | BIT(11));
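
	/* Poll the sequencer status until it leaves the stable state */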
	do {
		value = sxgbe_xpcs_read(ndev, VR_PCS_MMD_DIGITAL_STATUS);
	} while ((value & XPCS_QSEQ_STATE_MPLLOFF) == XPCS_QSEQ_STATE_STABLE);

	value = sxgbe_xpcs_read(ndev, SR_PCS_MMD_CONTROL1);
	sxgbe_xpcs_write(ndev, SR_PCS_MMD_CONTROL1, value & ~BIT(11));
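
	/* Poll the sequencer status until it returns to the stable state */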
	do {
		value = sxgbe_xpcs_read(ndev, VR_PCS_MMD_DIGITAL_STATUS);
	} while ((value & XPCS_QSEQ_STATE_MPLLOFF) != XPCS_QSEQ_STATE_STABLE);

	return 0;
}

int sxgbe_xpcs_init_1G(struct net_device *ndev)
{
	int value;

	/* 10GBASE-X PCS (1G) mode */
	sxgbe_xpcs_write(ndev, SR_PCS_CONTROL2, XPCS_TYPE_SEL_X);
	sxgbe_xpcs_write(ndev, VR_PCS_MMD_XAUI_MODE_CONTROL, XPCS_XAUI_MODE);
	value = sxgbe_xpcs_read(ndev, SR_PCS_MMD_CONTROL1);
	sxgbe_xpcs_write(ndev, SR_PCS_MMD_CONTROL1, value & ~BIT(13));

	value = sxgbe_xpcs_read(ndev, SR_MII_MMD_CONTROL);
	sxgbe_xpcs_write(ndev, SR_MII_MMD_CONTROL, value | BIT(6));
	sxgbe_xpcs_write(ndev, SR_MII_MMD_CONTROL, value & ~BIT(13));

	value = sxgbe_xpcs_read(ndev, SR_PCS_MMD_CONTROL1);
	sxgbe_xpcs_write(ndev, SR_PCS_MMD_CONTROL1, value | BIT(11));
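
	/* Poll the sequencer status until it reports the stable state */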
	do {
		value = sxgbe_xpcs_read(ndev, VR_PCS_MMD_DIGITAL_STATUS);
	} while ((value & XPCS_QSEQ_STATE_MPLLOFF) != XPCS_QSEQ_STATE_STABLE);

	value = sxgbe_xpcs_read(ndev, SR_PCS_MMD_CONTROL1);
	sxgbe_xpcs_write(ndev, SR_PCS_MMD_CONTROL1, value & ~BIT(11));

	/* Enable clause 37 auto-negotiation */
	value = sxgbe_xpcs_read(ndev, SR_MII_MMD_CONTROL);
	sxgbe_xpcs_write(ndev, SR_MII_MMD_CONTROL, value | BIT(12));

	return 0;
}

View File

@ -0,0 +1,38 @@
/* 10G controller driver for Samsung SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Byungho An <bh74.an@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_XPCS_H__
#define __SXGBE_XPCS_H__
/* XPCS Registers */
#define XPCS_OFFSET 0x1A060000
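
/* The register defines below use clause 45 style addressing: the MMD
 * device address sits in the upper bits and the 16-bit register address
 * in the lower bits.
 */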
#define SR_PCS_MMD_CONTROL1 0x030000
#define SR_PCS_CONTROL2 0x030007
#define VR_PCS_MMD_XAUI_MODE_CONTROL 0x038004
#define VR_PCS_MMD_DIGITAL_STATUS 0x038010
#define SR_MII_MMD_CONTROL 0x1F0000
#define SR_MII_MMD_AN_ADV 0x1F0004
#define SR_MII_MMD_AN_LINK_PARTNER_BA 0x1F0005
#define VR_MII_MMD_AN_CONTROL 0x1F8001
#define VR_MII_MMD_AN_INT_STATUS 0x1F8002
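
/* Sequencer states reported through VR_PCS_MMD_DIGITAL_STATUS */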
#define XPCS_QSEQ_STATE_STABLE 0x10
#define XPCS_QSEQ_STATE_MPLLOFF 0x1c
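
/* PCS type selection values written to SR_PCS_CONTROL2 */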
#define XPCS_TYPE_SEL_R 0x00
#define XPCS_TYPE_SEL_X 0x01
#define XPCS_TYPE_SEL_W 0x02
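
/* Mode values written to VR_PCS_MMD_XAUI_MODE_CONTROL */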
#define XPCS_XAUI_MODE 0x00
#define XPCS_RXAUI_MODE 0x01

int sxgbe_xpcs_init(struct net_device *ndev);
int sxgbe_xpcs_init_1G(struct net_device *ndev);
#endif /* __SXGBE_XPCS_H__ */

View File

@ -0,0 +1,54 @@
/*
* 10G controller driver for Samsung EXYNOS SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Siva Reddy Kallam <siva.kallam@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __SXGBE_PLATFORM_H__
#define __SXGBE_PLATFORM_H__

/* MDC Clock Selection defines */
#define SXGBE_CSR_100_150M 0x0 /* MDC = clk_scr_i/62 */
#define SXGBE_CSR_150_250M 0x1 /* MDC = clk_scr_i/102 */
#define SXGBE_CSR_250_300M 0x2 /* MDC = clk_scr_i/122 */
#define SXGBE_CSR_300_350M 0x3 /* MDC = clk_scr_i/142 */
#define SXGBE_CSR_350_400M 0x4 /* MDC = clk_scr_i/162 */
#define SXGBE_CSR_400_500M 0x5 /* MDC = clk_scr_i/202 */

/* Platform data for the platform device structure's
 * platform_data field
 */
struct sxgbe_mdio_bus_data {
unsigned int phy_mask;
int *irqs;
int probed_phy_irq;
};

struct sxgbe_dma_cfg {
	int pbl;		/* programmable burst length */
int fixed_burst;
int burst_map;
int adv_addr_mode;
};

struct sxgbe_plat_data {
char *phy_bus_name;
int bus_id;
int phy_addr;
int interface;
struct sxgbe_mdio_bus_data *mdio_bus_data;
struct sxgbe_dma_cfg *dma_cfg;
	int clk_csr;		/* MDC clock selection, one of SXGBE_CSR_* */
int pmt;
int force_sf_dma_mode;
int force_thresh_dma_mode;
int riwt_off;
};
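
/*
 * Illustrative sketch only (not part of the driver): a non-DT board file
 * could populate the structures above and hand them to the controller
 * through the platform device's platform_data pointer. All values below
 * are placeholders; PHY_INTERFACE_MODE_XGMII comes from <linux/phy.h>.
 *
 *	static struct sxgbe_dma_cfg sxgbe_dma_cfg = {
 *		.pbl		= 8,
 *		.fixed_burst	= 1,
 *		.burst_map	= 0x20,
 *	};
 *
 *	static struct sxgbe_plat_data sxgbe_plat_data = {
 *		.bus_id		= 0,
 *		.phy_addr	= 0,
 *		.interface	= PHY_INTERFACE_MODE_XGMII,
 *		.dma_cfg	= &sxgbe_dma_cfg,
 *		.clk_csr	= SXGBE_CSR_250_300M,
 *	};
 */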
#endif /* __SXGBE_PLATFORM_H__ */