Merge tag 'thunderbolt-for-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into char-misc-next

Mika writes:

thunderbolt: Changes for v5.5 merge window

This adds Thunderbolt 3 support for the software connection manager. It
is currently only used in Apple systems. Previously the driver started
the firmware connection manager on those but it is not necessary anymore
with these patches (we still leave the user an option to start the firmware
in case there are problems with the software connection manager).

This includes:

  - Expose 'generation' attribute under each device in sysfs
  - Convert register names to follow the USB4 spec
  - Lane bonding support
  - Expose link speed and width in sysfs
  - Display Port handshake needed for Titan Ridge devices
  - Display Port pairing and resource management
  - Display Port bandwidth management

* tag 'thunderbolt-for-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt: (21 commits)
  thunderbolt: Do not start firmware unless asked by the user
  thunderbolt: Add bandwidth management for Display Port tunnels
  thunderbolt: Add Display Port adapter pairing and resource management
  thunderbolt: Add Display Port CM handshake for Titan Ridge devices
  thunderbolt: Add downstream PCIe port mappings for Alpine and Titan Ridge
  thunderbolt: Expand controller name in tb_switch_is_xy()
  thunderbolt: Add default linking between lane adapters if not provided by DROM
  thunderbolt: Add support for lane bonding
  thunderbolt: Refactor add_switch() into two functions
  thunderbolt: Add helper macro to iterate over switch ports
  thunderbolt: Make tb_sw_write() take const parameter
  thunderbolt: Convert DP adapter register names to follow the USB4 spec
  thunderbolt: Convert PCIe adapter register names to follow the USB4 spec
  thunderbolt: Convert basic adapter register names to follow the USB4 spec
  thunderbolt: Log error if adding switch fails
  thunderbolt: Log switch route string on config read/write timeout
  thunderbolt: Introduce tb_switch_is_icm()
  thunderbolt: Add 'generation' attribute for devices
  thunderbolt: Drop unnecessary read when writing LC command in Ice Lake
  thunderbolt: Fix lockdep circular locking depedency warning
  ...
This commit is contained in:
Greg Kroah-Hartman 2019-11-07 15:24:16 +01:00
commit 4180468e16
16 changed files with 1632 additions and 277 deletions

View File

@ -80,6 +80,14 @@ Contact: thunderbolt-software@lists.01.org
Description: This attribute contains 1 if Thunderbolt device was already
authorized on boot and 0 otherwise.
What: /sys/bus/thunderbolt/devices/.../generation
Date: Jan 2020
KernelVersion: 5.5
Contact: Christian Kellner <christian@kellner.me>
Description: This attribute contains the generation of the Thunderbolt
controller associated with the device. It will contain 4
for USB4.
What: /sys/bus/thunderbolt/devices/.../key
Date: Sep 2017
KernelVersion: 4.13
@ -104,6 +112,34 @@ Contact: thunderbolt-software@lists.01.org
Description: This attribute contains name of this device extracted from
the device DROM.
What: /sys/bus/thunderbolt/devices/.../rx_speed
Date: Jan 2020
KernelVersion: 5.5
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: This attribute reports the device RX speed per lane.
All RX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/.../rx_lanes
Date: Jan 2020
KernelVersion: 5.5
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: This attribute reports number of RX lanes the device is
using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/.../tx_speed
Date: Jan 2020
KernelVersion: 5.5
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: This attribute reports the TX speed per lane.
All TX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/.../tx_lanes
Date: Jan 2020
KernelVersion: 5.5
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: This attribute reports number of TX lanes the device is
using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/.../vendor
Date: Sep 2017
KernelVersion: 4.13
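
For illustration, the new attributes can be read from user space like any
other sysfs file. A minimal sketch follows; the device name "0-1" and the
sample values in the comments are hypothetical and depend on what is
actually connected.

/* Minimal user-space sketch; "0-1" is a placeholder device name. */
#include <stdio.h>

static void print_attr(const char *name)
{
    char path[256], buf[64];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/bus/thunderbolt/devices/0-1/%s", name);
    f = fopen(path, "r");
    if (!f)
        return;
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", name, buf);   /* buf already ends with '\n' */
    fclose(f);
}

int main(void)
{
    print_attr("generation");   /* e.g. 3 for a Thunderbolt 3 device */
    print_attr("tx_speed");     /* e.g. "20.0 Gb/s" on a Gen3 link */
    print_attr("tx_lanes");     /* "2" when the link is bonded */
    print_attr("rx_speed");
    print_attr("rx_lanes");
    return 0;
}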

View File

@ -33,9 +33,9 @@ static int tb_port_enable_tmu(struct tb_port *port, bool enable)
* Legacy devices need to have TMU access enabled before port
* space can be fully accessed.
*/
if (tb_switch_is_lr(sw))
if (tb_switch_is_light_ridge(sw))
offset = 0x26;
else if (tb_switch_is_er(sw))
else if (tb_switch_is_eagle_ridge(sw))
offset = 0x2a;
else
return 0;
@ -60,7 +60,7 @@ static void tb_port_dummy_read(struct tb_port *port)
* reading stale data on next read perform one dummy read after
* port capabilities are walked.
*/
if (tb_switch_is_lr(port->sw)) {
if (tb_switch_is_light_ridge(port->sw)) {
u32 dummy;
tb_port_read(port, &dummy, TB_CFG_PORT, 0, 1);

View File

@ -962,8 +962,8 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
return tb_cfg_get_error(ctl, space, &res);
case -ETIMEDOUT:
tb_ctl_warn(ctl, "timeout reading config space %u from %#x\n",
space, offset);
tb_ctl_warn(ctl, "%llx: timeout reading config space %u from %#x\n",
route, space, offset);
break;
default:
@ -988,8 +988,8 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
return tb_cfg_get_error(ctl, space, &res);
case -ETIMEDOUT:
tb_ctl_warn(ctl, "timeout writing config space %u to %#x\n",
space, offset);
tb_ctl_warn(ctl, "%llx: timeout writing config space %u to %#x\n",
route, space, offset);
break;
default:

View File

@ -514,17 +514,6 @@ int tb_drom_read(struct tb_switch *sw)
* no entries). Hardcode the configuration here.
*/
tb_drom_read_uid_only(sw, &sw->uid);
sw->ports[1].link_nr = 0;
sw->ports[2].link_nr = 1;
sw->ports[1].dual_link_port = &sw->ports[2];
sw->ports[2].dual_link_port = &sw->ports[1];
sw->ports[3].link_nr = 0;
sw->ports[4].link_nr = 1;
sw->ports[3].dual_link_port = &sw->ports[4];
sw->ports[4].dual_link_port = &sw->ports[3];
return 0;
}

View File

@ -11,6 +11,7 @@
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/moduleparam.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/platform_data/x86/apple.h>
@ -43,6 +44,10 @@
#define ICM_APPROVE_TIMEOUT 10000 /* ms */
#define ICM_MAX_LINK 4
static bool start_icm;
module_param(start_icm, bool, 0444);
MODULE_PARM_DESC(start_icm, "start ICM firmware if it is not running (default: false)");
/**
* struct icm - Internal connection manager private data
* @request_lock: Makes sure only one message is send to ICM at time
@ -147,6 +152,17 @@ static const struct intel_vss *parse_intel_vss(const void *ep_name, size_t size)
return NULL;
}
static bool intel_vss_is_rtd3(const void *ep_name, size_t size)
{
const struct intel_vss *vss;
vss = parse_intel_vss(ep_name, size);
if (vss)
return !!(vss->flags & INTEL_VSS_FLAGS_RTD3);
return false;
}
static inline struct tb *icm_to_tb(struct icm *icm)
{
return ((void *)icm - sizeof(struct tb));
@ -339,6 +355,14 @@ static void icm_veto_end(struct tb *tb)
}
}
static bool icm_firmware_running(const struct tb_nhi *nhi)
{
u32 val;
val = ioread32(nhi->iobase + REG_FW_STS);
return !!(val & REG_FW_STS_ICM_EN);
}
static bool icm_fr_is_supported(struct tb *tb)
{
return !x86_apple_machine;
@ -562,58 +586,42 @@ static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
return 0;
}
static struct tb_switch *add_switch(struct tb_switch *parent_sw, u64 route,
const uuid_t *uuid, const u8 *ep_name,
size_t ep_name_size, u8 connection_id,
u8 connection_key, u8 link, u8 depth,
enum tb_security_level security_level,
bool authorized, bool boot)
static struct tb_switch *alloc_switch(struct tb_switch *parent_sw, u64 route,
const uuid_t *uuid)
{
const struct intel_vss *vss;
struct tb *tb = parent_sw->tb;
struct tb_switch *sw;
int ret;
pm_runtime_get_sync(&parent_sw->dev);
sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route);
if (IS_ERR(sw))
goto out;
sw = tb_switch_alloc(tb, &parent_sw->dev, route);
if (IS_ERR(sw)) {
tb_warn(tb, "failed to allocate switch at %llx\n", route);
return sw;
}
sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL);
if (!sw->uuid) {
tb_sw_warn(sw, "cannot allocate memory for switch\n");
tb_switch_put(sw);
goto out;
return ERR_PTR(-ENOMEM);
}
sw->connection_id = connection_id;
sw->connection_key = connection_key;
sw->link = link;
sw->depth = depth;
sw->authorized = authorized;
sw->security_level = security_level;
sw->boot = boot;
init_completion(&sw->rpm_complete);
vss = parse_intel_vss(ep_name, ep_name_size);
if (vss)
sw->rpm = !!(vss->flags & INTEL_VSS_FLAGS_RTD3);
init_completion(&sw->rpm_complete);
return sw;
}
static int add_switch(struct tb_switch *parent_sw, struct tb_switch *sw)
{
u64 route = tb_route(sw);
int ret;
/* Link the two switches now */
tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw);
ret = tb_switch_add(sw);
if (ret) {
if (ret)
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
tb_switch_put(sw);
sw = ERR_PTR(ret);
}
out:
pm_runtime_mark_last_busy(&parent_sw->dev);
pm_runtime_put_autosuspend(&parent_sw->dev);
return sw;
return ret;
}
static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
@ -697,11 +705,11 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
(const struct icm_fr_event_device_connected *)hdr;
enum tb_security_level security_level;
struct tb_switch *sw, *parent_sw;
bool boot, dual_lane, speed_gen3;
struct icm *icm = tb_priv(tb);
bool authorized = false;
struct tb_xdomain *xd;
u8 link, depth;
bool boot;
u64 route;
int ret;
@ -714,6 +722,8 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
ICM_FLAGS_SLEVEL_SHIFT;
boot = pkg->link_info & ICM_LINK_INFO_BOOT;
dual_lane = pkg->hdr.flags & ICM_FLAGS_DUAL_LANE;
speed_gen3 = pkg->hdr.flags & ICM_FLAGS_SPEED_GEN3;
if (pkg->link_info & ICM_LINK_INFO_REJECTED) {
tb_info(tb, "switch at %u.%u was rejected by ICM firmware because topology limit exceeded\n",
@ -811,10 +821,27 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
return;
}
add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name,
sizeof(pkg->ep_name), pkg->connection_id,
pkg->connection_key, link, depth, security_level,
authorized, boot);
pm_runtime_get_sync(&parent_sw->dev);
sw = alloc_switch(parent_sw, route, &pkg->ep_uuid);
if (!IS_ERR(sw)) {
sw->connection_id = pkg->connection_id;
sw->connection_key = pkg->connection_key;
sw->link = link;
sw->depth = depth;
sw->authorized = authorized;
sw->security_level = security_level;
sw->boot = boot;
sw->link_speed = speed_gen3 ? 20 : 10;
sw->link_width = dual_lane ? 2 : 1;
sw->rpm = intel_vss_is_rtd3(pkg->ep_name, sizeof(pkg->ep_name));
if (add_switch(parent_sw, sw))
tb_switch_put(sw);
}
pm_runtime_mark_last_busy(&parent_sw->dev);
pm_runtime_put_autosuspend(&parent_sw->dev);
tb_switch_put(parent_sw);
}
@ -1142,10 +1169,10 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
{
const struct icm_tr_event_device_connected *pkg =
(const struct icm_tr_event_device_connected *)hdr;
bool authorized, boot, dual_lane, speed_gen3;
enum tb_security_level security_level;
struct tb_switch *sw, *parent_sw;
struct tb_xdomain *xd;
bool authorized, boot;
u64 route;
icm_postpone_rescan(tb);
@ -1163,6 +1190,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
ICM_FLAGS_SLEVEL_SHIFT;
boot = pkg->link_info & ICM_LINK_INFO_BOOT;
dual_lane = pkg->hdr.flags & ICM_FLAGS_DUAL_LANE;
speed_gen3 = pkg->hdr.flags & ICM_FLAGS_SPEED_GEN3;
if (pkg->link_info & ICM_LINK_INFO_REJECTED) {
tb_info(tb, "switch at %llx was rejected by ICM firmware because topology limit exceeded\n",
@ -1205,11 +1234,27 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
return;
}
sw = add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name,
sizeof(pkg->ep_name), pkg->connection_id, 0, 0, 0,
security_level, authorized, boot);
if (!IS_ERR(sw) && force_rtd3)
sw->rpm = true;
pm_runtime_get_sync(&parent_sw->dev);
sw = alloc_switch(parent_sw, route, &pkg->ep_uuid);
if (!IS_ERR(sw)) {
sw->connection_id = pkg->connection_id;
sw->authorized = authorized;
sw->security_level = security_level;
sw->boot = boot;
sw->link_speed = speed_gen3 ? 20 : 10;
sw->link_width = dual_lane ? 2 : 1;
sw->rpm = force_rtd3;
if (!sw->rpm)
sw->rpm = intel_vss_is_rtd3(pkg->ep_name,
sizeof(pkg->ep_name));
if (add_switch(parent_sw, sw))
tb_switch_put(sw);
}
pm_runtime_mark_last_busy(&parent_sw->dev);
pm_runtime_put_autosuspend(&parent_sw->dev);
tb_switch_put(parent_sw);
}
@ -1349,9 +1394,12 @@ static bool icm_ar_is_supported(struct tb *tb)
/*
* Starting from Alpine Ridge we can use ICM on Apple machines
* as well. We just need to reset and re-enable it first.
* However, only start it if explicitly asked by the user.
*/
if (!x86_apple_machine)
if (icm_firmware_running(tb->nhi))
return true;
if (!start_icm)
return false;
/*
* Find the upstream PCIe port in case we need to do reset
@ -1704,8 +1752,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
u32 val;
/* Check if the ICM firmware is already running */
val = ioread32(nhi->iobase + REG_FW_STS);
if (val & REG_FW_STS_ICM_EN)
if (icm_firmware_running(nhi))
return 0;
dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n");
@ -1893,14 +1940,12 @@ static int icm_suspend(struct tb *tb)
*/
static void icm_unplug_children(struct tb_switch *sw)
{
unsigned int i;
struct tb_port *port;
if (tb_route(sw))
sw->is_unplugged = true;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
tb_switch_for_each_port(sw, port) {
if (port->xdomain)
port->xdomain->is_unplugged = true;
else if (tb_port_has_remote(port))
@ -1936,11 +1981,9 @@ static void remove_unplugged_switch(struct tb_switch *sw)
static void icm_free_unplugged_children(struct tb_switch *sw)
{
unsigned int i;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (port->xdomain && port->xdomain->is_unplugged) {
tb_xdomain_remove(port->xdomain);
port->xdomain = NULL;
@ -2216,7 +2259,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
case PCI_DEVICE_ID_INTEL_ICL_NHI0:
case PCI_DEVICE_ID_INTEL_ICL_NHI1:
icm->is_supported = icm_ar_is_supported;
icm->is_supported = icm_fr_is_supported;
icm->driver_ready = icm_icl_driver_ready;
icm->set_uuid = icm_icl_set_uuid;
icm->device_connected = icm_icl_device_connected;

View File

@ -94,7 +94,7 @@ int tb_lc_configure_link(struct tb_switch *sw)
struct tb_port *up, *down;
int ret;
if (!sw->config.enabled || !tb_route(sw))
if (!tb_route(sw) || tb_switch_is_icm(sw))
return 0;
up = tb_upstream_port(sw);
@ -124,7 +124,7 @@ void tb_lc_unconfigure_link(struct tb_switch *sw)
{
struct tb_port *up, *down;
if (sw->is_unplugged || !sw->config.enabled || !tb_route(sw))
if (sw->is_unplugged || !tb_route(sw) || tb_switch_is_icm(sw))
return;
up = tb_upstream_port(sw);
@ -177,3 +177,192 @@ int tb_lc_set_sleep(struct tb_switch *sw)
return 0;
}
/**
* tb_lc_lane_bonding_possible() - Is lane bonding possible towards switch
* @sw: Switch to check
*
* Checks whether conditions for lane bonding from parent to @sw are
* possible.
*/
bool tb_lc_lane_bonding_possible(struct tb_switch *sw)
{
struct tb_port *up;
int cap, ret;
u32 val;
if (sw->generation < 2)
return false;
up = tb_upstream_port(sw);
cap = find_port_lc_cap(up);
if (cap < 0)
return false;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, cap + TB_LC_PORT_ATTR, 1);
if (ret)
return false;
return !!(val & TB_LC_PORT_ATTR_BE);
}
static int tb_lc_dp_sink_from_port(const struct tb_switch *sw,
struct tb_port *in)
{
struct tb_port *port;
/* The first DP IN port is sink 0 and second is sink 1 */
tb_switch_for_each_port(sw, port) {
if (tb_port_is_dpin(port))
return in != port;
}
return -EINVAL;
}
static int tb_lc_dp_sink_available(struct tb_switch *sw, int sink)
{
u32 val, alloc;
int ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lc + TB_LC_SNK_ALLOCATION, 1);
if (ret)
return ret;
/*
* Sink is available for CM/SW to use if the allocation value is
* either 0 or 1.
*/
if (!sink) {
alloc = val & TB_LC_SNK_ALLOCATION_SNK0_MASK;
if (!alloc || alloc == TB_LC_SNK_ALLOCATION_SNK0_CM)
return 0;
} else {
alloc = (val & TB_LC_SNK_ALLOCATION_SNK1_MASK) >>
TB_LC_SNK_ALLOCATION_SNK1_SHIFT;
if (!alloc || alloc == TB_LC_SNK_ALLOCATION_SNK1_CM)
return 0;
}
return -EBUSY;
}
/**
* tb_lc_dp_sink_query() - Is DP sink available for DP IN port
* @sw: Switch whose DP sink is queried
* @in: DP IN port to check
*
* Queries through LC SNK_ALLOCATION registers whether DP sink is available
* for the given DP IN port or not.
*/
bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in)
{
int sink;
/*
* For older generations sink is always available as there is no
* allocation mechanism.
*/
if (sw->generation < 3)
return true;
sink = tb_lc_dp_sink_from_port(sw, in);
if (sink < 0)
return false;
return !tb_lc_dp_sink_available(sw, sink);
}
/**
* tb_lc_dp_sink_alloc() - Allocate DP sink
* @sw: Switch whose DP sink is allocated
* @in: DP IN port the DP sink is allocated for
*
* Allocate DP sink for @in via LC SNK_ALLOCATION registers. If the
* resource is available and allocation is successful returns %0. In all
other cases returns negative errno. In particular %-EBUSY is returned if
* the resource was not available.
*/
int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in)
{
int ret, sink;
u32 val;
if (sw->generation < 3)
return 0;
sink = tb_lc_dp_sink_from_port(sw, in);
if (sink < 0)
return sink;
ret = tb_lc_dp_sink_available(sw, sink);
if (ret)
return ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lc + TB_LC_SNK_ALLOCATION, 1);
if (ret)
return ret;
if (!sink) {
val &= ~TB_LC_SNK_ALLOCATION_SNK0_MASK;
val |= TB_LC_SNK_ALLOCATION_SNK0_CM;
} else {
val &= ~TB_LC_SNK_ALLOCATION_SNK1_MASK;
val |= TB_LC_SNK_ALLOCATION_SNK1_CM <<
TB_LC_SNK_ALLOCATION_SNK1_SHIFT;
}
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_lc + TB_LC_SNK_ALLOCATION, 1);
if (ret)
return ret;
tb_port_dbg(in, "sink %d allocated\n", sink);
return 0;
}
/**
* tb_lc_dp_sink_dealloc() - De-allocate DP sink
* @sw: Switch whose DP sink is de-allocated
* @in: DP IN port whose DP sink is de-allocated
*
* De-allocate DP sink from @in using LC SNK_ALLOCATION registers.
*/
int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in)
{
int ret, sink;
u32 val;
if (sw->generation < 3)
return 0;
sink = tb_lc_dp_sink_from_port(sw, in);
if (sink < 0)
return sink;
/* Needs to be owned by CM/SW */
ret = tb_lc_dp_sink_available(sw, sink);
if (ret)
return ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lc + TB_LC_SNK_ALLOCATION, 1);
if (ret)
return ret;
if (!sink)
val &= ~TB_LC_SNK_ALLOCATION_SNK0_MASK;
else
val &= ~TB_LC_SNK_ALLOCATION_SNK1_MASK;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_lc + TB_LC_SNK_ALLOCATION, 1);
if (ret)
return ret;
tb_port_dbg(in, "sink %d de-allocated\n", sink);
return 0;
}
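
To tie these helpers together, here is a condensed sketch of the lifecycle
the software connection manager follows, mirroring the
tb_switch_query/alloc/dealloc_dp_resource() wrappers and the tb.c changes
later in this merge. The function name is made up and error handling is
trimmed; it is an illustration, not kernel code.

/* Hypothetical sketch of the DP sink lifecycle (not part of the patch). */
static void dp_sink_lifecycle_sketch(struct tb_switch *sw, struct tb_port *in)
{
    /* 1. At enumeration: is the sink behind this DP IN port usable? */
    if (!tb_lc_dp_sink_query(sw, in))
        return;

    /* 2. Before tunneling: claim the sink for the connection manager */
    if (tb_lc_dp_sink_alloc(sw, in))
        return;     /* -EBUSY if the sink is already allocated elsewhere */

    /* ... set up, use and later tear down the DP tunnel ... */

    /* 3. Once the tunnel is gone: hand the sink back */
    tb_lc_dp_sink_dealloc(sw, in);
}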

View File

@ -80,7 +80,6 @@ static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox_cmd c
{
u32 data;
pci_read_config_dword(nhi->pdev, VS_CAP_19, &data);
data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK;
pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID);
}

View File

@ -220,7 +220,8 @@ err:
* Creates path between two ports starting with given @src_hopid. Reserves
* HopIDs for each port (they can be different from @src_hopid depending on
* how many HopIDs each port already have reserved). If there are dual
* links on the path, prioritizes using @link_nr.
* links on the path, prioritizes using @link_nr but takes into account
* that the lanes may be bonded.
*
* Return: Returns a tb_path on success or NULL on failure.
*/
@ -259,7 +260,9 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
if (!in_port)
goto err;
if (in_port->dual_link_port && in_port->link_nr != link_nr)
/* When lanes are bonded primary link must be used */
if (!in_port->bonded && in_port->dual_link_port &&
in_port->link_nr != link_nr)
in_port = in_port->dual_link_port;
ret = tb_port_alloc_in_hopid(in_port, in_hopid, in_hopid);
@ -271,8 +274,27 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
if (!out_port)
goto err;
if (out_port->dual_link_port && out_port->link_nr != link_nr)
out_port = out_port->dual_link_port;
/*
* Pick up right port when going from non-bonded to
* bonded or from bonded to non-bonded.
*/
if (out_port->dual_link_port) {
if (!in_port->bonded && out_port->bonded &&
out_port->link_nr) {
/*
* Use primary link when going from
* non-bonded to bonded.
*/
out_port = out_port->dual_link_port;
} else if (!out_port->bonded &&
out_port->link_nr != link_nr) {
/*
* If out port is not bonded follow
* link_nr.
*/
out_port = out_port->dual_link_port;
}
}
if (i == num_hops - 1)
ret = tb_port_alloc_out_hopid(out_port, dst_hopid,
@ -535,3 +557,25 @@ bool tb_path_is_invalid(struct tb_path *path)
}
return false;
}
/**
* tb_path_switch_on_path() - Does the path go through certain switch
* @path: Path to check
* @sw: Switch to check
*
* Goes over all hops on path and checks if @sw is any of them.
* Direction does not matter.
*/
bool tb_path_switch_on_path(const struct tb_path *path,
const struct tb_switch *sw)
{
int i;
for (i = 0; i < path->path_length; i++) {
if (path->hops[i].in_port->sw == sw ||
path->hops[i].out_port->sw == sw)
return true;
}
return false;
}

View File

@ -553,17 +553,17 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
if (credits == 0 || port->sw->is_unplugged)
return 0;
nfc_credits = port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK;
nfc_credits = port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
nfc_credits += credits;
tb_port_dbg(port, "adding %d NFC credits to %lu",
credits, port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK);
tb_port_dbg(port, "adding %d NFC credits to %lu", credits,
port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK);
port->config.nfc_credits &= ~TB_PORT_NFC_CREDITS_MASK;
port->config.nfc_credits &= ~ADP_CS_4_NFC_BUFFERS_MASK;
port->config.nfc_credits |= nfc_credits;
return tb_port_write(port, &port->config.nfc_credits,
TB_CFG_PORT, 4, 1);
TB_CFG_PORT, ADP_CS_4, 1);
}
/**
@ -578,14 +578,14 @@ int tb_port_set_initial_credits(struct tb_port *port, u32 credits)
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, 5, 1);
ret = tb_port_read(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
if (ret)
return ret;
data &= ~TB_PORT_LCA_MASK;
data |= (credits << TB_PORT_LCA_SHIFT) & TB_PORT_LCA_MASK;
data &= ~ADP_CS_5_LCA_MASK;
data |= (credits << ADP_CS_5_LCA_SHIFT) & ADP_CS_5_LCA_MASK;
return tb_port_write(port, &data, TB_CFG_PORT, 5, 1);
return tb_port_write(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
}
/**
@ -645,6 +645,7 @@ static int tb_init_port(struct tb_port *port)
ida_init(&port->out_hopids);
}
INIT_LIST_HEAD(&port->list);
return 0;
}
@ -775,6 +776,132 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
return next;
}
static int tb_port_get_link_speed(struct tb_port *port)
{
u32 val, speed;
int ret;
if (!port->cap_phy)
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
speed = (val & LANE_ADP_CS_1_CURRENT_SPEED_MASK) >>
LANE_ADP_CS_1_CURRENT_SPEED_SHIFT;
return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
}
static int tb_port_get_link_width(struct tb_port *port)
{
u32 val;
int ret;
if (!port->cap_phy)
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
return (val & LANE_ADP_CS_1_CURRENT_WIDTH_MASK) >>
LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
}
static bool tb_port_is_width_supported(struct tb_port *port, int width)
{
u32 phy, widths;
int ret;
if (!port->cap_phy)
return false;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_0, 1);
if (ret)
return ret;
widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
return !!(widths & width);
}
static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
{
u32 val;
int ret;
if (!port->cap_phy)
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
val &= ~LANE_ADP_CS_1_TARGET_WIDTH_MASK;
switch (width) {
case 1:
val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
case 2:
val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
default:
return -EINVAL;
}
val |= LANE_ADP_CS_1_LB;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_lane_bonding_enable(struct tb_port *port)
{
int ret;
/*
* Enable lane bonding for both links if not already enabled by
* for example the boot firmware.
*/
ret = tb_port_get_link_width(port);
if (ret == 1) {
ret = tb_port_set_link_width(port, 2);
if (ret)
return ret;
}
ret = tb_port_get_link_width(port->dual_link_port);
if (ret == 1) {
ret = tb_port_set_link_width(port->dual_link_port, 2);
if (ret) {
tb_port_set_link_width(port, 1);
return ret;
}
}
port->bonded = true;
port->dual_link_port->bonded = true;
return 0;
}
static void tb_port_lane_bonding_disable(struct tb_port *port)
{
port->dual_link_port->bonded = false;
port->bonded = false;
tb_port_set_link_width(port->dual_link_port, 1);
tb_port_set_link_width(port, 1);
}
/**
* tb_port_is_enabled() - Is the adapter port enabled
* @port: Port to check
@ -803,10 +930,11 @@ bool tb_pci_port_is_enabled(struct tb_port *port)
{
u32 data;
if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1))
if (tb_port_read(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_PCIE_CS_0, 1))
return false;
return !!(data & TB_PCI_EN);
return !!(data & ADP_PCIE_CS_0_PE);
}
/**
@ -816,10 +944,11 @@ bool tb_pci_port_is_enabled(struct tb_port *port)
*/
int tb_pci_port_enable(struct tb_port *port, bool enable)
{
u32 word = enable ? TB_PCI_EN : 0x0;
u32 word = enable ? ADP_PCIE_CS_0_PE : 0x0;
if (!port->cap_adap)
return -ENXIO;
return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1);
return tb_port_write(port, &word, TB_CFG_PORT,
port->cap_adap + ADP_PCIE_CS_0, 1);
}
/**
@ -833,11 +962,12 @@ int tb_dp_port_hpd_is_active(struct tb_port *port)
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 2, 1);
ret = tb_port_read(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_2, 1);
if (ret)
return ret;
return !!(data & TB_DP_HDP);
return !!(data & ADP_DP_CS_2_HDP);
}
/**
@ -851,12 +981,14 @@ int tb_dp_port_hpd_clear(struct tb_port *port)
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1);
ret = tb_port_read(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_3, 1);
if (ret)
return ret;
data |= TB_DP_HPDC;
return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1);
data |= ADP_DP_CS_3_HDPC;
return tb_port_write(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_3, 1);
}
/**
@ -874,20 +1006,23 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
u32 data[2];
int ret;
ret = tb_port_read(port, data, TB_CFG_PORT, port->cap_adap,
ARRAY_SIZE(data));
ret = tb_port_read(port, data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
if (ret)
return ret;
data[0] &= ~TB_DP_VIDEO_HOPID_MASK;
data[1] &= ~(TB_DP_AUX_RX_HOPID_MASK | TB_DP_AUX_TX_HOPID_MASK);
data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK;
data[0] |= (video << TB_DP_VIDEO_HOPID_SHIFT) & TB_DP_VIDEO_HOPID_MASK;
data[1] |= aux_tx & TB_DP_AUX_TX_HOPID_MASK;
data[1] |= (aux_rx << TB_DP_AUX_RX_HOPID_SHIFT) & TB_DP_AUX_RX_HOPID_MASK;
data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
ADP_DP_CS_0_VIDEO_HOPID_MASK;
data[1] |= aux_tx & ADP_DP_CS_1_AUX_TX_HOPID_MASK;
data[1] |= (aux_rx << ADP_DP_CS_1_AUX_RX_HOPID_SHIFT) &
ADP_DP_CS_1_AUX_RX_HOPID_MASK;
return tb_port_write(port, data, TB_CFG_PORT, port->cap_adap,
ARRAY_SIZE(data));
return tb_port_write(port, data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}
/**
@ -896,12 +1031,13 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
*/
bool tb_dp_port_is_enabled(struct tb_port *port)
{
u32 data;
u32 data[2];
if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1))
if (tb_port_read(port, data, TB_CFG_PORT, port->cap_adap + ADP_DP_CS_0,
ARRAY_SIZE(data)))
return false;
return !!(data & (TB_DP_VIDEO_EN | TB_DP_AUX_EN));
return !!(data[0] & (ADP_DP_CS_0_VE | ADP_DP_CS_0_AE));
}
/**
@ -914,19 +1050,21 @@ bool tb_dp_port_is_enabled(struct tb_port *port)
*/
int tb_dp_port_enable(struct tb_port *port, bool enable)
{
u32 data;
u32 data[2];
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1);
ret = tb_port_read(port, data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
if (ret)
return ret;
if (enable)
data |= TB_DP_VIDEO_EN | TB_DP_AUX_EN;
data[0] |= ADP_DP_CS_0_VE | ADP_DP_CS_0_AE;
else
data &= ~(TB_DP_VIDEO_EN | TB_DP_AUX_EN);
data[0] &= ~(ADP_DP_CS_0_VE | ADP_DP_CS_0_AE);
return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap, 1);
return tb_port_write(port, data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}
/* switch utility functions */
@ -983,7 +1121,7 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
u32 data;
int res;
if (!sw->config.enabled)
if (tb_switch_is_icm(sw))
return 0;
sw->config.plug_events_delay = 0xff;
@ -1031,13 +1169,6 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
if (sw->authorized)
goto unlock;
/*
* Make sure there is no PCIe rescan ongoing when a new PCIe
* tunnel is created. Otherwise the PCIe rescan code might find
* the new tunnel too early.
*/
pci_lock_rescan_remove();
switch (val) {
/* Approve switch */
case 1:
@ -1057,8 +1188,6 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
break;
}
pci_unlock_rescan_remove();
if (!ret) {
sw->authorized = val;
/* Notify status change to the userspace */
@ -1120,6 +1249,15 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
}
static DEVICE_ATTR_RO(device_name);
static ssize_t
generation_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
return sprintf(buf, "%u\n", sw->generation);
}
static DEVICE_ATTR_RO(generation);
static ssize_t key_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@ -1172,6 +1310,36 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR(key, 0600, key_show, key_store);
static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
return sprintf(buf, "%u.0 Gb/s\n", sw->link_speed);
}
/*
* Currently all lanes must run at the same speed but we expose here
* both directions to allow possible asymmetric links in the future.
*/
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
return sprintf(buf, "%u\n", sw->link_width);
}
/*
* Currently link has same amount of lanes both directions (1 or 2) but
* expose them separately to allow possible asymmetric links in the future.
*/
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static void nvm_authenticate_start(struct tb_switch *sw)
{
struct pci_dev *root_port;
@ -1325,9 +1493,14 @@ static struct attribute *switch_attrs[] = {
&dev_attr_boot.attr,
&dev_attr_device.attr,
&dev_attr_device_name.attr,
&dev_attr_generation.attr,
&dev_attr_key.attr,
&dev_attr_nvm_authenticate.attr,
&dev_attr_nvm_version.attr,
&dev_attr_rx_speed.attr,
&dev_attr_rx_lanes.attr,
&dev_attr_tx_speed.attr,
&dev_attr_tx_lanes.attr,
&dev_attr_vendor.attr,
&dev_attr_vendor_name.attr,
&dev_attr_unique_id.attr,
@ -1358,6 +1531,13 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
sw->security_level == TB_SECURITY_SECURE)
return attr->mode;
return 0;
} else if (attr == &dev_attr_rx_speed.attr ||
attr == &dev_attr_rx_lanes.attr ||
attr == &dev_attr_tx_speed.attr ||
attr == &dev_attr_tx_lanes.attr) {
if (tb_route(sw))
return attr->mode;
return 0;
} else if (attr == &dev_attr_nvm_authenticate.attr) {
if (sw->dma_port && !sw->no_nvm_upgrade)
return attr->mode;
@ -1388,14 +1568,14 @@ static const struct attribute_group *switch_groups[] = {
static void tb_switch_release(struct device *dev)
{
struct tb_switch *sw = tb_to_switch(dev);
int i;
struct tb_port *port;
dma_port_free(sw->dma_port);
for (i = 1; i <= sw->config.max_port_number; i++) {
if (!sw->ports[i].disabled) {
ida_destroy(&sw->ports[i].in_hopids);
ida_destroy(&sw->ports[i].out_hopids);
tb_switch_for_each_port(sw, port) {
if (!port->disabled) {
ida_destroy(&port->in_hopids);
ida_destroy(&port->out_hopids);
}
}
@ -1716,7 +1896,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
}
/* Root switch DMA port requires running firmware */
if (!tb_route(sw) && sw->config.enabled)
if (!tb_route(sw) && !tb_switch_is_icm(sw))
return 0;
sw->dma_port = dma_port_alloc(sw);
@ -1757,6 +1937,153 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
return -ESHUTDOWN;
}
static void tb_switch_default_link_ports(struct tb_switch *sw)
{
int i;
for (i = 1; i <= sw->config.max_port_number; i += 2) {
struct tb_port *port = &sw->ports[i];
struct tb_port *subordinate;
if (!tb_port_is_null(port))
continue;
/* Check for the subordinate port */
if (i == sw->config.max_port_number ||
!tb_port_is_null(&sw->ports[i + 1]))
continue;
/* Link them if not already done so (by DROM) */
subordinate = &sw->ports[i + 1];
if (!port->dual_link_port && !subordinate->dual_link_port) {
port->link_nr = 0;
port->dual_link_port = subordinate;
subordinate->link_nr = 1;
subordinate->dual_link_port = port;
tb_sw_dbg(sw, "linked ports %d <-> %d\n",
port->port, subordinate->port);
}
}
}
static bool tb_switch_lane_bonding_possible(struct tb_switch *sw)
{
const struct tb_port *up = tb_upstream_port(sw);
if (!up->dual_link_port || !up->dual_link_port->remote)
return false;
return tb_lc_lane_bonding_possible(sw);
}
static int tb_switch_update_link_attributes(struct tb_switch *sw)
{
struct tb_port *up;
bool change = false;
int ret;
if (!tb_route(sw) || tb_switch_is_icm(sw))
return 0;
up = tb_upstream_port(sw);
ret = tb_port_get_link_speed(up);
if (ret < 0)
return ret;
if (sw->link_speed != ret)
change = true;
sw->link_speed = ret;
ret = tb_port_get_link_width(up);
if (ret < 0)
return ret;
if (sw->link_width != ret)
change = true;
sw->link_width = ret;
/* Notify userspace that there is possible link attribute change */
if (device_is_registered(&sw->dev) && change)
kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
return 0;
}
/**
* tb_switch_lane_bonding_enable() - Enable lane bonding
* @sw: Switch to enable lane bonding
*
* Connection manager can call this function to enable lane bonding of a
* switch. If conditions are correct and both switches support the feature,
* lanes are bonded. It is safe to call this to any switch.
*/
int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
struct tb_switch *parent = tb_to_switch(sw->dev.parent);
struct tb_port *up, *down;
u64 route = tb_route(sw);
int ret;
if (!route)
return 0;
if (!tb_switch_lane_bonding_possible(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(route, parent);
if (!tb_port_is_width_supported(up, 2) ||
!tb_port_is_width_supported(down, 2))
return 0;
ret = tb_port_lane_bonding_enable(up);
if (ret) {
tb_port_warn(up, "failed to enable lane bonding\n");
return ret;
}
ret = tb_port_lane_bonding_enable(down);
if (ret) {
tb_port_warn(down, "failed to enable lane bonding\n");
tb_port_lane_bonding_disable(up);
return ret;
}
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding enabled\n");
return ret;
}
/**
* tb_switch_lane_bonding_disable() - Disable lane bonding
* @sw: Switch whose lane bonding to disable
*
* Disables lane bonding between @sw and parent. This can be called even
* if lanes were not bonded originally.
*/
void tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
struct tb_switch *parent = tb_to_switch(sw->dev.parent);
struct tb_port *up, *down;
if (!tb_route(sw))
return;
up = tb_upstream_port(sw);
if (!up->bonded)
return;
down = tb_port_at(tb_route(sw), parent);
tb_port_lane_bonding_disable(up);
tb_port_lane_bonding_disable(down);
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding disabled\n");
}
/**
* tb_switch_add() - Add a switch to the domain
* @sw: Switch to add
@ -1781,21 +2108,25 @@ int tb_switch_add(struct tb_switch *sw)
* configuration based mailbox.
*/
ret = tb_switch_add_dma_port(sw);
if (ret)
if (ret) {
dev_err(&sw->dev, "failed to add DMA port\n");
return ret;
}
if (!sw->safe_mode) {
/* read drom */
ret = tb_drom_read(sw);
if (ret) {
tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
dev_err(&sw->dev, "reading DROM failed\n");
return ret;
}
tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
ret = tb_switch_set_uuid(sw);
if (ret)
if (ret) {
dev_err(&sw->dev, "failed to set UUID\n");
return ret;
}
for (i = 0; i <= sw->config.max_port_number; i++) {
if (sw->ports[i].disabled) {
@ -1803,14 +2134,24 @@ int tb_switch_add(struct tb_switch *sw)
continue;
}
ret = tb_init_port(&sw->ports[i]);
if (ret)
if (ret) {
dev_err(&sw->dev, "failed to initialize port %d\n", i);
return ret;
}
}
tb_switch_default_link_ports(sw);
ret = tb_switch_update_link_attributes(sw);
if (ret)
return ret;
}
ret = device_add(&sw->dev);
if (ret)
if (ret) {
dev_err(&sw->dev, "failed to add device: %d\n", ret);
return ret;
}
if (tb_route(sw)) {
dev_info(&sw->dev, "new device found, vendor=%#x device=%#x\n",
@ -1822,6 +2163,7 @@ int tb_switch_add(struct tb_switch *sw)
ret = tb_switch_nvm_add(sw);
if (ret) {
dev_err(&sw->dev, "failed to add NVM devices\n");
device_del(&sw->dev);
return ret;
}
@ -1848,7 +2190,7 @@ int tb_switch_add(struct tb_switch *sw)
*/
void tb_switch_remove(struct tb_switch *sw)
{
int i;
struct tb_port *port;
if (sw->rpm) {
pm_runtime_get_sync(&sw->dev);
@ -1856,13 +2198,13 @@ void tb_switch_remove(struct tb_switch *sw)
}
/* port 0 is the switch itself and never has a remote */
for (i = 1; i <= sw->config.max_port_number; i++) {
if (tb_port_has_remote(&sw->ports[i])) {
tb_switch_remove(sw->ports[i].remote->sw);
sw->ports[i].remote = NULL;
} else if (sw->ports[i].xdomain) {
tb_xdomain_remove(sw->ports[i].xdomain);
sw->ports[i].xdomain = NULL;
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port)) {
tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else if (port->xdomain) {
tb_xdomain_remove(port->xdomain);
port->xdomain = NULL;
}
}
@ -1882,7 +2224,8 @@ void tb_switch_remove(struct tb_switch *sw)
*/
void tb_sw_set_unplugged(struct tb_switch *sw)
{
int i;
struct tb_port *port;
if (sw == sw->tb->root_switch) {
tb_sw_WARN(sw, "cannot unplug root switch\n");
return;
@ -1892,17 +2235,19 @@ void tb_sw_set_unplugged(struct tb_switch *sw)
return;
}
sw->is_unplugged = true;
for (i = 0; i <= sw->config.max_port_number; i++) {
if (tb_port_has_remote(&sw->ports[i]))
tb_sw_set_unplugged(sw->ports[i].remote->sw);
else if (sw->ports[i].xdomain)
sw->ports[i].xdomain->is_unplugged = true;
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port))
tb_sw_set_unplugged(port->remote->sw);
else if (port->xdomain)
port->xdomain->is_unplugged = true;
}
}
int tb_switch_resume(struct tb_switch *sw)
{
int i, err;
struct tb_port *port;
int err;
tb_sw_dbg(sw, "resuming switch\n");
/*
@ -1950,9 +2295,7 @@ int tb_switch_resume(struct tb_switch *sw)
return err;
/* check for surviving downstream switches */
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port) && !port->xdomain)
continue;
@ -1976,19 +2319,64 @@ int tb_switch_resume(struct tb_switch *sw)
void tb_switch_suspend(struct tb_switch *sw)
{
int i, err;
struct tb_port *port;
int err;
err = tb_plug_events_active(sw, false);
if (err)
return;
for (i = 1; i <= sw->config.max_port_number; i++) {
if (tb_port_has_remote(&sw->ports[i]))
tb_switch_suspend(sw->ports[i].remote->sw);
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port))
tb_switch_suspend(port->remote->sw);
}
tb_lc_set_sleep(sw);
}
/**
* tb_switch_query_dp_resource() - Query availability of DP resource
* @sw: Switch whose DP resource is queried
* @in: DP IN port
*
* Queries availability of DP resource for DP tunneling using switch
* specific means. Returns %true if resource is available.
*/
bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
return tb_lc_dp_sink_query(sw, in);
}
/**
* tb_switch_alloc_dp_resource() - Allocate available DP resource
* @sw: Switch whose DP resource is allocated
* @in: DP IN port
*
* Allocates DP resource for DP tunneling. The resource must be
* available for this to succeed (see tb_switch_query_dp_resource()).
* Returns %0 in success and negative errno otherwise.
*/
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
return tb_lc_dp_sink_alloc(sw, in);
}
/**
* tb_switch_dealloc_dp_resource() - De-allocate DP resource
* @sw: Switch whose DP resource is de-allocated
* @in: DP IN port
*
* De-allocates DP resource that was previously allocated for DP
* tunneling.
*/
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
if (tb_lc_dp_sink_dealloc(sw, in)) {
tb_sw_warn(sw, "failed to de-allocate DP resource for port %d\n",
in->port);
}
}
struct tb_sw_lookup {
struct tb *tb;
u8 link;

View File

@ -9,7 +9,6 @@
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/platform_data/x86/apple.h>
#include "tb.h"
#include "tb_regs.h"
@ -18,6 +17,7 @@
/**
* struct tb_cm - Simple Thunderbolt connection manager
* @tunnel_list: List of active tunnels
* @dp_resources: List of available DP resources for DP tunneling
* @hotplug_active: tb_handle_hotplug will stop progressing plug
* events and exit if this is not set (it needs to
* acquire the lock one more time). Used to drain wq
@ -25,6 +25,7 @@
*/
struct tb_cm {
struct list_head tunnel_list;
struct list_head dp_resources;
bool hotplug_active;
};
@ -56,17 +57,51 @@ static void tb_queue_hotplug(struct tb *tb, u64 route, u8 port, bool unplug)
/* enumeration & hot plug handling */
static void tb_add_dp_resources(struct tb_switch *sw)
{
struct tb_cm *tcm = tb_priv(sw->tb);
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (!tb_port_is_dpin(port))
continue;
if (!tb_switch_query_dp_resource(sw, port))
continue;
list_add_tail(&port->list, &tcm->dp_resources);
tb_port_dbg(port, "DP IN resource available\n");
}
}
static void tb_remove_dp_resources(struct tb_switch *sw)
{
struct tb_cm *tcm = tb_priv(sw->tb);
struct tb_port *port, *tmp;
/* Clear children resources first */
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port))
tb_remove_dp_resources(port->remote->sw);
}
list_for_each_entry_safe(port, tmp, &tcm->dp_resources, list) {
if (port->sw == sw) {
tb_port_dbg(port, "DP OUT resource unavailable\n");
list_del_init(&port->list);
}
}
}
static void tb_discover_tunnels(struct tb_switch *sw)
{
struct tb *tb = sw->tb;
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *port;
int i;
for (i = 1; i <= sw->config.max_port_number; i++) {
tb_switch_for_each_port(sw, port) {
struct tb_tunnel *tunnel = NULL;
port = &sw->ports[i];
switch (port->config.type) {
case TB_TYPE_DP_HDMI_IN:
tunnel = tb_tunnel_discover_dp(tb, port);
@ -95,9 +130,9 @@ static void tb_discover_tunnels(struct tb_switch *sw)
list_add_tail(&tunnel->list, &tcm->tunnel_list);
}
for (i = 1; i <= sw->config.max_port_number; i++) {
if (tb_port_has_remote(&sw->ports[i]))
tb_discover_tunnels(sw->ports[i].remote->sw);
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port))
tb_discover_tunnels(port->remote->sw);
}
}
@ -130,9 +165,10 @@ static void tb_scan_port(struct tb_port *port);
*/
static void tb_scan_switch(struct tb_switch *sw)
{
int i;
for (i = 1; i <= sw->config.max_port_number; i++)
tb_scan_port(&sw->ports[i]);
struct tb_port *port;
tb_switch_for_each_port(sw, port)
tb_scan_port(port);
}
/**
@ -217,11 +253,16 @@ static void tb_scan_port(struct tb_port *port)
upstream_port->dual_link_port->remote = port->dual_link_port;
}
/* Enable lane bonding if supported */
if (tb_switch_lane_bonding_enable(sw))
tb_sw_warn(sw, "failed to enable lane bonding\n");
tb_scan_switch(sw);
}
static int tb_free_tunnel(struct tb *tb, enum tb_tunnel_type type,
struct tb_port *src_port, struct tb_port *dst_port)
static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
struct tb_port *src_port,
struct tb_port *dst_port)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
@ -230,14 +271,32 @@ static int tb_free_tunnel(struct tb *tb, enum tb_tunnel_type type,
if (tunnel->type == type &&
((src_port && src_port == tunnel->src_port) ||
(dst_port && dst_port == tunnel->dst_port))) {
tb_tunnel_deactivate(tunnel);
list_del(&tunnel->list);
tb_tunnel_free(tunnel);
return 0;
return tunnel;
}
}
return -ENODEV;
return NULL;
}
static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel)
{
if (!tunnel)
return;
tb_tunnel_deactivate(tunnel);
list_del(&tunnel->list);
/*
* In case of DP tunnel make sure the DP IN resource is deallocated
* properly.
*/
if (tb_tunnel_is_dp(tunnel)) {
struct tb_port *in = tunnel->src_port;
tb_switch_dealloc_dp_resource(in->sw, in);
}
tb_tunnel_free(tunnel);
}
/**
@ -250,11 +309,8 @@ static void tb_free_invalid_tunnels(struct tb *tb)
struct tb_tunnel *n;
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
if (tb_tunnel_is_invalid(tunnel)) {
tb_tunnel_deactivate(tunnel);
list_del(&tunnel->list);
tb_tunnel_free(tunnel);
}
if (tb_tunnel_is_invalid(tunnel))
tb_deactivate_and_free_tunnel(tunnel);
}
}
@ -263,14 +319,15 @@ static void tb_free_invalid_tunnels(struct tb *tb)
*/
static void tb_free_unplugged_children(struct tb_switch *sw)
{
int i;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port))
continue;
if (port->remote->sw->is_unplugged) {
tb_remove_dp_resources(port->remote->sw);
tb_switch_lane_bonding_disable(port->remote->sw);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
if (port->dual_link_port)
@ -289,10 +346,13 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
static struct tb_port *tb_find_port(struct tb_switch *sw,
enum tb_port_type type)
{
int i;
for (i = 1; i <= sw->config.max_port_number; i++)
if (sw->ports[i].config.type == type)
return &sw->ports[i];
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (port->config.type == type)
return port;
}
return NULL;
}
@ -304,18 +364,18 @@ static struct tb_port *tb_find_port(struct tb_switch *sw,
static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
enum tb_port_type type)
{
int i;
struct tb_port *port;
for (i = 1; i <= sw->config.max_port_number; i++) {
if (tb_is_upstream_port(&sw->ports[i]))
tb_switch_for_each_port(sw, port) {
if (tb_is_upstream_port(port))
continue;
if (sw->ports[i].config.type != type)
if (port->config.type != type)
continue;
if (!sw->ports[i].cap_adap)
if (!port->cap_adap)
continue;
if (tb_port_is_enabled(&sw->ports[i]))
if (tb_port_is_enabled(port))
continue;
return &sw->ports[i];
return port;
}
return NULL;
}
@ -336,10 +396,13 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
* Hard-coded Thunderbolt port to PCIe down port mapping
* per controller.
*/
if (tb_switch_is_cr(sw))
if (tb_switch_is_cactus_ridge(sw) ||
tb_switch_is_alpine_ridge(sw))
index = !phy_port ? 6 : 7;
else if (tb_switch_is_fr(sw))
else if (tb_switch_is_falcon_ridge(sw))
index = !phy_port ? 6 : 8;
else if (tb_switch_is_titan_ridge(sw))
index = !phy_port ? 8 : 9;
else
goto out;
@ -358,42 +421,162 @@ out:
return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN);
}
static int tb_tunnel_dp(struct tb *tb, struct tb_port *out)
static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
struct tb_port *out)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_switch *sw = out->sw;
struct tb_tunnel *tunnel;
struct tb_port *in;
int bw, available_bw = 40000;
if (tb_port_is_enabled(out))
return 0;
while (sw && sw != in->sw) {
bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
/* Leave 10% guard band */
bw -= bw / 10;
do {
sw = tb_to_switch(sw->dev.parent);
if (!sw)
return 0;
in = tb_find_unused_port(sw, TB_TYPE_DP_HDMI_IN);
} while (!in);
/*
* Check for any active DP tunnels that go through this
* switch and reduce their consumed bandwidth from
* available.
*/
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
int consumed_bw;
tunnel = tb_tunnel_alloc_dp(tb, in, out);
if (!tb_tunnel_switch_on_path(tunnel, sw))
continue;
consumed_bw = tb_tunnel_consumed_bandwidth(tunnel);
if (consumed_bw < 0)
return consumed_bw;
bw -= consumed_bw;
}
if (bw < available_bw)
available_bw = bw;
sw = tb_switch_parent(sw);
}
return available_bw;
}
static void tb_tunnel_dp(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *port, *in, *out;
struct tb_tunnel *tunnel;
int available_bw;
/*
* Find pair of inactive DP IN and DP OUT adapters and then
* establish a DP tunnel between them.
*/
tb_dbg(tb, "looking for DP IN <-> DP OUT pairs:\n");
in = NULL;
out = NULL;
list_for_each_entry(port, &tcm->dp_resources, list) {
if (tb_port_is_enabled(port)) {
tb_port_dbg(port, "in use\n");
continue;
}
tb_port_dbg(port, "available\n");
if (!in && tb_port_is_dpin(port))
in = port;
else if (!out && tb_port_is_dpout(port))
out = port;
}
if (!in) {
tb_dbg(tb, "no suitable DP IN adapter available, not tunneling\n");
return;
}
if (!out) {
tb_dbg(tb, "no suitable DP OUT adapter available, not tunneling\n");
return;
}
if (tb_switch_alloc_dp_resource(in->sw, in)) {
tb_port_dbg(in, "no resource available for DP IN, not tunneling\n");
return;
}
/* Calculate available bandwidth between in and out */
available_bw = tb_available_bw(tcm, in, out);
if (available_bw < 0) {
tb_warn(tb, "failed to determine available bandwidth\n");
return;
}
tb_dbg(tb, "available bandwidth for new DP tunnel %u Mb/s\n",
available_bw);
tunnel = tb_tunnel_alloc_dp(tb, in, out, available_bw);
if (!tunnel) {
tb_port_dbg(out, "DP tunnel allocation failed\n");
return -ENOMEM;
tb_port_dbg(out, "could not allocate DP tunnel\n");
goto dealloc_dp;
}
if (tb_tunnel_activate(tunnel)) {
tb_port_info(out, "DP tunnel activation failed, aborting\n");
tb_tunnel_free(tunnel);
return -EIO;
goto dealloc_dp;
}
list_add_tail(&tunnel->list, &tcm->tunnel_list);
return 0;
return;
dealloc_dp:
tb_switch_dealloc_dp_resource(in->sw, in);
}
static void tb_teardown_dp(struct tb *tb, struct tb_port *out)
static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port)
{
tb_free_tunnel(tb, TB_TUNNEL_DP, NULL, out);
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
if (tb_port_is_dpin(port)) {
tb_port_dbg(port, "DP IN resource unavailable\n");
in = port;
out = NULL;
} else {
tb_port_dbg(port, "DP OUT resource unavailable\n");
in = NULL;
out = port;
}
tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, out);
tb_deactivate_and_free_tunnel(tunnel);
list_del_init(&port->list);
/*
* See if there is another DP OUT port that can be used to
* create another tunnel.
*/
tb_tunnel_dp(tb);
}
static void tb_dp_resource_available(struct tb *tb, struct tb_port *port)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *p;
if (tb_port_is_enabled(port))
return;
list_for_each_entry(p, &tcm->dp_resources, list) {
if (p == port)
return;
}
tb_port_dbg(port, "DP %s resource available\n",
tb_port_is_dpin(port) ? "IN" : "OUT");
list_add_tail(&port->list, &tcm->dp_resources);
/* Look for suitable DP IN <-> DP OUT pairs now */
tb_tunnel_dp(tb);
}
static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
@ -468,6 +651,7 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
{
struct tb_port *dst_port;
struct tb_tunnel *tunnel;
struct tb_switch *sw;
sw = tb_to_switch(xd->dev.parent);
@ -478,7 +662,8 @@ static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
* case of cable disconnect) so it is fine if we cannot find it
* here anymore.
*/
tb_free_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port);
tunnel = tb_find_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port);
tb_deactivate_and_free_tunnel(tunnel);
}
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
@ -533,10 +718,14 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_port_dbg(port, "switch unplugged\n");
tb_sw_set_unplugged(port->remote->sw);
tb_free_invalid_tunnels(tb);
tb_remove_dp_resources(port->remote->sw);
tb_switch_lane_bonding_disable(port->remote->sw);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
if (port->dual_link_port)
port->dual_link_port->remote = NULL;
/* Maybe we can create another DP tunnel */
tb_tunnel_dp(tb);
} else if (port->xdomain) {
struct tb_xdomain *xd = tb_xdomain_get(port->xdomain);
@ -553,8 +742,8 @@ static void tb_handle_hotplug(struct work_struct *work)
port->xdomain = NULL;
__tb_disconnect_xdomain_paths(tb, xd);
tb_xdomain_put(xd);
} else if (tb_port_is_dpout(port)) {
tb_teardown_dp(tb, port);
} else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) {
tb_dp_resource_unavailable(tb, port);
} else {
tb_port_dbg(port,
"got unplug event for disconnected port, ignoring\n");
@ -567,8 +756,8 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_scan_port(port);
if (!port->remote)
tb_port_dbg(port, "hotplug: no switch found\n");
} else if (tb_port_is_dpout(port)) {
tb_tunnel_dp(tb, port);
} else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) {
tb_dp_resource_available(tb, port);
}
}
@ -681,6 +870,8 @@ static int tb_start(struct tb *tb)
tb_scan_switch(tb->root_switch);
/* Find out tunnels created by the boot firmware */
tb_discover_tunnels(tb->root_switch);
/* Add DP IN resources for the root switch */
tb_add_dp_resources(tb->root_switch);
/* Make the discovered switches available to the userspace */
device_for_each_child(&tb->root_switch->dev, NULL,
tb_scan_finalize_switch);
@ -702,6 +893,21 @@ static int tb_suspend_noirq(struct tb *tb)
return 0;
}
static void tb_restore_children(struct tb_switch *sw)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port))
continue;
if (tb_switch_lane_bonding_enable(port->remote->sw))
dev_warn(&sw->dev, "failed to restore lane bonding\n");
tb_restore_children(port->remote->sw);
}
}
static int tb_resume_noirq(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
@ -715,6 +921,7 @@ static int tb_resume_noirq(struct tb *tb)
tb_switch_resume(tb->root_switch);
tb_free_invalid_tunnels(tb);
tb_free_unplugged_children(tb->root_switch);
tb_restore_children(tb->root_switch);
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
tb_tunnel_restart(tunnel);
if (!list_empty(&tcm->tunnel_list)) {
@@ -734,11 +941,10 @@ static int tb_resume_noirq(struct tb *tb)
static int tb_free_unplugged_xdomains(struct tb_switch *sw)
{
int i, ret = 0;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
struct tb_port *port;
int ret = 0;
tb_switch_for_each_port(sw, port) {
if (tb_is_upstream_port(port))
continue;
if (port->xdomain && port->xdomain->is_unplugged) {
@@ -783,9 +989,6 @@ struct tb *tb_probe(struct tb_nhi *nhi)
struct tb_cm *tcm;
struct tb *tb;
if (!x86_apple_machine)
return NULL;
tb = tb_domain_alloc(nhi, sizeof(*tcm));
if (!tb)
return NULL;
@@ -795,6 +998,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
tcm = tb_priv(tb);
INIT_LIST_HEAD(&tcm->tunnel_list);
INIT_LIST_HEAD(&tcm->dp_resources);
return tb;
}


@@ -61,6 +61,8 @@ struct tb_switch_nvm {
* @device: Device ID of the switch
* @vendor_name: Name of the vendor (or %NULL if not known)
* @device_name: Name of the device (or %NULL if not known)
* @link_speed: Speed of the link in Gb/s
* @link_width: Width of the link (1 or 2)
* @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @cap_lc: Offset to the link controller capability (%0 if not found)
@@ -97,6 +99,8 @@ struct tb_switch {
u16 device;
const char *vendor_name;
const char *device_name;
unsigned int link_speed;
unsigned int link_width;
unsigned int generation;
int cap_plug_events;
int cap_lc;
@@ -127,11 +131,13 @@ struct tb_switch {
* @cap_adap: Offset of the adapter specific capability (%0 if not present)
* @port: Port number on switch
* @disabled: Disabled by eeprom
* @bonded: true if the port is bonded (two lanes combined as one)
* @dual_link_port: If the switch is connected using two ports, points
* to the other port.
* @link_nr: Is this primary or secondary port on the dual_link.
* @in_hopids: Currently allocated input HopIDs
* @out_hopids: Currently allocated output HopIDs
* @list: Used to link ports to DP resources list
*/
struct tb_port {
struct tb_regs_port_header config;
@@ -142,10 +148,12 @@ struct tb_port {
int cap_adap;
u8 port;
bool disabled;
bool bonded;
struct tb_port *dual_link_port;
u8 link_nr:1;
struct ida in_hopids;
struct ida out_hopids;
struct list_head list;
};
/**
@@ -399,7 +407,7 @@ static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
length);
}
static inline int tb_sw_write(struct tb_switch *sw, void *buffer,
static inline int tb_sw_write(struct tb_switch *sw, const void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
if (sw->is_unplugged)
@@ -530,6 +538,17 @@ struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid);
struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route);
/**
* tb_switch_for_each_port() - Iterate over each switch port
* @sw: Switch whose ports to iterate
* @p: Port used as iterator
*
* Iterates over each switch port skipping the control port (port %0).
*/
#define tb_switch_for_each_port(sw, p) \
for ((p) = &(sw)->ports[1]; \
(p) <= &(sw)->ports[(sw)->config.max_port_number]; (p)++)
static inline struct tb_switch *tb_switch_get(struct tb_switch *sw)
{
if (sw)
@@ -559,17 +578,17 @@ static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
return tb_to_switch(sw->dev.parent);
}
static inline bool tb_switch_is_lr(const struct tb_switch *sw)
static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
{
return sw->config.device_id == PCI_DEVICE_ID_INTEL_LIGHT_RIDGE;
}
static inline bool tb_switch_is_er(const struct tb_switch *sw)
static inline bool tb_switch_is_eagle_ridge(const struct tb_switch *sw)
{
return sw->config.device_id == PCI_DEVICE_ID_INTEL_EAGLE_RIDGE;
}
static inline bool tb_switch_is_cr(const struct tb_switch *sw)
static inline bool tb_switch_is_cactus_ridge(const struct tb_switch *sw)
{
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C:
@@ -580,7 +599,7 @@ static inline bool tb_switch_is_cr(const struct tb_switch *sw)
}
}
static inline bool tb_switch_is_fr(const struct tb_switch *sw)
static inline bool tb_switch_is_falcon_ridge(const struct tb_switch *sw)
{
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
@@ -591,6 +610,52 @@ static inline bool tb_switch_is_fr(const struct tb_switch *sw)
}
}
static inline bool tb_switch_is_alpine_ridge(const struct tb_switch *sw)
{
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
return true;
default:
return false;
}
}
static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
{
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE:
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE:
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE:
return true;
default:
return false;
}
}
/**
* tb_switch_is_icm() - Is the switch handled by ICM firmware
* @sw: Switch to check
*
* This function can be called when there is a need to differentiate
* whether the ICM firmware or the software CM is handling @sw. It is
* valid to call this after tb_switch_alloc() and tb_switch_configure()
* have been called (the latter only in the SW CM case).
*/
static inline bool tb_switch_is_icm(const struct tb_switch *sw)
{
return !sw->config.enabled;
}
int tb_switch_lane_bonding_enable(struct tb_switch *sw);
void tb_switch_lane_bonding_disable(struct tb_switch *sw);
bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_set_initial_credits(struct tb_port *port, u32 credits);
@@ -626,6 +691,8 @@ void tb_path_free(struct tb_path *path);
int tb_path_activate(struct tb_path *path);
void tb_path_deactivate(struct tb_path *path);
bool tb_path_is_invalid(struct tb_path *path);
bool tb_path_switch_on_path(const struct tb_path *path,
const struct tb_switch *sw);
int tb_drom_read(struct tb_switch *sw);
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
@@ -634,6 +701,10 @@ int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid);
int tb_lc_configure_link(struct tb_switch *sw);
void tb_lc_unconfigure_link(struct tb_switch *sw);
int tb_lc_set_sleep(struct tb_switch *sw);
bool tb_lc_lane_bonding_possible(struct tb_switch *sw);
bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in);
int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in);
int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in);
static inline int tb_route_length(u64 route)
{


@@ -122,6 +122,8 @@ struct icm_pkg_header {
#define ICM_FLAGS_NO_KEY BIT(1)
#define ICM_FLAGS_SLEVEL_SHIFT 3
#define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3)
#define ICM_FLAGS_DUAL_LANE BIT(5)
#define ICM_FLAGS_SPEED_GEN3 BIT(7)
#define ICM_FLAGS_WRITE BIT(7)
struct icm_pkg_driver_ready {


@@ -211,37 +211,71 @@ struct tb_regs_port_header {
} __packed;
/* DWORD 4 */
#define TB_PORT_NFC_CREDITS_MASK GENMASK(19, 0)
#define TB_PORT_MAX_CREDITS_SHIFT 20
#define TB_PORT_MAX_CREDITS_MASK GENMASK(26, 20)
/* DWORD 5 */
#define TB_PORT_LCA_SHIFT 22
#define TB_PORT_LCA_MASK GENMASK(28, 22)
/* Basic adapter configuration registers */
#define ADP_CS_4 0x04
#define ADP_CS_4_NFC_BUFFERS_MASK GENMASK(9, 0)
#define ADP_CS_4_TOTAL_BUFFERS_MASK GENMASK(29, 20)
#define ADP_CS_4_TOTAL_BUFFERS_SHIFT 20
#define ADP_CS_5 0x05
#define ADP_CS_5_LCA_MASK GENMASK(28, 22)
#define ADP_CS_5_LCA_SHIFT 22
/* Lane adapter registers */
#define LANE_ADP_CS_0 0x00
#define LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT 20
#define LANE_ADP_CS_1 0x01
#define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(9, 4)
#define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT 4
#define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE 0x1
#define LANE_ADP_CS_1_TARGET_WIDTH_DUAL 0x3
#define LANE_ADP_CS_1_LB BIT(15)
#define LANE_ADP_CS_1_CURRENT_SPEED_MASK GENMASK(19, 16)
#define LANE_ADP_CS_1_CURRENT_SPEED_SHIFT 16
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN2 0x8
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN3 0x4
#define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20
/* Display Port adapter registers */
/* DWORD 0 */
#define TB_DP_VIDEO_HOPID_SHIFT 16
#define TB_DP_VIDEO_HOPID_MASK GENMASK(26, 16)
#define TB_DP_AUX_EN BIT(30)
#define TB_DP_VIDEO_EN BIT(31)
/* DWORD 1 */
#define TB_DP_AUX_TX_HOPID_MASK GENMASK(10, 0)
#define TB_DP_AUX_RX_HOPID_SHIFT 11
#define TB_DP_AUX_RX_HOPID_MASK GENMASK(21, 11)
/* DWORD 2 */
#define TB_DP_HDP BIT(6)
/* DWORD 3 */
#define TB_DP_HPDC BIT(9)
/* DWORD 4 */
#define TB_DP_LOCAL_CAP 0x4
/* DWORD 5 */
#define TB_DP_REMOTE_CAP 0x5
#define ADP_DP_CS_0 0x00
#define ADP_DP_CS_0_VIDEO_HOPID_MASK GENMASK(26, 16)
#define ADP_DP_CS_0_VIDEO_HOPID_SHIFT 16
#define ADP_DP_CS_0_AE BIT(30)
#define ADP_DP_CS_0_VE BIT(31)
#define ADP_DP_CS_1_AUX_TX_HOPID_MASK GENMASK(10, 0)
#define ADP_DP_CS_1_AUX_RX_HOPID_MASK GENMASK(21, 11)
#define ADP_DP_CS_1_AUX_RX_HOPID_SHIFT 11
#define ADP_DP_CS_2 0x02
#define ADP_DP_CS_2_HDP BIT(6)
#define ADP_DP_CS_3 0x03
#define ADP_DP_CS_3_HDPC BIT(9)
#define DP_LOCAL_CAP 0x04
#define DP_REMOTE_CAP 0x05
#define DP_STATUS_CTRL 0x06
#define DP_STATUS_CTRL_CMHS BIT(25)
#define DP_STATUS_CTRL_UF BIT(26)
#define DP_COMMON_CAP 0x07
/*
* The DP_COMMON_CAP offsets also apply to DP_LOCAL_CAP and DP_REMOTE_CAP,
* with the exception of DPRX done.
*/
#define DP_COMMON_CAP_RATE_MASK GENMASK(11, 8)
#define DP_COMMON_CAP_RATE_SHIFT 8
#define DP_COMMON_CAP_RATE_RBR 0x0
#define DP_COMMON_CAP_RATE_HBR 0x1
#define DP_COMMON_CAP_RATE_HBR2 0x2
#define DP_COMMON_CAP_RATE_HBR3 0x3
#define DP_COMMON_CAP_LANES_MASK GENMASK(14, 12)
#define DP_COMMON_CAP_LANES_SHIFT 12
#define DP_COMMON_CAP_1_LANE 0x0
#define DP_COMMON_CAP_2_LANES 0x1
#define DP_COMMON_CAP_4_LANES 0x2
#define DP_COMMON_CAP_DPRX_DONE BIT(31)
/* PCIe adapter registers */
#define TB_PCI_EN BIT(31)
#define ADP_PCIE_CS_0 0x00
#define ADP_PCIE_CS_0_PE BIT(31)
/* Hop register from TB_CFG_HOPS. 8 byte per entry. */
struct tb_regs_hop {
@@ -278,8 +312,17 @@ struct tb_regs_hop {
#define TB_LC_DESC_PORT_SIZE_SHIFT 16
#define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16)
#define TB_LC_FUSE 0x03
#define TB_LC_SNK_ALLOCATION 0x10
#define TB_LC_SNK_ALLOCATION_SNK0_MASK GENMASK(3, 0)
#define TB_LC_SNK_ALLOCATION_SNK0_CM 0x1
#define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4
#define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4)
#define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1
/* Link controller registers */
#define TB_LC_PORT_ATTR 0x8d
#define TB_LC_PORT_ATTR_BE BIT(12)
#define TB_LC_SX_CTRL 0x96
#define TB_LC_SX_CTRL_L1C BIT(16)
#define TB_LC_SX_CTRL_L2C BIT(20)


@@ -6,6 +6,7 @@
* Copyright (C) 2019, Intel Corporation
*/
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/list.h>
@@ -90,6 +91,22 @@ static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
return 0;
}
static int tb_initial_credits(const struct tb_switch *sw)
{
/* If the path is complete sw is not NULL */
if (sw) {
/* More credits for faster link */
switch (sw->link_speed * sw->link_width) {
case 40:
return 32;
case 20:
return 24;
}
}
return 16;
}
static void tb_pci_init_path(struct tb_path *path)
{
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
@@ -101,7 +118,8 @@ static void tb_pci_init_path(struct tb_path *path)
path->drop_packages = 0;
path->nfc_credits = 0;
path->hops[0].initial_credits = 7;
path->hops[1].initial_credits = 16;
path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw);
}
/**
@@ -225,11 +243,174 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
return tunnel;
}
static int tb_dp_cm_handshake(struct tb_port *in, struct tb_port *out)
{
int timeout = 10;
u32 val;
int ret;
/* Both ends need to support this */
if (!tb_switch_is_titan_ridge(in->sw) ||
!tb_switch_is_titan_ridge(out->sw))
return 0;
ret = tb_port_read(out, &val, TB_CFG_PORT,
out->cap_adap + DP_STATUS_CTRL, 1);
if (ret)
return ret;
val |= DP_STATUS_CTRL_UF | DP_STATUS_CTRL_CMHS;
ret = tb_port_write(out, &val, TB_CFG_PORT,
out->cap_adap + DP_STATUS_CTRL, 1);
if (ret)
return ret;
do {
ret = tb_port_read(out, &val, TB_CFG_PORT,
out->cap_adap + DP_STATUS_CTRL, 1);
if (ret)
return ret;
if (!(val & DP_STATUS_CTRL_CMHS))
return 0;
usleep_range(10, 100);
} while (timeout--);
return -ETIMEDOUT;
}
static inline u32 tb_dp_cap_get_rate(u32 val)
{
u32 rate = (val & DP_COMMON_CAP_RATE_MASK) >> DP_COMMON_CAP_RATE_SHIFT;
switch (rate) {
case DP_COMMON_CAP_RATE_RBR:
return 1620;
case DP_COMMON_CAP_RATE_HBR:
return 2700;
case DP_COMMON_CAP_RATE_HBR2:
return 5400;
case DP_COMMON_CAP_RATE_HBR3:
return 8100;
default:
return 0;
}
}
static inline u32 tb_dp_cap_set_rate(u32 val, u32 rate)
{
val &= ~DP_COMMON_CAP_RATE_MASK;
switch (rate) {
default:
WARN(1, "invalid rate %u passed, defaulting to 1620 Mb/s\n", rate);
/* Fallthrough */
case 1620:
val |= DP_COMMON_CAP_RATE_RBR << DP_COMMON_CAP_RATE_SHIFT;
break;
case 2700:
val |= DP_COMMON_CAP_RATE_HBR << DP_COMMON_CAP_RATE_SHIFT;
break;
case 5400:
val |= DP_COMMON_CAP_RATE_HBR2 << DP_COMMON_CAP_RATE_SHIFT;
break;
case 8100:
val |= DP_COMMON_CAP_RATE_HBR3 << DP_COMMON_CAP_RATE_SHIFT;
break;
}
return val;
}
static inline u32 tb_dp_cap_get_lanes(u32 val)
{
u32 lanes = (val & DP_COMMON_CAP_LANES_MASK) >> DP_COMMON_CAP_LANES_SHIFT;
switch (lanes) {
case DP_COMMON_CAP_1_LANE:
return 1;
case DP_COMMON_CAP_2_LANES:
return 2;
case DP_COMMON_CAP_4_LANES:
return 4;
default:
return 0;
}
}
static inline u32 tb_dp_cap_set_lanes(u32 val, u32 lanes)
{
val &= ~DP_COMMON_CAP_LANES_MASK;
switch (lanes) {
default:
WARN(1, "invalid number of lanes %u passed, defaulting to 1\n",
lanes);
/* Fallthrough */
case 1:
val |= DP_COMMON_CAP_1_LANE << DP_COMMON_CAP_LANES_SHIFT;
break;
case 2:
val |= DP_COMMON_CAP_2_LANES << DP_COMMON_CAP_LANES_SHIFT;
break;
case 4:
val |= DP_COMMON_CAP_4_LANES << DP_COMMON_CAP_LANES_SHIFT;
break;
}
return val;
}
static unsigned int tb_dp_bandwidth(unsigned int rate, unsigned int lanes)
{
/* Tunneling removes the DP 8b/10b encoding */
return rate * lanes * 8 / 10;
}
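As a worked example of the formula above: an HBR2 signal (5400 Mb/s per lane) on two lanes consumes 5400 * 2 * 8 / 10 = 8640 Mb/s of tunnel bandwidth once the 8b/10b encoding overhead is removed, which matches the { 5400, 2 } entry in the dp_bw table used by tb_dp_reduce_bandwidth() below.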
static int tb_dp_reduce_bandwidth(int max_bw, u32 in_rate, u32 in_lanes,
u32 out_rate, u32 out_lanes, u32 *new_rate,
u32 *new_lanes)
{
static const u32 dp_bw[][2] = {
/* Mb/s, lanes */
{ 8100, 4 }, /* 25920 Mb/s */
{ 5400, 4 }, /* 17280 Mb/s */
{ 8100, 2 }, /* 12960 Mb/s */
{ 2700, 4 }, /* 8640 Mb/s */
{ 5400, 2 }, /* 8640 Mb/s */
{ 8100, 1 }, /* 6480 Mb/s */
{ 1620, 4 }, /* 5184 Mb/s */
{ 5400, 1 }, /* 4320 Mb/s */
{ 2700, 2 }, /* 4320 Mb/s */
{ 1620, 2 }, /* 2592 Mb/s */
{ 2700, 1 }, /* 2160 Mb/s */
{ 1620, 1 }, /* 1296 Mb/s */
};
unsigned int i;
/*
* Find a combination that can fit into max_bw and does not
* exceed the maximum rate and lanes supported by the DP OUT and
* DP IN adapters.
*/
for (i = 0; i < ARRAY_SIZE(dp_bw); i++) {
if (dp_bw[i][0] > out_rate || dp_bw[i][1] > out_lanes)
continue;
if (dp_bw[i][0] > in_rate || dp_bw[i][1] > in_lanes)
continue;
if (tb_dp_bandwidth(dp_bw[i][0], dp_bw[i][1]) <= max_bw) {
*new_rate = dp_bw[i][0];
*new_lanes = dp_bw[i][1];
return 0;
}
}
return -ENOSR;
}
static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
{
u32 out_dp_cap, out_rate, out_lanes, in_dp_cap, in_rate, in_lanes, bw;
struct tb_port *out = tunnel->dst_port;
struct tb_port *in = tunnel->src_port;
u32 in_dp_cap, out_dp_cap;
int ret;
/*
@@ -239,25 +420,71 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
if (in->sw->generation < 2 || out->sw->generation < 2)
return 0;
/*
* Perform connection manager handshake between IN and OUT ports
* before capabilities exchange can take place.
*/
ret = tb_dp_cm_handshake(in, out);
if (ret)
return ret;
/* Read both DP_LOCAL_CAP registers */
ret = tb_port_read(in, &in_dp_cap, TB_CFG_PORT,
in->cap_adap + TB_DP_LOCAL_CAP, 1);
in->cap_adap + DP_LOCAL_CAP, 1);
if (ret)
return ret;
ret = tb_port_read(out, &out_dp_cap, TB_CFG_PORT,
out->cap_adap + TB_DP_LOCAL_CAP, 1);
out->cap_adap + DP_LOCAL_CAP, 1);
if (ret)
return ret;
/* Write IN local caps to OUT remote caps */
ret = tb_port_write(out, &in_dp_cap, TB_CFG_PORT,
out->cap_adap + TB_DP_REMOTE_CAP, 1);
out->cap_adap + DP_REMOTE_CAP, 1);
if (ret)
return ret;
in_rate = tb_dp_cap_get_rate(in_dp_cap);
in_lanes = tb_dp_cap_get_lanes(in_dp_cap);
tb_port_dbg(in, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
in_rate, in_lanes, tb_dp_bandwidth(in_rate, in_lanes));
/*
* If the tunnel bandwidth is limited (max_bw is set) then see
* if we need to reduce bandwidth to fit there.
*/
out_rate = tb_dp_cap_get_rate(out_dp_cap);
out_lanes = tb_dp_cap_get_lanes(out_dp_cap);
bw = tb_dp_bandwidth(out_rate, out_lanes);
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
out_rate, out_lanes, bw);
if (tunnel->max_bw && bw > tunnel->max_bw) {
u32 new_rate, new_lanes, new_bw;
ret = tb_dp_reduce_bandwidth(tunnel->max_bw, in_rate, in_lanes,
out_rate, out_lanes, &new_rate,
&new_lanes);
if (ret) {
tb_port_info(out, "not enough bandwidth for DP tunnel\n");
return ret;
}
new_bw = tb_dp_bandwidth(new_rate, new_lanes);
tb_port_dbg(out, "bandwidth reduced to %u Mb/s x%u = %u Mb/s\n",
new_rate, new_lanes, new_bw);
/*
* Set new rate and number of lanes before writing it to
* the IN port remote caps.
*/
out_dp_cap = tb_dp_cap_set_rate(out_dp_cap, new_rate);
out_dp_cap = tb_dp_cap_set_lanes(out_dp_cap, new_lanes);
}
return tb_port_write(in, &out_dp_cap, TB_CFG_PORT,
in->cap_adap + TB_DP_REMOTE_CAP, 1);
in->cap_adap + DP_REMOTE_CAP, 1);
}
static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
@@ -297,6 +524,56 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
return 0;
}
static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
{
struct tb_port *in = tunnel->src_port;
const struct tb_switch *sw = in->sw;
u32 val, rate = 0, lanes = 0;
int ret;
if (tb_switch_is_titan_ridge(sw)) {
int timeout = 10;
/*
* Wait for DPRX done. Normally it should already be set
* for an active tunnel.
*/
do {
ret = tb_port_read(in, &val, TB_CFG_PORT,
in->cap_adap + DP_COMMON_CAP, 1);
if (ret)
return ret;
if (val & DP_COMMON_CAP_DPRX_DONE) {
rate = tb_dp_cap_get_rate(val);
lanes = tb_dp_cap_get_lanes(val);
break;
}
msleep(250);
} while (timeout--);
if (!timeout)
return -ETIMEDOUT;
} else if (sw->generation >= 2) {
/*
* Read from the copied remote cap so that we take into
* account whether the capabilities were reduced during the exchange.
*/
ret = tb_port_read(in, &val, TB_CFG_PORT,
in->cap_adap + DP_REMOTE_CAP, 1);
if (ret)
return ret;
rate = tb_dp_cap_get_rate(val);
lanes = tb_dp_cap_get_lanes(val);
} else {
/* No bandwidth management for legacy devices */
return 0;
}
return tb_dp_bandwidth(rate, lanes);
}
static void tb_dp_init_aux_path(struct tb_path *path)
{
int i;
@@ -324,12 +601,12 @@ static void tb_dp_init_video_path(struct tb_path *path, bool discover)
path->weight = 1;
if (discover) {
path->nfc_credits = nfc_credits & TB_PORT_NFC_CREDITS_MASK;
path->nfc_credits = nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
} else {
u32 max_credits;
max_credits = (nfc_credits & TB_PORT_MAX_CREDITS_MASK) >>
TB_PORT_MAX_CREDITS_SHIFT;
max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
/* Leave some credits for AUX path */
path->nfc_credits = min(max_credits - 2, 12U);
}
@@ -361,6 +638,7 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
tunnel->init = tb_dp_xchg_caps;
tunnel->activate = tb_dp_activate;
tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth;
tunnel->src_port = in;
path = tb_path_discover(in, TB_DP_VIDEO_HOPID, NULL, -1,
@@ -419,6 +697,7 @@ err_free:
* @tb: Pointer to the domain structure
* @in: DP in adapter port
* @out: DP out adapter port
* @max_bw: Maximum available bandwidth for the DP tunnel (%0 if not limited)
*
* Allocates a tunnel between @in and @out that is capable of tunneling
* Display Port traffic.
@@ -426,7 +705,7 @@ err_free:
* Return: Returns a tb_tunnel on success or NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out)
struct tb_port *out, int max_bw)
{
struct tb_tunnel *tunnel;
struct tb_path **paths;
@@ -441,8 +720,10 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
tunnel->init = tb_dp_xchg_caps;
tunnel->activate = tb_dp_activate;
tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth;
tunnel->src_port = in;
tunnel->dst_port = out;
tunnel->max_bw = max_bw;
paths = tunnel->paths;
@@ -478,8 +759,8 @@ static u32 tb_dma_credits(struct tb_port *nhi)
{
u32 max_credits;
max_credits = (nhi->config.nfc_credits & TB_PORT_MAX_CREDITS_MASK) >>
TB_PORT_MAX_CREDITS_SHIFT;
max_credits = (nhi->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
return min(max_credits, 13U);
}
@@ -689,3 +970,62 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
tb_path_deactivate(tunnel->paths[i]);
}
}
/**
* tb_tunnel_switch_on_path() - Does the tunnel go through switch
* @tunnel: Tunnel to check
* @sw: Switch to check
*
* Returns true if @tunnel goes through @sw (direction does not matter),
* false otherwise.
*/
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
const struct tb_switch *sw)
{
int i;
for (i = 0; i < tunnel->npaths; i++) {
if (!tunnel->paths[i])
continue;
if (tb_path_switch_on_path(tunnel->paths[i], sw))
return true;
}
return false;
}
static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
{
int i;
for (i = 0; i < tunnel->npaths; i++) {
if (!tunnel->paths[i])
return false;
if (!tunnel->paths[i]->activated)
return false;
}
return true;
}
/**
* tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel
* @tunnel: Tunnel to check
*
* Returns the bandwidth currently consumed by @tunnel, or %0 if the tunnel
* is not active or does not consume bandwidth.
*/
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel)
{
if (!tb_tunnel_is_active(tunnel))
return 0;
if (tunnel->consumed_bandwidth) {
int ret = tunnel->consumed_bandwidth(tunnel);
tb_tunnel_dbg(tunnel, "consumed bandwidth %d Mb/s\n", ret);
return ret;
}
return 0;
}
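A minimal sketch of how a connection manager might use this, mirroring the tunnel_list kept in struct tb_cm earlier in this series (the helper itself is hypothetical):

/* Hypothetical helper: total bandwidth consumed by all existing tunnels */
static int example_total_consumed_bandwidth(struct tb *tb)
{
	struct tb_cm *tcm = tb_priv(tb);
	struct tb_tunnel *tunnel;
	int total = 0;

	list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
		int consumed = tb_tunnel_consumed_bandwidth(tunnel);

		if (consumed < 0)
			return consumed;
		total += consumed;
	}

	return total;
}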


@@ -27,8 +27,11 @@ enum tb_tunnel_type {
* @npaths: Number of paths in @paths
* @init: Optional tunnel specific initialization
* @activate: Optional tunnel specific activation/deactivation
* @consumed_bandwidth: Return how much bandwidth the tunnel consumes
* @list: Tunnels are linked using this field
* @type: Type of the tunnel
* @max_bw: Maximum bandwidth (Mb/s) available for the tunnel (only for DP).
* Only set if the bandwidth needs to be limited.
*/
struct tb_tunnel {
struct tb *tb;
@@ -38,8 +41,10 @@ struct tb_tunnel {
size_t npaths;
int (*init)(struct tb_tunnel *tunnel);
int (*activate)(struct tb_tunnel *tunnel, bool activate);
int (*consumed_bandwidth)(struct tb_tunnel *tunnel);
struct list_head list;
enum tb_tunnel_type type;
unsigned int max_bw;
};
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
@@ -47,7 +52,7 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down);
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out);
struct tb_port *out, int max_bw);
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
@@ -58,6 +63,9 @@ int tb_tunnel_activate(struct tb_tunnel *tunnel);
int tb_tunnel_restart(struct tb_tunnel *tunnel);
void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
const struct tb_switch *sw);
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel);
static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
{


@@ -1404,10 +1404,9 @@ struct tb_xdomain_lookup {
static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw,
const struct tb_xdomain_lookup *lookup)
{
int i;
struct tb_port *port;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
tb_switch_for_each_port(sw, port) {
struct tb_xdomain *xd;
if (port->xdomain) {