Now that the driver can handle every possible tunnel type, there is
no point in logging everything at info level, so make these messages
happen at debug level instead.
While at it, remove a duplicated tunnel activation log message
(tb_tunnel_activate() calls tb_tunnel_restart(), which prints the same
message) and add one missing '\n' termination.
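A minimal sketch of the resulting call chain; the message text and the
exact shape of these functions are illustrative:

  int tb_tunnel_activate(struct tb_tunnel *tunnel)
  {
          /*
           * The duplicate info-level "activating tunnel" print used
           * to live here: tb_tunnel_restart() below prints the same
           * message, now at debug level and with the missing '\n'
           * termination added.
           */
          return tb_tunnel_restart(tunnel);
  }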
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
In addition to PCIe and Display Port tunnels, it is also possible to
create tunnels that forward DMA traffic from the host interface adapter
(NHI) to a NULL port that is connected to another domain through a
Thunderbolt cable. These tunnels can be used to carry software messages
such as networking packets.
To support this, we introduce another tunnel type (TB_TUNNEL_DMA) that
supports paths from the NHI to a NULL port and back.
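A rough sketch of the new tunnel type and an allocation helper built
on the generic struct tb_tunnel from earlier in this series; the
parameter list and the path setup are simplified for illustration:

  enum tb_tunnel_type {
          TB_TUNNEL_PCI,
          TB_TUNNEL_DP,
          TB_TUNNEL_DMA,          /* NHI <-> NULL port, inter-domain */
  };

  struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
                                        struct tb_port *dst)
  {
          struct tb_tunnel *tunnel;

          /* One path in each direction between the NHI and the NULL
           * port on the other end of the cable. */
          tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA);
          if (!tunnel)
                  return NULL;

          tunnel->src_port = nhi;
          tunnel->dst_port = dst;

          /* paths[0]: transmit (NHI -> NULL port),
           * paths[1]: receive (NULL port -> NHI); the hop allocation
           * and setup are elided here. */
          return tunnel;
  }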
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that we have the capability to discover existing tunnels during
driver load, there is no point in tearing down tunnels when the driver
gets unloaded. Instead we can just leave them running. If the user
disconnects devices while there is no Thunderbolt driver loaded,
tunneled protocol hotplug happens and is handled by the corresponding
driver (pciehp in the case of a PCIe tunnel, the GFX driver in the
case of a DP tunnel).
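A sketch of what the driver-stop path looks like under this scheme:
free our bookkeeping but leave the hardware paths programmed. The
tb_cm/tunnel_list names stand for the connection manager's internal
tunnel bookkeeping and are illustrative:

  static void tb_stop(struct tb *tb)
  {
          struct tb_cm *tcm = tb_priv(tb);
          struct tb_tunnel *tunnel;
          struct tb_tunnel *n;

          list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
                  /*
                   * No tb_tunnel_deactivate() here anymore: the paths
                   * stay programmed and tunneled traffic keeps flowing
                   * with no Thunderbolt driver loaded.
                   */
                  tb_tunnel_free(tunnel);
          }
  }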
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Display Port tunnels are somewhat more complex than PCIe tunnels, as
they require three paths (AUX RX/TX and Video). In addition, we are
not supposed to create the tunnels immediately when a DP OUT adapter
is enumerated. Instead we need to wait until we get a hotplug event on
that adapter port, or check whether the port already has HPD set,
before tunnels can be established. This adds Display Port tunneling
support to the software connection manager.
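A sketch of the connection manager side of this, assuming a helper for
reading the adapter's HPD state; tb_dp_port_hpd_is_active() and the
exact flow are illustrative:

  static void tb_tunnel_dp_out(struct tb *tb, struct tb_port *in,
                               struct tb_port *out)
  {
          struct tb_cm *tcm = tb_priv(tb);
          struct tb_tunnel *tunnel;

          /* Not at enumeration time: wait for a hotplug event, or for
           * HPD to already be set on the DP OUT adapter. */
          if (!tb_dp_port_hpd_is_active(out))
                  return;

          /* One tunnel, three paths: Video plus AUX TX/RX */
          tunnel = tb_tunnel_alloc_dp(tb, in, out);
          if (!tunnel)
                  return;

          if (tb_tunnel_activate(tunnel)) {
                  tb_tunnel_free(tunnel);
                  return;
          }

          list_add_tail(&tunnel->list, &tcm->tunnel_list);
  }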
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
On Apple Macs the boot firmware (EFI) connects all devices
automatically when the system is started, before it hands over
control to the OS. Instead of ignoring them, we discover all those
PCIe tunnels and record them using our internal structures, just like
we do when a device is connected after the OS is already up.
By doing this we can properly tear down the tunnels when devices are
disconnected. This also allows us to resume the existing tunnels after
a system suspend/resume cycle.
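A sketch of the discovery pass at driver load; tb_tunnel_discover_pci()
stands for the routine that reconstructs a tb_tunnel from the hop
entries the boot firmware already programmed, and the loop shape here
is illustrative:

  static void tb_discover_tunnels(struct tb_switch *sw)
  {
          struct tb *tb = sw->tb;
          struct tb_cm *tcm = tb_priv(tb);
          struct tb_port *port;
          int i;

          for (i = 1; i <= sw->config.max_port_number; i++) {
                  struct tb_tunnel *tunnel;

                  port = &sw->ports[i];
                  if (!tb_port_is_pcie_down(port))
                          continue;

                  /* Rebuild our internal structures for a tunnel the
                   * firmware set up before the OS took over. */
                  tunnel = tb_tunnel_discover_pci(tb, port);
                  if (tunnel)
                          list_add_tail(&tunnel->list, &tcm->tunnel_list);
          }
  }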
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
The state of the connected devices and the tunnel configuration is not
known during resume. For example, some paths may not be complete
anymore if the user has unplugged the related devices. So instead of
just marking all paths as inactive, we go ahead and deactivate them
explicitly before we restart them.
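A sketch of the restart logic; the field names follow the generic
struct tb_tunnel used in this series:

  int tb_tunnel_restart(struct tb_tunnel *tunnel)
  {
          int i, res;

          /* Make sure all paths are properly disabled before enabling
           * them again: their pre-resume state is unknown. */
          for (i = 0; i < tunnel->npaths; i++) {
                  if (tunnel->paths[i]->activated)
                          tb_path_deactivate(tunnel->paths[i]);
          }

          for (i = 0; i < tunnel->npaths; i++) {
                  res = tb_path_activate(tunnel->paths[i]);
                  if (res)
                          return res;
          }

          return 0;
  }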
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that we can allocate hop IDs per port on a path, we can take
advantage of this and create tunnels covering longer paths than just
the one between two adjacent switches. PCIe does not actually need
this, as it is typically daisy-chained between two adjacent switches,
but this way we do not need to hard-code the creation of the tunnel.
While there, add a name to struct tb_path to make debugging easier,
and update the kernel-doc comments.
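An abbreviated sketch of the updated structure; most fields are elided
and only the new @name is shown alongside a couple of existing ones:

  /**
   * struct tb_path - a unidirectional sequence of hops (abbreviated)
   * @tb: Pointer to the domain structure
   * @name: Name of the path (used for debugging)
   * @hops: Path hops
   * @path_length: How many hops the path uses
   */
  struct tb_path {
          struct tb *tb;
          const char *name;
          /* ... other fields elided ... */
          struct tb_path_hop *hops;
          int path_length;
  };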
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
To be able to tunnel non-PCIe traffic, separate the tunnel
functionality into generic and PCIe-specific parts. Rename struct
tb_pci_tunnel to tb_tunnel and make it hold an array of paths instead
of just two. Update all the tunneling functions to take this structure
as a parameter. We also move tb_pci_port_active() to switch.c (and
rename it), where we will be keeping all port- and switch-related
functions.
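An abbreviated sketch of the generalized structure, with fields elided
where not relevant here:

  struct tb_tunnel {
          struct tb *tb;
          struct tb_port *src_port;
          struct tb_port *dst_port;
          struct tb_path **paths;         /* was two fixed path members */
          size_t npaths;
          /* Protocol-specific activation hook, e.g. the PCIe one */
          int (*activate)(struct tb_tunnel *tunnel, bool activate);
          struct list_head list;
  };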
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
In order to tunnel non-PCIe traffic as well, rename tunnel_pci.[ch] to
tunnel.[ch] to reflect this fact. No functional changes.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>