# SPDX-License-Identifier: GPL-2.0-only

config VHOST_IOTLB
	tristate
	help
	  Generic IOTLB implementation for vhost and vringh.

	  This option is selected by any driver which needs to support
	  an IOMMU in software.

config VHOST_RING
	tristate
	select VHOST_IOTLB
	help
	  This option is selected by any driver which needs to access
	  the host side of a virtio ring.

config VHOST_DPN
	bool
	depends on !ARM || AEABI
	default y
	help
	  Anything selecting VHOST or VHOST_RING must depend on VHOST_DPN.
	  This excludes the deprecated ARM OABI, since that ABI forces 4-byte
	  alignment on all structs, which is incompatible with virtio spec
	  layout requirements. See the hypothetical stanza sketched in the
	  comment below.

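# A minimal sketch of the dependency pattern the VHOST_DPN help text
# asks for; the MY_VHOST_DRIVER stanza is hypothetical, not an in-tree
# option:
#
#   config MY_VHOST_DRIVER
#           tristate "My vhost-based driver"
#           depends on EVENTFD && VHOST_DPN
#           select VHOST
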
config VHOST
	tristate
	select VHOST_IOTLB
	help
	  This option is selected by any driver which needs to access
	  the core of vhost.

menuconfig VHOST_MENU
	bool "VHOST drivers"
	default y

if VHOST_MENU

config VHOST_NET
	tristate "Host kernel accelerator for virtio net"
	depends on NET && EVENTFD && (TUN || !TUN) && (TAP || !TAP) && VHOST_DPN
	select VHOST
	---help---
	  This kernel module can be loaded in the host kernel to accelerate
	  guest networking with virtio_net. Not to be confused with the
	  virtio_net module itself, which needs to be loaded in the guest
	  kernel.

	  To compile this driver as a module, choose M here: the module will
	  be called vhost_net. A userspace usage sketch follows in the
	  comment below.

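# A minimal userspace sketch of attaching to this module, assuming the
# usual <fcntl.h>/<sys/ioctl.h> plus the uapi in <linux/vhost.h>; error
# handling is elided:
#
#   int vhost_fd = open("/dev/vhost-net", O_RDWR);
#   ioctl(vhost_fd, VHOST_SET_OWNER);               /* bind fd to this process */
#   __u64 features;
#   ioctl(vhost_fd, VHOST_GET_FEATURES, &features); /* start feature negotiation */
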
config VHOST_SCSI
	tristate "VHOST_SCSI TCM fabric driver"
	depends on TARGET_CORE && EVENTFD && VHOST_DPN
	select VHOST
	default n
	---help---
	  Say M here to enable the vhost_scsi TCM fabric module
	  for use with virtio-scsi guests.

config VHOST_VSOCK
	tristate "vhost virtio-vsock driver"
	depends on VSOCKETS && EVENTFD && VHOST_DPN
	select VHOST
	select VIRTIO_VSOCKETS_COMMON
	default n
	---help---
	  This kernel module can be loaded in the host kernel to provide AF_VSOCK
	  sockets for communicating with guests. The guests must have the
	  virtio_transport.ko driver loaded to use the virtio-vsock device.

	  To compile this driver as a module, choose M here: the module will be
	  called vhost_vsock. A host-side socket sketch follows in the comment
	  below.

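# A minimal host-side sketch, assuming <sys/socket.h> and
# <linux/vm_sockets.h>; the CID and port values are placeholders chosen
# for illustration:
#
#   int s = socket(AF_VSOCK, SOCK_STREAM, 0);
#   struct sockaddr_vm addr = {
#           .svm_family = AF_VSOCK,
#           .svm_cid    = 3,     /* guest CID assigned via the VMM */
#           .svm_port   = 1234,  /* port the guest is listening on */
#   };
#   connect(s, (struct sockaddr *)&addr, sizeof(addr));
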
config VHOST_VDPA
	tristate "Vhost driver for vDPA-based backend"
	depends on EVENTFD && VDPA && VHOST_DPN
	select VHOST
	help
	  This kernel module can be loaded in the host kernel to accelerate
	  guest virtio devices with vDPA-based backends.

	  To compile this driver as a module, choose M here: the module
	  will be called vhost_vdpa. A device-probe sketch follows in the
	  comment below.

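# A minimal probe sketch, assuming the uapi in <linux/vhost.h>; the "-0"
# suffix in the node name depends on which vDPA device is bound, so
# treat the path as a placeholder:
#
#   int fd = open("/dev/vhost-vdpa-0", O_RDWR);
#   __u32 dev_id;
#   ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &dev_id);  /* virtio device ID, e.g. 1 = net */
#   ioctl(fd, VHOST_SET_OWNER);                    /* then set up vrings as with vhost-net */
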
config VHOST_CROSS_ENDIAN_LEGACY
	bool "Cross-endian support for vhost"
	default n
	---help---
	  This option allows vhost to support guests whose byte ordering
	  differs from the host's while using legacy virtio.

	  Userspace programs can control the feature using the
	  VHOST_SET_VRING_ENDIAN and VHOST_GET_VRING_ENDIAN ioctls; see the
	  sketch in the comment below.

	  This is only useful on a few platforms (ppc64 and arm64). Since it
	  adds some overhead, it is disabled by default.

	  If unsure, say "N".

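# A minimal sketch of the endianness ioctls named above, assuming
# <linux/vhost.h>; vring index 0 is a placeholder, and the set call must
# happen before the ring is started:
#
#   struct vhost_vring_state s = {
#           .index = 0,                       /* which vring to configure */
#           .num   = VHOST_VRING_BIG_ENDIAN,  /* or VHOST_VRING_LITTLE_ENDIAN */
#   };
#   ioctl(vhost_fd, VHOST_SET_VRING_ENDIAN, &s);
#   ioctl(vhost_fd, VHOST_GET_VRING_ENDIAN, &s);  /* s.num reports the setting */
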
endif