linux-sg2042/drivers/vhost/Kconfig

vhost_net: a kernel-level virtio server

What it is: vhost net is a character device that can be used to reduce
the number of system calls involved in virtio networking. Existing
virtio net code is used in the guest without modification.

There's similarity with vringfd, with some differences and reduced scope:
- uses eventfd for signalling
- structures can be moved around in memory at any time (good for
  migration, bug work-arounds in userspace)
- write logging is supported (good for migration)
- supports a memory table and not just an offset (needed for kvm)

Common virtio-related code has been put in a separate file, vhost.c, and
can be made into a separate module if/when more backends appear. I used
Rusty's lguest.c as the source for developing this part: it supplied me
with witty comments I wouldn't be able to write myself.

What it is not: vhost net is not a bus, and not a generic new system
call. No assumptions are made on how the guest performs hypercalls.
Userspace hypervisors are supported as well as kvm.

How it works: basically, we connect the virtio frontend (configured by
userspace) to a backend. The backend could be a network device or a tap
device. The backend is also configured by userspace, including vlan/mac
etc.

Status: this works for me, and I haven't seen any crashes. Compared to
userspace, people reported improved latency (as I save up to 4 system
calls per packet), as well as better bandwidth and CPU utilization.

Features that I plan to look at in the future:
- mergeable buffers
- zero copy
- scalability tuning: figure out the best threading model to use

Note on RCU usage (this is also documented in vhost.h, near
private_pointer, which is the value protected by this variant of RCU):
the rcu_dereference() is used in a workqueue item. The role of
rcu_read_lock() is taken on by the start of execution of the workqueue
item, that of rcu_read_unlock() by the end of execution of the workqueue
item, and that of synchronize_rcu() by flush_workqueue()/flush_work().
In the future we might need to apply some gcc attribute or sparse
annotation to the function passed to INIT_WORK(). Paul's ack below is
for this RCU usage.

(Includes fixes by Alan Cox <alan@linux.intel.com>, David L Stevens
<dlstevens@us.ibm.com>, Chris Wright <chrisw@redhat.com>)

Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-01-14 14:17:27 +08:00
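
The RCU note above maps the standard RCU primitives onto workqueue
lifetime rules. Below is a minimal sketch of that mapping, with
hypothetical names (vq_sketch, poll_work_fn, set_backend); the real
code lives in drivers/vhost/vhost.c and drivers/vhost/net.c, and, as
the commit notes, a sparse annotation on the work function may
eventually be needed to keep checkers quiet.

/*
 * Sketch only: the workqueue item itself is the read-side critical
 * section. Entering the work function stands in for rcu_read_lock(),
 * returning from it for rcu_read_unlock(), and flush_work() stands in
 * for synchronize_rcu().
 */
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/workqueue.h>

struct vq_sketch {
	void __rcu *private_data;	/* the protected pointer */
	struct work_struct poll_work;	/* the "reader" */
};

/* Reader: runs as a workqueue item. */
static void poll_work_fn(struct work_struct *work)
{
	struct vq_sketch *vq = container_of(work, struct vq_sketch, poll_work);
	/* Implicit rcu_read_lock(): we are inside the work item. */
	void *priv = rcu_dereference(vq->private_data);

	if (priv)
		; /* ... use the backend; valid until this function returns ... */
	/* Implicit rcu_read_unlock() when the work item completes. */
}

/* Writer: publish a new backend, then wait out any running reader. */
static void set_backend(struct vq_sketch *vq, void *backend)
{
	rcu_assign_pointer(vq->private_data, backend);
	flush_work(&vq->poll_work);	/* plays the role of synchronize_rcu() */
}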
config VHOST_NET
	tristate "Host kernel accelerator for virtio net"
	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
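	# "(TUN || !TUN)" evaluates to m when TUN=m and to y otherwise, so
	# VHOST_NET can only be built as a module when TUN is a module
	# (vhost_net calls into the tun driver); likewise for MACVTAP.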
	---help---
	  This kernel module can be loaded in the host kernel to accelerate
	  guest networking with virtio_net. Not to be confused with the
	  virtio_net module itself, which needs to be loaded in the guest
	  kernel.

	  To compile this driver as a module, choose M here: the module will
	  be called vhost_net.
tcm_vhost: Initial merge for vhost level target fabric driver

This patch adds the initial code for tcm_vhost, a vhost-level TCM
fabric driver for virtio SCSI initiators in KVM guests. This code is
currently up and running on v3.5-rc2 host+guest from
target-pending/for-next-merge.

Using tcm_vhost requires Zhi's -> Stefan's -> nab's qemu vhost-scsi
tree here:

http://git.kernel.org/?p=virt/kvm/nab/qemu-kvm.git;a=shortlog;h=refs/heads/vhost-scsi

Changelog v4 -> v5:
- Expose ABI version via VHOST_SCSI_GET_ABI_VERSION + use Rev 0 as
  starting point for v3.6-rc code (Stefan + ALiguori + nab)
- Convert vhost_scsi_handle_vq() to vq_err() (nab + MST)
- Minor style fixes from checkpatch (nab)

Changelog v3 -> v4:
- Rename vhost_vring_target -> vhost_scsi_target (mst + nab)
- Use TRANSPORT_IQN_LEN in vhost_scsi_target->vhost_wwpn[] def (nab)
- Move back to drivers/vhost/, and just use drivers/vhost/Kconfig.tcm (mst)
- Move TCM_VHOST related ioctl defines from include/linux/vhost.h ->
  drivers/vhost/tcm_vhost.h as requested by MST (nab)
- Move Kbuild.tcm include from drivers/staging -> drivers/vhost/, and
  just use 'if STAGING' around 'source drivers/vhost/Kbuild.tcm'

Changelog v2 -> v3:
- Unlock on error in tcm_vhost_drop_nexus() (DanC)
- Fix strlen() that doesn't count the terminator (DanC)
- Call kfree() on an error path (DanC)
- Convert tcm_vhost_write_pending to use target_execute_cmd (hch + nab)
- Fix another strlen() off by one in tcm_vhost_make_tport (DanC)
- Add option under drivers/staging/Kconfig, and move to
  drivers/vhost/tcm/ as requested by MST (nab)

Changelog v1 -> v2:
- Fix tv_cmd completion -> release SGL memory leak (nab)
- Fix sparse warnings for static variable usage (Fengguang Wu)
- Fix sparse warnings for min() typing + printk format specs
  (Fengguang Wu)
- Convert to cmwq submission for I/O dispatch (nab + hch)

Changelog v0 -> v1:
- Merge into single source + header file, and move to drivers/vhost/

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2012-07-19 05:31:32 +08:00
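
The v4 -> v5 changelog above exposes an ABI version through the
VHOST_SCSI_GET_ABI_VERSION ioctl, defined (per the v4 changelog) in
drivers/vhost/tcm_vhost.h. A hedged userspace sketch of how a
hypervisor might probe it follows; the header include path and the
/dev/vhost-scsi node name are assumptions, not taken from this file.

/* Sketch: query the vhost-scsi ABI version from userspace. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include "tcm_vhost.h"		/* VHOST_SCSI_GET_ABI_VERSION */

int main(void)
{
	int abi = -1;
	int fd = open("/dev/vhost-scsi", O_RDWR);	/* assumed node name */

	if (fd < 0) {
		perror("open /dev/vhost-scsi");
		return 1;
	}
	if (ioctl(fd, VHOST_SCSI_GET_ABI_VERSION, &abi) < 0) {
		perror("VHOST_SCSI_GET_ABI_VERSION");
		close(fd);
		return 1;
	}
	printf("vhost-scsi ABI version: %d\n", abi);	/* Rev 0 at merge time */
	close(fd);
	return 0;
}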
if STAGING
source "drivers/vhost/Kconfig.tcm"
endif