Having PCIe/PCI-X capability isn't enough to assume that there are
extended capabilities. Both specs define that the first capability
header is all zero if there are no extended capabilities. Testing
for this avoids an erroneous message about hiding capability 0x0 at
offset 0x100.
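A minimal sketch of the test described above (context and helper names are
illustrative, not the actual vfio-pci code):

    u32 header;

    /* Extended config space starts at offset 0x100 (PCI_CFG_SPACE_SIZE) */
    if (pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &header))
            return 0;       /* read failed, assume no extended capabilities */

    /* Both specs: an all-zero first header means no extended capabilities */
    if (header == 0)
            return 0;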
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
eventfd_fget() tests to see whether the file is an eventfd file, which
we then immediately pass to eventfd_ctx_fileget(), which again tests
whether the file is an eventfd file. Simplify slightly by using
fdget() so that we only test that we're looking at an eventfd once.
fget() could also be used, but fdget() makes use of fget_light() for
another slight optimization.
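The simplified flow looks roughly like this (a sketch, not the exact patch):

    struct eventfd_ctx *ctx;
    struct fd irqfd = fdget(fd);

    if (!irqfd.file)
            return -EBADF;

    /* eventfd_ctx_fileget() already checks that this is an eventfd */
    ctx = eventfd_ctx_fileget(irqfd.file);
    fdput(irqfd);
    if (IS_ERR(ctx))
            return PTR_ERR(ctx);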
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Add the default O_CLOEXEC flag for device file descriptors. This is
generally considered safer, as it gives the user a race-free way to
decide whether file descriptors are inherited across exec, with the
default avoiding file descriptor leaks.
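For the anon-inode-backed device file descriptor this amounts to passing
the flag at creation time, along the lines of:

    ret = anon_inode_getfd("[vfio-device]", &vfio_device_fops,
                           device, O_RDWR | O_CLOEXEC);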
Reported-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Macro get_unused_fd() is used to allocate a file descriptor with
default flags. Those default flags (0) can be "unsafe":
O_CLOEXEC must be used by default to not leak file descriptor
across exec().
Instead of macro get_unused_fd(), functions anon_inode_getfd()
or get_unused_fd_flags() should be used with flags given by userspace.
If not possible, flags should be set to O_CLOEXEC to provide userspace
with a safe default behavior.
In a further patch, get_unused_fd() will be removed so that
new code starts using anon_inode_getfd() or get_unused_fd_flags()
with correct flags.
This patch replaces calls to get_unused_fd() with equivalent calls to
get_unused_fd_flags(0) to preserve the current behavior for existing code.
The hard coded flag value (0) should be reviewed on a per-subsystem basis,
and, if possible, set to O_CLOEXEC.
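The mechanical replacement is of the form:

    -	ret = get_unused_fd();
    +	ret = get_unused_fd_flags(0);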
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Link: http://lkml.kernel.org/r/cover.1376327678.git.ydroneaud@opteya.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However, in some situations support for an external user is required.
The first user is KVM on PPC64 (SPAPR TCE protocol), which is going to
use the existing VFIO groups for exclusive access in real/virtual mode
on the host to avoid passing map/unmap requests to user space, which
would make things pretty slow.
The protocol includes:
1. do normal VFIO init operation:
- opening a new container;
- attaching group(s) to it;
- setting an IOMMU driver for a container.
When IOMMU is set for a container, all groups in it are
considered ready to use by an external user.
2. User space passes a group fd to an external user.
The external user calls vfio_group_get_external_user()
to verify that:
- the group is initialized;
- IOMMU is set for it.
If both checks pass, vfio_group_get_external_user()
increments the container user counter to prevent
the VFIO group from disposal before KVM exits.
3. The external user calls vfio_external_user_iommu_id()
to obtain the IOMMU ID. PPC64 KVM uses it to link the logical bus
number (LIOBN) with IOMMU ID.
4. When the external KVM finishes, it calls
vfio_group_put_external_user() to release the VFIO group.
This call decrements the container user counter.
Everything gets released.
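Put together, an external user such as KVM would use the interface
roughly as follows (a sketch; error handling is trimmed and the group_fd
handling around fdget() is illustrative):

    struct fd f = fdget(group_fd);
    struct vfio_group *group;
    int iommu_id;

    if (!f.file)
            return -EBADF;

    /* Step 2: verify the group and take a container user reference */
    group = vfio_group_get_external_user(f.file);
    fdput(f);
    if (IS_ERR(group))
            return PTR_ERR(group);

    /* Step 3: learn the IOMMU group ID, e.g. to tie an LIOBN to it */
    iommu_id = vfio_external_user_iommu_id(group);

    /* ... real/virtual mode TCE handling happens here ... */

    /* Step 4: drop the reference when the external user is done */
    vfio_group_put_external_user(group);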
The "vfio: Limit group opens" patch is also required for the consistency.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
If an attempt is made to unbind a device from vfio-pci while that
device is in use, the request is blocked until the device becomes
unused. Unfortunately, that unbind path still grabs the device_lock,
which certain things like __pci_reset_function() also want to take.
This means we need to try to acquire the locks ourselves and use the
pre-locked version, __pci_reset_function_locked().
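The idea is roughly the following (a sketch, assuming the try-lock
succeeds; the real code has to cope with contention):

    if (!device_trylock(&pdev->dev))
            return -EAGAIN;         /* device_lock held elsewhere, e.g. unbind */

    ret = __pci_reset_function_locked(pdev);
    device_unlock(&pdev->dev);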
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Remove debugging WARN_ON if we get a spurious notify for a group that
no longer exists. No reports of anyone hitting this, but it would
likely be a race and not a bug if they did.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
BUS_NOTIFY_DEL_DEVICE triggers IOMMU drivers to remove devices from
their iommu group, but there's really nothing we can do about it at
this point. If the device is in use, then the vfio sub-driver will
block the device_del from completing until it's released. If the
device is not in use or not owned by a vfio sub-driver, then we
really don't care that it's being removed.
The current code can be triggered just by unloading an sr-iov driver
(ex. igb) while the VFs are attached to vfio-pci because it makes an
incorrect assumption about the ordering of driver remove callbacks
vs the DEL_DEVICE notification.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Merge tag 'vfio-v3.11' of git://github.com/awilliam/linux-vfio
Pull vfio updates from Alex Williamson:
"Largely hugepage support for vfio/type1 iommu and surrounding cleanups
and fixes"
* tag 'vfio-v3.11' of git://github.com/awilliam/linux-vfio:
vfio/type1: Fix leak on error path
vfio: Limit group opens
vfio/type1: Fix missed frees and zero sized removes
vfio: fix documentation
vfio: Provide module option to disable vfio_iommu_type1 hugepage support
vfio: hugepage support for vfio_iommu_type1
vfio: Convert type1 iommu to use rbtree
Pull powerpc updates from Ben Herrenschmidt:
"This is the powerpc changes for the 3.11 merge window. In addition to
the usual bug fixes and small updates, the main highlights are:
- Support for transparent huge pages by Aneesh Kumar for 64-bit
server processors. This allows the use of 16M pages as transparent
huge pages on kernels compiled with a 64K base page size.
- Base VFIO support for KVM on power by Alexey Kardashevskiy
- Wiring up of our nvram to the pstore infrastructure, including
putting compressed oopses in there by Aruna Balakrishnaiah
- Move, rework and improve our "EEH" (basically PCI error handling
and recovery) infrastructure. It is no longer specific to pseries
but is now usable by the new "powernv" platform as well (no
hypervisor) by Gavin Shan.
- I fixed some bugs in our math-emu instruction decoding and made it
usable to emulate some optional FP instructions on processors with
hard FP that lack them (such as fsqrt on Freescale embedded
processors).
- Support for Power8 "Event Based Branch" facility by Michael
Ellerman. This facility allows what is basically "userspace
interrupts" for performance monitor events.
- A bunch of Transactional Memory vs. Signals bug fixes and HW
breakpoint/watchpoint fixes by Michael Neuling.
And more ... I apologize in advance if I've failed to highlight
something that somebody deemed worth it."
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (156 commits)
pstore: Add hsize argument in write_buf call of pstore_ftrace_call
powerpc/fsl: add MPIC timer wakeup support
powerpc/mpic: create mpic subsystem object
powerpc/mpic: add global timer support
powerpc/mpic: add irq_set_wake support
powerpc/85xx: enable coreint for all the 64bit boards
powerpc/8xx: Erroneous double irq_eoi() on CPM IRQ in MPC8xx
powerpc/fsl: Enable CONFIG_E1000E in mpc85xx_smp_defconfig
powerpc/mpic: Add get_version API both for internal and external use
powerpc: Handle both new style and old style reserve maps
powerpc/hw_brk: Fix off by one error when validating DAWR region end
powerpc/pseries: Support compression of oops text via pstore
powerpc/pseries: Re-organise the oops compression code
pstore: Pass header size in the pstore write callback
powerpc/powernv: Fix iommu initialization again
powerpc/pseries: Inform the hypervisor we are using EBB regs
powerpc/perf: Add power8 EBB support
powerpc/perf: Core EBB support for 64-bit book3s
powerpc/perf: Drop MMCRA from thread_struct
powerpc/perf: Don't enable if we have zero events
...
We also don't handle unpinning zero pages as an error on other exits
so we can fix that inconsistency by rolling in the next conditional
return.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
vfio_group_fops_open attempts to limit concurrent sessions by
disallowing opens once group->container is set. This really doesn't
do what we want and allows for inconsistent behavior: for instance, a
group can be opened twice and then a container set, giving the user two
file descriptors to the group, but then no further opens are allowed.
There's not much reason to have the group opened multiple
times since most access is through devices or the container, so
complete what the original code intended and only allow a single
instance.
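A sketch of the single-instance check, assuming an "opened" counter on
the group:

    /* Only allow the group to be opened once */
    if (atomic_cmpxchg(&group->opened, 0, 1))
            return -EBUSY;

    /* ... and on release: atomic_dec(&group->opened); */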
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
With hugepage support we can only unmap properly aligned and sized
ranges; we only guarantee that we can unmap the same ranges that were
mapped, not arbitrary sub-ranges. This means we might not free anything or might
free more than requested. The vfio unmap interface started storing
the unmapped size to return to userspace to handle this. This patch
fixes a few places where we don't properly handle those cases, moves
a memory allocation to a place where failure is an option and checks
our loops to make sure we don't get into an infinite loop trying to
remove an overlap.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Add a module option to vfio_iommu_type1 to disable IOMMU hugepage
support. This causes iommu_map to only be called with single page
mappings, disabling the IOMMU driver's ability to use hugepages.
This option can be enabled by loading vfio_iommu_type1 with
disable_hugepages=1 or dynamically through sysfs. If enabled
dynamically, only new mappings are restricted.
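The option can be wired up with a standard writable module parameter,
something along these lines:

    static bool disable_hugepages;
    module_param_named(disable_hugepages,
                       disable_hugepages, bool, S_IRUGO | S_IWUSR);
    MODULE_PARM_DESC(disable_hugepages,
                     "Disable VFIO IOMMU support for IOMMU hugepages.");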
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We currently send all mappings to the iommu in PAGE_SIZE chunks,
which prevents the iommu from enabling support for larger page sizes.
We still need to pin pages, which means we step through them in
PAGE_SIZE chunks, but we can batch up contiguous physical memory
chunks to allow the iommu the opportunity to use larger pages. The
approach here is a bit different from the one currently used for
legacy KVM device assignment. Rather than looking at the vma page
size and using that as the maximum size to pass to the iommu, we
instead simply look at whether the next page is physically
contiguous. This means we might ask the iommu to map a 4MB region,
while legacy KVM might limit itself to a maximum of 2MB.
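Conceptually the mapping loop does something like the following
(illustrative only; pfn_of_pinned_page() is a hypothetical stand-in for
the pinning helper, and domain/iova/prot come from the surrounding
context):

    unsigned long first_pfn, pfn;
    long i, batch = 1;

    first_pfn = pfn_of_pinned_page(vaddr);          /* hypothetical helper */
    for (i = 1; i < npage; i++) {
            pfn = pfn_of_pinned_page(vaddr + i * PAGE_SIZE);
            if (pfn != first_pfn + batch)
                    break;                          /* physically discontiguous */
            batch++;
    }

    ret = iommu_map(domain, iova, (phys_addr_t)first_pfn << PAGE_SHIFT,
                    batch << PAGE_SHIFT, prot);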
Splitting our mapping path also allows us to be smarter about locked
memory because we can more easily unwind if the user attempts to
exceed the limit. Therefore, rather than assuming that a mapping
will result in locked memory, we test each page as it is pinned to
determine whether it locks RAM vs an mmap'd MMIO region. This should
result in better locking granularity and less locked page fudge
factors in userspace.
The unmap path uses the same algorithm as legacy KVM. We don't want
to track the pfn for each mapping ourselves, but we need the pfn in
order to unpin pages. We therefore ask the iommu for the iova to
physical address translation, ask it to unmap a page, and see how much
was actually unmapped. iommus supporting large pages will
often return something bigger than a page here, which we know will be
physically contiguous and we can unpin a batch of pfns. iommus not
supporting large mappings won't see an improvement in batching here as
they only unmap a page at a time.
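The unmap loop therefore looks roughly like this (a sketch; unpin_pfns()
is a placeholder for the page unpinning and accounting):

    while (size) {
            phys_addr_t phys = iommu_iova_to_phys(domain, iova);
            size_t unmapped = iommu_unmap(domain, iova, PAGE_SIZE);

            if (!unmapped)
                    break;

            /* A hugepage-backed mapping may return more than PAGE_SIZE */
            unpin_pfns(phys >> PAGE_SHIFT, unmapped >> PAGE_SHIFT);

            iova += unmapped;
            size -= unmapped;
    }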
With this change, we also make a clarification to the API for mapping
and unmapping DMA. We can only guarantee unmaps at the same
granularity as used for the original mapping. In other words,
unmapping a subregion of a previous mapping is not guaranteed and may
result in a larger or smaller unmapping than requested. The size
field in the unmapping structure is updated to reflect this.
Previously this was unmodified on unmap, always returning the
requested unmap size. This is now updated to return the actual unmap
size on success, allowing userspace to appropriately track mappings.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We need to keep track of all the DMA mappings of an iommu container so
that it can be automatically unmapped when the user releases the file
descriptor. We currently do this using a simple list, where we merge
entries with contiguous iovas and virtual addresses. Using a tree for
this is a bit more efficient and allows us to use common code instead
of inventing our own.
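With each vfio_dma carrying an rb_node plus its iova and size, the
lookup becomes a standard rbtree walk keyed on the iova range, roughly:

    static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
                                          dma_addr_t start, size_t size)
    {
            struct rb_node *node = iommu->dma_list.rb_node;

            while (node) {
                    struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);

                    if (start + size <= dma->iova)
                            node = node->rb_left;
                    else if (start >= dma->iova + dma->size)
                            node = node->rb_right;
                    else
                            return dma;
            }

            return NULL;
    }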
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This enables VFIO on the pSeries platform, allowing user space
programs to access PCI devices directly.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
VFIO implements platform independent stuff such as
a PCI driver, BAR access (via read/write on a file descriptor
or direct mapping when possible) and IRQ signaling.
The platform dependent part includes IOMMU initialization
and handling. This implements an IOMMU driver for VFIO
which maps/unmaps pages for guest IO and
provides information about the DMA window (required by a POWER
guest).
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
devtmpfs_delete_node() calls the devnode() callback with mode==NULL, but
vfio still tries to write there.
This patch fixes that.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Merge tag 'vfio-for-v3.10' of git://github.com/awilliam/linux-vfio
Pull vfio updates from Alex Williamson:
"Changes include extension to support PCI AER notification to
userspace, byte granularity of PCI config space and access to
unarchitected PCI config space, better protection around IOMMU driver
accesses, default file mode fix, and a few misc cleanups."
* tag 'vfio-for-v3.10' of git://github.com/awilliam/linux-vfio:
vfio: Set container device mode
vfio: Use down_reads to protect iommu disconnects
vfio: Convert container->group_lock to rwsem
PCI/VFIO: use pcie_flags_reg instead of access PCI-E Capabilities Register
vfio-pci: Enable raw access to unassigned config space
vfio-pci: Use byte granularity in config map
vfio: make local function vfio_pci_intx_unmask_handler() static
VFIO-AER: Vfio-pci driver changes for supporting AER
VFIO: Wrapper for getting reference to vfio_device
Minor 0 is the VFIO container device (/dev/vfio/vfio). On its own
the container does not provide a user with any privileged access. It
only supports API version check and extension check ioctls. Only by
attaching a VFIO group to the container does it gain any access. Set
the mode of the container to allow access.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
If a group or device is released or a container is unset from a group
it can race against file ops on the container. Protect these with
down_reads to allow concurrent users.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
All current users are writers, maintaining current mutual exclusion.
This lets us add read users next.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We now cache the MSI/MSI-X capability offsets in the struct pci_dev,
so no need to find the capabilities again.
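The change is essentially of this form (the local variable name is
illustrative):

    -	msi_cap = pci_find_capability(pdev, PCI_CAP_ID_MSI);
    +	msi_cap = pdev->msi_cap;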
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
PCI_MSIX_FLAGS_BIRMASK is mis-named because the BIR mask is in the
Table Offset register, not the flags ("Message Control" per spec)
register.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Currently, we use pcie_flags_reg to cache the PCI-E Capabilities Register,
because the PCI-E Capabilities Register bits are almost all read-only. This
patch uses pcie_caps_reg() instead of another access to the PCI-E
Capabilities Register.
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Devices like be2net hide registers between the gaps in capabilities
and architected regions of PCI config space. Our choices to support
such devices is to either build an ever growing and unmanageable white
list or rely on hardware isolation to protect us. These registers are
really no different than MMIO or I/O port space registers, which we
don't attempt to regulate, so treat PCI config space in the same way.
Reported-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Tested-by: Gavin Shan <shangw@linux.vnet.ibm.com>
The config map previously used a byte per dword to map regions of
config space to capabilities. Modulo a bug where we round the length
of capabilities down instead of up, this theoretically works well and
saves space so long as devices don't try to hide registers in the gaps
between capabilities. Unfortunately they do exactly that so we need
byte granularity on our config space map. Increase the allocation of
the config map and split accesses at capability region boundaries.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Tested-by: Gavin Shan <shangw@linux.vnet.ibm.com>
The VFIO_DEVICE_SET_IRQS ioctl takes a start and count parameter, both
of which are unsigned. We attempt to bounds check these, but fail to
account for the case where start is a very large number, allowing
start + count to wrap back into the valid range. Bounds check both
start and start + count.
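Mirroring the description above, the check needs to validate both values,
e.g. (sketch, with max_irqs standing in for the per-index limit):

    if (hdr.start >= max_irqs || hdr.start + hdr.count > max_irqs)
            return -EINVAL;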
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
vfio_pci_intx_unmask_handler() was not declared. It should be static.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The vfio drivers call kmalloc or kzalloc, but do not
include <linux/slab.h>, which causes build errors on
ARM.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: kvm@vger.kernel.org
- New VFIO_SET_IRQ ioctl option to pass the eventfd that is signaled when
an error occurs in the vfio_pci_device
- Register pci_error_handler for the vfio_pci driver
- When the device encounters an error, the error handler registered by
the vfio_pci driver gets invoked by the AER infrastructure
- In the error handler, signal the eventfd registered for the device.
- This results in the qemu eventfd handler getting invoked and
appropriate action taken for the guest.
Signed-off-by: Vijay Mohan Pandarathil <vijaymohan.pandarathil@hp.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
- Added vfio_device_get_from_dev() as wrapper to get
reference to vfio_device from struct device.
- Added vfio_device_data() as a wrapper to get device_data from
vfio_device.
Signed-off-by: Vijay Mohan Pandarathil <vijaymohan.pandarathil@hp.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Convert to the much saner new idr interface.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The CONFIG_EXPERIMENTAL config item has not carried much meaning for a
while now and is almost always enabled by default. As agreed during the
Linux kernel summit, remove it from any "depends on" lines in Kconfigs.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
PCI defines display class VGA regions at I/O port addresses 0x3b0 and 0x3c0
and MMIO address 0xa0000. As these are non-overlapping, we can ignore
the I/O port vs MMIO difference and expose them both in a single
region. We make use of the VGA arbiter around each access to
configure chipset access as necessary.
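A sketch of wrapping an access with the VGA arbiter (the resource mask
shown is illustrative; the real code picks legacy I/O or memory per
access):

    ret = vga_get_interruptible(pdev, VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
    if (ret)
            return ret;

    /* ... perform the I/O port or MMIO access ... */

    vga_put(pdev, VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);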
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We give the user access to change the power state of the device but
certain transitions result in an uninitialized state which the user
cannot resolve. To fix this we need to mark the PowerState field of
the PMCSR register read-only and effect the requested change on behalf
of the user. This has the added benefit that pdev->current_state
remains accurate while controlled by the user.
The primary example of this bug is a QEMU guest doing a reboot where
the device is put into D3 on shutdown and becomes unusable on the next
boot because the device did a soft reset on D3->D0 (NoSoftRst-).
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
pcieport does nice things like manage AER and we know it doesn't do
DMA or expose any user accessible devices on the host. It also keeps
the Memory, I/O, and Busmaster bits enabled, which is pretty handy
when trying to use anything below it. Devices owned by pcieport cannot
be given to users via vfio, but we can tolerate them not being owned
by vfio-pci.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
vfio_dev_present is meant to give us a wait_event callback so that we
can block removing a device from vfio until it becomes unused. The
root of this check depends on being able to get the iommu group from
the device. Unfortunately if the BUS_NOTIFY_DEL_DEVICE notifier has
fired then the device-group reference is no longer searchable and we
fail the lookup.
We don't need to go to such extents for this though. We have a
reference to the device, from which we can acquire a reference to the
group. We can then use the group reference to search for the device
and properly block removal.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We can actually handle MMIO and I/O port from the same access function
since PCI already does abstraction of this. The ROM BAR only requires
a minor difference, so it gets included too. vfio_pci_config_readwrite
gets renamed for consistency.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The read and write functions are nearly identical, combine them
and convert to a switch statement. This also makes it easy to
narrow the scope of when we use the io/mem accessors in case new
regions are added.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
A read from a range hidden from the user (ex. MSI-X vector table)
attempts to fill the user buffer up to the end of the excluded range
instead of up to the requested count. Fix it.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: stable@vger.kernel.org
Devices making use of PM reset are getting incorrectly identified as
not supporting reset because pci_pm_reset() fails unless the device is
in the D0 power state. When first attached to vfio_pci, devices are
typically in an unknown power state. We can fix this by explicitly
setting the power state or simply calling pci_enable_device() before
attempting a pci_reset_function(). We need to enable the device
anyway, so move this up in our vfio_pci_enable() function, which also
simplifies the error path a bit.
Note that pci_disable_device() does not explicitly set the power
state, so there's no need to re-order vfio_pci_disable().
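After the reordering, vfio_pci_enable() begins roughly with:

    ret = pci_enable_device(pdev);
    if (ret)
            return ret;

    /* Device is now in D0, so pci_pm_reset() can be evaluated */
    vdev->reset_works = !pci_reset_function(pdev);
    pci_save_state(pdev);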
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The two labels for error recovery in function vfio_pci_init() are out of
order, so fix it.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Comments from dev_driver_string(),
/* dev->driver can change to NULL underneath us because of unbinding,
* so be careful about accessing it.
*/
So use ACCESS_ONCE() to guard access to the dev->driver field.
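A representative use (sketch; the driver-name comparison is only
illustrative):

    struct device_driver *drv = ACCESS_ONCE(dev->driver);

    /* drv is now a stable snapshot even if an unbind races with us */
    if (drv && !strcmp(drv->name, "vfio-pci"))
            /* ... safe to dereference drv here ... */;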
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
On error recovery path in function vfio_create_group(), it should
unregister the IOMMU notifier for the new VFIO group. Otherwise it may
cause invalid memory access later when handling bus notifications.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Move the device reset to the end of our disable path; the device
should already be stopped by pci_disable_device(). This also allows
us to manipulate the save/restore to avoid the save/reset/restore +
save/restore that we had before.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The virq_disabled flag tracks the userspace view of INTx masking
across interrupt mode changes, but we're not consistently applying
this to the interrupt and masking handler notion of the device.
Currently if the user sets DisINTx while in MSI or MSIX mode, then
returns to INTx mode (ex. rebooting a qemu guest), the hardware has
DisINTx+, but the management of INTx thinks it's enabled, making it
impossible to actually clear DisINTx. Fix this by updating the
handler state when INTx is re-enabled.
Cc: stable@vger.kernel.org
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We need to be ready to receive an interrupt as soon as we call
request_irq, so our eventfd context setting needs to be moved
earlier. Without this, an interrupt from our device or one
sharing the interrupt line can pass a NULL into eventfd_signal
and oops.
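In other words the setup order needs to be (sketch; handler and field
names are illustrative):

    /* Make the eventfd context visible to the handler first ... */
    vdev->ctx[0].trigger = eventfd_ctx_fdget(fd);
    if (IS_ERR(vdev->ctx[0].trigger))
            return PTR_ERR(vdev->ctx[0].trigger);

    /* ... and only then allow interrupts (possibly shared) to arrive */
    ret = request_irq(pdev->irq, vfio_intx_handler, IRQF_SHARED,
                      vdev->ctx[0].name, vdev);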
Cc: stable@vger.kernel.org
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Our mmap path mistakenly relied on vma->vm_pgoff to get set in
remap_pfn_range. After b3b9c293, that path only applies to
copy-on-write mappings. Set it in our own code.
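The fix amounts to computing the offset explicitly before calling
remap_pfn_range(); roughly (index, pgoff and req_len are values already
computed by the surrounding mmap handler):

    vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;

    return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                           req_len, vma->vm_page_prot);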
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The VM_RESERVED flag was killed off in commit 314e51b985 ("mm: kill
vma flag VM_RESERVED and mm->reserved_vm counter"), and replaced by the
proper semantic flags (eg "don't core-dump" etc). But there was a new
use of VM_RESERVED that got missed by the merge.
Fix the remaining use of VM_RESERVED in the vfio_pci driver, replacing
the VM_RESERVED flag with VM_DONTEXPAND | VM_DONTDUMP.
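The replacement is of this general form (the accompanying VM_IO bit is
shown only as plausible context):

    -	vma->vm_flags |= VM_IO | VM_RESERVED;
    +	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;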
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
vfio-pci supports a mechanism like KVM's irqfd for unmasking an
interrupt through an eventfd. There are two ways to shutdown this
interface: 1) close the eventfd, 2) ioctl (such as disabling the
interrupt). Both of these do the release through a workqueue,
which can result in a segfault if two jobs get queued for the same
virqfd.
Fix this by protecting the pointer to these virqfds with a spinlock.
The vfio pci device will therefore no longer have a reference to it
once the release job is queued under lock. On the ioctl side, we
still flush the workqueue to ensure that any outstanding releases
are completed.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
It's not critical (anymore) since another thread closing the file will block
on ->device_lock before it gets to dropping the final reference, but it's
definitely cleaner that way...
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
We really need to make sure that dropping the last reference happens
under the group->device_lock; otherwise a loop (under device_lock)
might find a vfio_device instance that is being freed right now, one
that has already dropped the last reference and is waiting on
device_lock to exclude the sucker from the list.
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Add PCI device support for VFIO. PCI devices expose regions
for accessing config space, I/O port space, and MMIO areas
of the device. PCI config access is virtualized in the kernel,
allowing us to ensure the integrity of the system, by preventing
various accesses while reducing duplicate support across various
userspace drivers. I/O port supports read/write access while
MMIO also supports mmap of sufficiently sized regions. Support
for INTx, MSI, and MSI-X interrupts is provided using eventfds to
userspace.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This VFIO IOMMU backend is designed primarily for AMD-Vi and Intel
VT-d hardware, but is potentially usable by anything supporting
similar mapping functionality. We arbitrarily call this a Type1
backend for lack of a better name. This backend has no IOVA
or host memory mapping restrictions for the user and is optimized
for relatively static mappings. Mapped areas are pinned into system
memory.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
VFIO is a secure user level driver for use with both virtual machines
and user level drivers. VFIO makes use of IOMMU groups to ensure the
isolation of devices in use, allowing unprivileged user access. It's
intended that VFIO will replace KVM device assignment and UIO drivers
(in cases where the target platform includes a sufficiently capable
IOMMU).
New in this version of VFIO is support for IOMMU groups managed
through the IOMMU core as well as a rework of the API, removing the
group merge interface. We now go back to a model more similar to
original VFIO with UIOMMU support where the file descriptor obtained
from /dev/vfio/vfio allows access to the IOMMU, but only after a
group is added, avoiding the previous privilege issues with this type
of model. IOMMU support is also now fully modular as IOMMUs have
vastly different interface requirements on different platforms. VFIO
users are able to query and initialize the IOMMU model of their
choice.
Please see the follow-on Documentation commit for further description
and usage example.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>