/*
 * Copyright (c) 2009, Microsoft Corporation.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
 * Place - Suite 330, Boston, MA 02111-1307 USA.
 *
 * Authors:
 *   Haiyang Zhang <haiyangz@microsoft.com>
 *   Hank Janssen  <hjanssen@microsoft.com>
 */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/hyperv.h>
#include <linux/uio.h>
#include <linux/interrupt.h>

#include "hyperv_vmbus.h"

#define NUM_PAGES_SPANNED(addr, len) \
	((PAGE_ALIGN(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT))

/*
 * vmbus_setevent - Trigger an event notification on the specified
 * channel.
 */
static void vmbus_setevent(struct vmbus_channel *channel)
{
	struct hv_monitor_page *monitorpage;

	/*
	 * For channels marked as in "low latency" mode
	 * bypass the monitor page mechanism.
	 */
	if ((channel->offermsg.monitor_allocated) &&
	    (!channel->low_latency)) {
		/* Each u32 represents 32 channels */
		sync_set_bit(channel->offermsg.child_relid & 31,
			(unsigned long *)vmbus_connection.send_int_page +
			(channel->offermsg.child_relid >> 5));

		/* Get the child to parent monitor page */
		monitorpage = vmbus_connection.monitor_pages[1];

		sync_set_bit(channel->monitor_bit,
			(unsigned long *)&monitorpage->trigger_group
					[channel->monitor_grp].pending);
	} else {
		vmbus_set_event(channel);
	}
}

/*
 * vmbus_open - Open the specified channel.
 */
int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
	       u32 recv_ringbuffer_size, void *userdata, u32 userdatalen,
	       void (*onchannelcallback)(void *context), void *context)
{
	struct vmbus_channel_open_channel *open_msg;
	struct vmbus_channel_msginfo *open_info = NULL;
	void *in, *out;
	unsigned long flags;
	int ret, err = 0;
	struct page *page;

	spin_lock_irqsave(&newchannel->lock, flags);
	if (newchannel->state == CHANNEL_OPEN_STATE) {
		newchannel->state = CHANNEL_OPENING_STATE;
	} else {
		spin_unlock_irqrestore(&newchannel->lock, flags);
		return -EINVAL;
	}
	spin_unlock_irqrestore(&newchannel->lock, flags);

	newchannel->onchannel_callback = onchannelcallback;
	newchannel->channel_callback_context = context;

	/* Allocate the ring buffer */
	page = alloc_pages_node(cpu_to_node(newchannel->target_cpu),
				GFP_KERNEL|__GFP_ZERO,
				get_order(send_ringbuffer_size +
					  recv_ringbuffer_size));

	if (!page)
		out = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
					       get_order(send_ringbuffer_size +
							 recv_ringbuffer_size));
	else
		out = (void *)page_address(page);

	if (!out) {
		err = -ENOMEM;
		goto error0;
	}

	in = (void *)((unsigned long)out + send_ringbuffer_size);

	newchannel->ringbuffer_pages = out;
	newchannel->ringbuffer_pagecount = (send_ringbuffer_size +
					    recv_ringbuffer_size) >> PAGE_SHIFT;

	ret = hv_ringbuffer_init(
		&newchannel->outbound, out, send_ringbuffer_size);

	if (ret != 0) {
		err = ret;
		goto error0;
	}

	ret = hv_ringbuffer_init(
		&newchannel->inbound, in, recv_ringbuffer_size);
	if (ret != 0) {
		err = ret;
		goto error0;
	}

	/* Establish the gpadl for the ring buffer */
	newchannel->ringbuffer_gpadlhandle = 0;

	ret = vmbus_establish_gpadl(newchannel,
				    newchannel->outbound.ring_buffer,
				    send_ringbuffer_size +
				    recv_ringbuffer_size,
				    &newchannel->ringbuffer_gpadlhandle);

	if (ret != 0) {
		err = ret;
		goto error0;
	}

	/* Create and init the channel open message */
	open_info = kmalloc(sizeof(*open_info) +
			    sizeof(struct vmbus_channel_open_channel),
			    GFP_KERNEL);
	if (!open_info) {
		err = -ENOMEM;
		goto error_gpadl;
	}

	init_completion(&open_info->waitevent);

	open_msg = (struct vmbus_channel_open_channel *)open_info->msg;
	open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL;
	open_msg->openid = newchannel->offermsg.child_relid;
	open_msg->child_relid = newchannel->offermsg.child_relid;
	open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle;
	open_msg->downstream_ringbuffer_pageoffset = send_ringbuffer_size >>
						     PAGE_SHIFT;
	open_msg->target_vp = newchannel->target_vp;

	if (userdatalen > MAX_USER_DEFINED_BYTES) {
		err = -EINVAL;
		goto error_gpadl;
	}

	if (userdatalen)
		memcpy(open_msg->userdata, userdata, userdatalen);

	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_add_tail(&open_info->msglistentry,
		      &vmbus_connection.chn_msg_list);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	ret = vmbus_post_msg(open_msg,
			     sizeof(struct vmbus_channel_open_channel));

	if (ret != 0) {
		err = ret;
		goto error1;
	}

	wait_for_completion(&open_info->waitevent);

	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_del(&open_info->msglistentry);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	if (open_info->response.open_result.status) {
		err = -EAGAIN;
		goto error_gpadl;
	}

	newchannel->state = CHANNEL_OPENED_STATE;
	kfree(open_info);
	return 0;

error1:
	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_del(&open_info->msglistentry);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

error_gpadl:
	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);

error0:
	free_pages((unsigned long)out,
		   get_order(send_ringbuffer_size + recv_ringbuffer_size));
	kfree(open_info);
	newchannel->state = CHANNEL_OPEN_STATE;
	return err;
}
EXPORT_SYMBOL_GPL(vmbus_open);

/* Used for Hyper-V Socket: a guest client's connect() to the host */
int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
				  const uuid_le *shv_host_servie_id)
{
	struct vmbus_channel_tl_connect_request conn_msg;

	memset(&conn_msg, 0, sizeof(conn_msg));
	conn_msg.header.msgtype = CHANNELMSG_TL_CONNECT_REQUEST;
	conn_msg.guest_endpoint_id = *shv_guest_servie_id;
	conn_msg.host_service_id = *shv_host_servie_id;

	return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
}
EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);

/*
 * create_gpadl_header - Creates a gpadl for the specified buffer
 */
static int create_gpadl_header(void *kbuffer, u32 size,
			       struct vmbus_channel_msginfo **msginfo)
{
	int i;
	int pagecount;
	struct vmbus_channel_gpadl_header *gpadl_header;
	struct vmbus_channel_gpadl_body *gpadl_body;
	struct vmbus_channel_msginfo *msgheader;
	struct vmbus_channel_msginfo *msgbody = NULL;
	u32 msgsize;

	int pfnsum, pfncount, pfnleft, pfncurr, pfnsize;

	pagecount = size >> PAGE_SHIFT;

	/* do we need a gpadl body msg */
	pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
		  sizeof(struct vmbus_channel_gpadl_header) -
		  sizeof(struct gpa_range);
	pfncount = pfnsize / sizeof(u64);

	if (pagecount > pfncount) {
		/* we need a gpadl body */
		/* fill in the header */
		msgsize = sizeof(struct vmbus_channel_msginfo) +
			  sizeof(struct vmbus_channel_gpadl_header) +
			  sizeof(struct gpa_range) + pfncount * sizeof(u64);
		msgheader = kzalloc(msgsize, GFP_KERNEL);
		if (!msgheader)
			goto nomem;

		INIT_LIST_HEAD(&msgheader->submsglist);
		msgheader->msgsize = msgsize;

		gpadl_header = (struct vmbus_channel_gpadl_header *)
			msgheader->msg;
		gpadl_header->rangecount = 1;
		gpadl_header->range_buflen = sizeof(struct gpa_range) +
					     pagecount * sizeof(u64);
		gpadl_header->range[0].byte_offset = 0;
		gpadl_header->range[0].byte_count = size;
		for (i = 0; i < pfncount; i++)
			gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys(
				kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT;
		*msginfo = msgheader;

		pfnsum = pfncount;
		pfnleft = pagecount - pfncount;

		/* how many pfns can we fit */
		pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
			  sizeof(struct vmbus_channel_gpadl_body);
		pfncount = pfnsize / sizeof(u64);

		/* fill in the body */
		while (pfnleft) {
			if (pfnleft > pfncount)
				pfncurr = pfncount;
			else
				pfncurr = pfnleft;

			msgsize = sizeof(struct vmbus_channel_msginfo) +
				  sizeof(struct vmbus_channel_gpadl_body) +
				  pfncurr * sizeof(u64);
			msgbody = kzalloc(msgsize, GFP_KERNEL);

			if (!msgbody) {
				struct vmbus_channel_msginfo *pos = NULL;
				struct vmbus_channel_msginfo *tmp = NULL;
				/*
				 * Free up all the allocated messages.
				 */
				list_for_each_entry_safe(pos, tmp,
					&msgheader->submsglist,
					msglistentry) {

					list_del(&pos->msglistentry);
					kfree(pos);
				}

				goto nomem;
			}

			msgbody->msgsize = msgsize;
			gpadl_body =
				(struct vmbus_channel_gpadl_body *)msgbody->msg;

			/*
			 * Gpadl is u32 and we are using a pointer which could
			 * be 64-bit
			 * This is governed by the guest/host protocol and
			 * so the hypervisor guarantees that this is ok.
			 */
			for (i = 0; i < pfncurr; i++)
				gpadl_body->pfn[i] = slow_virt_to_phys(
					kbuffer + PAGE_SIZE * (pfnsum + i)) >>
					PAGE_SHIFT;

			/* add to msg header */
			list_add_tail(&msgbody->msglistentry,
				      &msgheader->submsglist);
			pfnsum += pfncurr;
			pfnleft -= pfncurr;
		}
	} else {
		/* everything fits in a header */
		msgsize = sizeof(struct vmbus_channel_msginfo) +
			  sizeof(struct vmbus_channel_gpadl_header) +
			  sizeof(struct gpa_range) + pagecount * sizeof(u64);
		msgheader = kzalloc(msgsize, GFP_KERNEL);
		if (msgheader == NULL)
			goto nomem;

		INIT_LIST_HEAD(&msgheader->submsglist);
		msgheader->msgsize = msgsize;

		gpadl_header = (struct vmbus_channel_gpadl_header *)
			msgheader->msg;
		gpadl_header->rangecount = 1;
		gpadl_header->range_buflen = sizeof(struct gpa_range) +
					     pagecount * sizeof(u64);
		gpadl_header->range[0].byte_offset = 0;
		gpadl_header->range[0].byte_count = size;
		for (i = 0; i < pagecount; i++)
			gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys(
				kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT;

		*msginfo = msgheader;
	}

	return 0;
nomem:
	kfree(msgheader);
	kfree(msgbody);
	return -ENOMEM;
}

/*
 * vmbus_establish_gpadl - Establish a GPADL for the specified buffer
 *
 * @channel: a channel
 * @kbuffer: from kmalloc or vmalloc
 * @size: page-size multiple
 * @gpadl_handle: opaque handle returned for the established GPADL
 */
int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
			  u32 size, u32 *gpadl_handle)
{
	struct vmbus_channel_gpadl_header *gpadlmsg;
	struct vmbus_channel_gpadl_body *gpadl_body;
	struct vmbus_channel_msginfo *msginfo = NULL;
	struct vmbus_channel_msginfo *submsginfo, *tmp;
	struct list_head *curr;
	u32 next_gpadl_handle;
	unsigned long flags;
	int ret = 0;

	next_gpadl_handle =
		(atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1);

	ret = create_gpadl_header(kbuffer, size, &msginfo);
	if (ret)
		return ret;

	init_completion(&msginfo->waitevent);

	gpadlmsg = (struct vmbus_channel_gpadl_header *)msginfo->msg;
	gpadlmsg->header.msgtype = CHANNELMSG_GPADL_HEADER;
	gpadlmsg->child_relid = channel->offermsg.child_relid;
	gpadlmsg->gpadl = next_gpadl_handle;

	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_add_tail(&msginfo->msglistentry,
		      &vmbus_connection.chn_msg_list);

	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize -
			     sizeof(*msginfo));
	if (ret != 0)
		goto cleanup;

	list_for_each(curr, &msginfo->submsglist) {
		submsginfo = (struct vmbus_channel_msginfo *)curr;
		gpadl_body =
			(struct vmbus_channel_gpadl_body *)submsginfo->msg;

		gpadl_body->header.msgtype =
			CHANNELMSG_GPADL_BODY;
		gpadl_body->gpadl = next_gpadl_handle;

		ret = vmbus_post_msg(gpadl_body,
				     submsginfo->msgsize -
				     sizeof(*submsginfo));
		if (ret != 0)
			goto cleanup;

	}
	wait_for_completion(&msginfo->waitevent);

	/* At this point, we received the gpadl created msg */
	*gpadl_handle = gpadlmsg->gpadl;

cleanup:
	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_del(&msginfo->msglistentry);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
	list_for_each_entry_safe(submsginfo, tmp, &msginfo->submsglist,
				 msglistentry) {
		kfree(submsginfo);
	}

	kfree(msginfo);
	return ret;
}
EXPORT_SYMBOL_GPL(vmbus_establish_gpadl);

/*
 * vmbus_teardown_gpadl - Teardown the specified GPADL handle
 */
int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
{
	struct vmbus_channel_gpadl_teardown *msg;
	struct vmbus_channel_msginfo *info;
	unsigned long flags;
	int ret;

	info = kmalloc(sizeof(*info) +
		       sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
	if (!info)
		return -ENOMEM;

	init_completion(&info->waitevent);

	msg = (struct vmbus_channel_gpadl_teardown *)info->msg;

	msg->header.msgtype = CHANNELMSG_GPADL_TEARDOWN;
	msg->child_relid = channel->offermsg.child_relid;
	msg->gpadl = gpadl_handle;

	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_add_tail(&info->msglistentry,
		      &vmbus_connection.chn_msg_list);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
	ret = vmbus_post_msg(msg,
			     sizeof(struct vmbus_channel_gpadl_teardown));

	if (ret)
		goto post_msg_err;

	wait_for_completion(&info->waitevent);

post_msg_err:
	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
	list_del(&info->msglistentry);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	kfree(info);
	return ret;
}
EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl);

static void reset_channel_cb(void *arg)
{
	struct vmbus_channel *channel = arg;

	channel->onchannel_callback = NULL;
}

static int vmbus_close_internal(struct vmbus_channel *channel)
{
	struct vmbus_channel_close_channel *msg;
	int ret;

	/*
	 * process_chn_event(), running in the tasklet, can race
	 * with vmbus_close_internal() in the case of SMP guest, e.g., when
	 * the former is accessing channel->inbound.ring_buffer, the latter
	 * could be freeing the ring_buffer pages.
	 *
	 * To resolve the race, we can serialize them by disabling the
	 * tasklet when the latter is running here.
	 */
	hv_event_tasklet_disable(channel);

	/*
	 * In case a device driver's probe() fails (e.g.,
	 * util_probe() -> vmbus_open() returns -ENOMEM) and the device is
	 * rescinded later (e.g., we dynamically disable an Integrated Service
	 * in Hyper-V Manager), the driver's remove() invokes vmbus_close():
	 * here we should skip most of the below cleanup work.
	 */
	if (channel->state != CHANNEL_OPENED_STATE) {
		ret = -EINVAL;
		goto out;
	}

	channel->state = CHANNEL_OPEN_STATE;
	channel->sc_creation_callback = NULL;
	/* Stop callback and cancel the timer asap */
	if (channel->target_cpu != get_cpu()) {
		put_cpu();
		smp_call_function_single(channel->target_cpu, reset_channel_cb,
					 channel, true);
	} else {
		reset_channel_cb(channel);
		put_cpu();
	}

	/* Send a closing message */

	msg = &channel->close_msg.msg;

	msg->header.msgtype = CHANNELMSG_CLOSECHANNEL;
	msg->child_relid = channel->offermsg.child_relid;

	ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel));

	if (ret) {
		pr_err("Close failed: close post msg return is %d\n", ret);
		/*
		 * If we failed to post the close msg,
		 * it is perhaps better to leak memory.
		 */
		goto out;
	}

	/* Tear down the gpadl for the channel's ring buffer */
	if (channel->ringbuffer_gpadlhandle) {
		ret = vmbus_teardown_gpadl(channel,
					   channel->ringbuffer_gpadlhandle);
		if (ret) {
			pr_err("Close failed: teardown gpadl return %d\n", ret);
			/*
			 * If we failed to teardown gpadl,
			 * it is perhaps better to leak memory.
			 */
			goto out;
		}
	}

	/* Cleanup the ring buffers for this channel */
	hv_ringbuffer_cleanup(&channel->outbound);
	hv_ringbuffer_cleanup(&channel->inbound);

	free_pages((unsigned long)channel->ringbuffer_pages,
		   get_order(channel->ringbuffer_pagecount * PAGE_SIZE));

out:
	hv_event_tasklet_enable(channel);

	return ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
}

/*
 * vmbus_close - Close the specified channel
 */
void vmbus_close(struct vmbus_channel *channel)
{
	struct list_head *cur, *tmp;
	struct vmbus_channel *cur_channel;

	if (channel->primary_channel != NULL) {
		/*
		 * We will only close sub-channels when
		 * the primary is closed.
		 */
		return;
	}
	/*
	 * Close all the sub-channels first and then close the
	 * primary channel.
	 */
	list_for_each_safe(cur, tmp, &channel->sc_list) {
		cur_channel = list_entry(cur, struct vmbus_channel, sc_list);
		if (cur_channel->state != CHANNEL_OPENED_STATE)
			continue;
		vmbus_close_internal(cur_channel);
	}
	/*
	 * Now close the primary.
	 */
	vmbus_close_internal(channel);
}
EXPORT_SYMBOL_GPL(vmbus_close);

int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
			 u32 bufferlen, u64 requestid,
			 enum vmbus_packet_type type, u32 flags, bool kick_q)
{
	struct vmpacket_descriptor desc;
	u32 packetlen = sizeof(struct vmpacket_descriptor) + bufferlen;
	u32 packetlen_aligned = ALIGN(packetlen, sizeof(u64));
	struct kvec bufferlist[3];
	u64 aligned_data = 0;
	int ret;
	bool signal = false;
	bool lock = channel->acquire_ring_lock;
	int num_vecs = ((bufferlen != 0) ? 3 : 1);


	/* Setup the descriptor */
	desc.type = type; /* VmbusPacketTypeDataInBand; */
	desc.flags = flags; /* VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED; */
	/* in 8-bytes granularity */
	desc.offset8 = sizeof(struct vmpacket_descriptor) >> 3;
	desc.len8 = (u16)(packetlen_aligned >> 3);
	desc.trans_id = requestid;

	bufferlist[0].iov_base = &desc;
	bufferlist[0].iov_len = sizeof(struct vmpacket_descriptor);
	bufferlist[1].iov_base = buffer;
	bufferlist[1].iov_len = bufferlen;
	bufferlist[2].iov_base = &aligned_data;
	bufferlist[2].iov_len = (packetlen_aligned - packetlen);

	ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs,
				  &signal, lock, channel->signal_policy);

	/*
	 * Signalling the host is conditional on many factors:
	 * 1. The ring state changed from being empty to non-empty.
	 *    This is tracked by the variable "signal".
	 * 2. The variable kick_q tracks if more data will be placed
	 *    on the ring. We will not signal if more data is
	 *    to be placed.
	 *
	 * Based on the channel signal state, we will decide
	 * which signaling policy will be applied.
	 *
	 * If we cannot write to the ring-buffer, signal the host
	 * even if we may not have written anything. This is a rare
	 * enough condition that it should not matter.
	 * NOTE: in this case, the hvsock channel is an exception, because
	 * it looks like the host side's hvsock implementation has a
	 * throttling mechanism which can hurt the performance otherwise.
	 */

	if (((ret == 0) && kick_q && signal) ||
	    (ret && !is_hvsock_channel(channel)))
		vmbus_setevent(channel);

	return ret;
}
EXPORT_SYMBOL(vmbus_sendpacket_ctl);

/**
 * vmbus_sendpacket() - Send the specified buffer on the given channel
 * @channel: Pointer to vmbus_channel structure.
 * @buffer: Pointer to the buffer containing the data to send.
 * @bufferlen: Length of the data in the buffer.
 * @requestid: Identifier of the request
 * @type: Type of packet that is being sent e.g. negotiate, time
 *	  packet etc.
 *
 * Sends data in @buffer directly to hyper-v via the vmbus
 * This will send the data unparsed to hyper-v.
 *
 * Mainly used by Hyper-V drivers.
 */
int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
		     u32 bufferlen, u64 requestid,
		     enum vmbus_packet_type type, u32 flags)
{
	return vmbus_sendpacket_ctl(channel, buffer, bufferlen, requestid,
				    type, flags, true);
}
EXPORT_SYMBOL(vmbus_sendpacket);

/*
 * vmbus_sendpacket_pagebuffer_ctl - Send a range of single-page buffer
 * packets using a GPADL Direct packet type. This interface allows you
 * to control notifying the host. This will be useful for sending
 * batched data. Also the sender can control the send flags
 * explicitly.
 */
int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
				    struct hv_page_buffer pagebuffers[],
				    u32 pagecount, void *buffer, u32 bufferlen,
				    u64 requestid,
				    u32 flags,
				    bool kick_q)
{
	int ret;
	int i;
	struct vmbus_channel_packet_page_buffer desc;
	u32 descsize;
	u32 packetlen;
	u32 packetlen_aligned;
	struct kvec bufferlist[3];
	u64 aligned_data = 0;
	bool signal = false;
	bool lock = channel->acquire_ring_lock;

	if (pagecount > MAX_PAGE_BUFFER_COUNT)
		return -EINVAL;


	/*
	 * Adjust the size down since vmbus_channel_packet_page_buffer is the
	 * largest size we support
	 */
	descsize = sizeof(struct vmbus_channel_packet_page_buffer) -
			  ((MAX_PAGE_BUFFER_COUNT - pagecount) *
			  sizeof(struct hv_page_buffer));
	packetlen = descsize + bufferlen;
	packetlen_aligned = ALIGN(packetlen, sizeof(u64));

	/* Setup the descriptor */
	desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
	desc.flags = flags;
	desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */
	desc.length8 = (u16)(packetlen_aligned >> 3);
	desc.transactionid = requestid;
	desc.rangecount = pagecount;

	for (i = 0; i < pagecount; i++) {
		desc.range[i].len = pagebuffers[i].len;
		desc.range[i].offset = pagebuffers[i].offset;
		desc.range[i].pfn = pagebuffers[i].pfn;
	}

	bufferlist[0].iov_base = &desc;
	bufferlist[0].iov_len = descsize;
	bufferlist[1].iov_base = buffer;
	bufferlist[1].iov_len = bufferlen;
	bufferlist[2].iov_base = &aligned_data;
	bufferlist[2].iov_len = (packetlen_aligned - packetlen);

	ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
				  &signal, lock, channel->signal_policy);

	/*
	 * Signalling the host is conditional on many factors:
	 * 1. The ring state changed from being empty to non-empty.
	 *    This is tracked by the variable "signal".
	 * 2. The variable kick_q tracks if more data will be placed
	 *    on the ring. We will not signal if more data is
	 *    to be placed.
	 *
	 * Based on the channel signal state, we will decide
	 * which signaling policy will be applied.
	 *
	 * If we cannot write to the ring-buffer, signal the host
	 * even if we may not have written anything. This is a rare
	 * enough condition that it should not matter.
	 */

	if (((ret == 0) && kick_q && signal) || (ret))
		vmbus_setevent(channel);

	return ret;
}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer_ctl);

/*
 * vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
 * packets using a GPADL Direct packet type.
 */
int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
				struct hv_page_buffer pagebuffers[],
				u32 pagecount, void *buffer, u32 bufferlen,
				u64 requestid)
{
	u32 flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
	return vmbus_sendpacket_pagebuffer_ctl(channel, pagebuffers, pagecount,
					       buffer, bufferlen, requestid,
					       flags, true);

}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);

/*
 * vmbus_sendpacket_mpb_desc - Send a multi-page buffer packet
 * using a GPADL Direct packet type.
 * The buffer includes the vmbus descriptor.
 */
int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
			      struct vmbus_packet_mpb_array *desc,
			      u32 desc_size,
			      void *buffer, u32 bufferlen, u64 requestid)
{
	int ret;
	u32 packetlen;
	u32 packetlen_aligned;
	struct kvec bufferlist[3];
	u64 aligned_data = 0;
	bool signal = false;
	bool lock = channel->acquire_ring_lock;

	packetlen = desc_size + bufferlen;
	packetlen_aligned = ALIGN(packetlen, sizeof(u64));

	/* Setup the descriptor */
	desc->type = VM_PKT_DATA_USING_GPA_DIRECT;
	desc->flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
	desc->dataoffset8 = desc_size >> 3; /* in 8-bytes granularity */
	desc->length8 = (u16)(packetlen_aligned >> 3);
	desc->transactionid = requestid;
	desc->rangecount = 1;

	bufferlist[0].iov_base = desc;
	bufferlist[0].iov_len = desc_size;
	bufferlist[1].iov_base = buffer;
	bufferlist[1].iov_len = bufferlen;
	bufferlist[2].iov_base = &aligned_data;
	bufferlist[2].iov_len = (packetlen_aligned - packetlen);

	ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
				  &signal, lock, channel->signal_policy);

	if (ret == 0 && signal)
		vmbus_setevent(channel);

	return ret;
}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_mpb_desc);

/*
 * vmbus_sendpacket_multipagebuffer - Send a multi-page buffer packet
 * using a GPADL Direct packet type.
 */
int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
				     struct hv_multipage_buffer *multi_pagebuffer,
				     void *buffer, u32 bufferlen, u64 requestid)
{
	int ret;
	struct vmbus_channel_packet_multipage_buffer desc;
	u32 descsize;
	u32 packetlen;
	u32 packetlen_aligned;
	struct kvec bufferlist[3];
	u64 aligned_data = 0;
	bool signal = false;
	bool lock = channel->acquire_ring_lock;
	u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset,
					 multi_pagebuffer->len);

	if (pfncount > MAX_MULTIPAGE_BUFFER_COUNT)
		return -EINVAL;

	/*
	 * Adjust the size down since vmbus_channel_packet_multipage_buffer is
	 * the largest size we support
	 */
	descsize = sizeof(struct vmbus_channel_packet_multipage_buffer) -
			  ((MAX_MULTIPAGE_BUFFER_COUNT - pfncount) *
			  sizeof(u64));
	packetlen = descsize + bufferlen;
	packetlen_aligned = ALIGN(packetlen, sizeof(u64));


	/* Setup the descriptor */
	desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
	desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
	desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */
	desc.length8 = (u16)(packetlen_aligned >> 3);
	desc.transactionid = requestid;
	desc.rangecount = 1;

	desc.range.len = multi_pagebuffer->len;
	desc.range.offset = multi_pagebuffer->offset;

	memcpy(desc.range.pfn_array, multi_pagebuffer->pfn_array,
	       pfncount * sizeof(u64));

	bufferlist[0].iov_base = &desc;
	bufferlist[0].iov_len = descsize;
	bufferlist[1].iov_base = buffer;
	bufferlist[1].iov_len = bufferlen;
	bufferlist[2].iov_base = &aligned_data;
	bufferlist[2].iov_len = (packetlen_aligned - packetlen);

	ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
				  &signal, lock, channel->signal_policy);

	if (ret == 0 && signal)
		vmbus_setevent(channel);

	return ret;
}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_multipagebuffer);

/**
 * vmbus_recvpacket() - Retrieve the user packet on the specified channel
 * @channel: Pointer to vmbus_channel structure.
 * @buffer: Pointer to the buffer you want to receive the data into.
 * @bufferlen: Maximum size of what the buffer can hold
 * @buffer_actual_len: The actual size of the data after it was received
 * @requestid: Identifier of the request
 *
 * Receives directly from the hyper-v vmbus and puts the data it received
 * into Buffer. This will receive the data unparsed from hyper-v.
 *
 * Mainly used by Hyper-V drivers.
 */
static inline int
__vmbus_recvpacket(struct vmbus_channel *channel, void *buffer,
		   u32 bufferlen, u32 *buffer_actual_len, u64 *requestid,
		   bool raw)
{
	int ret;
	bool signal = false;

	ret = hv_ringbuffer_read(&channel->inbound, buffer, bufferlen,
				 buffer_actual_len, requestid, &signal, raw);

	if (signal)
		vmbus_setevent(channel);

	return ret;
}

int vmbus_recvpacket(struct vmbus_channel *channel, void *buffer,
		     u32 bufferlen, u32 *buffer_actual_len,
		     u64 *requestid)
{
	return __vmbus_recvpacket(channel, buffer, bufferlen,
				  buffer_actual_len, requestid, false);
}
EXPORT_SYMBOL(vmbus_recvpacket);

/*
 * vmbus_recvpacket_raw - Retrieve the raw packet on the specified channel
 */
int vmbus_recvpacket_raw(struct vmbus_channel *channel, void *buffer,
			 u32 bufferlen, u32 *buffer_actual_len,
			 u64 *requestid)
{
	return __vmbus_recvpacket(channel, buffer, bufferlen,
				  buffer_actual_len, requestid, true);
}
EXPORT_SYMBOL_GPL(vmbus_recvpacket_raw);