Drivers: hv: vmbus: Use channel_mutex in channel_vp_mapping_show()

The primitive currently uses channel->lock to protect the loop over
sc_list w.r.t. list additions/deletions, but it doesn't protect the
target_cpu loads w.r.t. a concurrent target_cpu_store(). While the
data races on target_cpu are hardly of any concern here, replace the
channel->lock critical section with a channel_mutex critical section
and extend the latter to include the loads of target_cpu; the same
pattern is already used in hv_synic_cleanup().

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Link: https://lore.kernel.org/r/20200617164642.37393-6-parri.andrea@gmail.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Author:    Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Date:      2020-06-17 18:46:39 +02:00
Committer: Wei Liu <wei.liu@kernel.org>
Commit:    3eb0ac869c (parent 12d0dd8e72)
1 changed file with 3 additions and 4 deletions

@@ -507,18 +507,17 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
 {
 	struct hv_device *hv_dev = device_to_hv_device(dev);
 	struct vmbus_channel *channel = hv_dev->channel, *cur_sc;
-	unsigned long flags;
 	int buf_size = PAGE_SIZE, n_written, tot_written;
 	struct list_head *cur;
 
 	if (!channel)
 		return -ENODEV;
 
+	mutex_lock(&vmbus_connection.channel_mutex);
+
 	tot_written = snprintf(buf, buf_size, "%u:%u\n",
 		channel->offermsg.child_relid, channel->target_cpu);
 
-	spin_lock_irqsave(&channel->lock, flags);
-
 	list_for_each(cur, &channel->sc_list) {
 		if (tot_written >= buf_size - 1)
 			break;
@@ -532,7 +531,7 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
 		tot_written += n_written;
 	}
 
-	spin_unlock_irqrestore(&channel->lock, flags);
+	mutex_unlock(&vmbus_connection.channel_mutex);
 
 	return tot_written;
 }
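
For illustration, below is a minimal userspace sketch of the resulting
locking pattern, not the driver itself: all names (struct channel,
vp_mapping_show(), target_cpu_store()) are hypothetical stand-ins for the
VMBus code, and a pthread mutex stands in for
vmbus_connection.channel_mutex. The point it demonstrates is that the
reader holds one mutex across both the target_cpu loads and the
sub-channel list walk, so a concurrent writer taking the same mutex can
race with neither.

/*
 * Minimal, compilable sketch of the pattern; build with
 * "cc sketch.c -lpthread". Hypothetical names throughout.
 */
#include <pthread.h>
#include <stdio.h>

struct channel {
	unsigned int relid;		/* stand-in for offermsg.child_relid */
	unsigned int target_cpu;	/* written by the store path below */
	struct channel *next;		/* stand-in for the sc_list walk */
};

static pthread_mutex_t channel_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Reader, analogous to channel_vp_mapping_show(): the mutex covers both
 * the target_cpu loads and the sub-channel list traversal.
 */
static void vp_mapping_show(struct channel *primary)
{
	struct channel *sc;

	pthread_mutex_lock(&channel_mutex);
	printf("%u:%u\n", primary->relid, primary->target_cpu);
	for (sc = primary->next; sc; sc = sc->next)
		printf("%u:%u\n", sc->relid, sc->target_cpu);
	pthread_mutex_unlock(&channel_mutex);
}

/*
 * Writer, analogous to target_cpu_store(): takes the same mutex, so it
 * cannot run concurrently with the loads above.
 */
static void target_cpu_store(struct channel *ch, unsigned int cpu)
{
	pthread_mutex_lock(&channel_mutex);
	ch->target_cpu = cpu;
	pthread_mutex_unlock(&channel_mutex);
}

int main(void)
{
	struct channel sub = { .relid = 2, .target_cpu = 1, .next = NULL };
	struct channel pri = { .relid = 1, .target_cpu = 0, .next = &sub };

	target_cpu_store(&sub, 3);
	vp_mapping_show(&pri);
	return 0;
}

Trading the spinlock for a mutex is viable in the kernel context as well:
a sysfs show method runs in process context and may sleep, and, as the
commit message notes, hv_synic_cleanup() already uses channel_mutex for
the same purpose.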