virtio_blk: revert QUEUE_FLAG_VIRT addition
It seems like the addition of QUEUE_FLAG_VIRT causes major performance regressions for Fedora users:

  https://bugzilla.redhat.com/show_bug.cgi?id=509383
  https://bugzilla.redhat.com/show_bug.cgi?id=505695

While I can't reproduce those extreme regressions myself, I think the flag is wrong.

Rationale: QUEUE_FLAG_VIRT expands to QUEUE_FLAG_NONROT, which causes the queue to be unplugged immediately. This is not good behaviour for at least qemu and kvm, where we do have significant overhead for every I/O operation. Even with all the latest speedups (native AIO, MSI support, zero copy) we only get native speed for I/O requests up to 128kb, and are already down to 66% of native performance for 4kb requests, even on my laptop running the Intel X25-M SSD for which QUEUE_FLAG_NONROT was designed.

If we ever get virtio-blk overhead low enough that this flag makes sense, it should only be set based on a feature flag set by the host.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
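For context, the expansion the rationale refers to is a plain alias in include/linux/blkdev.h of that era (exact comment text may differ slightly), so setting QUEUE_FLAG_VIRT is literally setting QUEUE_FLAG_NONROT:

	/* include/linux/blkdev.h: QUEUE_FLAG_VIRT is a straight alias, so
	 * paravirt block devices inherit all of the non-rotational
	 * heuristics, including the immediate queue unplug. */
	#define QUEUE_FLAG_VIRT	QUEUE_FLAG_NONROT	/* paravirt device */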
commit f8b12e513b
parent 2fdc246aaf
@@ -332,7 +332,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 	}
 
 	vblk->disk->queue->queuedata = vblk;
-	queue_flag_set_unlocked(QUEUE_FLAG_VIRT, vblk->disk->queue);
 
 	if (index < 26) {
 		sprintf(vblk->disk->disk_name, "vd%c", 'a' + index % 26);
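A minimal sketch of what the last paragraph of the commit message suggests, gating the flag on a host-advertised feature bit rather than setting it unconditionally in virtblk_probe(). VIRTIO_BLK_F_NONROT is an assumed, made-up feature bit (it was not part of the virtio-blk spec at the time of this commit); virtio_has_feature() and queue_flag_set_unlocked() are the existing kernel APIs:

	/* Hypothetical only: VIRTIO_BLK_F_NONROT is not a real feature bit,
	 * it stands in for whatever bit the host would use to advertise a
	 * non-rotational backing device. */
	#define VIRTIO_BLK_F_NONROT	11	/* assumed, unused bit at the time */

	vblk->disk->queue->queuedata = vblk;
	/* Only treat the queue as non-rotational if the host says so. */
	if (virtio_has_feature(vdev, VIRTIO_BLK_F_NONROT))
		queue_flag_set_unlocked(QUEUE_FLAG_VIRT, vblk->disk->queue);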