I am testing with iozone.
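The source does not give the exact iozone command used, but a typical invocation looks like the following (file size, record size, and path are assumptions; adjust to your setup). The command is printed here as a dry run; execute it inside the guest against the filesystem under test.

```shell
# -a        full automatic mode
# -s 512m   test file size (keep it larger than the guest's page cache for honest numbers)
# -r 4k     4 KiB record size, typical for VM workloads
# -i 0 -i 1 run only the write/rewrite and read/reread tests
# -f        test file on the filesystem being measured
iozone_cmd="iozone -a -s 512m -r 4k -i 0 -i 1 -f /mnt/test/iozone.tmp"
echo "$iozone_cmd"
```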
Here is a recommendation from serverfault.com/…o-performance
The optimal configuration is (usually) as follows:
- On the host, set elevator=deadline
- Use virtio and only virtio
- Use raw LVs whenever possible; qcow2 adds overhead, and image files on a filesystem add overhead as well
- In the VM, use elevator=noop (see blog.bodhizazen.net/…rformance)
- Both on the host and in the VM, use noatime,nodiratime in fstab wherever possible
- Make sure the virtio drivers are up to date, especially the windows ones.
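For the noatime,nodiratime point, an fstab entry could look like this (the device and mountpoint are placeholders, not from the source; the same options work on host and guest):

```
/dev/vg0/data  /srv/data  ext4  defaults,noatime,nodiratime  0  2
```

Note that on recent kernels noatime already implies nodiratime, so listing both is harmless but redundant.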
To change the scheduler on the fly (from ezunix.org):
- Check the current value (change vd to sd if the guest uses sda/sdb etc.):
for f in /sys/block/vd*/queue/scheduler; do cat $f; done
for f in /sys/block/sd*/queue/scheduler; do cat $f; done
- Set the value (noop for the guest's virtio disks, deadline for sd* disks):
for f in /sys/block/vd*/queue/scheduler; do echo "noop" > $f; done
for f in /sys/block/sd*/queue/scheduler; do echo "deadline" > $f; done
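Writing into sysfs only lasts until the next reboot. One way to persist the guest's noop scheduler is a udev rule (this approach is my addition, not from the source). The sketch below writes the rule into RULES_DIR, which defaults to the current directory so it can be dry-run safely; on a real guest set RULES_DIR=/etc/udev/rules.d and run as root.

```shell
# Persist the noop elevator for virtio disks via a udev rule.
# RULES_DIR defaults to "." for a safe dry run; use /etc/udev/rules.d for real.
RULES_DIR="${RULES_DIR:-.}"
cat > "$RULES_DIR/60-io-scheduler.rules" <<'EOF'
# Apply the noop elevator to every virtio block device as it appears
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="noop"
EOF
```

Alternatively, adding elevator=noop to the guest's kernel command line achieves the same thing for all disks at boot.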
This thread, "KVM causes high CPU load when cache='none'", suggests:

"OK, with the additional option io='native' in the disk section and the cfq IO scheduler on the host system, I get the best results for my system. The IO rate is nearly the same for every value of the io option in the guest XML and for every IO scheduler choice on host and guest; only cache='unsafe' gives significantly more performance. But only with the noop scheduler in the guest and the cfq scheduler on the host do I get the lowest CPU load."
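Put into libvirt domain-XML terms, a disk section combining the recommendations above (virtio, raw LV backing store, cache='none', io='native') might look like this; the LV path is a placeholder, not from the source:

```xml
<disk type='block' device='disk'>
  <!-- raw format, no host page cache, native Linux AIO -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- placeholder: point this at your actual logical volume -->
  <source dev='/dev/vg0/vm-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Swap cache='none' for cache='unsafe' only if you can tolerate data loss on host failure, as the quoted thread implies.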
RAID with LVM