KVM disk performance

I am testing with iozone.
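
For reference, a typical invocation looks something like the sketch below; the file path, sizes, and test selection are placeholders, so check the iozone man page for your version.

# Hypothetical run: write/rewrite and read/reread tests on a 1 GiB file
# with 4 KiB records; -I uses O_DIRECT to bypass the page cache, and
# -e includes fsync/fflush in the timings so caching doesn't flatter the results.
iozone -a -e -I -s 1g -r 4k -i 0 -i 1 -f /mnt/test/iozone.tmp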

Here is a recommendation from serverfault.com/…o-performance:

The optimal configuration is (usually) as follows (a configuration sketch follows the list):

  1. On the host, set elevator=deadline.
  2. Use virtio and only virtio.
  3. Use raw LVs whenever possible; qcow2 adds overhead, and image files on a filesystem add overhead as well.
  4. In the VM, use elevator=noop (see blog.bodhizazen.net/…rformance).
  5. On both host and VM, use noatime,nodiratime in fstab wherever possible.
  6. Make sure the virtio drivers are up to date, especially the Windows ones.
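
As a rough sketch of items 1, 3, and 5 (device, volume-group, and mount-point names are placeholders; note that the elevator= parameter applies to older non-multiqueue kernels, where the noop/deadline/cfq schedulers still exist):

# Host: select the deadline elevator at boot via GRUB
# (in /etc/default/grub, then run update-grub or grub2-mkconfig)
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# Host: give the guest a raw LV instead of a qcow2 file
lvcreate -L 20G -n guest1-disk vg0

# Host and guest: mount with noatime,nodiratime (example /etc/fstab line)
/dev/vg0/data  /srv/data  ext4  defaults,noatime,nodiratime  0 2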

To change the scheduler on the fly (from ezunix.org):

  • Check the current value (change vd to sd if the guest uses sda/sdb, etc.):
for f in /sys/block/vd*/queue/scheduler; do cat $f; done
  • or
for f in /sys/block/sd*/queue/scheduler; do cat $f; done
  • Change it temporarily (until reboot):
for f in /sys/block/vd*/queue/scheduler; do echo "noop" > $f; done
  • or
for f in /sys/block/sd*/queue/scheduler; do echo "deadline" > $f; done
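
To make the choice persist across reboots, one common approach is a udev rule; a minimal sketch, assuming virtio disks in the guest (the rule filename is arbitrary):

# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="noop"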

Alternative view

This thread, "KVM causes high CPU load when cache='none'", suggests:

OK, with the additional option io='native' in the disk section and the cfq I/O scheduler on the host system, I get the best results for my system. The I/O rate is nearly the same for all values of the io option in the guest XML and for any I/O scheduler on host and guest; only cache='unsafe' gives significantly more performance. But only with io='native', the noop scheduler in the guest, and the cfq scheduler on the host do I get the lowest CPU load.
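
For reference, io and cache are attributes of the <driver> element of the disk in the guest's libvirt XML; a sketch along those lines, with a placeholder LV path:

<disk type='block' device='disk'>
  <!-- cache='none' plus io='native' is the combination discussed above -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/guest1-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>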

RAID with LVM

Topic: