
While it is possible to connect KVM to an iSCSI target and use it as the backing disk for a virtual machine, it can be beneficial to boot directly from the iSCSI target instead. Doing so removes a couple of layers of indirection from disk reads and writes, which can significantly improve latency. To make this work, a helper is needed to jump from the BIOS into the remote kernel. This can be accomplished with a nice little piece of open-source software called iPXE.

Prerequisites

To begin with, install the following support software:

qemu-system-x86

qemu-utils (qemu-img on fedora)

ipxe-qemu (ipxe-roms-qemu on fedora)

targetcli

iproute2 (most Linux distros ship with this preinstalled)

a VNC client
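Before going further, it is worth confirming that everything actually landed on the PATH. The sketch below checks for the binaries those packages provide (the binary names are assumptions based on the package list above; adjust for your distro):

```shell
#!/bin/sh
# Report which of the required tools are available on PATH.
check_tools() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "ok: $tool"
        else
            echo "missing: $tool"
        fi
    done
}

check_tools qemu-system-x86_64 qemu-img targetcli ip
```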

Network Setup

Begin by creating an isolated network bridge for the vm:

sudo ip link add virbr0 type bridge

sudo ip link set up virbr0

Next, allow qemu to use the bridge:

echo "allow virbr0" | sudo tee -a /etc/qemu/bridge.conf

Finally, give the hypervisor an address on the bridge:

sudo ip addr add 192.168.10.1/24 dev virbr0
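Since these `ip` commands do not persist across reboots, it can be handy to collect them into a script. This sketch just prints the commands so they can be reviewed first and then piped to a root shell (the bridge name and address match the steps above):

```shell
#!/bin/sh
# Emit the commands that create the isolated bridge, so they can be
# reviewed and then piped to sh with root privileges.
bridge_setup_cmds() {
    bridge=$1 addr=$2
    echo "ip link add $bridge type bridge"
    echo "ip link set up $bridge"
    echo "ip addr add $addr dev $bridge"
}

# Review the output, then run it for real:
#   bridge_setup_cmds virbr0 192.168.10.1/24 | sudo sh
bridge_setup_cmds virbr0 192.168.10.1/24
```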

iSCSI Target Setup

You will need a simple Linux image for testing. CirrOS is a tiny, BusyBox-based distribution that is perfect for this; grab the 0.3.4 disk image (cirros-0.3.4-x86_64-disk.img) from the official mirror at download.cirros-cloud.net.

Next, convert it from qcow2 format to a raw disk image:

qemu-img convert -O raw cirros-0.3.4-x86_64-disk.img cirros.raw

Set up the converted image as a backstore:

sudo targetcli /backstores/fileio/ \
    create cirros $PWD/cirros.raw 100M false

Then make a new iSCSI LUN available on the hypervisor's IP:

sudo targetcli /iscsi create iqn.2016-01.com.example:cirros

sudo targetcli \
    /iscsi/iqn.2016-01.com.example:cirros/tpg1/luns \
    create /backstores/fileio/cirros

sudo targetcli \
    /iscsi/iqn.2016-01.com.example:cirros/tpg1/portals \
    create 192.168.10.1

Finally, set up ACLs and make sure the drive is writable:

sudo targetcli \
    /iscsi/iqn.2016-01.com.example:cirros/tpg1 \
    set attribute authentication=0 demo_mode_write_protect=0 \
    generate_node_acls=1 cache_dynamic_acls=1
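All of the targetcli steps above can be generated from a couple of variables, which makes it easy to rerun the whole setup with a different IQN, portal, or backing file. A sketch, again printing the commands for review before running them as root (the backstore name `cirros` is kept fixed, matching this post):

```shell
#!/bin/sh
# Print the targetcli commands that export a raw image as a
# demo-mode iSCSI LUN, parameterized on IQN, portal IP,
# backing file and size.
target_cmds() {
    iqn=$1 portal=$2 image=$3 size=$4
    echo "targetcli /backstores/fileio create cirros $image $size false"
    echo "targetcli /iscsi create $iqn"
    echo "targetcli /iscsi/$iqn/tpg1/luns create /backstores/fileio/cirros"
    echo "targetcli /iscsi/$iqn/tpg1/portals create $portal"
    echo "targetcli /iscsi/$iqn/tpg1 set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1"
}

# Review, then run as root:
#   target_cmds iqn.2016-01.com.example:cirros 192.168.10.1 $PWD/cirros.raw 100M | sudo sh
target_cmds iqn.2016-01.com.example:cirros 192.168.10.1 "$PWD/cirros.raw" 100M
```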

Booting

Now that the setup is complete, run qemu with some reasonable options:

sudo qemu-system-x86_64 `# funky name for kvm` \
    -smp cpus=2 `# the more, the better` \
    -display vnc=0.0.0.0:0 `# to access the display` \
    -boot order=n `# boot from the NIC` \
    -netdev bridge,br=virbr0,id=virtio0 `# use our bridge` \
    -device virtio-net-pci,netdev=virtio0 `# use a virtio-net device`

It is possible to configure iPXE to boot using DHCP options, but here we are going to configure the interface manually. This means we have to VNC into the guest and use the iPXE command line. You should be able to connect to port 5900 on the IP of your hypervisor machine with most VNC software; I use Chicken on my MacBook Pro. When iPXE starts, press Ctrl-B to drop to its command line, then enter:

ifopen net0

set net0/ip 192.168.10.10

set net0/netmask 255.255.255.0

sanboot iscsi:192.168.10.1::::iqn.2016-01.com.example:cirros

This gives the guest an IP address and tells it to boot from the iSCSI target you created earlier. You should see CirrOS booting in the VNC console. You have booted a VM directly from an iSCSI target. Congratulations! If you’d like to improve network performance, you can learn how to do the same with SR-IOV in Part II.
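For reference, the sanboot URI follows iPXE's `iscsi:<server>:<protocol>:<port>:<lun>:<iqn>` form; the empty middle fields fall back to iPXE's defaults (TCP, port 3260, LUN 0). A tiny helper (a hypothetical name, not part of iPXE) that assembles such a URI:

```shell
#!/bin/sh
# Build an iPXE iSCSI root path from a server address and an IQN,
# leaving the protocol, port and LUN fields empty so iPXE uses
# its defaults.
iscsi_uri() {
    server=$1 iqn=$2
    echo "iscsi:${server}::::${iqn}"
}

iscsi_uri 192.168.10.1 iqn.2016-01.com.example:cirros
# -> iscsi:192.168.10.1::::iqn.2016-01.com.example:cirros
```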