Redhat KVM Cheat sheet

Title:

Redhat KVM Cheat sheet

Author:

Douglas O’Leary <dkoleary@olearycomputers.com>

Description:

Examples of standard commands used in KVM manipulation

Date created:

04/19/2014

Date updated:

04/19/2014

Disclaimer:

Standard: Use the information that follows at your own risk. If you screw up a system, don’t blame it on me…



General

I’m getting tired of having to check Google or the man pages whenever I come back to KVM, and, while the GUI is actually usable, I have an issue with GUIs. Everything KVM related can be done through the command line.

Installation

The easiest way is to install the Virtualization groups via yum. I also tend to move the images directory so I’m not filling up /var. Short, easy steps:

  • Install the Virtualization groups:

    # yum grouplist | grep -i virt
      Virtualization
      Virtualization Client
      Virtualization Platform
      Virtualization Tools
    
    # yum grouplist | grep -i virt | while read line
      do
        yum -y groupinstall "${line}"
      done
    
  • Create another LV for images:

    # lvcreate -L 200g -n ignite vg00
    # mkfs.ext4 /dev/vg00/ignite
    # mkdir -p -m 755 /ignite
    # vi /etc/fstab    # add the following line:
      /dev/mapper/vg00-ignite /ignite  ext4    defaults  1 2
    # mount /ignite
    # mkdir -p -m 755 /ignite/images
    # chcon --reference /var/lib/libvirt/images /ignite/images
    # rmdir /var/lib/libvirt/images
    # ln -s /ignite/images /var/lib/libvirt/images
    
  • Verify the virtualization services are enabled and started:

    # chkconfig --list | grep -i virt
    libvirt-guests  0:off   1:off   2:on    3:on    4:on    5:on    6:off
    libvirtd        0:off   1:off   2:off   3:on    4:on    5:on    6:off
    # service libvirtd start
    Starting libvirtd daemon:                                  [  OK  ]
    # service libvirt-guests start
    
  • Starting libvirtd should automatically update your firewall rules; however, if you have problems connecting, check that the ports are open. The listing below is a diff of iptables -L -n output captured before and after starting libvirtd (the capture commands are shown after this list):

    # diff /tmp/pre /tmp/post
    2a3,6
    > ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:53
    > ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:53
    > ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:67
    > ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:67
    10a15,19
    > ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED
    > ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
    > ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
    > REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
    > REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
    

    If the rules for ports 53/67 aren’t there, simply restart libvirtd (service libvirtd restart); that’ll update the running firewall for you.

  • Run virt-manager just to ensure everything connects correctly.
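For reference, the /tmp/pre and /tmp/post files diffed in the firewall step above are nothing magic; they are just iptables listings captured before and after starting libvirtd, along these lines:

    # iptables -L -n > /tmp/pre
    # service libvirtd start
    # iptables -L -n > /tmp/post
    # diff /tmp/pre /tmp/post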

List, Start, and Stop guests

  • virsh list or virsh list --all

The difference is that --all also displays guests that are stopped; without it, only running guests are listed.

  • virsh start ${guest}

  • virsh destroy ${guest}

The destroy argument is badly named. It doesn’t eliminate the guest; it just stops it … hard. There won’t be any shutdown command run. It’s akin to yanking the power out of a system.

  • virsh shutdown ${guest}

shutdown, as you might expect, does a graceful OS shutdown.
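If you want the graceful shutdown with the hard stop as a fallback, a minimal sketch (the 60 second grace period is arbitrary):

    stop_vm()
    {  [[ -z ${1} ]] && return
       vm=$1
       virsh shutdown ${vm}                       # ask the guest OS to shut down
       for i in $(seq 1 60)
       do
          virsh domstate ${vm} | grep -q running || return 0
          sleep 1
       done
       virsh destroy ${vm}                        # still up; pull the plug
    }
    stop_vm vm1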

(Un)setting guests to auto-start

  • virsh autostart ${dom}

  • ln -s /etc/libvirt/qemu/${dom}.xml /etc/libvirt/qemu/autostart/${dom}.xml (the manual equivalent)

  • virsh autostart --disable ${dom}

  • unlink /etc/libvirt/qemu/autostart/${dom}.xml (the manual equivalent of --disable)
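To check whether a guest is currently set to auto-start, either of these will tell you:

    # virsh dominfo ${dom} | grep -i autostart
    # ls -l /etc/libvirt/qemu/autostart/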

Accessing the guest console

  • Easiest (assuming X11): execute virt-manager, right click on the guest in question, then select Open.

  • Directly: virt-viewer ${domain}

  • There’s also a virsh console ${domain} command for a text console. Useful when a VNC connection isn’t an option (no X11 on the server, etc).
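Note that virsh console only displays anything if the guest has a serial console to write to; on a RHEL 6 guest that generally means appending something like the following to the kernel line in the guest’s /boot/grub/grub.conf and rebooting it:

    console=tty0 console=ttyS0,115200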

Install a new guest

  • Interactive install:

    • name is required

    • ram is required and is measured in MB

    • vcpus is not required.

    • noautoconsole:

      • without this, a console will automatically come up;

      • you won’t get your command prompt back until the install’s done.

      • <CTRL> C out of the virt-install command and your install aborts.

        virt-install --name vm1 --ram 2048 --vcpus=2 \
        --disk path=/var/lib/libvirt/images/vm1.img,size=10 \
        --noautoconsole --os-type=linux --os-variant=rhel6 \
        --location ftp://192.168.122.1/pub/inst
        
  • Non-interactive install (kickstart)

    • same arguments with one addition:

    • -x "ks=ftp://192.168.122.1/pub/kickstart/vm.cfg"

    • Assuming your kickstart file is correct, you’ll soon have a new virtual.

      virt-install --name vm1 --ram 2048 --vcpus=2 \
      --disk path=/var/lib/libvirt/images/vm1.img,size=10 \
      --noautoconsole --os-type=linux --os-variant=rhel6 \
      --location ftp://192.168.122.1/pub/inst \
      -x "ks=ftp://192.168.122.1/pub/kickstart/vm.cfg"
      
      Starting install...
      Retrieving file vmlinuz...             | 7.6 MB     00:00 ...
      Retrieving file initrd.img...          |  60 MB     00:00 ...
      Allocating 'vm1.img'                   |  10 GB     00:00
      
  • If installing on a network other than the default

    • ID the bridge: virsh net-info ${network} (see next section for details)

    • Add --network bridge=${bridge} to the virt-install commands above.

      virt-install --name outsider1 --ram 2048 --vcpus=2 \
      --disk path=/var/lib/libvirt/images/outsider1.img,size=10 \
      --noautoconsole --os-type=linux --os-variant=rhel6 \
      --network bridge=virbr1 --location ftp://192.168.200.1/pub/inst \
      -x "ks=ftp://192.168.200.1/pub/kickstart/vm.cfg"
      [[snip]]
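
Since --noautoconsole hands the prompt back immediately, checking on an install in progress is just a matter of reconnecting; using the first example above:

    # virsh list
    # virt-viewer vm1 &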
      

Cloning a new guest

Reasonably straightforward, with a couple of minor gotchas. Command:

virt-clone -o tester1 -n tester2 \
--file /var/lib/libvirt/images/tester2.img

The --file arg doesn’t show up in the short command help, but it is in the man page.

More importantly, when the clone boots, its OS image is an exact replica of the original, including IP addresses, host names, and the old NIC’s MAC address in the config files. virt-clone gives the clone a new MAC address, so udev won’t match the NIC to eth0; it comes up as eth1 instead. The process to get it back (sketched after this list) is:

  • Boot to single user mode

  • Update hostnames and IP addresses in:

    • /etc/sysconfig/network

    • /etc/sysconfig/network-scripts/ifcfg-*

    • /etc/hosts

  • Remove the HWADDR entry in /etc/sysconfig/network-scripts/ifcfg-eth0

  • Remove /etc/udev/rules.d/70-persistent-net.rules

  • HALT, not reboot, the system.

  • Power the clone back on with virsh start ${domain}. It should come up fine.
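A rough sketch of the in-guest cleanup, run from the clone’s single user shell (stock RHEL 6 paths; the hostname and IP edits still have to be done by hand):

    sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0   # drop the old MAC reference
    rm -f /etc/udev/rules.d/70-persistent-net.rules                 # regenerated on next boot
    vi /etc/sysconfig/network /etc/hosts \
       /etc/sysconfig/network-scripts/ifcfg-eth0                    # fix hostnames and IPs
    halt                                                            # halt, not reboot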

Virtual network manipulation

The commands associated with network manipulation seem more basic than the others, and yet networking is, without a doubt, the most complex part of KVM I’ve had to deal with.

KVM network types:

  • Bridged: Guests sit on the same network as the vm host. This is how you set up a real environment with real applications accessible by real people. There are plenty of sites that demonstrate how to set up a bridge network (a minimal example follows this list). Once done, simply kickstart your guests with the same network information as any other system on your network.

  • NAT: The default KVM network. All guests will be NATed from the external network and from other subnets on the host. Guests on the same subnet won’t be NATed.

  • Hybrid: Not an official network type that I’ve seen, but it should be. In short, guests on the host are not NATed from each other regardless of subnet, but are NATed to the external network.

  • Non-NATed: The VM host still has multiple virtual subnets defined with guests on them; however, the guests can access the external network and are accessible from it.
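As a minimal example of the bridged setup on RHEL 6 (device names and addresses are placeholders; the bridge address here sits on the external 192.168.12.0/24 network used below):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.12.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.12.1

Follow that with service network restart, then point virt-install at the bridge with --network bridge=br0.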

For the discussions that follow, assume this network information:

External network:

192.168.12.0/24

Virtual subnet 1:

192.168.122.0/24

Virtual subnet 2:

192.168.200.0/24

NAT:

  • As mentioned, the default network environment. Simply install KVM and go.

  • When creating multiple subnets, KVM guests may or may not be able to access each other regardless of NATing. I spent a rather entertaining Sunday afternoon troubleshooting why the two virtual subnets seemingly couldn’t talk to each other. The key data point came when I determined that the first subnet could ping the second but the second couldn’t ping the first. iptables firewall rules…

    • Every time you create a new subnet, at least through the virt-manager GUI, libvirtd puts rules in the FORWARD chain for every subnet. Those rules look like:

      # show_fwd
      .  Chain FORWARD (policy ACCEPT)
      .  target     prot opt source               destination
      01 ACCEPT     all  --  0.0.0.0/0            192.168.200.0/24    state RELATED,ESTABLISHED
      02 ACCEPT     all  --  192.168.200.0/24     0.0.0.0/0
      03 ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      04 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      05 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      06 ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED
      07 ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
      08 ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      09 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      10 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      11 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited
      
    • Notice the extra ACCEPT and REJECT rules at positions 3 through 5 and 8 through 10. That chain should look like:

      # show_fwd
      .  Chain FORWARD (policy ACCEPT)
      .  target     prot opt source               destination
      01 ACCEPT     all  --  0.0.0.0/0            192.168.200.0/24    state RELATED,ESTABLISHED
      02 ACCEPT     all  --  192.168.200.0/24     0.0.0.0/0
      03 ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED
      04 ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
      05 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited
      
    • To correct that, ID the rule number by counting from the top ACCEPT line (or use a helper that numbers them for you; a sketch appears at the end of this NAT section), then execute: iptables -D FORWARD ${num}. It might be easier to simply flush the FORWARD chain and rebuild it appropriately. If I write a script to do that, I’ll post it here.

    • The POSTROUTING rules are the ones that actually do the NATing:

      # iptables -t nat -L POSTROUTING
      Chain POSTROUTING (policy ACCEPT)
      target     prot opt source               destination
      MASQUERADE  tcp  --  192.168.200.0/24    !192.168.200.0/24    masq ports: 1024-65535
      MASQUERADE  udp  --  192.168.200.0/24    !192.168.200.0/24    masq ports: 1024-65535
      MASQUERADE  all  --  192.168.200.0/24    !192.168.200.0/24
      MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
      MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
      MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
      
  • To reset your KVM network back to a fully functioning default:

    • Stop libvirtd and iptables: service libvirtd stop && service iptables stop

    • Restart iptables then libvirtd: service iptables start && service libvirtd start

    • With multiple subnets, your FORWARD chain will look like this again:

      # show_fwd
         Chain FORWARD (policy ACCEPT)
         target     prot opt source               destination
      01 ACCEPT     all  --  0.0.0.0/0            192.168.200.0/24    state RELATED,ESTABLISHED
      02 ACCEPT     all  --  192.168.200.0/24     0.0.0.0/0
      03 ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      04 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      05 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      06 ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED
      07 ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
      08 ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      09 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      10 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      11 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited
      
    • Delete the duplicate and erroneous lines (highest rule number first, so the remaining rule numbers don’t shift as you delete):

      for x in 10 9 8 5 4 3
      do
         echo iptables -D FORWARD ${x}
         iptables -D FORWARD ${x}
      done
      iptables -D FORWARD 10
      iptables -D FORWARD 9
      iptables -D FORWARD 8
      iptables -D FORWARD 5
      iptables -D FORWARD 4
      iptables -D FORWARD 3
      # show_fwd
      Chain FORWARD (policy ACCEPT)
      .. target     prot opt source               destination
      01 ACCEPT     all  --  0.0.0.0/0            192.168.200.0/24    state RELATED,ESTABLISHED
      02 ACCEPT     all  --  192.168.200.0/24     0.0.0.0/0
      03 ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED
      04 ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
      05 REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
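
The show_fwd used throughout this section is just a local helper that numbers the FORWARD chain rules so the iptables -D FORWARD ${num} arguments can be read straight off the listing. A rough equivalent (iptables -L FORWARD -n --line-numbers gets you much the same thing):

    show_fwd()
    {  iptables -L FORWARD -n | awk 'NR<=2 {printf ".  %s\n", $0; next}
                                           {printf "%02d %s\n", NR-2, $0}'
    }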
      

Hybrid:

To review, you want your guests not to be NATed from each other and to have external network connectivity.

  • Start with the default, functioning environment listed above.

  • Flush the POSTROUTING chain:

    # iptables -t nat -F POSTROUTING
    # iptables -t nat -L POSTROUTING
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    
  • Set NATing on the ethernet NIC only:

    iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE
    iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -o eth0 -j MASQUERADE
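
These hand-entered rules only live in the running kernel; a reboot, or libvirtd recreating its networks, puts the default per-subnet MASQUERADE rules back. One option is to keep the hybrid rules in a small script and re-run it once libvirtd is up; something like:

    #!/bin/bash
    # Re-apply the hybrid NAT rules after libvirtd has started its networks
    iptables -t nat -F POSTROUTING
    iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE
    iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -o eth0 -j MASQUERADE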
    

Non-NATed:

This setup is really going to irritate your networking team, as it means routing to the virtual subnets has to be set up on the external network, possibly even at the host level. I haven’t set it up in my environment and probably won’t; the inconvenience of having to ssh to my vm host before reaching the guests isn’t that great.

Virtual volume manipulation

  • List volumes in the default (or any other) pool:

    # virsh vol-list default
    Name                 Path
    -----------------------------------------
    bt5-gnome.img        /var/lib/libvirt/images/bt5-gnome.img
    guest-1.img          /var/lib/libvirt/images/guest-1.img
    guest-2.img          /var/lib/libvirt/images/guest-2.img
    guest.img            /var/lib/libvirt/images/guest.img
    guest1.img           /var/lib/libvirt/images/guest1.img
    python.img           /var/lib/libvirt/images/python.img
    testies.img          /var/lib/libvirt/images/testies.img
    vm1.img              /var/lib/libvirt/images/vm1.img

  • List volumes assigned to a guest:

    # virsh domblklist vm1
    Target     Source
    ------------------------------------------------
    vda        /var/lib/libvirt/images/vm1.img

  • Create a new volume which can then be added to systems. Note that vol-create-as is all one word.

    # virsh vol-create-as default vm1-1.img 10g
    Vol vm1-1.img created

    # virsh vol-list default | head -2 ; virsh vol-list default | grep vm
    Name                 Path
    -----------------------------------------
    vm1-1.img            /var/lib/libvirt/images/vm1-1.img
    vm1.img              /var/lib/libvirt/images/vm1.img

    # ll /var/lib/libvirt/images/vm*
    -rw-------. 1 root root 10737418240 Jun 25 11:57 /var/lib/libvirt/images/vm1-1.img
    -rw-------. 1 qemu qemu 10737418240 Jun 25 11:58 /var/lib/libvirt/images/vm1.img

  • Add a previously created volume to a guest:

    • volume is the absolute path to the file

    • vdb is how the disk will be presented to the guest. Note the target entry from the virsh domblklist vm1 command above. Use the corresponding naming convention for your environment.

    # virsh attach-disk vm1 /var/lib/libvirt/images/vm1-1.img vdb --persistent
    Disk attached successfully

  • Remove a disk from a guest:

    # virsh detach-disk vm1 vdb --persistent
    Disk detached successfully

  • Delete a detached volume:

    # virsh vol-delete /var/lib/libvirt/images/vm1-1.img default
    Vol /var/lib/libvirt/images/vm1-1.img deleted
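
vol-create-as builds a raw image by default; if you’d rather have a qcow2 volume, the same command takes a format flag (vm1-2.img is just an example name):

    # virsh vol-create-as default vm1-2.img 10g --format qcow2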

Delete a guest

  • Stop the guest, if needed: virsh destroy ${guest}

  • Delete the guest definition: virsh undefine ${guest}

  • Remove the guest’s disk.

  • If you do it often enough, setting up a function might be useful:

kill_vm()
{  [[ -z ${1} ]] && return                  # no guest named; nothing to do
   vm=$1
   # Grab the guest's disk path from its block device list
   pv=$(virsh domblklist ${vm} | grep /var | awk '{print $NF}')
   virsh destroy ${vm}                      # hard stop; fails harmlessly if already off
   virsh undefine ${vm}                     # remove the guest definition
   virsh vol-delete ${pv}                   # delete the backing volume from the pool
   [[ -f ${pv} ]] && rm ${pv}               # remove the file if it's somehow still there
}
kill_vm vm1