Xen FAQ

v1.1 - 2008-01-17 minor errata

v1.0 - 2007-10-08 initial version


Table of Contents

1. General FAQs
2. Further HOWTOs

1. General FAQs

Common Xen problems.

1.1. How to limit the number of dom0 CPUs?
1.2. How to prevent Xen from rebooting on panic?
1.3. How to escape from a VM console?
1.4. What is the domain builder in the guest configuration file?
1.5. How to access virtual console tty0 in vncviewer?
1.6. How many VIFs can I use in an HVM guest?
1.7. How many VBDs can I use in an HVM guest?

1.1.

How to limit the number of dom0 CPUs?

There are three ways to do this:

  • Edit /etc/xen/xend-config.sxp:

    # In SMP system, dom0 will use dom0-cpus # of CPUS
    # If dom0-cpus = 0, dom0 will take all cpus available
    (dom0-cpus 1)
  • Add "dom0_max_vcpus=1" to the Xen hypervisor command line.

  • Add "maxcpus=1" to the dom0 kernel command line.
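For the two command-line methods, the boot entry in /boot/grub/grub.conf might look like this (kernel versions, paths, and the root device are examples):

```
title Xen
        root (hd0,0)
        kernel /xen.gz dom0_max_vcpus=1
        module /vmlinuz-2.6.18-xen ro root=/dev/sda1 maxcpus=1
        module /initrd-2.6.18-xen.img
```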

1.2.

How to prevent Xen from rebooting on panic?

There are two places to consider:

  • For the Xen hypervisor: append "noreboot" to Xen's command line.

  • For the Linux dom0: append "panic=0" to Linux's command line, or:

    # echo "0" > /proc/sys/kernel/panic
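Combining both options, a grub.conf boot entry might look like this (kernel versions, paths, and the root device are examples):

```
kernel /xen.gz noreboot
module /vmlinuz-2.6.18-xen ro root=/dev/sda1 panic=0
module /initrd-2.6.18-xen.img
```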

1.3.

How to escape from a VM console?

Press "Ctrl+]" (or "Ctrl+5", which sends the same 0x1d control character on most terminals) to detach from the VM console.

1.4.

What is the domain builder in the guest configuration file?

Currently, only two domain builders come with Xen: linux and hvm. The linux builder loads ELF-format kernels and supports loadable modules; the hvm builder is for unmodified operating systems. Adding a new builder for a.out-format kernel images (Plan 9 and Minix) is in progress.

The "builder" option is associated with the "ostype" field in the XenStore.
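For illustration, a guest configuration file selects the builder like this (kernel paths are examples):

```
# PV guest: the "linux" builder boots an ELF kernel directly
builder = 'linux'
kernel  = '/boot/vmlinuz-2.6.18-xenU'

# HVM guest: the "hvm" builder starts an unmodified OS via the HVM firmware
# builder = 'hvm'
# kernel  = '/usr/lib/xen/boot/hvmloader'
```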

1.5.

How to access virtual console tty0 in vncviewer?

With the RealVNC vncviewer client, press F8 to pop up the configuration menu and select "Alt" and "Ctrl". Then press "F1" to switch to the tty1 console.

1.6.

How many VIFs can I use in an HVM guest?

An HVM guest supports at most 8 emulated VIFs. This limit is hard-coded in the qemu source, tools/ioemu/vl.h:

#define MAX_NICS 8

When using PV drivers in an HVM guest, there is no such limit (the same as for a PV guest).

Before xen-3.2 there is a bug limiting the number of VIFs in an HVM guest. See: http://lists.xensource.com/archives/html/xen-devel/2007-05/msg01047.html

For Xen before 3.2, as a workaround, you can add more than 8 VIFs by specifying the VIF type explicitly:

vif = [ 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', 'type=netfront', ]

1.7.

How many VBDs can I use in an HVM guest?

An HVM guest supports at most 4 IDE + 7 SCSI qemu-dm emulated VBDs. This limit is hard-coded in the qemu source, tools/ioemu/vl.h:

#define MAX_DISKS 4
#define MAX_SCSI_DISKS 7

When using PV drivers in an HVM guest, there is no such limit (the same as for a PV guest).
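As an illustration (volume paths and image names are examples), a disk list staying within the emulated limits might look like:

```
disk = [ 'phy:/dev/vg0/vm-disk0,hda,w',
         'file:/var/images/vm-disk1.img,hdb,w',
         'file:/var/images/install.iso,hdc:cdrom,r' ]
```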

2. Further HOWTOs

Further Xen HOWTOs.

2.1. How to set up Kdump for Xen?

Follow these steps to set up Kdump for Xen debugging on RHEL5.

  1. Append the following parameter to the Xen boot command line:

    crashkernel=128M@32M
  2. Comment out the following lines in /etc/init.d/kdump:

    #       MEM_RESERVED=`echo $KDUMP_COMMANDLINE | grep "crashkernel=[0-9]\+[MmKkGg]@[0-9]\+[MmGgKk]"`
    #       if [ -z "$MEM_RESERVED" ]
    #       then
    #               $LOGGER "No crashkernel parameter specified for running kernel"
    #               return 1
    #       fi
  3. Change KDUMP_KERNELVER in /etc/sysconfig/kdump:

    KDUMP_KERNELVER="2.6.18-8.0.0.4.1.el5"
  4. Reboot.

  5. Run:

    # chkconfig --level 2345 kdump on
    # service kdump start
  6. Testing:

    # echo "c" >/proc/sysrq-trigger
  7. Reboot.


2.2. How to enable automatic core dumps of Xen guests?

Xendump is a facility for capturing vmcore dumps from Xen guests. It is built into the Xen hypervisor. To configure Xendump, follow the steps below:

  1. Edit /etc/xen/xend-config.sxp and change the following line:

    #(enable-dump no)

    to:

    (enable-dump yes)
  2. Restart the xen daemon:

    # service xend restart
  3. Testing.

    1. Set the following in the VM configuration file:

      on_crash = 'restart'
    2. Start the vm with:

      # xm create vm.cfg
    3. Run the following commands within a para-virtualized (PV) Xen guest:

      # sysctl -w kernel.panic=1
      # sysctl -w kernel.panic_on_oops=1
      # echo "c" >/proc/sysrq-trigger

Note

Right now, Xendump can be configured to capture vmcore dumps of para-virtualized (PV) Xen guests automatically upon a crash. However, vmcore dumps from fully-virtualized (FV) Xen guests can only be taken manually by running the xm dump-core command.

2.3. How to set up gdb for Xen guest debugging?

Follow these steps to set up gdbserver for Xen.

  1. Build the GDB server:

    $ cd tools/debugger/gdb/
    $ ./gdbbuild
  2. Copy ./gdb-6.2.1-linux-i386-xen/gdb/gdbserver/gdbserver-xen to your test machine (dom0).

  3. On your test machine, run:

    gdbserver-xen 127.0.0.1:9999 --attach $domid
  4. In another terminal of your test machine, run:

    gdb /path/to/vmlinux-syms-2.6.xx-xenU

    From within the gdb client session:

    (gdb) directory /path/to/linux-2.6.xx-xenU
    (gdb) target remote 127.0.0.1:9999
    (gdb) bt
    (gdb) disass

2.4. How to set the virtual guest clock?

By default, the clocks in a Linux VM are synchronized to the clock running on the control domain and cannot be independently changed (any attempt to set or modify the time in a guest will fail). This mode is a convenient default, since only the control domain needs to run the NTP service to keep accurate time across all VMs.

Paravirtualized guests may also perform their own system clock management:

  • Add the following lines to /etc/sysctl.conf, and reboot the system:

    # Set independent wall clock time
    xen.independent_wallclock = 1
  • You can temporarily override the setting for the current session in the proc filesystem. For example, as root run the following command on the guest:

    # echo 1 > /proc/sys/xen/independent_wallclock
  • Pass "independent_wallclock=1" as a boot parameter to the VM.

Note

This setting does not apply to hardware virtualized guests.

2.5. How to connect to the serial console of an HVM guest?

Connecting to the serial console of an HVM guest is straightforward. You should:

  1. Add "serial = 'pty'" to the VM configuration file vm.cfg.

  2. Add the following lines to /boot/grub/grub.conf of the VM:

    serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
    terminal --timeout=10 serial console

    And add the kernel parameter "console=tty0 console=ttyS0,115200n8". This tells Linux to print logs on both tty0 and ttyS0. The result looks like this:

    kernel /boot/vmlinuz-2.6.18-8.el5 ro root=LABEL=/ console=tty0 console=ttyS0,115200n8
  3. Add "ttyS0" to /etc/securetty of the vm.

  4. Add the following line to /etc/inittab of the vm:

    co:2345:respawn:/sbin/agetty ttyS0 115200 vt100-nav
  5. Then execute the following command to start the domain; you will get the serial console output:

    # xm create -c vm.cfg

2.6. How to access VM graphical console by VNC?

To run vncviewer on Dom0, log in to Dom0 with X forwarding:

$ ssh -X hostname

If you add:

vncconsole=1

to an HVM guest configuration file, a vncviewer session will start up when you create that domain.

If you want to set up VNC access to the host computer (Dom0) for any remote computer, edit the /etc/xen/xend-config.sxp file:

vnclisten=0.0.0.0

The same option can be set in a VM's own configuration file to override the global xend settings.

Setting vnclisten to 0.0.0.0 makes the VNC server listen on all interfaces, so any remote host can connect to the VNC framebuffer. This may compromise security on the host machine.

VNC access to Dom0 can be restricted to a particular interface by setting vnclisten to that interface's IP address in the /etc/xen/xend-config.sxp file.

2.7. How to allow root login on the Xen console for a PV guest?

To allow root to log in on the Xen console of a para-virtualized guest, you should:

  1. Add "xvc0" to /etc/securetty of the vm.

  2. Add the following line to /etc/inittab of the vm:

    co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav

2.8. How to enable VFB support in Xen?

For a PV guest, the configuration options look like:

vfb = ["type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=passwd"]

Start the VM; the following process is started:

/usr/lib64/xen/bin/xen-vncfb --unused --listen 0.0.0.0 --domid 13 --title xen_el5_x86_64_para

You can kill this process and rerun /usr/lib64/xen/bin/xen-vncfb.

For an HVM guest, the configuration options look like:

vnc=1
vncunused=1
vnclisten="0.0.0.0"
vncpasswd="passwd"
vncconsole=1

Start the VM; the following process is started:

/usr/lib64/xen/bin/qemu-dm -d 9 -vcpus 1 -boot c -serial pty -acpi -domain-name xen_el5_x86_64_hvm -net nic,vlan=1,macaddr=00:16:3e:5a:af:2a,model=rtl8139 -net tap,vlan=1,bridge=xenbr0 -vnc 0.0.0.0:0 -vncunused -vncviewer

Do not kill this process; doing so will destroy the VM.