
[23] Except when they"re QCOW images. Let"s ignore that for now. Except when they"re QCOW images. Let"s ignore that for now.

Mass Deployment

Of course, all this is tied up in the broader question of provisioning infrastructure and higher-level tools like Kickstart, SystemImager, and so on. Xen amplifies the problem by increasing the number of servers you own exponentially and making it easy and quick to bring another server online. That means you now need the ability to automatically deploy lots of hosts.

Manual Deployment

The most basic approach (analogous to tarring up a filesystem) is probably to build a single tarball using any of the methods we've discussed and then make a script that partitions, formats, and mounts each domU's storage and then extracts the tarball.

For example:

#!/bin/bash

LVNAME=$1

lvcreate -Cy -L 1024 -n ${LVNAME} lvmdisk

parted /dev/lvmdisk/${LVNAME} mklabel msdos
parted /dev/lvmdisk/${LVNAME} mkpartfs primary ext2 0 1024

kpartx -p "" -av /dev/lvmdisk/${LVNAME}

tune2fs -j /dev/mapper/${LVNAME}1

mount /dev/mapper/${LVNAME}1 /mountpoint

tar -C /mountpoint -zxf /opt/xen/images/base.tar.gz

umount /mountpoint

kpartx -d /dev/lvmdisk/${LVNAME}

cat > /etc/xen/${LVNAME} <<EOF
name = "${LVNAME}"
memory = 128
disk = ["phy:/dev/lvmdisk/${LVNAME},xvda,w"]
vif = [""]
kernel = "/boot/vmlinuz-2.6-xenU"
EOF

exit 0

This script takes a domain name as an argument, provisions storage from a tarball at /opt/xen/images/base.tar.gz, and writes a config file for a basic domain, with a gigabyte of disk and 128MB of memory. Further extensions to this script are, as always, easy to imagine; one is sketched below. We've put this script here mostly to show how simple it can be to create a large number of domU images quickly with Xen. Next, we'll move on to more elaborate provisioning systems.
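For instance, assuming the script above is saved as /usr/local/sbin/provision-domu (a name we've made up here), a short wrapper could stamp out and boot a whole batch of domUs in one pass:

#!/bin/bash
# Sketch of a batch wrapper. provision-domu is the provisioning
# script shown above, saved under a name of our own invention.
for name in lennox rosse angus; do
    /usr/local/sbin/provision-domu ${name}
    xm create /etc/xen/${name}
done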

QEMU and Your Existing Infrastructure

Another way to do mass provisioning is with QEMU, extending the QEMU installation we previously outlined. Because QEMU simulates a physical machine, you can use your existing provisioning tools with QEMU, in effect treating virtual machines exactly like physical machines. For example, we've done this using SystemImager to perform automatic installs on the emulated machines.

This approach is perhaps the most flexible (and most likely integrates best with your current provisioning system), but it's slow. Remember, KQEMU and Xen are not compatible, so you are running old-school, software-only QEMU. Slow! And needlessly slow, because once a VM has been created, there's nothing to keep you from duplicating it rather than going through the entire process again. But it works, and it works the exact same way as your previous provisioning system.[24]
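To make that duplication shortcut concrete, something like the following would clone an installed image and its config for a new domU (the paths are illustrative, and cp --sparse=always is just one way to keep the copy sparse):

# cp --sparse=always /xen/lennox/root.img /xen/rosse/root.img
# sed 's/lennox/rosse/g' /etc/xen/lennox > /etc/xen/rosse

You'd still want to fix anything host-specific inside the new image, such as the hostname and MAC address, before booting it.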

We"ll describe a basic setup with SystemImager and QEMU, which should be easy enough to generalize to whichever other provisioning system you"ve got in place.

Setting Up SystemImager

First, install SystemImager using your method of choice: yum, apt-get, or download. We downloaded the RPMs using the sis-install script:

# wget
# sh install -v --download-only --tag=stable --directory . systemconfigurator \
    systemimager-client systemimager-common systemimager-i386boot-standard \
    systemimager-i386initrd_template systemimager-server

SystemImager works by taking a system image of a golden client, hosting that image on a server, and then automatically rolling the image out to targets. In the Xen case, these components-golden client, server, and targets-can all exist on the same machine. We'll assume that the server is dom0, the client is a domU that you've installed by some other method, and the targets are new domUs.

Begin by installing the dependency, systemconfigurator, on the server:

# rpm -ivh systemconfigurator-*

Then install the server packages:

# rpm -ivh systemimager-common-* systemimager-server-* systemimager-i386boot-standard-*

Boot the golden client using xm create and install the packages, first copying them over from the server's :/path/to/systemimager/* (note that we are performing these next steps within the domU rather than the dom0):

# rpm -ivh systemconfigurator-*

# rpm -ivh systemimager-common-* systemimager-client-* systemimager-i386boot-initrd_template-*

SystemImager"s process for generating an image from the golden client is fairly automated. It uses rsync rsync to copy files from the client to the image server. Make sure the two hosts can communicate over the network. When that"s done, run on the client: to copy files from the client to the image server. Make sure the two hosts can communicate over the network. When that"s done, run on the client: #si_prepareclient--server Then run on the server: #si_getimage--golden_client--imageporter--exclude/mnt The server will connect to the client and build the image, using the name porter porter.

Now you"re ready to configure the server to actually serve out the image. Begin by running the si_mkbootserver si_mkbootserver script and answering its questions. It"ll configure DHCP and TFTP for you. script and answering its questions. It"ll configure DHCP and TFTP for you.

# si_mkbootserver

Then answer some more questions about the clients:

# si_mkclients

Finally, use the provided script to enable netboot for the requisite clients:

# si_mkclientnetboot --netboot --clients lennox rosse angus

And you're ready to go. Boot the QEMU machine from the emulated network adapter (which we've left unspecified on the command line because it's active by default):

# qemu --hda /xen/lennox/root.img --boot n

Of course, after the clients install, you will need to create domU configurations. One way might be to use a simple script (in Perl this time, for variety):

#!/usr/bin/perl

$name = $ARGV[0];
open(XEN, ">", "/etc/xen/$name");
print XEN <<CONFIG;

memory=128 name="$name"

disk=["tap:aio:/xen/$name/root.img,hda1,w"]

vif=[""]

root="/dev/hda1ro"

CONFIG
close(XEN);

(Further refinements, such as generating an IP based on the name, are of course easy to imagine.) In any case, just run this script with the name as argument:

# makeconf.pl lennox

And then start your shiny new Xen machine:

# xm create -c /etc/xen/lennox

Installing pypxeboot

Like PyGRUB, pypxeboot is a Python script that acts as a domU bootloader. Just as PyGRUB loads a kernel from the domain's virtual disk, pypxeboot loads a kernel from the network, after the fashion of PXEboot (for Preboot eXecution Environment) on standalone computers. It accomplishes this by calling udhcpc (the micro-DHCP client) to get a network configuration, and then TFTP to download a kernel, based on the MAC address specified in the domain config file.

pypxeboot isn"t terribly hard to get started with. You"ll need the pypxeboot package itself, udhcp, and tftp. Download the packages and extract them. You can get pypxeboot from and udhcp from and udhcp from Your distro will most likely include the tftp client already.

The pypxeboot package includes a patch for udhcp that allows it to take a MAC address from the command line. Apply it:

# patch -p0

Copy pypxeboot and outputpy.udhcp.sh to appropriate places:

# cp pypxeboot-0.0.2/pypxeboot /usr/bin
# cp pypxeboot-0.0.2/outputpy.udhcp.sh /usr/share/udhcpc

Next set up the TFTP server for network boot. The boot server can be essentially the same as a boot server for physical machines, with the caveat that the kernel and initrd need to support Xen paravirtualization. We used the setup generated by Cobbler, but any PXE environment should work; a sample pxelinux-style entry is sketched below.
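For reference, a minimal pxelinux-style entry for a Xen-capable kernel and initrd might look something like this (the paths match our later example output, but the label and layout are just one plausible arrangement, not anything Cobbler emits verbatim):

default scotland
label scotland
    kernel /images/scotland-xen-i386/vmlinuz
    append initrd=/images/scotland-xen-i386/initrd.img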

Now you should be able to use pypxeboot with a domU configuration similar to the following:

bootloader = "/usr/bin/pypxeboot"

vif=["mac=00:16:3E:11:11:11"]

bootargs = vif[0]

Note: The regex that finds the MAC address in pypxeboot is easily confused. If you specify other parameters, put spaces between the mac= parameter and the surrounding commas, for example:

vif = ["vifname=lady , mac=00:16:3E:11:11:11 , bridge=xenbr0"]
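Putting the pieces together, a complete pypxeboot domain config might look like the following sketch. The name, memory, and disk lines are our assumptions (pypxeboot itself only cares about the bootloader, vif, and bootargs entries):

# /etc/xen/lady -- illustrative pypxeboot config
name = "lady"
memory = 128
bootloader = "/usr/bin/pypxeboot"
vif = ["mac=00:16:3E:11:11:11"]
bootargs = vif[0]
disk = ["phy:/dev/lvmdisk/lady,xvda,w"]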

Create the domain:

# xm create lady
Using config file "/etc/xen/lady".

pypxeboot: requesting info for MAC address 00:16:3E:11:11:11
pypxeboot: getting cfg for IP 192.168.4.114 (C0A80472) from server 192.168.4.102
pypxeboot: downloading initrd using cmd: tftp 192.168.4.102 -c get /images/scotland-xen-i386/initrd.img /var/lib/xen/initrd.BEUTCy
pypxeboot: downloading kernel using cmd: tftp 192.168.4.102 -c get /images/scotland-xen-i386/vmlinuz /var/lib/xen/kernel.8HJDNE
Started domain lady

Automated Installs the Red Hat Way

Red Hat uses Kickstart to provision standalone systems. A full discussion of Kickstart is probably best left to Red Hat's documentation-suffice it to say that Kickstart has been designed so that, with some supporting tools, you can install Xen domUs with it.

The tools you"ll most likely want to use to install virtual machines are Cobbler and koan koan. Cobbler is the server software, while koan koan ( (Kickstart over a network)[25] is the client. With the is the client. With the --virt --virt option, option, koan koan supports installing to a virtual machine. supports installing to a virtual machine.

This being a Red Hat tool, you can install it with yum.

No, sorry, we lied about that. First you'll need to add the Extra Packages for Enterprise Linux repository to your yum configuration. Install the package describing the additional repo:

# rpm -ivh

Now you can install Cobbler with yum:

# yum install cobbler

Then you'll want to configure it. Run cobbler check, which will give you a list of issues that may interfere with Cobbler. For example, out of the box, Cobbler reported these issues for us:

The following potential problems were detected:
#0: The 'server' field in /var/lib/cobbler/settings must be set to something other than localhost, or kickstarting features will not work. This should be a resolvable hostname or IP for the boot server as reachable by all machines that will use it.

#1: For PXE to be functional, the 'next_server' field in /var/lib/cobbler/settings must be set to something other than 127.0.0.1, and should match the IP of the boot server on the PXE network.

#2:change"disable"to"no"in/etc/xinetd.d/tftp #3:servicehttpdisnotrunning #4:sinceiptablesmayberunning,ensure69,80,25150,and25151are unblocked #5:reposyncisnotinstalled,needforcobblerreposync,install/upgradeyum- utils?

#6: yumdownloader is not installed, needed for cobbler repo add with --rpm-list parameter, install/upgrade yum-utils?

After you"ve fixed these problems, you"re ready to use Cobbler. This involves setting up install media and adding profiles.

First, find some install media. Kickstart is a Red Hat-specific package, so Cobbler works only with Red Hat-like distros (SUSE is also supported, but it's experimental). Cobbler supports importing a Red Hat-style install tree via rsync, a mounted DVD, or NFS. Here we'll use a DVD; for other options, see Cobbler's man page.

# cobbler import --path=/mnt/dvd --name=scotland

If you're using a network install source, this may take a while. A full mirror of one architecture is around 5GB of software. When it's done downloading, you can see the mirror status by running cobbler report. When you've got a directory tree, you can use it as an install source by adding a profile for each type of virtual machine you plan to install. We suggest installing through Cobbler rather than bare pypxeboot and Kickstart because it has features aimed specifically at setting up virtual machines. For example, you can specify the domU image size and RAM amount in the machine profile (in GB and MB, respectively):

# cobbler profile add --name=bar --distro=foo --virt-file-size=4 --virt-ram=128

When you've added profiles, the next step is to tell Cobbler to regenerate some data, including PXE boot menus:

# cobbler sync

Finally, you can use the client, koan, to build the virtual machine. Specify the Cobbler server, a profile, and optionally a name for the virtual machine. We also used the --nogfx option to disable the VNC framebuffer. If you leave the framebuffer enabled, you won't be able to interact with the domU via xm console:

# koan --virt --server=localhost --profile=scotland --virt-name=lady --nogfx

koan will then create a virtual machine, install, and automatically create a domU config so that you can then start the domU using xm:

# xm create -c lady
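If you're building several domains from the same profile, a small loop over koan is the obvious extension (a sketch, reusing the profile and names from our examples):

# for vm in lennox rosse angus; do
>   koan --virt --server=localhost --profile=scotland --virt-name=${vm} --nogfx
> done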

[24] This can be made faster by using an HVM domU for the SystemImager install, rather than a QEMU instance. Not blazing fast, but an improvement.

[25] It begs the question of whether there are non-networked Kickstart installs, but we'll let that slide.

And Then...

In this chapter, we"ve gone through a bunch of install methods, ranging from the generic and brute force to the specialized and distro-specific. Although we haven"t covered anything in exhaustive detail, we"ve done our best to outline the procedures to emphasize when you might want to, say, use yum yum, and when you might want to use QEMU. We"ve also gestured in the direction of possible pitfalls with each method.

Many of the higher-level domU management tools also include a quick-and-easy way to install a domU if none of these more generic methods strike your fancy. (See Chapter 6 for details.) For example, you're most likely to encounter virt-install in the context of Red Hat's virt-manager.

The important thing, though, is to tailor the install method to your needs. Consider how many systems you're going to install, how similar they are to each other, and the intended role of the domU, and then pick whatever makes the most sense.

Chapter 4. STORAGE WITH XEN

Throughout this book, so far, we've talked about Xen mostly as an integrated whole, a complete virtualization solution, to use marketing's word. The reality is a bit more complex than that. Xen itself is only one component of a platform that aims to free users from having to work with real hardware. The Xen hypervisor virtualizes a processor (along with several other basic components, as outlined in Chapter 2), but it relies on several underlying technologies to provide seamless abstractions of the resources a computer needs. This distinction is clearest in the realm of storage, where Xen has to work closely with a virtualized storage layer to provide the capabilities we expect of a virtual machine.

By that we mean that Xen, combined with appropriate storage mechanisms, provides near total hardware independence. The user can run the Xen machine anywhere, move the instance about almost at will, add storage freely, save the filesystem state cleanly, and remove it easily after it's done.

Sounds good? Let"s get started.

Storage: The Basics

The first thing to know about storage-before we dive into configuration on the dom0 side-is how to communicate its existence to the domain. DomUs find their storage by examining the domU config file for a disk= line. Usually it'll look something like this:

disk = [
    "phy:/dev/cleopatra/menas,sda,w",
    "phy:/dev/cleopatra/menas_swap,sdb,w"
]

This line defines two devices, which appear to the domU as sda and sdb. Both are physical,[26] as indicated by the phy: prefix; other storage backends have their own prefixes, such as file: and tap: for file-backed devices. You can mix and match backing device types as you like-we used to provide a pair of phy: volumes and a file-backed read-only "rescue" image, along the lines of the stanza sketched below.
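For instance, a mixed stanza of that sort-two physical volumes plus a file-backed, read-only rescue image-might look like this (the rescue image path is illustrative):

disk = [
    "phy:/dev/cleopatra/menas,sda,w",
    "phy:/dev/cleopatra/menas_swap,sdb,w",
    "file:/opt/xen/images/rescue.img,sdc,r"
]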

We call this a line, but it's really more of a stanza-you can put the strings on separate lines, indent them with tabs, and put spaces after the commas if you think that makes it more readable. In this case, we're using LVM, with a volume group named cleopatra and a pair of logical volumes called menas and menas_swap.

NoteBy convention, we"ll tend to use the same name for a domain, its devices, and its config file. Thus, here, the logical volumes menas menas and and menas_swap menas_swap belong to the domain belong to the domain menas, menas, which has the config file which has the config file /etc/xen/menas /etc/xen/menas and network interfaces with similar names. This helps to keep everything organized and network interfaces with similar names. This helps to keep everything organized.

You can examine the storage attached to a domain by using the xm block-list command-for example:

# xm block-list menas
Vdev  BE  handle  state  evt-ch  ring-ref  BE-path
2049  0   0       4      6       8         /local/domain/0/backend/vbd/1/2049
2050  0   0       4      7       9         /local/domain/0/backend/vbd/1/2050

Now, armed with this knowledge, we can move on to creating backing storage in the dom0.

[26] As you may gather, a physical device is one that can be accessed via the block device semantics, rather than necessarily a discrete piece of hardware. The prefix instructs Xen to treat the device as a basic block device, rather than providing the extra translation required for a file-backed image.

Varying Types of Storage

It should come as little surprise, this being the world of open source, that Xen supports many different storage options, each with its own strengths, weaknesses, and design philosophy. These options broadly fall into the categories of file based and device based.

Xen can use a file as a block device. This has the advantage of being simple, easy to move, mountable from the host OS with minimal effort, and easy to manage. It also used to be very slow, but this problem has mostly vanished with the advent of the blktap driver. The file-based block devices differ in the means by which Xen accesses them (basic loopback versus blktap) and the internal format (AIO, QCOW, etc.).
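As a quick sketch of the file-backed approach (the size and path are ours), you can create a sparse file, put a filesystem on it, and hand it to a domU with the tap: prefix:

# dd if=/dev/zero of=/opt/xen/images/alexas.img bs=1M count=0 seek=1024
# mkfs.ext3 -F /opt/xen/images/alexas.img

and then, in the domU config:

disk = ["tap:aio:/opt/xen/images/alexas.img,xvda,w"]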

Xen can also perform I/O to a physical device. This has the obvious drawback of being difficult to scale beyond your ability to add physical devices to the machine. The physical device, however, can be anything the kernel has a driver for, including hardware RAID, fibre channel, MD, network block devices, or LVM. Because Xen accesses these devices via DMA (direct memory access) between the device driver and the Xen instance, mapping I/O directly into the guest OS's memory region, a domU can access physical devices at near-native speeds.
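In the LVM case, for example, carving out a new logical volume and attaching it to a domU is a two-step affair. A sketch, reusing the volume group cleopatra from our earlier example (the volume name is made up):

# lvcreate -L 4096 -n charmian cleopatra
# mkfs.ext3 /dev/cleopatra/charmian

and in the domU config:

disk = ["phy:/dev/cleopatra/charmian,sdc,w"]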
