A UML instance is a full-fledged Linux machine running on the host
Linux. It runs all the software and services that any other Linux
machine does. The difference is that UML instances can be conjured up
on demand and then thrown away when not needed. This advantage
lies behind the large range of applications that I and other people have
found for UML.
In addition to the flexibility of being able to create and destroy virtual
machines within seconds, the instances themselves can be dynamically
reconfigured. Virtual peripherals, processors, and memory can be
added to and removed from a running UML instance at will.
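For a taste of what this looks like, device hot-plugging is driven from the host with the uml_mconsole utility. A minimal sketch, where the umid "vm1" and the disk image name are invented for illustration:

    # attach a second virtual block device to the running instance "vm1"
    uml_mconsole vm1 config ubd1=extra-disk.img

    # detach a virtual network interface from the same instance
    uml_mconsole vm1 remove eth1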
There are also much looser limits on hardware configurations for
UML instances than for physical machines. In particular, they are not
limited to the hardware they are running on. A UML instance may
have more memory, more processors, and more network interfaces,
disks, and other devices than its host, or even any possible host. This
makes it possible to test software for hardware you don’t own, but have
to support, or to configure software for a network before the network is
available.


In this book, I will describe the many uses of UML and provide
step-by-step instructions for using it. In doing so, I will provide you, the
reader, with the information and techniques needed to make full use of
UML. As the original author and current maintainer of UML, I have
seen UML mature from its decidedly cheesy beginnings to its current
state where it can do basically everything that any other Linux
machine can do (see Table 1.1).
A BIT OF HISTORY
I started working on UML in earnest in February 1999 after having the
idea that porting Linux to itself might be practical. I tossed the idea
around in the back of my head for a few months in late 1998 and early
1999. I was thinking about what facilities it would need from the host
and whether the system call interface provided by Linux was rich
enough to provide those facilities. Ultimately, I decided it probably was,
and in the cases where I wasn’t sure, I could think of workarounds.
So, around February, I pulled a copy of the 2.0.32 kernel tree off of
a Linux CD (probably a Red Hat source CD) because it was too painful
to try to download it through my dialup. Within the resulting kernel
tree, I created the directories my new port was going to need without
putting any files in them. This is the absolute minimum amount of infrastructure
you need for a new port. With the directories present, the kernel
build process can descend into them and try to build what’s there.
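For a concrete sense of how little that is, the sketch below shows the idea using the directory names the UML port eventually settled on (arch/um and include/asm-um); the exact layout of a 2.0-era tree differed, so treat this as illustrative rather than a recipe:

    # inside the kernel source tree: create the port's directories, empty
    mkdir -p arch/um/kernel arch/um/lib include/asm-um

    # aim the build at the new architecture and see how far it gets
    make ARCH=um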
Table 1.1 UML Development Timeline

Date                       Event
Late 1998 to early 1999    I think about whether UML is possible.
Feb. 1999                  I start working on UML.
June 3, 1999               UML is announced to the Linux kernel mailing list.
Sept. 12, 2002             UML is merged into 2.5.34.
June 21, 2004              I join Intel.

Needless to say, with nothing in those directories, the build didn’t
even start to work. I needed to add the necessary build infrastructure,
such as Makefiles. So, I added the minimal set of things needed to get
the kernel build to continue and looked at what failed next. Missing were
a number of header files that the generic (hardware-independent)
portions of the kernel use but that each port must provide. I created them as
empty files, so that the #include preprocessor directives would at
least succeed, and proceeded onward.
At this point, the kernel build started complaining about missing
macros, variables, and functions—the things that should have been
present in my empty header files and nonexistent C source files. This
told me what I needed to think about implementing. I did so in the
same way as before: For the most part, I implemented the functions as
stubs that didn’t do anything except print an error message. I also started
adding real headers, mostly by copying the x86 headers into my include
directory and removing the things that had no chance of compiling.
After defining many of these useless procedures, I got the UML
build to “succeed.” It succeeded in the sense that it produced a program
I could run. However, running it caused immediate failures due to the
large number of procedures I defined that didn’t do what they were
supposed to—they did nothing at all except print errors. The utility of
these errors is that they told me in what order I had to implement
these things for real.
So, for the most part, I plodded along, implementing whatever
function printed its name first, making small increments of progress
through the boot process with each addition. In some cases, I needed to
implement a subsystem, resulting in a related set of functions.
Implementation continued in this vein for a few months, interrupted
by about a month of real, paying work. In early June, I got UML
to boot a small filesystem up to a login prompt, at which point I could
log in and run commands. This may sound impressive, but UML was
still bug-ridden and full of design mistakes. These would be rooted out
later, but at the time, UML was not much more than a proof of concept.
Because of design decisions made earlier, such fundamental things
as shared libraries and the ability to log in on the main console didn’t
work. I worked around the first problem by compiling a minimal set of
tools statically, so they didn’t need shared libraries. This minimal set of
tools was what I populated my first UML filesystem with. At the time
of my announcement, I made this filesystem available for download
since it was the only way anyone else was going to get UML to boot.

Because of another design decision, UML, in effect, put itself in
the background, making it impossible for it to accept input from the
terminal. This became a problem when you tried to log in. I worked
around this by writing what amounted to a serial line driver, allowing
me to attach to a virtual serial line on which I could log in.
These are two of the most glaring examples of what didn’t work at
that point. The full list was much longer and included other things
such as signal delivery and process preemption. They didn’t prevent
UML from working convincingly, even though they were fairly fundamental
problems, and they would get fixed later.
At the time, Linus was just starting the 2.3 development kernel
series. My first “UML-ized” kernel was 2.0.32, which, even at the time,
was fairly old. So, I bit the bullet and downloaded a “modern” kernel,
which was 2.3.5 or so. This started the process, which continues to this
day, of keeping in close touch with the current development kernels
(and as of 2.4.0, the stable ones as well).
Development continued, with bugs being fixed, design mistakes
rectified (and large pieces of code rewritten from scratch), and drivers
and filesystems added. UML spent a longer-than-usual amount of time
being developed out of tree, that is, not integrated into Linus’ mainline
kernel tree. In part, this was due to laziness. I was comfortable
with the development methodology I had fallen into and didn’t see
much point in changing it.
However, pressure mounted from various sources to get UML into
the main kernel tree. Many people wanted to be able to build UML
from the kernel tree they downloaded from http://www.kernel.org or
got with their distribution. Others, wanting the best for the UML
project, saw inclusion in Linus’ kernel as being a way of getting some
public recognition or as a stamp of approval from Linus, thus attracting
more users to UML. More pragmatically, some people, who were
largely developers, noted that inclusion in the official kernel would
cause updates and bug fixes to happen in UML “automatically.” This
would happen as someone made a pass over the kernel sources, for
example, to change an interface or fix a family of bugs, and would cover
UML as part of that pass. This would save me the effort of looking
through the patch representing a new kernel release, finding those
changes, figuring out the equivalent changes needed in UML, and
making them. This had become my habit over the roughly four years of
UML development before it was merged by Linus. It had become a routine
part of UML development, so I didn’t begrudge the time it took,
but there is no denying that it did take time that would have been better
spent on other things.
So, roughly in the spring of 2002, I started sending updated UML
patches to Linus, requesting that they be merged. These were ignored
for some months, and I was starting to feel a bit discouraged, when out
of the blue, he merged my 2.5.34 patch on September 12, 2002. I had
sent the patch earlier to Linus as well as the kernel mailing list and
one of my own UML lists, as usual, and had not thought about it further.
That day, I was idling on an Internet Relay Chat (IRC) channel
where a good number of the kernel developers hang around and talk.
Suddenly, Arnaldo Carvalho de Melo (a kernel contributor from Brazil
and the CTO of Conectiva, the largest Linux distribution in South
America) noticed that Linus had merged my patch into his tree.
The response to this from the other kernel hackers, and a little
later from the UML community and the wider Linux community, was
gratifyingly positive. A surprisingly (to me) large number of people
were genuinely happy that UML had been merged and that it had
received the recognition they felt it deserved.
At this writing, it is three years later, and UML is still under very
active development. There have been ups and downs. Some months
after UML was merged, I started finding it hard to get Linus to accept
updated patches. After a number of ignored patches, I started maintaining
UML out of tree again, with the effect that the in-tree version
of UML started to bit-rot. It stopped compiling because no one was
keeping it up to date with changes to internal kernel interfaces, and of
course bugs stopped being fixed because my fixes weren’t being merged
by Linus.
Late in 2004, an energetic young Italian hacker named Paolo Giarrusso
got Andrew Morton, Linus’ second-in-command, to include UML
in his tree. The so-called “-mm” tree is a sort of purgatory for kernel
patches. Andrew merges patches that may or may not be suitable for
Linus’ kernel in order to give them some wider exposure and see if they
are suitable. Andrew took patches representing the current UML at the
time from Paolo, and I followed that up with some more patches. Presently,
Andrew forwarded those patches, along with many others, to Linus,
who included them in his tree. All of a sudden, UML was up to date in
the official kernel tree, and I had a reliable conduit for UML updates.
I fed a steady stream of patches through this conduit, and by the
time of the 2.6.9 release, you could build a working UML from the official
tree, and it was reasonably up to date.

Throughout this period, I had been working on UML on a volunteer
basis. I took enough contracting work to keep the bills paid and
the cats fed. Primarily, this meant spending a day a week at the Institute
for Security Technology Studies at Dartmouth College, in northern
New Hampshire, about an hour from my house. This changed around
May and June of 2004, when, nearly simultaneously, I got job offers
from Red Hat and Intel. Both were very generous, offering to have me
spend my time on UML, with no requirements to move. I ultimately
accepted Intel’s offer and have been an Intel employee in the Linux OS
group since.
Coincidentally, the job offers came on the fifth anniversary of
UML’s first public announcement. So, in five years, UML went from
nothing to a fully supported part of the official Linux kernel.
WHAT IS UML USED FOR?
During the five years since UML began, I have seen steady growth in
the UML user base and in the number and variety of applications and
uses for UML. My users have been nothing if not inventive, and I have
seen uses for UML that I would never have thought of.
Server Consolidation
Naturally, the most common applications of UML are the obvious ones.
Virtualization has become a hot area of the computer industry, and
UML is being used for the same things as other virtualization technologies.
Server consolidation is a major one, both internally within organizations
and externally between them. Internal consolidation usually
takes the form of moving several physical servers into the same number
of virtual machines running on a single physical host. External
consolidation is usually an ISP or hosting company offering to rent
UML instances to the public just as they rent physical servers. Here,
multiple organizations end up sharing physical hardware with each other.
The main attraction is cost savings. Computer hardware has
become so powerful and so cheap that the old model of one service, or
maybe two, per machine now results in hardware that is almost totally
idle. There is no technical reason that many services, and their data
and configurations, couldn’t be copied onto a single server. However, it
is easier in many cases to copy each entire server into a virtual machine
and run them all unchanged on a single host. It is less risky since the
configuration of each is the same as on the physical server, so moving it
poses no chance of upsetting an already-debugged environment.
In other cases, virtual servers may offer organizational or political
benefits. Different services may be run by different organizations, and
putting them on a single physical server would require giving the root
password to each organization. The owner of the hardware would naturally
tend to feel queasy about this, as would any given organization
with respect to the others. A virtual server neatly solves this by giving
each service its own virtual machine with its own root password. Having
root privileges in a virtual machine in no way requires root privileges
on the host. Thus, the services are isolated from the physical
host, as well as from each other. If one of them gets messed up, it won’t
affect the host or the other services.
Moving from production to development, UML virtual machines
are commonly used to set up and test environments before they go live
in production. Any type of environment from a single service running
on a single machine to a network running many services can be tested
on a single physical host. In the latter case, you would set up a virtual
network of UMLs on the host, run the appropriate services on the virtual
hosts, and test the network to see that it behaves properly.
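The sketch below gives the flavor, using UML’s standard virtual network pieces (the uml_switch daemon on the host and the eth0=daemon option); the umids and filesystem image names are invented, and a real setup would also assign IP addresses inside each instance:

    # run a virtual switch on the host to carry virtual network traffic
    uml_switch &

    # boot two instances, each with its own COW overlay, attached to the switch
    linux umid=web ubd0=web.cow,root_fs eth0=daemon &
    linux umid=db  ubd0=db.cow,root_fs  eth0=daemon &

Wrapped in a script, a network like this comes up with one command and can be torn down just as quickly.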
In a complex situation like this, UML shines because of the ease of
setting up and shutting down a virtual network. This is simply a matter
of running a set of commands, which can be scripted. Doing this
without using virtual machines would require setting up a network of
physical machines, which is vastly more expensive in terms of time,
effort, space, and hardware. You would have to find the hardware, from
systems to network cables, find some space to put it in, hook it all
together, install and configure software, and test it all. In addition to
the extra time and other resources this takes, compared to a virtual
test environment, none of this can be automated.
In contrast, with a UML testbed, this can be completely automated.
It is possible, and fairly easy, to fully automate the configuration
and booting of a virtual network and the testing of services running on
that network. With some work, this can be reduced to a single script
that can be run with one command. In addition, you can make changes
to the network configuration by changing the scripts that set it up,
rather than rewiring and rearranging hardware. Different people can
also work independently on different areas of the environment by booting
virtual networks on their own workstations. Doing this in a physical
environment would require separate physical testbeds for each person
working on the project.
Implementing this sort of testbed using UML systems instead of
physical ones results in the near-elimination of hardware requirements,
much greater parallelism of development and testing, and greatly
reduced turnaround time on configuration changes. This can reduce
the time needed for testing and improve the quality of the subsequent
deployment by increasing the amount and variety of testing that’s possible
in a virtual environment.
A number of open source projects, and certainly a much larger
number of private projects, use UML in this way. Here are a couple
that I am aware of.
☞ Openswan (http://www.openswan.org), the open source IPSec project,
uses a UML network for nightly regression testing and its kernel
development.
☞ BusyBox (http://www.busybox.net), a small-footprint set of Linux
utilities, uses UML for its testing.
Education
Consider moving the sort of UML setup I just described from a corporate
environment to an educational one. Instead of having a temporary
virtual staging environment, you would have a permanent virtual environment
in which students can wreak havoc and, in doing so, hopefully
learn something.
Now, the point of setting up a complicated network with interrelated
services running on it is simply to get it working in the virtual
environment, rather than to replicate it onto a physical network once
it’s debugged. Students will be assigned to make things work, and once
they do (or don’t), the whole thing will be torn down and replaced with
the next assignment.
The educational uses of UML are legion, including courses that
involve any sort of system administration and many that involve programming.
System administration requires the students to have root
privileges on the machines they are learning on. Doing this with physical
machines on a physical network is problematic, to say the least.
As root, a student can completely destroy the system software
(and possibly damage the hardware). With the system on a physical
network, a student with privileges can make the network unusable by,
wittingly or unwittingly, spoofing IP addresses, setting up rogue DNS
or DHCP servers, or poisoning ARP (Address Resolution Protocol)[1]
caches on other machines on the network.
These problems all have solutions in a physical environment.
Machines can be completely reimaged between boots to undo whatever
damage was done to the system software. The physical network can be
isolated from any other networks on which people are trying to do real
work. However, all this takes planning, setup, time, and resources that
just aren’t needed when using a UML environment.
The boot disk of a UML instance is simply a file in the host’s filesystem.
Instead of reimaging the disk of a physical machine between
boots, the old UML root filesystem file can be deleted and replaced with
a copy of the original. As will be described in later chapters, UML has a
technology called COW (Copy-On-Write) files, which allow changes to a
filesystem to be stored in a host file separate from the filesystem itself.
Using this, undoing changes to a filesystem is simply a matter of deleting
the file that contains the changes. Thus, reimaging a UML system
takes a fraction of a second, rather than the minutes that reimaging a
disk can take.
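As a concrete sketch (file names invented), the whole cycle looks something like this; if the COW file named on the command line does not exist, UML creates it:

    # boot: changes land in student.cow, leaving root_fs pristine
    linux ubd0=student.cow,root_fs

    # "reimage": discard the changes, then boot again from a clean state
    rm student.cow
    linux ubd0=student.cow,root_fs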
Looking at the network, a virtual network of UMLs is by default
isolated from everything else. It takes effort, and privileges on the host,
to allow a virtual network to communicate with a physical one. In addition,
an isolated physical network is likely to have a group of students
on it, so that one sufficiently malign or incompetent student could prevent
any of the others from getting anything done. With a UML
instance, it is feasible (and the simplest option) to give each student a
private network. Then, an incompetent student can’t mess up anyone
else’s network.
[1] ARP is used on an Ethernet network to convert IP addresses to Ethernet addresses.
Each machine on an Ethernet network advertises what IP addresses
it owns, and this information is stored by the other machines on the network
in their ARP caches. A malicious system could advertise that it owns
an IP address that really belongs to a different machine, in effect, hijacking
the address. For example, hijacking the address of the local name server
would result in name server requests being sent to the hijacking machine
rather than the legitimate name server. Nearly all Internet operations begin
with a name lookup, so hijacking the address of the name server gives
an enormous amount of control of the local network to the attacker.

UML is also commonly used for learning kernel-level programming.
For novice to intermediate kernel programming students, UML
is a perfect environment in which to learn. It provides an authentic
kernel to modify, with the development and debugging tools that
should already be familiar. In addition, the hardware underneath this
kernel is virtualized and thus better behaved than physical hardware.
Failures will be caused by buggy software, not by misbehaving devices.
So, students can concentrate on debugging the code rather than diagnosing
broken or flaky hardware.
Obviously, dealing with broken, flaky, slightly out-of-spec, not-quite-standards-compliant
devices is an essential part of an expert
kernel developer’s repertoire. To reach that exalted status, it is necessary
to do development on physical machines. But learning within a
UML environment can take you most of the way there.
Over the years, I have heard of educational institutions teaching
many sorts of Linux administration courses using UML. Some commercial
companies even offer system administration courses over the Internet
using UML. Each student is assigned a personal UML, which is
accessible over the Internet, and uses it to complete the coursework.
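One plausible way to arrange that access, as a sketch (the port number is invented; this relies on UML’s console channel option for attaching a console to a host TCP port, which can then be reached with telnet):

    # attach the main console to host TCP port 9001 (telnet to reach it)
    linux ubd0=student.cow,root_fs con0=port:9001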
