
Moving from system administration to development, I’ve seen a number
of programming courses that use UML instances. Kernel-level programming
is the most obvious place for UMLs. A system-level
programming course is similar to a system administration course in
that each student should have a dedicated machine. Anyone learning
kernel programming is probably going to crash the machine, so you
can’t really teach such a course on a shared machine.
UML instances have all the advantages already described, plus a
couple of bonuses. The biggest extra is that, as a normal process running
on the host, a UML instance can be debugged with all the tools
that someone learning system development is presumably already
familiar with. It can be run under the control of gdb, where the student
can set breakpoints, step through code, examine data, and do everything
else you can do with gdb. The rest of the Linux development
environment works as well with UML as with anything else. This
includes gprof and gcov for profiling and test coverage and strace
and ltrace for system call and library tracing.
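As a sketch of what this looks like in practice (the binary name linux and the root filesystem image name root_fs are the conventional ones for a UML build; paths and options will vary with your setup), a debugging session might begin with:

```shell
# Build UML: ARCH=um produces a normal host executable named "linux"
make ARCH=um defconfig
make ARCH=um

# Debug the guest kernel as an ordinary process
gdb ./linux
# Inside gdb -- UML uses SIGSEGV internally, so it is usually
# passed through rather than trapped:
#   (gdb) handle SIGSEGV pass nostop noprint
#   (gdb) break panic
#   (gdb) run ubd0=root_fs mem=256M

# The usual tracing tools apply unchanged
strace -f -o syscalls.log ./linux ubd0=root_fs mem=256M
```

The same pattern extends to gprof and gcov: because the kernel is just a host executable, it can be compiled and run with whatever instrumentation the host toolchain supports.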

Another bonus is that, for tracking down tricky timing bugs, the
debugging tool of last resort, the print statement, can be used to dump
data out to the host without affecting the timing of events within the
UML kernel. With a physical machine, this ranges from extremely
hard to impossible. Anything you do to store information for later
retrieval can, and probably will, change the timing enough to obscure
the bug you are chasing. With a UML instance, time is virtual, and it
stops whenever the virtual machine isn’t in the host’s userspace, as it
is when it enters the host kernel to log data to a file.


A popular use for UML is development for hardware that does not
yet exist. Usually, this is for a piece of embedded hardware—an appliance
of some sort that runs Linux but doesn’t expose it. Developing the
software inside UML allows the software and hardware development to
run in parallel. Until the actual devices are available, the software can
be developed in a UML instance that is emulating the hardware.
Examples of this are hard to come by because embedded developers
are notoriously close-lipped, but I know of a major networking
equipment manufacturer that is doing development with UML. The
device will consist of several systems hooked together with an internal
network. This is being simulated by a script that runs a set of UML
instances (one per system in the device) with a virtual network running
between them and a virtual network to the outside. The software
is controlling the instances in exactly the same way that it will control the
systems within the final device.
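A minimal sketch of that kind of setup, using uml_switch from the uml_utilities package (the instance count, image names, and socket path here are invented for illustration):

```shell
#!/bin/sh
# One virtual switch serves as the device's internal network
uml_switch -unix /tmp/internal.sock &

# Boot one UML instance per system in the simulated device; each
# gets a private copy-on-write layer over a shared root image and
# a virtual NIC plugged into the switch
for i in 1 2 3; do
    ./linux umid=sys$i \
            ubd0=sys$i.cow,root_fs \
            eth0=daemon,,unix,/tmp/internal.sock \
            mem=128M &
done
wait
```

A second eth device on each instance, attached to a host TUN/TAP interface rather than the switch, would play the role of the network to the outside.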
Going outside the embedded device market, UML is used to simulate
large systems. A UML instance can be given a very large amount of
memory, many processors, and many devices. It can have more of all of
these than the host itself has, making it an ideal way to simulate a
larger system than you can buy. In addition to simulating large systems,
UML can also simulate clusters. A couple of open source clustering systems
and a larger number of cluster components, such as filesystems
and heartbeats, have been developed using UML and are distributed in
a form that will run within a set of UMLs.
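To make the oversizing concrete (ubd0= and mem= are standard UML boot options; ncpus= assumes a UML kernel built with SMP support, and the values here are deliberately larger than a typical host):

```shell
# A single instance claiming more RAM and CPUs than the host has.
# UML's "physical" memory is backed by a host file and paged in on
# demand, so the host does not need this much real RAM.
./linux ubd0=root_fs mem=8G ncpus=4
```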
