Hall of Mirrors: Virtualization, part 2

As Chaos Central incorporates the new virtualization server into the network, the already convoluted relationship between desktop presentations and locality is rapidly becoming a hall of mirrors.  If you’ve ever stood between facing mirrors, you’ve seen the effect of looking at a reflection of a reflection, repeated into the distance until either reflectivity losses fade the shrinking copies of yourself to black or a slight misalignment of the mirrors curves the images out of sight.  Or you might have experienced a more elaborate hall of mirrors at a carnival, where angled mirrors show images from distant parts of a maze, or multiple images of yourself from different angles.

In computer networks, server and desktop virtualization, where one physical machine appears to be many separate machines, depends on remote network connections: terminal sessions, the X Window System, web applications, or, in the case of desktop virtualization, Virtual Network Computing (VNC) services.  At Chaos Central, we use all of these tools, both within our own network and to maintain servers and applications at remote client sites.  But, while we’re incorporating the new server into our network, things get a little fun-house disorienting.

We’re in the middle of a two-month-long data migration project at a client site, so the laptop that has been the mainstay of the company for the past year has, in one pane, a window that shows the desktop, or console, of a virtualization host, forwarded through a secure tunnel connection with several anchor points in the customer’s network.  In that window are several terminal sessions, logged into client systems served by that remote host, and a web browser looking at the application on one of those hosts.  Confused?  Once back at Chaos Central, it gets more complicated.
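A tunnel like that can be sketched with ordinary SSH port forwarding, chained hop by hop through the customer’s network.  The host names, user names, and port numbers below are hypothetical placeholders, not the actual customer configuration:

```shell
# Hop 1: from home, forward a local port through the customer's
# gateway (the first "anchor point" in the chain)
ssh -L 5901:localhost:5901 user@gateway.example.com

# Hop 2: from the gateway, forward that same port on to the
# virtualization host deeper inside the customer's network
# (5900 is the default VNC display port)
ssh -L 5901:virthost.internal:5900 user@gateway.example.com

# Back home, a VNC viewer pointed at the local end of the chain
# shows the remote host's console as if it were on the local LAN
vncviewer localhost:5901
```

Each hop only needs to see the next machine in line, which is what makes the chain work through firewalls; the price is that every hop adds its own latency to the illusion.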

In the process of reclaiming the upstairs office at Chaos Central, we’ve installed the new virtualization server on one side of the room, with its token text-mode console, and the old Windows XP machine with the new wide-screen monitor (a necessity, as we will see) on the larger desk.  Since the children’s electronics and paperwork still occupy the large worktable, and there are other complications with currently-running applications, we can’t move the laptop right now.  But, since the Windows machine has been converted essentially into a graphics terminal and serves as the control station for the virtualization server, the laptop is virtually moved upstairs through the magic of VNC.
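Moving a desktop across the room like this takes only a VNC server on one end and a viewer on the other.  A minimal sketch, with a hypothetical host name and display number:

```shell
# On the laptop downstairs: start a VNC server on display :1,
# which listens on TCP port 5901 (5900 + display number)
vncserver :1 -geometry 1280x1024

# On the Windows machine upstairs: point the TightVNC viewer at
# that display (from a command prompt, or via the GUI dialog)
tvnviewer laptop.chaos.local:1
```

The viewer window then holds the laptop’s entire desktop, panes, browsers, and all, as one more reflection on the wide screen upstairs.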

So, now we have a simple Windows XP machine, which is, in itself, running only the Citrix XenServer management console, Firefox, the TightVNC client, and PuTTY, an SSH terminal emulator (and, of course, the 1920×1080 widescreen monitor).  On-screen is where the hall of mirrors materializes.  Inside the XenServer console, we can select the consoles, text or low-resolution graphics, one at a time, for each of the growing list of computer systems being built on the virtualization host.  Once built, we connect with each of them through VNC (to display the desktop), or PuTTY, or with a web application running on the virtual machine via a tab in the Firefox browser.

One of the VNC sessions on this now-cluttered screen is the ongoing business running on the laptop downstairs, which has, in the six panes currently configured, multiple terminal sessions, a Google Chrome browser with 30-odd tabs open, a Firefox browser with several tabs open, a half-dozen PDF documents open in xpdf or Adobe Acrobat Reader, and, of course, the VNC session linked to that far-off virtualization host and its client virtual machines.

Of course, all of this is not about the structure of this shaky hall of mirrors, but about being connected to the data we need to do a job, wherever we are and no matter where the results are delivered.  Virtualization is not new: the original premise of the Unix system was to make it appear to each of many users that he or she alone was running the computer: one computer became hundreds, in a fly’s-eye view.  The use of terminals connected by cables or the telephone system allowed users to be physically separated from the actual machine.

The development of graphical user interfaces allowed a person at a personal computer or graphics terminal to connect to many processes on the same machine or to processes on many separate machines.  The ability to do the latter gave rise to the proliferation of special-purpose servers, each running a different application or set of applications, and using one of a variety of operating systems.  As computers got smaller, data centers continued to grow, replacing large mainframe systems with racks of specialized or redundant servers.  Space, power, and cooling requirements soared, along with the considerable investment in hardware and personnel to install, configure, and run the many systems.  Expanding the network often meant knocking out walls, or worse, moving the entire complex to a new building.  With uninterrupted data flow a business necessity, moving existing hardware was no longer an option: it became necessary to reduce the hardware footprint to permit redundancy of both hardware and data.  The first foray into accomplishing this was the introduction of blade servers, where separate processor boards shared a power supply, coupled with storage networks, where disk space became separate from the processors.

Still, adding capability required adding more hardware: hardware that operating logs showed was largely idle, waiting for requests from users, or in standby to take over in case of failure.  The way to solve this was virtualization, where complete server software and operating-system images could share a single hardware cluster, balanced for optimal loading and distributed across minimally redundant processing and storage capability.

All of this evolutionary and revolutionary design provides continuous business data availability, if done well.  For the IT department, it represents a continually changing environment, moving and replicating systems behind the scenes, while business goes on.  A bit less stressful than the career-breaking and empire-toppling outages possible in the days of non-redundant large systems, but unheralded and unnoticed, if everything goes well.

For the developer and consultant, virtualization means having many different computer systems at one’s disposal, with only one major hardware investment.  But, the migration is still labor-intensive, and results in a screen with hundreds of nested windows, where the depth of redirection can sometimes be detected only in the time delay between hand movements and movements on the screen, where network switching delays and redirection destroy the illusion of locality.