Hall of Mirrors: Virtualization, part 2

As Chaos Central incorporates the new virtualization server into the network, the already convoluted relationship between desktop presentations and locality is rapidly becoming a hall of mirrors.  If you’ve ever stood between facing mirrors, you’ve seen the effect of looking at a reflection of a reflection, repeated into the distance as either reflectivity losses fade the shrinking copies of yourself to black or a slight misalignment of the mirrors curves the images out of sight.  Or, you might have experienced a more elaborate hall of mirrors at a carnival, where angled mirrors show images from distant parts of a maze, or multiple images of yourself from different angles.

In computer networks, server and desktop virtualization, where one physical machine appears to be many separate machines, depends on remote network connections: terminal sessions, the X Window System, web applications, or, in the case of desktop virtualization, Virtual Network Computing (VNC) services.  At Chaos Central, we use all of these tools, both within our own network and to maintain servers and applications at remote client sites.  But, while we’re incorporating the new server into our network, things get a little fun-house disorienting.

We’re in the middle of a two-month-long data migration project at a client site, so the laptop that has been the mainstay of the company for the past year has, in one pane, a window that shows the desktop, or console, of a virtualization host, forwarded through a secure tunnel connection with several anchor points in the customer’s network.  In that window are several terminal sessions, logged into client systems served by that remote host, and a web browser looking at the application on one of those hosts.  Confused?  Once back at Chaos Central, it gets more complicated.

In the process of reclaiming the upstairs office at Chaos Central, we’ve installed the new virtualization server on one side of the room, with its token text-mode console, and the old Windows XP machine with the new wide-screen monitor (a necessity, as we will see) on the larger desk.  Since the children’s electronics and paperwork still occupy the large worktable, and there are other complications with currently-running applications, we can’t move the laptop right now.  But, since the Windows machine has been converted essentially into a graphics terminal and serves as the control station for the virtualization server, the laptop is virtually moved upstairs through the magic of VNC.

So, now we have a simple Windows XP machine, which is, in itself, running only the Citrix XenServer management console, Firefox, the TightVNC client, and PuTTY, an SSH terminal emulator (and, of course, the 1920×1080 widescreen monitor).  On-screen is where the hall of mirrors materializes.  Inside the XenServer console, we can select the console, text or low-resolution graphics–one at a time–for each of the growing list of computer systems being built on the virtualization host.  Once built, we connect with each of them through VNC (to display the desktop), or PuTTY, or with a web application running on the virtual machine via a tab in the Firefox browser.

One of the VNC sessions on this now-cluttered screen is the on-going business running on the laptop downstairs, which has, in the six panes currently configured, multiple terminal sessions, a Google Chrome browser with 30-odd tabs open, a Firefox browser with several tabs open, a half-dozen PDF documents open in xpdf or Adobe Acrobat Reader, and, of course, the VNC session linked to that far-off virtualization host and its client virtual machines.

Of course, all of this is not about the structure of this shaky hall of mirrors, but about being connected to the data we need to do a job, wherever we are and no matter where the results are delivered.  Virtualization is not new: the original premise of the Unix system was to make it appear to each of many users that he or she was running the computer:  one computer became hundreds, in a fly-eye view.  The use of terminals connected by cables or the telephone system allowed users to be physically separated from the actual machine.

The development of graphical user interfaces allowed a person at a personal computer or graphics terminal to connect to many processes on the same machine or to processes on many separate machines.  The ability to do the latter gave rise to the proliferation of special-purpose servers, each running a different application or set of applications, and using one of a variety of operating systems.  As computers got smaller, data centers continued to grow, replacing large mainframe systems with racks of specialized or redundant servers.  Space, power, and cooling requirements soared, along with the considerable investment in hardware and personnel to install, configure, and run the many systems.  Expanding the network often meant knocking out walls, or worse, moving the entire complex to a new building.  With uninterrupted data flow a business necessity, moving existing hardware was no longer an option: it became necessary to reduce the hardware footprint to permit redundancy of both hardware and data.  The first foray into accomplishing this was the introduction of blade servers, where separate processor boards shared a power supply, coupled with storage networks, where disk space became separate from the processors.

Still, adding capability required adding more hardware: hardware that operating logs showed was largely idle, waiting for requests from users, or in standby to take over in case of failure.  The way to solve this was virtualization, where complete server software and operating system images could be shared by a single hardware cluster, balanced for optimal loading, and distributed across minimally-redundant processing and storage capability.

All of this evolutionary and revolutionary design provides continuous business data availability, if done well.  For the IT department, it represents a continually changing environment, moving and replicating systems behind the scenes, while business goes on.  A bit less stressful than the career-breaking and empire-toppling outages possible in the days of non-redundant large systems, but unheralded and unnoticed, if everything goes well.

For the developer and consultant, virtualization means having many different computer systems at one’s disposal, with only one major hardware investment.  But, the migration is still labor-intensive, and results in a screen with hundreds of nested windows, where the depth of redirection can sometimes be detected only in the time delay between hand movements and movements on the screen, where network switching delays and redirection destroy the illusion of locality.

Virtual(ization) Reality

When Chaos Central was centered in Montana, we used to say our house was heated by computers.  This was partially true: when we retired our hand-me-down backup server, a huge Dell 2400-series server, our electric bill dropped by $5 per month, and the temperature in the garage workshop (aka the world’s smallest aircraft factory) dropped by 5 degrees Fahrenheit.  We didn’t have virtualization then, but we did have a single-board computer installed in one of our SPARC Solaris machines that ran Windows.

Over the years, we have had separate machines for Windows, Linux (SuSE, Red Hat, Ubuntu), FreeBSD, and Solaris, usually at least six running at any given time, plus at least one laptop for road trips, rigged for dual boot with Windows and Linux.  In our move to Washington State, the FreeBSD machine and one of the two Solaris/SPARC machines did not survive a couple of months in storage, so we have been down to four systems: the remaining Solaris/SPARC development machine, a Windows XP machine for running TurboTax, an Ubuntu Linux desktop for the Nice Person’s fabric arts business, and an Ubuntu Linux laptop, to which the Unix Curmudgeon has been relegated for everything else.

Having a laptop as primary computer is a good idea for frequent travelers, but the average laptop just doesn’t have the heavy lifting capability to run a Unix consultancy. Plus, we were traveling often enough that taking turns at the keyboard just wasn’t fitting our frantic schedule, so we added a netbook (running Ubuntu, of course) to the mix, which gets used mostly “on the road.”   Yet, the laptop ends up with too much (we’ve maxed the memory and disk capabilities), including things we don’t use on the road, and too little, because we are restricted to one system.

Meanwhile, the Unix Curmudgeon encroached on the Nice Person’s big Linux machine, installing VirtualBox with virtual machines running FreeBSD, CentOS, Fedora Linux, and Windows XP.  We’ve been running our finances with Quicken under Crossover Office or Wine on Linux for a long time, but some of the features in our 2002 edition weren’t available.  Just changing the company address on our invoices took exporting and hacking the QIF file and re-importing it, and we’d had a few scary recovery moments.  So, we upgraded to Quicken 2010, which, like many products now, is essentially incomplete in itself and requires Genuine Microsoft Windows to run.  This led to yet another instance of Windows XP (yes, we have licenses for these, along with the OEM disks, from prior systems, of which all of the hardware except the side panels with the activation code has gone to the recycle center) in virtualization.

Since the desktop Linux machine runs best with only one virtual machine at a time, it was time to rethink the computer inventory at Chaos Central.  For the past nine months, Chaos Central has been a multi-generational household, with children and grandchildren stashed in “spare” bedrooms, and the Unix Curmudgeon’s office taken over by the children for their own home office, leaving Information Engineering Services as a virtual company, operating from a laptop wedged between the floor looms in the Realizations Fabric Arts weaving studio/office/sewing room in the basement.   But, with the possibility of the nest being emptied (again) soon, the Unix Curmudgeon has stealthily begun to retake a bit of real estate in the upstairs office.

In the interest of efficiency and performance, we added a new Dell server to the stack, one several generations removed from the old Dell 2400 retired in 2008, and with a considerably smaller footprint.  Now, Sun VirtualBox is an excellent virtualization system–for a workstation–but, with Oracle systematically dismantling many of Sun’s former offerings, and the need to provide network-wide services, we decided to go with a Xen-based system.  It is simple enough to provision virtual machines on any modern Linux, and nothing we haven’t already done for clients, but it does take a bit of skill in configuration.  A number of companies, like Citrix and VMware, have packaged virtualization solutions that run as the primary operating system on the virtualization host.  We selected Citrix XenServer as the candidate platform, if for no other reason than that it has a stripped-down, bare-bones free version: the platform foundation is a minimal CentOS distribution with the Xen management utility replaced by a custom Citrix application (which, inexplicably, runs only on Windows).  Unlike the VMware 60-day free trial, the Citrix version has no time limit, so evaluation can be more leisurely, and the limited feature set is probably adequate in the long term for a small consultancy that doesn’t have to manage dozens of VMs or worry about high-availability solutions.

The problem with adding a server-class machine to a small network is that a server is designed to be just that–a workhorse machine that delivers over the network, so most have robust disk controllers and high-performance disks, and high-speed networks, but poor or non-existent graphics capabilities.  We took the opportunity to acquire a large-format monitor (1920×1080) with our purchase, but the video capabilities of the server are way too modest to drive it (i.e., fixed video memory, no AGP).  But, because we do have the old obligatory, token Windows machine hanging around (which also runs Ubuntu in a Wubi installation), and we have to use it for the Xen management console anyway, the new monitor works quite well on the old machine as a simple graphics terminal.  The connections with the virtual machines will be via VNC anyway, so it isn’t necessary to have a graphics console on the server itself.  We dragged a surviving 14-inch monitor (which once served as part of a home-built “portable desktop” mini-ITX system) out of storage to serve as the server console monitor.

Building the virtual systems promises to be quite an adventure in itself.  A common use for virtualization in the Enterprise is to consolidate Windows servers that used to be relegated to separate machines, primarily due to the memory-management limitations of 32-bit Windows, or to provide Linux  in virtual private servers for web clients, where the virtual machines are essentially all built from similar templates.  In our case, we need to provide multiple Linux distributions, along with one or more versions of Windows, for software testing and development for Intel and AMD systems.  The adventure begins now.  Stay tuned for progress reports.

Labor Day

Labor Day is just another workday when you work from home…

Here we are, on the first Labor Day weekend since our “non-retirement” nearly a year ago, also the first anniversary of unloading at least some of our “stuff” at our new home and close to the second anniversary of the official start of the Great Recession.  It is becoming increasingly apparent that the era of the traditional “retirement” that folks born before the Great Depression have enjoyed is over.  Those of us between the Greatest Generation and the Boomers are in a kind of limbo where expectations and reality are diverging rapidly.

A number of factors weigh in on this sea change in our perception of the “Golden Years” as we approach the second decade in the new century.  First, many of us in our seventh decade of life are much more active and healthy than our parents’ generation was.  In the mid-twentieth century, retirement age was very close to the average life expectancy of those entering the workforce.  Statistically, only half of those entering the workforce would survive to see retirement, and those who did were not expected to be productive workers by that time, and lived in retirement only a few years.  Today, the average life expectancy of someone entering the workforce as a young adult is in the mid-70s.  Those of us already at retirement age can expect to live another 16 years–on average!

This is no surprise, as the life expectancy has been increasing steadily for many decades, while the retirement age, set in the 1930s, remained the same until recently and is only gradually rising.  “Retirement” is no longer an extended vacation or extremely long weekend of household puttering: few today, raised in the age of easy credit and ravaged savings, have the finances to travel and play or indulge in expensive hobbies for decades, and, well, not all of us enjoy home repairs and gardening all that much to devote all our waking hours to it.  So, retirement becomes a new career opportunity, either to seek a new career or transform our old one.

In the case of the Unix Curmudgeon, our life hasn’t changed much so far, except the office is downstairs instead of across town, but we’ve been in that situation before, in the mid-1990s; the hours are a bit more flexible, but the result is that just as much work gets done, for less money and stretched across longer days.   Tasking is still driven by email and the intensity (and volume) of work varies proportionally with the urgency of server log messages.

“Working from Home” is the flip side of   “FAXing from the Beach,” the topic of an earlier post.  While the home worker has increased personal freedom, he or she is essentially immersed in work, 24x7x365. Still, this arrangement has its merits:  work gets done, but on a personal time scale.  The world of business isn’t quite ready for this transition, though.  The modern office culture, developed when communications was by memo rushed from desk to desk, is archaic when documents and messages can be instantly transmitted around the world, but is ingrained in corporate policies and procedures and the mindset that unseen workers are unsupervised workers.  Granted, telecommuting doesn’t scale well to many jobs, though it is surprising how many it does cover well.

System administration and programming are nearly ideal candidates for remote work, except for those rare times when a certain amount of “laying on of hands” is necessary, like installing, removing, or repairing hardware.  Installation and removal can be largely scheduled well in advance.  The computers I tend most of the time are physically 600 miles from my home-based office, sometimes further if I am traveling: I have made in-person appearances several times over the past year, sometimes just to remind the community-at-large that I still do work there, and sometimes to man-handle the hardware.

My last trip to the client’s site involved shutting down and removing an old server, a more or less symbolic act, since I had, over the previous few weeks, migrated the functionality to other machines and even changed the identity of the machine on the network.  The other part of the equation, repair, becomes almost a non-issue with the proliferation of virtualization and high-availability clustering.  As long as there are enough hosts to handle all of the services needed, the virtual server images can be moved from one to the other and computing loads balanced among the remaining servers in a cluster, almost behind the scenes, so that even repairs can be scheduled at a convenient time.

So, this Labor Day, as I monitor the migration of hundreds of thousands of files to consolidate several older, smaller servers’ data into one, and draft a quote for the next year’s contract with my primary client, I realize that retirement is a relic of the past: the private and public pensions available to persons of a certain age only serve to supplement the uncertain income of hourly contract fees and permit a slightly more relaxed work schedule (though when projects dictate, the workday sometimes runs well over eight hours and through the weekend).  Labor Day is a celebration of continuing to be a productive member of the workforce, at an age when previous generations of workers were expected to cash in and step aside.  That was a noble idea when social security was invented, to make room for younger, more able workers in an age when unemployment was more severe than it is today.  But, today, skill and experience count, too, and age is not a factor for knowledge workers, as long as their minds are agile and accepting of new ideas.  We’re used to that–in the 45 years we’ve been in this business, the only constant has been the rate of change, governed by Moore’s Law.  Ever-increasing speed and capacity opens new avenues for change.  Some of those changes have been waiting years to be practical to implement: for instance, microprogramming, the principle on which most modern computer architectures are based, was invented in the early 1950s, but had to wait until the development of large-scale integrated circuit chips in the late 1970s to become realizable on a practical basis.  Multi-processors and distributed clusters, once reserved only for vital scientific and military purposes, are now affordable to everyone, but only a few of us have long experience with programming and using them.

The exuberance and curiosity of youth is a great boon to business, but so is the calculated patience of us elders.  The curve of Moore’s Law is flattening–the secret to speed and capacity in the future lies in the techniques of the past, by which we wrung performance out of those early, limited systems, with slow processors and small memories.  Old is useless only if what it does isn’t needed anymore (if it ever was) or has been superseded by a paradigm shift.  Who knows paradigm shift better than those of us who have lived through the entire history of computing?

Wired Again – There’s No Place Like Home

After 4700 miles in three weeks of “Road Tour 2010,” we are back at Chaos Central, with our wired (and optionally, wireless) network, and can resume full-contact computing.  The Unix Curmudgeon has been itching to reconfigure the Nice Person’s new HP Mini to run Ubuntu, but the hard fact of life is that you need a wired connection to get wireless working on an HP with the Broadcom chipset, or some other way to get the STA package loaded.

We did load up Ubuntu Netbook 10.04 on a USB stick, and it looks great, though slow in the “live CD” mode. The menu is a lot more full-featured than the otherwise excellent HP QuickWeb interface. But, to be useful for work, the system needs to be installed on the hard drive.

Our preference, of course, is to build a machine from the ground up with Linux or FreeBSD or Solaris on it, but there aren’t too many laptop barebones kits available, so for our mobile computers, we start with a machine that has Windows pre-installed. Being frugal as well as curmudgeonly, it seems a shame to throw away something we’ve paid for, so we don’t usually take the wipe-the-disk option when installing Linux on an existing Windows machine. Besides, as a software developer and occasional consumer of Windows-based software that simply won’t run under WINE, it’s handy to have Windows available when we absolutely can’t avoid it, so we opt for a dual-boot system.

There are two ways to relegate the Microsoft Tax penalty to a no-interest savings account: either repartition the hard drive to squeeze the Windows installation aside to provide a Linux native partition, or use Wubi (if Ubuntu is your distro of choice). On my development machine, I took the repartition route, since Wubi wasn’t integrated with Ubuntu when I started, and I need all the performance I can get. For most users, Wubi is the way to go, as there is no scary repartitioning, and actually no need to burn an install CD, as Wubi can be downloaded as a small EXE file.

The Nice Person asked if we had to keep Windows: I said we might need it, “just in case,” so she agreed to leave it in place, as long as it didn’t sneak up on us and boot when we didn’t want it (which it does do “out of the box” if you boot the machine and don’t click somewhere on the HP Quickweb screen before the 15-second countdown ends).

Wubi works by creating a large file in the Windows NTFS file system with a Linux ext4 file system built inside it.  Wubi adds an entry to the Windows NT boot loader; when that entry is selected at boot time, it loads GRUB for the Linux boot, with alternate kernel and recovery options.  [There are some more details that make all this possible, but that’s the principle.]
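The principle of a filesystem living inside an ordinary file is easy to demonstrate on any Linux box.  A minimal sketch (the file name and size here are just for illustration; Wubi’s actual container is typically root.disk under the ubuntu folder on the Windows drive):

```shell
# Create a 64 MB container file -- the same trick Wubi uses at a larger scale
dd if=/dev/zero of=/tmp/wubi-demo.img bs=1M count=64

# Build an ext4 filesystem inside the plain file
# (-F: proceed even though the target is not a block device)
mkfs.ext4 -F -q /tmp/wubi-demo.img

# Wubi's boot environment then loop-mounts the container as the root
# filesystem, roughly:  mount -o loop /host/ubuntu/disks/root.disk /
file /tmp/wubi-demo.img    # the file now contains ext4 filesystem data
```

From Windows’ point of view, the whole Linux installation is just one big file on the NTFS volume, which is why no repartitioning is needed and why uninstalling is as simple as deleting it.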

The problem with installing Ubuntu using Wubi is, of course, that you do have to install it under Windows, with all the frustration that entails. Since we only boot Windows when we absolutely must, that chore becomes an ordeal of warnings about virus protection out of date, missing updates, and so forth, which take an hour or two to resolve, along with several reboots and “Do not touch this machine” admonitions before we can actually get started with our own work.  I did take the opportunity to install ClamAV for Windows–no need to subscribe to the paid anti-virus and spyware suites if Windows will be rarely used.

At this writing, we have, squirreled away in hard disk partitions on machines that came with it, and in virtual machine loop-back files on the native Unix/Linux systems, at least one copy each of Windows 98, Windows XP, Vista, and Windows 7, so we are pretty  much covered for any software testing we want to do, or for the handful of programs we can’t get to run under WINE on Linux.

The Wubi install went well, except Windows went to sleep during the download, which delayed things.  Despite our having a bootable “live” USB drive, Wubi downloads the ISO via a torrent, so we had to not only stay wired but also forward high-numbered ports through the firewall to the netbook during the install, not something with which we’re comfortable while using Windows, but at least the privileged ports were still protected.

Of course, after installing, it is necessary to run the Update Manager.  When updating a Wubi installation with a new kernel, it is important to check the “no, I don’t want to install GRUB” box when asked, if you want to keep the NTloader for the initial boot screen.  At some point, Ubuntu realizes it doesn’t have a driver loaded for the Broadcom wireless, so it will try to get one from the ‘Net.  This driver is available on the alternate install CD, but it is easier to install it through a wired connection.  Ubuntu 9.04 and 9.10 didn’t work with the STA driver provided by Canonical, either downloaded or on the alternate install disk, so you had to obtain the source package from Broadcom and compile and install it with ‘make’, but the STA driver package provided for 10.04 worked just fine.  After installing, just click on the network icon on the task bar to look for wireless networks.
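For the record, pulling in the packaged STA driver over the wired link takes only a couple of commands.  A sketch, assuming the Canonical-packaged driver for 10.04 (the package name is the one Ubuntu shipped for Broadcom STA chips in that era; earlier releases needed the source tarball from Broadcom instead):

```shell
# Refresh package lists over the wired connection, then install the
# Broadcom STA wireless driver packaged by Canonical (a DKMS-built module)
sudo apt-get update
sudo apt-get install bcmwl-kernel-source

# Load the proprietary module without rebooting; the open b43/ssb drivers
# must not claim the chip first, so unload them if present
sudo modprobe -r b43 ssb
sudo modprobe wl
```

After that, the network icon on the task bar should list the nearby wireless networks, as described above.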

We’re used to working in a network, so having an SSH daemon and other networking packages is handy.  Accordingly, I allocated a lot more space for the Wubi disk, to make sure I had enough room for extra packages.  However, the netbook is just that–a modestly-powered Internet “terminal”–so we don’t expect to do any heavy lifting with the device.  Having Ubuntu installed in addition to the HP QuickWeb will make road trips seem almost like being at home.

One last chore is to boot into Windows once more and change the boot order so that Ubuntu, rather than Windows, follows HP QuickWeb, to satisfy the Nice Person that she won’t have Windows foisted on her by accident.  In the Start menu, right-click on “Computer” and click “Properties.”  In the window that opens, select “Advanced System Settings,” then, under the startup options, select Ubuntu as the default system to boot.  In the “old days” of NT4, we used to hack the boot.ini file, but that seems to have been folded into a binary format in later versions.  And, it works!  Windows doesn’t boot up by accident now.  Life is good.
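For those who prefer the command line, the same default can be set with `bcdedit`, the tool that replaced hand-editing boot.ini when the boot configuration moved into the binary BCD store.  A sketch, run from an Administrator command prompt (the identifier shown is a placeholder; use whatever GUID the listing reports for the Ubuntu entry):

```shell
:: List all boot entries and note the identifier of the Ubuntu/Wubi entry
bcdedit /enum

:: Make that entry the default (substitute the identifier from the listing)
bcdedit /default {00000000-0000-0000-0000-000000000000}

:: Optionally shorten the boot menu timeout (in seconds)
bcdedit /timeout 5
```

Either route, GUI or command line, edits the same BCD store, so pick whichever feels less like touching Windows.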

FAXing From the Beach: Mixing Business and Travel

A number of years ago, a forward-thinking company (I forget which one) put an ad on TV that zoomed in on a tablet-looking device on the arm of a beach chair. The punch line was, “FAX from the beach? You will.” Well, that was a long time ago, before the dot-com bubble burst and before the Internet turned FAX into one of those quaint twentieth-century technologies that nobody remembers fondly. And we did FAX from the beach, just not quite that simply. Even before the now-archaic ad broke, we were dragging thermal-paper FAX machines and luggable computers with modems off to time-share escapes. The extension cords and phone drops didn’t quite reach to the beach, but we could see it from our room.

Today, the “FAX from the beach” mode is the art of remote computing.  If you have a Unix system, or the right combination of Windows applications, and the blessings of the Network Police at $WORK, you can “be at your desk” from almost anywhere on the planet, thanks to WiFi almost everywhere.  But, technology is, as we all know, not infallible.  Cell phone coverage isn’t everywhere for every carrier, and WiFi systems get overloaded at inconvenient times, or the hotel or coffee shop decides your time is up and logs you off their system until you refresh your login.  And, at best, shared WiFi connections are slow, compared with your network at work or even at home.

Graphical user interfaces are all the rage, but waiting for a Virtual Network Computing (VNC) session to crawl down the screen and fill in the detail is painfully slow.  Most of the time, VNC works pretty fast, as long as you are on a local network, or there are no firewalls between you and the remote host.  Since running without a firewall is corporate and financial suicide, regardless of your base operating system, the only safe way to run VNC is either through an SSH tunnel with port forwarding, which takes some careful setting up, or to run the viewer on the remote host with X11 forwarding, which SSH handles internally.  Unfortunately, X11 is way too chatty to use over a slow link, so the initial display creeps down your screen, taking several passes before it is ready to use.  Then, the mouse tends to react sluggishly, making the whole experience a bit less productive than desired.

One thing that does help is to use the -C option on the slowest of the SSH links to use compression on that link. And, there may be multiple links, since a really secure connection will relay through a gateway server straddling the corporate firewall. But, this becomes an extra planning process and procedural step, complicating the process. But, if you need a 24×7 connection to a remote host, this might be your only option. One advantage to using VNC is the persistence of the remote desktop, which can be a representation of the actual console display on the remote, or, in the case of Unix, a separate login display. Unix, being a multi-user system, has had the capability of multiple graphical desktops over the network for decades, but VNC transmits a single window containing a “picture” of the desktop instead of dozens of window and widget objects that compose an X Window desktop.
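Put together, a compressed, two-hop tunnel for VNC looks something like the sketch below (the host names, accounts, and display number are invented for illustration; the general shape is standard OpenSSH port forwarding):

```shell
# Hop 1: from the laptop, open a compressed (-C) tunnel to the gateway
# straddling the corporate firewall; local TCP port 5901 will carry VNC
ssh -C -L 5901:localhost:5901 admin@gateway.example.com

# Hop 2 (run on the gateway): relay the same port on to the VNC host inside
ssh -L 5901:localhost:5901 admin@vnchost.internal

# Back on the laptop: point the viewer at the local end of the tunnel
# (display :1 corresponds to TCP port 5901)
vncviewer localhost:1
```

The `-C` flag matters most on the slowest hop, usually the one leaving the coffee shop; compressing the inner, fast links buys little and costs CPU.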

In recent years, vendors have employed remote-host initiated connections, using HTTP to exchange host display information with the support technician’s computer. This technology is also available through third-party services, where both the remote and local hosts connect through the external service. To securely effect this type of remote connection, the connection information must be passed from the operator of the remote host to the operator of the local host, so this method is also limited in scope. But, when it works, it is adequate, as compression is built into the connection protocols.

For Unix administrators and programmers, there is another method of connecting to a remote computer with the safety of a persistent remote process: the ‘screen’ utility program creates virtual command-line text terminals.  ‘screen’ allows multiple terminal sessions to be run on the remote host under one remote login session.
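A minimal ‘screen’ start, detach, and reattach cycle looks like this (the session name is arbitrary):

```shell
# On the remote host, start a named session
screen -S migration

# ...start long-running jobs inside it, then detach with Ctrl-A d;
# the jobs keep running after you log out or the WiFi drops

# Later, from the beach, the coffee shop, or a different machine entirely:
screen -ls              # list running and detached sessions
screen -r migration     # reattach to the named session, right where you left it
```

Because the session lives on the remote host, a dropped connection costs nothing but the few seconds it takes to log back in and reattach.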

One of the big advantages of using persistent remote connections is the ability to connect, say “at the beach,” or in a coffee shop, start up some processes, then disconnect and reconnect later from another location, even from another computer. Most servers now have “Lights Out Management” devices built in, so we can even turn the power on and off and monitor the start-up processes remotely. These tools enable us twenty-first century leisure-seekers to work from the beach as if we were in our windowless cubicles in the basement next to the data center. At least we have the satisfaction that, if we had looked up from our computer, we would have been able to see a spectacular sunset. Hey, it’s dark out! When did that happen? Hmm. Tomorrow night, we’ll be in a different city, but the view will be the same–unless I change my desktop background picture…

