Category Archives: All things Unix

A Kind of Darkness: enduring an internet outage

The Internet went down today, at least at our house, and at an unknown number of other houses on our street, along with their TV feed (we don’t have TV). But, we know about the others only because we have a smart phone. I managed to keep the wifi turned off long enough to log in to Comcast via the cellular network for a system status update. This may seem the height of absurdity, to need to access the Internet to find out why the Internet is down, but that is the future to which we have come. We used to have phone service on cable, too, which would have left us totally deaf and blind, but with cell phones, it is possible to call tech support. Except, we use the Internet to find phone numbers. I’m not sure we have a paper contract or information packet that has the support number. At any rate, the Internet has also resulted in the depletion of the help desk: it is much more efficient to have the computers check your connection status than to explain your location and account number to a person (after waiting a very long time in queue), then phrase the question properly. The web app checks our Internet connection (yes, it is down), and then announces “an outage has been reported in your area.”

Sitting in the office without Internet is a bit like sitting in the house with a general power outage. We still have lights, and computers, but–as I am doing now–we have to write to local files instead of interacting with the blog server out in the “cloud” somewhere, for later upload, a bit like reading by candlelight. There was a time, 20 years ago, when we actually composed email on our computer, after which the computer would initiate a call on the modem to contact the next server up the chain to send the mail and receive any waiting incoming mail. A few of our friends who live beyond cable and fiber still use dial-up, but the sound of a modem negotiating a connection is as rare as the clop-clop sound of horse-drawn carriages on Main Street.

So, as we wait to get reconnected with the day’s crop of cute cat videos, we can reflect a bit on not only how far we’ve come, but how far we have to go. The next wave, of course, is to get completely unwired, with community high-speed broadband wifi, affordable cellular networks, and wearable, always-connected computing. I’m not sure about the public in general, but for us, traditional television is dead–we haven’t had a TV for at least seven years. The future is in Internet services like Netflix: movies on demand, news stories on demand, and some mix of live streaming feed, as we already have with the major news services and Net-centric services. A high-speed cellular network can (but probably won’t) remove the single point of failure that the “last mile” wired connection represents.  With the arrival of ubiquitous networking comes the newest tablet system, running Google OS, where the device only supplies a display and connection to processing and data storage hosted in “the cloud,” which exists as a distributed network of huge data centers scattered across the world.  Without a network connection, the device is as useful as an unwound pocket watch.

Which brings us to another point: with cell phones constantly connected to the phone network, we have no need to wear or carry timepieces anymore: a generation of plastic LCD or LED wrist watches has become junk, and the mechanical watches and clocks of an earlier age have become quaint pieces of animated jewelry. To wear such jewelry, or other ornamentation made from the dissected parts, identifies one as part of the steampunk movement, a re-imagining of a future where the workings of civilization are visible and can be tinkered with, where function merges with style, as in the hand-wrought brass and filigreed cast iron implements and open-frame steamworks of the early industrial age. The computer age has passed so quickly from a vast tangle of wires and visible circuits to slick slabs of glass with microscopic complexity embedded within that the magic has turned from white to black: one can no longer understand the machine by simply observing it operate.

Which brings us to the obvious: the only constant in the last half-century has been the rate of change. We constantly must adapt to new ways of doing old things and get used to doing things we didn’t imagine a few years ago (even if we are avid science-fiction fans: sci-fi is always a comment on extremes taken to their logical (or illogical) conclusion, while reality takes a turn away from extremes, often in a completely different direction).

So, now well into our fifth year of non-retirement, we keep moving forward, not only exploring new activities associated with actual retirement, such as more frequent travel and taking up new hobbies (often at the expense of old ones), but also keeping up with the state of the art in our chosen profession. In the last few weeks, we have set up a virtual server to explore the concept of containers, a not-new, but relatively undeveloped way to isolate different services hosted on the same machine. What makes this attractive now is the emergence of Docker, a nascent container management system that makes it easy to build, administer, and distribute containers focused on a single application. As with all emerging technology, it is still brittle and requires specific hosting configurations, but it is a very promising new way of distributing and hosting Linux applications.
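The recipe for such a container is just a short text file. Purely as an illustration (the base image and the packaged service here are hypothetical placeholders, not our actual configuration), a minimal Dockerfile for a single-application container might look something like this:

    # Hypothetical single-application container: a bare Apache web server
    FROM centos:centos6

    # Install only the one service this container exists to run
    RUN yum -y install httpd && yum clean all

    # Expose the service port and run the daemon in the foreground
    EXPOSE 80
    CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

Building and running it amounts to a couple of commands (something like docker build -t webtest . followed by docker run -d -p 8080:80 webtest), which is exactly what makes the approach appealing: the application and its dependencies travel together as one distributable unit.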

At the same time, we are learning to use Git, a fifth-generation software version control system, and have set up a git server in our office network. We’ve used version control systems since graduate school in the 1980s: first the original SCCS (Source Code Control System); then the simpler and excellent RCS (Revision Control System), which we admittedly still use for local version management when developing and administering systems; then brief encounters with CVS (Concurrent Versions System), which introduced client-server modes as Unix moved from a single mainframe with terminals to a network of servers and workstations; and, fleetingly, SVN (Subversion, a major remake of CVS). Each of these has progressively moved beyond the simple difference model in a single directory on a single machine, building on the common tools of Unix and network protocols to make collaborative development possible on a world-wide scale. Git, by virtue of being the tool of choice for Linux kernel development, has become the new standard; it also has the advantage of using a snapshot model of the project space. Of course, these networked tools beg to be hosted on repositories “in the cloud,” which requires an Internet connection to fetch and update files in collaborative projects.

And lastly, we have finally succumbed to the lure of Python, one of the last of the major scripting languages for us to master, having become proficient over the years in Perl, PHP, and Ruby, and, by necessity, at least conversant with JavaScript. Python has a lot of appeal, being a relatively pure object-oriented language with a lot of well-documented extensibility. But, the syntax is a bit odd, with white space instead of curly brackets to denote code blocks, and colons to connect name/value declarations. There is a lot of LISP-like philosophy behind Python, so it is not entirely strange, just the syntax. The real reason for finally learning Python has to do with the emergence of the very popular Raspberry Pi microcomputer, which promotes Python, and the fact that I gave one to my 10-year-old grandson, along with the book, Python for Kids, in hopes of introducing a new generation to the joys of tinkering with computers and making them do new things.
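To make the syntax point concrete, here is a trivial, purely illustrative snippet (not from any real project) showing how colons and indentation take the place of the curly brackets a Perl or PHP programmer would expect:

    # Indentation and colons mark the blocks that Perl or PHP
    # would wrap in curly brackets.
    def describe(numbers):
        """Return a short description of a list of numbers."""
        if not numbers:
            return "empty"
        total = 0
        for n in numbers:        # the loop body is defined by indentation alone
            total += n
        return "sum is %d" % total

    print(describe([1, 2, 3]))   # prints: sum is 6

Once the initial strangeness wears off, the enforced indentation does make everyone’s code look much the same, which is part of the appeal for teaching.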

So, there it is: we have become dependent on the Internet in much the same way as we have become dependent on electricity, the telephone, and the internal combustion engine. At the same time, we have become distanced from the technology of the Internet: everyone uses it, but few can actually make it work. Not everyone needs to, but it is still a good idea to understand the principles on which it is based–the fundamentals of programming and design. Not only does learning to program enable one to understand how the Internet works at an internal level, but the process teaches one to partition tasks, organize procedures, and recognize relationships in data, skills that are essential for many aspects of life in general.

Like on power-outage nights, we retire early, rising well before the late winter dawn to find—oh, look, a new episode of “Simon’s Cat” on Facebook.  Yes, the Internet is back.

Living with Linux — Keeping the home fires burning (for baking Pi’s)

It’s been a while since our dour post on the folly of Congress and the personal impact of political maneuvering. There was a quick flurry to catch up on projects before the end of October, which was also the end of a contract that we’ve been on continuously, through three changes in contract management, since the summer of 2001, and the last four years as an independent contractor. We did expect a follow-on contract from a new prime contractor: this is the way many government service contracts go–the management changes, but the workers get picked up by the new company, so there is continuity of service, and a bit of job security, despite the lack of “brand loyalty.” During the 1990s, I seemed to change jobs about every 18 months on average, moving to a new client and a different employer, so keeping the same client is a bonus.

Unfortunately, the chaos resulting from the shutdown meant that the contract transition did not go smoothly. We departed on a scheduled vacation trip to Kauai with nothing settled, but it was the first time since 1989 that we’ve gone on vacation without the prospect of work interrupting. Nevertheless, midway through the week, the contract negotiations got underway in earnest, complete with a five-hour time zone shift: when we returned, there was a flurry of activity and then it was back to the grind, continuing the ongoing projects that were interrupted by both the shutdown and the contract turnover.

While we were gone, the usual Pacific Northwest November storms came early, knocking out power to our network, so there was much to do to get things running again. Several machines needed a rather lengthy disk maintenance check, and the backup system was full, as usual. So, we took advantage of 1) the possibility of future earnings due to contract renewal and 2) delays in getting the paperwork actually signed and work started, to do some system maintenance and planning, starting with acquiring a bigger backup disk.

Secondly, our office Linux workstation had had a bad update while we were trying to install some experimental software from a questionable repository, to the extent that the video driver crashed and could not be restored. Upgrading the system to a newer distribution didn’t help, as upgrades depend on a working configuration. Now, we’ve been using Ubuntu as our primary desktop workstation environment since 2007, through several major upgrades, one of our longest tenures with any distro (though we did use Solaris for a decade or more, along with various Linux versions). Ubuntu has one of the best repositories of useful software, with easy updates and add-ons for a lot of the things we use from day to day. But, in the last couple of years, the shift has been toward a simpler interface, targeting an audience of mostly web-surfers who use computers for entertainment and communication, but little else. Consequently, the development and productivity support has suffered. The new interfaces on personal workstations, like Ubuntu’s Unity and Microsoft’s latest fiasco, Windows 8, have turned desktop computers and laptops into giant smart phones, without the phone (unless you have Skype installed). One of the other casualties was the ability to build system install disks from a DVD download.

So, after failing to restore the system with a newer version of Ubuntu, on which it is getting more and more difficult to configure the older, busy Gnome desktop model that we’ve been used to using for the last decade or more, we decided to reinstall from scratch. Ubuntu also somehow lost the ability to reliably create install disks, as we tried several times to create a bootable CD, DVD, or memory stick, to no avail. So, since we primarily use Red Hat or its free cousin, CentOS, as the basis for the workhorse science servers at work and to drive our own virtual machine host, I installed CentOS version 6 on the workstation. All is well, except that CentOS (the Community ENTerprise Operating System) is really intended to be a server or engineering workstation, so it has been a slow process of installing the productivity software to do image editing for photos and movies and building up the other programming tools that are not quite so common, including a raft of custom modules. Since Red Hat and its spin-off evolve a bit more slowly than the six-month update cycle for Ubuntu, there has been some version regression, and some things we’re used to using daily aren’t well-supported anymore as we get closer to the next major release from Red Hat. Since restoring all my data from backup took most of a day and night, and adding software on the fly as needed has been tedious, I’m a bit reluctant to go back. Besides, I need to integrate a physical desktop system with the cluster of virtual machines I’m building on our big server for an upcoming project, so there we are.

The main development/travel machine, a quad-core laptop with a powerful GPU and lots of RAM, is still running Ubuntu 12.04 (with Gnome grafted on as the desktop manager), but has had its own issues with overheating. So, this morning I opened it up for a general checkup.  Everything seemed to be working in the fan department, but I did get a lot of dust out of the radiator on the liquid cooling system, and the machine has been running a lot cooler today.

Because of the power outage, and promises of more to come as the winter progresses, we’ve been looking at a more robust solution for our network services and incoming gateway: up until now, we’ve been using old desktop machines retired from workstation status and revamping them as firewalls and network information servers, which does extend their useful life, but at the expense of being power-hungry and a bit unstable. But, the proliferation of tiny hobby computers has made the prospect of low-power appliances very doable. So, we are now in the process of configuring a clutch of Raspberry Pi computers, each about the size of a deck of playing cards, to take over the core network services. These can run for days on the type of battery packs that keep the big servers up for 10 minutes to give them time to shut down, and, if they do lose power, they are “instant on” when the power comes back. And, they run either Linux or FreeBSD, so the transition is relatively painless. The new backup disk is running fine, and the old one will soon be re-purposed for archiving data or holding system images for building virtual machines, or extending the backups even further.

So it goes: the system remains a work in progress, but there does finally seem to be progress.  We even have caught up enough to do some actual billable work, following three really lean months of travel and contract lapses.

Tour 2013 — Aftermath: Piecing Together the Pictures

Even in the age of instant digital photography, we still have to take the time to assemble our photo albums either during or after a vacation tour. Although the day-to-day blog posts featured selected photos, I decided to gather all of the photos that were fit to print (and some that are questionable, but fall under the category of “art” or convey a sense of the journey not captured otherwise) into a set of video montages.

To do this, I merged photos from our still cameras and phones, renaming them as needed to fit them in the proper sequence. I wrote a short Perl script that uses ImageMagick to crop and resize the photos into wide-screen format suitable for the movie software, then renames them in the sequence pattern needed by the film editor. I then created an image sequence film, specifying the number of frames per photo to generate a fairly rapid slideshow sequence, added titles, and set it to music obtained from the royalty-free selections on www.freemusicarchive.org. I use the OpenShot video editor on Ubuntu Linux, and upload to the Vimeo video streaming site. I ended up with three short clips, averaging three minutes each, of the Michigan, Wisconsin, and train trip portions of our tour.
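The actual script was a few lines of Perl, but the idea is simple enough to sketch in any language. Here is a rough, purely illustrative equivalent in Python (the frame size, file types, and numbering pattern are assumptions for the example, not the exact values used):

    #!/usr/bin/env python
    # Illustrative sketch: crop/resize each photo to a 16:9 frame with
    # ImageMagick, and rename it into a numbered sequence for the editor.
    import glob
    import subprocess

    WIDE = "1920x1080"                    # assumed target frame size

    photos = sorted(glob.glob("*.jpg"))   # merged camera and phone shots, in order
    for i, src in enumerate(photos, start=1):
        dst = "frame-%04d.jpg" % i        # assumed sequence pattern for the editor
        subprocess.check_call([
            "convert", src,
            "-resize", WIDE + "^",        # scale to fill the frame...
            "-gravity", "center",
            "-extent", WIDE,              # ...then crop to exactly 16:9
            dst,
        ])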

(Embedded here: the Michigan, Wisconsin, and Train montages, from Larye Parkins on Vimeo.)

I have an hour or more of video footage taken with the GoPro handlebar camera that I am working on editing down to short films representing the stages of our tour, as well, but that will take a bit more time…

Sleuthing the Wiley Thermals

Yesterday, we were hit with a thermal shutdown on the big laptop. Installing psensor and the coretemp module helped get a handle on the issue, which centered on the Nvidia GeForce 540M GPU. Hardware drivers have always been an issue for Linux, since the Open Source software model conflicts with the need for peripheral vendors to keep the internals of their hardware secret, which they do by not releasing the source code for the software that links the hardware with the operating system. That’s fine for a closed, proprietary system like Microsoft Windows, the primary market. As Linux users, not in the business of redistributing systems, we would be happy with an add-on driver that works. But, since Linux is a small portion of the market, there is little incentive for hardware vendors to write Linux-specific driver software. And, the software that is available is not always optimized for the Linux kernel, with the result that it is either buggy or skimps on reliability features.

Nvidia has, in the past year, incurred the ire of Linux founder Linus Torvalds for just these issues.  The Ubuntu Linux distribution that we run on our systems comes with a more or less generic Nvidia driver.  While users can download an updated driver from Nvidia, installation is a bit daunting, requiring the system to be reconfigured for text-mode login in order to rebuild the X11 graphics links.  Those of us who have been around long enough to remember hand-tuning the X-Window system configuration files, fingers poised over the “kill” keyboard sequence, ready to shut down X11 in an instant to avoid burning the monitor if the settings were wrong, have grave misgivings about tinkering with the graphics.  Plus, the forums on the ‘Net seemed to show that users were having overheating problems no matter what combinations of driver and distribution versions were used.  Custom configurations seem to be less than desirable for production machines, so we elected to look for another solution.

A bit more searching on the ‘Net turned up the fwts (FirmWare Test Suite) utility package. Once installed and run, it pointed out compatibility issues between the computer BIOS and the kernel/driver configuration. One of the automatic corrective actions performed by fwts was to switch the operating mode from “performance” to “normal,” which immediately lowered the operating temperature of all the components. The GPU still shows a temperature increase under load, but the fan hardly runs anymore, whereas earlier in the week it was running on high most of the time.

The take-away message here is that updating kernels and/or drivers will sometimes result in conflicts with your hardware. Linux has come a long way toward a plug-and-play, run-out-of-the-box installation, but it still pays to test and evaluate hardware configurations, just like in the old days of Unix. Actually, in the “bad old days” of a few commercial Unix systems, the range of hardware combinations was often very limited, so compatibility issues had been carefully tuned out by the system vendor. But, those systems were expensive. In the Open Source world of Linux, where the system is expected to run on any combination of hardware on the commodity PC market, some outliers are to be expected. For the average desktop Linux user, converting an old Windows machine to Linux will work just fine. But, for “power users” and server applications, some engineering and testing may be required for optimum performance. Certainly, the psensor and fwts software will be an important part of the Linux toolkit from now on.

Upgrade Woes: Changing the Way We’ve Always Done Things

A while back, a wave of learning curve frustration swept through Chaos Central. One of the tenets of the computing life is that the march of time brings change at a mind-boggling rate. Most Linux distributions have settled on a six-month update cycle, with almost daily patch-level updates in between. The patch-level updates go mostly unnoticed by the average user, but they sometimes introduce quirks. But, the major distribution updates bring changes to the desktop decor ranging from the equivalent of new curtains and furniture slipcovers to knocking down walls and repainting.

Since we use our computers for productive work, we tend to keep the furniture and paint to basic office cubicle mode. Since about 2007, we’ve tended toward running Ubuntu on our primary desktop systems. Unless there is some very good reason, we also tend to use the Long-Term Support versions. However, our last new computers (December 2011) arrived with Ubuntu 11.10, which was OK even though it defaulted to Unity, since the previous LTS was 10.04, which had some deficiencies in the WiFi area. Being resistant to change, we quickly reverted to the Gnome desktop: the new but not necessarily improved Unity desktop seemed a dumbing-down of the interface, and got in the way of our usual cluttered, multi-tasking way of doing business (we don’t call our office Chaos Central for no reason).

Of course, in the spring of 2013, we suddenly were faced with the End-of-Life clock on Ubuntu 11.10. The obvious choice, then, was to upgrade to 12.04 LTS, instead of the newly-released 13.04 version. The default on reboot was the Unity desktop, though Ubuntu now does provide Gnome (Ubuntu Classic) as a choice. Having recently acquired an Android phone, our first “smart phone,” we decided to give Unity a try–for a while.

OK, it isn’t so bad, once you get used to the idea that you can have multiple screens, and that clicking on the launcher icon of a running process switches to it (unless there are multiple copies, but we also discovered how to display everything, a la OS/X). However, we recently discovered the dark side of Unity: stability problems.

I had noticed that the fan seemed to be running on high on my main laptop, a quad-core machine with an Nvidia GPU from Zareason, one of the few Linux-only system vendors. Then, while I was watching a 30-minute video in full-screen, the machine suddenly powered down due to overtemperature. This had only happened before when I inadvertently blocked the air intakes. Not so, this time. It was blowing very hot air.

A bit of research into system temperature monitors led me to the psensor package, along with the coretemp kernel module and supporting software. Yow! The graph showed the primary culprit to be the Nvidia card, which spiked to between 80 and 90°C whenever a new graphics window was opened. More research showed the most likely cause to be Unity. Reverting to Gnome helped reduce the overall temperature, but the spikes are still there. One of the issues here is conflict between Ubuntu and the native Nvidia drivers. We did have the Nvidia drivers installed until the 12.04 upgrade, but, since they don’t support the 3-D extensions in Unity, we left the Ubuntu drivers in place.

All of this brings to the fore the fact that, despite the attempt to make computers (all of them, including Apple, Microsoft, and Linux) look and feel like a big smart phone, the power desktop is not user-friendly. The sealed-panel model employed by Apple and Microsoft means you have to live with what came out of the box, but Linux, with the open-source model, means that you can (and, by inference, must) tinker under the hood. The upside is that, with diligence and perseverance, you can fix it and decorate it to suit your own tastes and work style.