$50 Coffee

We love Starbucks.  Not because they are the greatest coffee company ever, but because they are literally everywhere and consistent.  When we travel, we know what we can get at the sign of the Mermaid.  We also use the Starbucks Gold Card, a debit card that saves the hassle of carrying cash or paying credit card transaction fees on every cup.

So, when we were in Seattle, the ancestral home and headquarters of Starbucks, for an IT conference, we needed “one for the road” before heading home.  There was a Starbucks just around the corner, but, being in a hurry, we elected to drive (us, the intrepid bike tourists?  What were we thinking?), with the excuse that it was “on the way.”  Of course, urban shops don’t have drive-up windows (which we almost never use, anyway–half the enjoyment is ogling the pastry case and the coffee accessory displays arrayed to direct and slow traffic flow), and there was no street parking except a slot about two feet shorter than our not-so-big Jeep.  No problem: there’s another Starbucks just a few short blocks away, walkable in less time than it takes to chug a Venti caramel latte.  Such is life in big cities on the west coast: if you can see a Starbucks, odds are very good that you are in one already.

So, off we go, straight ahead.  And, lo, there is a parking spot directly in front of the store.  It’s about 5:30pm on a Saturday; the parking permit machine is halfway down the block.  It’s gotta be after hours, right?  So, we dash in and order coffee and a pastry, depleting our gold card balance by about $6.  But, by the time the pastry comes out of the microwave and we dash back to our car, there’s a $44 ticket stuck under the wiper blade, ink still drying, with a time stamp of 5:27, about 20 seconds ago.

Of course, the $50 coffee pales in comparison to staying at the conference hotel.  Oh, it was a very nice hotel, to be sure, but city real estate is at a premium, so everything is compact.  Double bed, up against the wall, and a bathroom so small that you had to step into the tub to close the door.  But, a big closet, complete with complimentary (with $20 deposit) umbrella (this is the Emerald City, after all), and a correspondingly big price tag, at nearly $200 a night (bring your own iPod if you want to wake to music).  For all that, we got a spectacular view of the sunset over the Olympic Mountains (they’re to the north from our windows at home).

And, the conference was terrific: so much talent in the Silicon Rain Forest.  Most of the instructors were fairly local, and absolutely tops in their fields.  I took classes in network troubleshooting and server configuration management from local folks (Portland, OR being fairly local) and VM cluster management from one of Google’s top New York sysadmins, whose books line my office shelves.  Will we be back?  You bet, but next time, we take the ferry and the bus, like we used to when we worked here, back in the 90s.  Parking in the city is a non-starter.

Warp Speed and Time Dilation

Exactly 11 months ago, we announced our new fascination with strings, regular expressions, and patterns–using yarn instead of bytes, in the blog article Strung Out and Warped.  The process of warping a loom is much like writing computer programs.  First, you decide what you want the project to do, then select the materials, write a flow diagram (in weaving, called a draft), then code (wind and thread the warp).  Testing is a matter of running through each combination of inputs and outputs, then fixing or redesigning as necessary.  In the weaving process (and sometimes in the programming process as well), a poorly planned and/or badly executed project sometimes runs out of budget, and the project either gets terminated, or shelved and put back in the budget cycle if it is important.

And, so it went.  After weaving a couple of inches of weft and examining the results, I found some glitches in the pattern: I had skipped a step in the threading, about 1/3 of the way across the warp.  Correcting this problem required unthreading about 100 threads and moving them to the next harness.  About 3/4 of the way through this process, real work crept in, and the time budget ran out.

The project sat idle for about nine months.  Finally, when a similar fate befell an actual programming project, the Unix Curmudgeon (under some prodding from the Nice Person, who wanted to use the loom the stalled project was tying up) reopened the project, finished threading those last couple dozen warp threads, retied the warp, and started weaving.  Again, the testing cycle, a couple of inches of weft later, showed a break in the pattern.  This time, it was only a couple of threads on the wrong harnesses.  Clipping the offending heddles and tying new ones on the proper harnesses fixed the problem, and the project proceeded.

The project is a stitched doubleweave, with 8/2 wool on top at 12 ends per inch (epi), and 20/2 Tencel at 24 epi on the bottom.  A short video of the weaving process is here.

The weaving process, once “debugged,” proceeded swiftly.  Other than a few issues with maintaining uniform tension between the two different fibers (wool grabs everything, and I broke a warp thread that got doubled over when I advanced the warp), and a couple of weft floats caused by not having enough warp tension (the sticky wool warp didn’t help), the project turned out well.  Documenting projects is also helpful: the first hem end is tabby, while the second is pattern weave, the weaver having forgotten what he did several weeks earlier…

The finished product was “fulled” (rinsed in hot water to let the fiber tension even out and the fabric shrink a bit), and laid out flat to dry.  It turned out just a bit wider than planned (the weft tension was not as high as estimated), but exactly the planned length, which was measured while weaving with a 10% shrinkage allowance (for a 60-inch finished scarf, for example, that means weaving about 67 inches on the loom).

Finished wool-Tencel scarf

The completed scarf is a “re-imagining” of a World War I aviator’s scarf.  Modern aviator scarves are silk, but in the original open-cockpit planes of the 1914-1918 era, wool was needed for warmth, and silk for wiping engine oil from goggles and instruments.  Most of the early combat planes on both sides used rotary engines (not to be confused with the Wankel rotary engine).  These engines threw off a lot of oil, usually castor oil, which does not dissolve in gasoline.  Without a model to work from, I decided to try to create a two-sided scarf using stitched doubleweave.  The weave structure does not separate the fibers, but the wool is dominant on one side and the Tencel, a modern silk substitute, is dominant on the other.  The result is interesting, if not quite what I expected.

Unlike computer software, where bugs can be removed by patching and performance improved by refactoring, a woven cloth is what it is, unless the errors are caught and corrected while weaving.  The knot where the warp thread broke shows as a woolly spot on the Tencel side, and the inadvertent weft floats show in the pattern as well.  The two misthreading errors were corrected, but the item was not rewoven, so they show in the first couple of inches.  But, the discipline required to weave well should also improve my coding skills: coding errors are easier to correct if caught early, and refactoring during development (like reweaving to remove mistakes) results in a better product.

Web Evolution

One of the conundrums of the past month at Chaos Central has been making changes to a web site for which the Unix Curmudgeon is the content editor but not the programmer.  The site, a few years old, was a set of custom pages with a simple content editor that allowed the web editor to create, update, or delete some of the entries on some of the pages.  The tool allowed photos to be inserted in some of the forms, and some of the forms were in calendar format, with input boxes for dates.  The trouble was updating documents in the download section, for which there was no editing form; this was becoming acute because the documents in question tend to change from year to year.

At first, the solution seemed to be to modify the source code to add the required functionality, which involved getting the source code from the author, along with permission to modify it, something we had done before to change a static list page to read from a tab-delimited file.  But, the types of changes didn’t always fit the format of the administration forms, and still required sending files to the server administrator, as we didn’t have FTP or SSH access to the site.  Then, we noticed that the web hosting service had recently added WordPress to its stable of offerings.  The solution was obvious: convert the entire site to WordPress.  Of course, the convenience of fill-in-the-blank forms would be gone, but we would gain the ability to create new pages, add member accounts, create limited-access content, and upload both documents and photos.
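The tab-delimited trick, incidentally, is a handy pattern for de-hard-coding a page.  Here is a minimal sketch in Perl (the real site was PHP, and the file name and fields here are invented): read the file a line at a time, split on tabs, and emit the list as HTML, so updating the page means editing a data file instead of the markup.

#!/usr/bin/perl
# Sketch only: the real site was PHP, and officers.txt and its
# fields are hypothetical.  Read a tab-delimited file and emit
# an HTML list.
use strict;
use warnings;

open my $fh, '<', 'officers.txt' or die "officers.txt: $!";
print "<ul>\n";
while (my $line = <$fh>) {
    chomp $line;
    my ($name, $office) = split /\t/, $line;
    print "  <li>$name ($office)</li>\n";
}
print "</ul>\n";
close $fh;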

The process was fairly simple: using the stock, standard WordPress template, the content of the current site was simply copied and pasted into new pages, and the site was configured as a web site with a blog, rather than the default blog-with-pages format.  A bit of editing to fit the content to the standard WordPress theme styles, some juggling of the background and header to match the color scheme and appearance of the old site and to incorporate the organization’s logo, and it was done: the system administrator replaced the old site with the new, with appropriate redirection mapping from the old PHP URLs to the corresponding WordPress pages.  This migration represented yet another step in the evolution of the web, or, more properly, in our experience with on-line content.

In the beginning, there was the concept of markup languages.  My first encounter with such was in the mid 1980s, with the formatting tags in Ventura Publisher, one of the first desktop publishing tools, introduced in GEM (Graphical Environment Manager), a user interface originally developed for CP/M, the pioneering microcomputer operating system that preceded MS-DOS by a few years (MS-DOS evolved from a 16-bit workalike of the 8-bit CP/M).  Markup tags grew from the penciled editing marks used in the typewriter age, by which editors indicated changes to retype copy: capitalize this, underline (bold) this, start a new paragraph, etc.  In typesetting, markups indicated the composition element, i.e., chapter heading, subparagraph heading, bullet list, etc., rather than a specific indent, typeface, size, and so on.  In electronic documents, tags were inserted as part of the text, like <tag>this</tag>.  Where the tag delimiters needed to appear literally in the text, they were encoded as special character entities, like &lt;tag&gt; to print “<tag>” (which, if you look at the page source for this document, you will see nested several layers deep, since printing “&” requires yet another “escape,” &amp;).
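To make the escaping concrete, here is a minimal sketch in Perl (escape_html is our own name for illustration, not a library function).  The order matters: the ampersand must be translated first, or the entities produced by the later substitutions would themselves get mangled.

#!/usr/bin/perl
# Minimal entity-escaping sketch.  escape_html is an illustrative
# name, not a standard library function.
use strict;
use warnings;

sub escape_html {
    my ($text) = @_;
    $text =~ s/&/&amp;/g;   # the escape character itself, first
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    return $text;
}

print escape_html('<tag>'), "\n";   # prints &lt;tag&gt;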

One of the reasons for the rise of markup languages as plain-text tags in documents was the proliferation of software systems, all of which were incompatible, and for which the markup codes were generally binary, i.e., not human readable.  Gradually, the tags became standardized.  When the World Wide Web was conceived, an augmented subset of the newly-minted Standard Generalized Markup Language (SGML) was christened the HyperText Markup Language (HTML).  HTML used markup tags primarily to facilitate linking different parts of text out of order, or even jumping to different documents: the anchor tag, as in <a href="notes.html">, turns the enclosed text into a link.  Later, tags like <IMG> were added to allow insertion of images and other elements.

Of course, these early web documents were static, and editors and authors had to memorize and type dozens of different markup tags.  To make matters worse, the primary advantage of markup tags, the identification of composition elements independent of style, was subverted as browser software proliferated.  In the beginning, the interpretation of tags was governed by the Document Type Definition (DTD), the “back end” part of the markup language concept: a formal description of the tags and how they may be combined, with the actual rendering left to the particular typesetting or display system.  Each HTML document is supposed to begin with a declaration identifying the DTD for the set of tags used in the rest of the document.  But, since different browsers might display a particular tag with different fonts–size, typeface, color, etc.–the HTML tags allowed style modifiers to specify how the particular element enclosed by that one tag would be displayed.  This not only invites chaos, i.e., allows every instance of the same tag to be displayed differently, but most word processors, when converting from internal format to HTML, surround every text element with the precise style that applies to that text element, making it virtually impossible to edit for style in HTML form.  To combat this proclivity toward creative styling, the Cascading Style Sheet (CSS) was invented, allowing authors and editors to define the style for a tag globally (say, h1 { color: navy; } in one place, rather than a style modifier on every heading), or to define styles locally within a cascade, using the <DIV> and <SPAN> tags to mark a block of text or a subsection within a tag.

It quickly became obvious that HTML, as a static markup, was not adequate for the dynamic growth of information on the Web.  Web pages learned to interact with users via snippets of script code (between <SCRIPT></SCRIPT> tags), bound to markup tags by modifiers, to create effects like changing text color when the mouse is over the enclosed text, or to preprocess form input to validate data entry before transmitting it to the server.  The most popular scripting language for this is Javascript.  Of course, in the beginning, competing browsers interpreted the code differently or even supported different dialects of the language, which caused no end of grief for web programmers.  Fortunately, the popularity and proliferation of non-Microsoft browsers in recent years has caused Microsoft to tone down that aspect of their quest for world domination, so Javascript is now more or less standardized.
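A minimal sketch of the validation idea, wrapped here in a Perl script since that is where such pages often came from (the form, script name, and function are all invented): the Javascript runs in the browser and rejects obviously bad input before it ever reaches the server.

#!/usr/bin/perl
# Hypothetical example: emit a page whose Javascript validates the
# form before submission.  Form, field, and function names are
# invented for illustration.
print <<'HTML';
Content-type: text/html

<html><body>
<form action="/cgi-bin/signup.pl" onsubmit="return check(this)">
  E-mail: <input name="email"> <input type="submit" value="Sign up">
</form>
<script type="text/javascript">
function check(f) {
  if (f.email.value.indexOf('@') < 0) {
    alert('Please enter a valid e-mail address.');
    return false;   /* stop the submission before it leaves the browser */
  }
  return true;
}
</script>
</body></html>
HTML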

In order to use the Web as an interactive tool, and as an interface for applications running on the server, it was necessary to augment the HyperText Transfer Protocol (HTTP), the language the server uses to process requests from the browser, to pass requests to internal programs on the server.  This was implemented through the Common Gateway Interface (CGI, not to be confused with the Computer Generated Imagery used in movie-making animation and special effects).  Originally, it was necessary to write all of the code to generate HTML documents from the CGI program and to parse the input from the browser.  But, thanks to a Perl module, CGI.pm, written by renowned scientist Dr. Lincoln Stein, this became a lot easier, and it established the Perl scripting language as the de facto web programming language.
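A sketch of the CGI.pm style (the page title and parameter name here are invented): the module writes the headers, including the blank line after Content-type, generates the HTML from function calls, and parses the form input, all the tedium the early hand-rolled scripts had to do themselves.

#!/usr/bin/perl
# A minimal CGI.pm sketch; the page title and the "name"
# parameter are hypothetical.
use strict;
use warnings;
use CGI;

my $q = CGI->new;
print $q->header('text/html'),        # Content-type line plus the blank line
      $q->start_html('Guest Book'),
      $q->h1('Guest Book');
if (my $name = $q->param('name')) {
    print $q->p('Thanks for signing, ', $q->escapeHTML($name));
} else {
    print $q->start_form,
          'Your name: ', $q->textfield('name'), $q->submit,
          $q->end_form;
}
print $q->end_html;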

But, as the Web became ubiquitous, most content on the web was still static HTML, created by individuals typing simple HTML tags in a text editor, saving their word processing documents as HTML, or using PC-based HTML editors like HomeSite or Dreamweaver.  Adding interactive elements to these pages required them to be rewritten as CGI programs that emitted (programmer-ese for “printed”) the now-dynamic content.  By now, however, web servers had incorporated internal modules that could run script code directly, without launching an external interpreter process for each request and incurring the extra memory overhead.  By adding special tags interpreted by the server, snippets of script code could be placed in-line with the page content (something like <?php echo $date; ?> dropped into the middle of otherwise ordinary HTML), making it much easier to convert static pages to dynamic ones.  Since the primary need was the ability to process form input, this led to the development of specialized server-side scripting languages, such as Rasmus Lerdorf’s PHP.

But, now that creating web pages was becoming more a programming task than a word-smithing task, there was a need for better authoring tools.  The proliferation of PHP and the spread of high-speed Internet access made it feasible to put interactive user applications on web servers themselves.  Web editing moved from the personal computer to the Web itself, as the concept of content management systems took hold.  Early forms were Wikis, where users could enter data in Yet Another Markup Standard that would be stored on the server and displayed as HTML.  More free-form text processors followed, making possible discussion forums and whole formats for group interaction, built on engines like PHP-Nuke that used a database back-end to store input and PHP to render the stored content and collect new content.  The expansion of forums into tools for diarists and essayists, in the form of blogs (from weB LOG), led to the development of even more powerful content management systems, like Joomla and WordPress, capable of producing sophisticated web sites without programming.

So, we have progressed in evolution from desktop publishing to the Web, to interactive applications, to converting static sites to dynamic ones, and finally, to converting custom programs to templates for generalized site-building engines.  The Web now allows ordinary folks who have never seen raw HTML code to converse with friends and relatives across the world, and to post photos, videos, and links to other sites of interest, just as the original hypertext designers intended.  What seemed arcane and innovative thinking 30 years ago is now just another form of natural human interaction.

But, for those of us who make our living interpreting dreams in current technology, the bar moves up again.  Just as we no longer think about the double newline needed after the “Content-type” line and before the <HTML> tag (the first thing emitted by a CGI program, or by the server itself), we no longer need to write CSS files from scratch or PHP functions to perform common actions.  But, we need to learn the new tools, and still remember how to tweak the code for those distinctive touches that separate the ordinary from the special.  And, there are still lots of sites to upgrade…