Category Archives: All things Unix

Memory Leaks: Programming in Retirement

What “retired” software engineers do: keep making software. This is a project I had been putting off for a long time, interrupted from time to time by medical issues and other projects, including learning Python, which I avoided for many years, writing in PHP, Perl, and Ruby instead. Python is the current “most popular” cross-platform programming language and has a lot of library support, so the likelihood of someone else [younger] being able to maintain this project in the future is good.
 
But I’m making progress on it now. The starting point: 619 photos of pages and items from “sample books” of handwoven patterns, containing notes and actual fabric samples made by weaving guild members up to 50 years ago. The books were too fragile to allow members to peruse them, so one of our members carefully photographed each item and the books were put in storage. I’ve processed the photos to enhance the contrast of the faded and yellowed pages so the writing is legible, and put the binary images into a database, organized into 186 separate sets, with one to eight images per sample.
I’m now writing code to display the samples and provide forms for manually digitizing the information in the notes, to make them searchable. In the screenshot, left to right: the Python language reference (on-line), then four “sandbox” terminal windows, clockwise from top left: a shell to launch programs, a database sandbox to test query construction, a Python sandbox to test methods and objects in code, and a demo script I wrote some time ago to explore ways of displaying images from the database, from which to copy code into the full project files. Yes, there are three monitors attached to my computer. The monitor on the right is the editor screen for four of the files I’m currently working on, which interact with each other to select, retrieve, and display the data.
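
For illustration, here is a minimal sketch of the sort of query-and-display round trip such a demo script performs, rendered in PHP (the language of the snippet further down this page) rather than the project’s Python; the table and column names are hypothetical, not the project’s actual schema:

<?php
// Minimal sketch: fetch one stored page image by sample-set ID and
// sequence number, and send it to the browser.
$db = new PDO('mysql:host=localhost;dbname=samples', 'user', 'secret');
$stmt = $db->prepare(
    'SELECT image, mime_type FROM sample_images WHERE set_id = ? AND seq = ?');
$stmt->execute(array((int)$_GET['set'], (int)$_GET['seq']));
$img = $stmt->fetch(PDO::FETCH_ASSOC);
if ($img) {
    header('Content-Type: ' . $img['mime_type']);   // e.g. image/jpeg
    print($img['image']);                           // the raw binary image
} else {
    header('HTTP/1.0 404 Not Found');
    print("No such image\n");
}

Storing the MIME type alongside each binary image lets one script serve scans in any format.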

Evolution of the Personal Computer – a Remembered History

The following is an article I wrote on Quora in response to a question about the evolution of the personal computer. It looks back at the 45-year history of the “personal confuser,” as we sometimes call those devices we now can’t seem to live without. Since most of the population was born into a world where the PC has always been there, it is sometimes important to remember where we came from and how we got here.

Soon after the development of the microprocessor in the early 1970s, the Altair 8800 appeared on the market in 1975, powered by the Intel 8080 8-bit CPU. Many more 8-bit machines followed, coupled with keyboards and CRT monitors, though many of the early machines intended for home use used television sets for monitors, connected through an RF modulator (a video-to-TV converter) outputting an NTSC signal on VHF channel 3 or 4.

Commodore introduced the PET computer in 1977, a complete unit with keyboard and monitor in one desktop case, which ran a version of BASIC as an operating system. In the early 1980s, a home version, the Commodore VIC-20, was introduced, with the computer built into an enlarged keyboard, a cassette tape drive for data, and video output to an optional CRT monitor or a TV converter. The VIC-20 shipped with a whopping 5 KB of memory.

A number of smaller, more portable machines appeared, all with BASIC as the operating system, stored in various incompatible byte-code forms. Tandy Corp. sold the Radio Shack TRS-80 (Tandy Radio Shack, the “80” from its Zilog Z-80 CPU), which promptly got the moniker “Trash-80” from critics. Chip-maker Texas Instruments, which had earlier killed the mechanical slide rule market with its hand-held calculators, came out with the TI-99 computer, built around TI’s own 16-bit TMS9900. Another low-cost compact machine came from Timex, the Timex Sinclair. Most of these machines used the 8-bit MOS 6502 microprocessor or the Zilog Z-80.

Steve Jobs and Steve Wozniak came out with the Apple computer which, in a display of Jobs’ inimitable marketing instincts, was named the Apple II in its production model, to give the impression that it was a new and improved version of an earlier model (actually a prototype built in a wooden box). The Apple II would become one of the most successful of the early personal computers.

About the time Jobs and Wozniak were tinkering, Gary Kildall, also in California, was experimenting with uses for the 8-bit Intel microprocessor and came up with a loader and command shell for a floppy disk drive, which he named CP/M, the Control Program for Microcomputers, introducing the concept of a block-structured disk operating system and a command shell that wasn’t BASIC. Earlier computers could run machine-code programs directly in addition to BASIC, but CP/M was the first to make binary programs on disk the primary operating mode.

One of the first commercial CP/M machines was the Osborne 1, a luggable computer that featured two 5-1/4″ floppy drives, a tiny 5-inch CRT display, and a detachable keyboard forming the lid of a suitcase-sized case that weighed over 10 kg. Kaypro came out with a CP/M machine in a similar form factor that was more suitable for desktop use, with a larger display screen and keyboard.

With the introduction of the 16-bit 8086 CPU from Intel, IBM became interested in adding a microprocessor-based system to its business line of mainframes and terminals. It approached Kildall’s Digital Research, developers of CP/M, and Bill Gates’ fledgling Microsoft, which had gotten its start writing BASIC interpreters for the Altair and other hobbyist microprocessor-based systems, and which had recently acquired an independent 16-bit rewrite of CP/M. That rewrite, with the addition of some concepts from Xenix, a microprocessor-based version of Unix that Microsoft had licensed, became MS-DOS, the Disk Operating System. IBM adopted MS-DOS as PC-DOS for its new personal computer, the IBM PC, whose open system design allowed third-party vendors to develop add-on hardware and write device drivers for it, which, of course, led to the proliferation of “IBM clones” from many different manufacturers.

By this time, the successful marketing of the Osborne and Kaypro CP/M machines and the Apple II had spawned a flurry of software development companies building business software that would run on the desktop without requiring a mainframe back-end, including spreadsheets, project planning software, and word processors. Stand-alone word processing workstations had sprung up with the microprocessor revolution, but were single-purpose machines, relegated to the “typing pool” at large corporations and document-preparation companies. With the introduction of a desktop machine from IBM, then the largest computer company on the planet, those vendors quickly ported their products to the 16-bit MS-DOS platform, and CP/M faded from the market. (Its creator, Gary Kildall, would die young in 1994, long after Digital Research had lost the fight.)

Microsoft solidified its grip on the personal computer operating system market with exclusive contracts with various PC manufacturers. Apple focused its market on education and the arts, moving to a wider word size with the Motorola 68000, a 32-bit CPU with a 16-bit data bus, and a graphical desktop based on work at the Xerox Palo Alto Research Center (PARC) that also produced the Xerox Star. The Macintosh was introduced in 1984; Microsoft countered soon after with the Windows desktop environment running on top of MS-DOS.

IBM commissioned a new graphical operating system to replace the Windows environment as 32-bit CPUs came into wide use, but disputes between IBM and Microsoft in the early 1990s led to a split: OS/2 on the IBM side, and Windows NT, and later Windows 95, on the Microsoft side. Microsoft’s grip on the generic PC market won out, and OS/2 gradually faded away, hampered by its incompatibility with Windows 95 and Windows NT.

On the Unix side, Microsoft sold the Xenix rights to the Santa Cruz Operation early on; after a port to 32-bit hardware, it became SCO Unix, but it never got out of its small niche as a multi-user solution for small to medium-sized businesses. The minicomputer Unix of the 1980s was ported to the high-end 32-bit microprocessor workstations used in academic and scientific research.

In 1991, Linus Torvalds, a college student in Finland who wanted a 32-bit alternative to the 16-bit micro-kernel teaching tool Minix, built a new Unix-like monolithic 32-bit kernel. Around it, he and developers all over the world wrapped the GNU software collection from the Free Software Foundation, creating the GNU/Linux operating system. In the 21st century, it has captured the network server market from Unix, replaced the Unix workstations on the desktops of developers and research scientists, and taken over the hobby market started with the Altair 8800 in 1975, running on re-purposed former Windows machines and on the proliferation of tiny single-board computers led by the Raspberry Pi.

The adaptable Linux kernel also became the core of Google’s Android operating system. Android, along with iOS, the embedded version of the now BSD-Unix-based operating system Apple adopted at the turn of the century, drives virtually all the world’s handheld computing devices: the ultimate personal computers most of us carry with us everywhere, disguised as telephones, pagers, cameras, and music and video entertainment devices, as well as portals to the World Wide Web. Microsoft’s Windows remains the default OS for the desktop and laptop environment, and the core server operating system in corporations, for now.

GOTO Considered [Rarely] Necessary

My answer is below, but, surprisingly enough, only a couple of days later I found it expedient to use a GOTO in a PHP script, admittedly to fix a logic problem that wasn’t very well thought out. In a while() loop, a test at the end needed to start a new output section, but the current data set needed to go into that new section. The initialization code that starts a new section was at the top of the loop, and letting the loop cycle around to it would read a new data set, so the “quick-and-dirty” solution was to jump to the top of the loop without reading a new value. Problem solved, and it preserved the integrity of the code block, which was the main casualty of GOTO in unstructured programs back in 1968, when Edsger Dijkstra made his infamous “GOTO Considered Harmful” proclamation.

The only time I have used GOTO has been to implement a decision construct not supported by the language. Of course, GOTO was necessary in linear programming languages that did not have the syntactic constructs we use in structured programming, and every compiled program has a GOTO, a jump instruction, in each and every looping or decision construct, but only in the generated machine code. When GOTO is necessary, it must be used as a construct defined in a safe manner and rigorously applied. Usually, this means stopping execution in the middle of a code block by jumping out of the block, something most structured languages already provide in the form of a ‘break’ statement or similar keyword. I could imagine a deeply nested decision tree where GOTO could be used for clarity, but there would probably be another way to write such a construct unambiguously. A sketch of that kind of disciplined use appears below.
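
Here is that sketch: a goto acting as a multi-level break from nested loops. PHP actually provides “break 2” for exactly this, so the goto is purely illustrative, and the data is made up:

<?php
// Sketch: goto as a multi-level "break", leaving both loops at once.
$rows = array(array(7, 13, 2), array(21, 42, 9), array(5, 8, 3));
$needle = 42;
$found = null;
foreach ($rows as $r => $row) {
    foreach ($row as $c => $value) {
        if ($value == $needle) {
            $found = array($r, $c);
            goto found;             // exit both loops in a single jump
        }
    }
}
found:
if ($found) {
    print("found at row $found[0], column $found[1]\n");
}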

One example I could envision in C/C++ would be a switch-type construct that accepted strings instead of integers as case arguments, with GOTO taking the place of the break clause. I had used a similar construct successfully in COBOL to build a switch-type operation, which doesn’t exist in COBOL, such that once a test succeeded, the rest of the tests would be skipped. (Disclaimer: I took a job writing COBOL as a last resort, around 1990, in the middle of changing careers from systems engineering to software engineering. Not a recommended choice: I wrote COBOL by day at work and C by night at grad school.)
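
Here is a sketch of that pattern, rendered in PHP (which has supported goto since 5.3) rather than C, with made-up command strings; the goto does exactly the job the break clause does in a real switch:

<?php
// Sketch: a "switch" on strings, with goto playing the role of break.
// Once one test matches and its code runs, the remaining tests are skipped.
$cmd = "stop";
if ($cmd == "start") {
    print("starting up\n");
    goto done;                      // the "break"
}
if ($cmd == "stop") {
    print("shutting down\n");
    goto done;                      // the "break"
}
print("unknown command: $cmd\n");   // the "default" case
done:
print("finished\n");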


Afterword:  Here is a snippet of the code block…

  
while ($fil = readdir($subdir)) {
newpage:                            // jump target: top of the loop body
    if (is_dir($fil)) { continue; }
    if ($row > 8) {
        // ... (start-of-section initialization elided) ...
    }
    // ...
    if ($column >= 5) {
        $column = 1;
        $row++;
        print("\n");
        goto newpage;               // start the new section without
                                    // reading the next directory entry
    }
    // ...

The Curmudgeon Abides

This month marks four years since I finally stopped renewing consulting contracts, which made me officially retired. Since then, I have continued to maintain my pro bono client lists, and took some time to learn enough Python coding to put up a custom webcam at home, but I have let my professional organization memberships lapse and do less coding than ever. I have continued to create bad videos of our too-infrequent bicycle rides, but the technical skills are gradually eroding without some stimulus to keep them up.

Recently, that stimulus came with signing up for the Quora social media site, in which participants can ask questions about random subjects, which get directed to members who have listed some level of expertise in those particular subjects. So, I get asked questions about software engineering, Linux, operating systems in general, and other related fields that are fading from memory, forcing me to do a bit of research to verify facts I think I know for sure, but which invariably turn out to be not true [or at least, not any more].

As a result, my rambling and sometimes oblique discourses on things about which I know very little, but about which I still have strong opinions, get thrust upon the world, or at least the segment of the Quora community interested in such things. Some questions are inane, badly formed, or prompt me to ask myself, “People do that now?” At any rate, a lot of questions get multiple answers, and the answers are ranked by how many members (which may or may not include the member who posed the question) “upvote” a particular answer. Quora tends to encourage continued participation by announcing who has upvoted your answer, but that is tempered by statistics showing how many people have seen it, which, weighed against the paucity of upvotes, means most users glanced at it and moved on. At least I haven’t seen any “downvotes” yet.

The social engineering model is fueled by advertising: if users bother to read a post past the opening paragraph, they are greeted by an advertisement before getting to see the rest of the response. So, Quora has a vested interest in getting lots of responses to questions, and in generating lots of questions to be answered. A large percentage of the questions I get fed are apparently generated by an AI algorithm rather than a real person. The majority of questions submitted by real people come from those interested in how to advance in the field, or from aspiring programmers curious about pay scales. Students wonder about the downside of struggling to become a code monkey: how to advance without a formal education, or how to survive in the industry long enough to pay off student loans. Some, I assume, are looking for answers to class problems without doing their own research.

There are the usual Linux-versus-Windows arguments, and some loaded questions, possibly posed by ringers to justify promoting a point of view. A number of the respondents have impressive résumés and are much better qualified to answer the questions authoritatively than I or the many others who offer what are clearly biased opinions not grounded in fact. Many of the questions appear to come from practitioners and aspirants in the global marketplace, not many with down-home American monikers like Joe and Charlie, which leads me to fear that the U.S. heartland just isn’t growing a lot of technologists these days, but has relinquished progress to ambitious immigrants and the growing tech sector in the developing world.

So it goes. Besides keeping my personal technical knowledge base current, and maybe passing on some historical lore to the new generation of coders and admins, I’m preserving a tenuous connection with the community, albeit virtual rather than face-to-face. However, after a long career of being either the lone “factory rep” at a customer site or the lone Unix guy at a Windows shop, dependent on USENET or other on-line forums for community, it isn’t much different. It’s as close as I can get to being part of a “Senior Net,” offering advice and guidance as a community service. And I get to learn new things, or at least remember the old ones better.

Internet Purgatory

I’m writing the draft of this post on a word processor (LibreOffice, naturally), for a good reason. We’ve had the same web hosting provider for 17 years, ever since we moved from Missoula to Hamilton, Montana, and lost access to ISDN service. For two years before that, we had hosted our own web sites and email on a couple of servers hung from the floor joists in the basement.

When we needed to find a new home for our Internet presence, Modwest, a new and growing Missoula company, stepped in. The plans they provided were ideal: running on Linux servers, with SSH (Secure Shell) login access and the ability to park multiple domains on one account (at that time, we already had two: parkins.org and info-engineering-svc.com). Everything worked fine, and we added and deleted domains over the years, for ourselves (realizations-mt.com and judyparkins.com) and, temporarily, for clients (onewomandesigns.com). It just worked, and we also added WordPress engines to our personal/business domains. My programming sorted out which domain got served which landing page, and the links from there went to subdomains to keep the filesets separate.
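
That sorting was nothing exotic; here is a minimal sketch of the idea, with the real domain names but a purely hypothetical file layout:

<?php
// index.php on the shared account: route each parked domain to its
// own landing page.
$host = strtolower($_SERVER['HTTP_HOST']);
$host = preg_replace('/^www\./', '', $host);    // fold the www. aliases
switch ($host) {
    case 'parkins.org':
        include 'parkins/landing.php';
        break;
    case 'judyparkins.com':
        include 'judyparkins/landing.php';
        break;
    case 'info-engineering-svc.com':
        include 'ies/landing.php';
        break;
    default:
        header('HTTP/1.0 404 Not Found');
        print("Unknown host\n");
}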

We finally retired the old quilting web site, realizations-mt.com, when the registration expired in early 2018, but rolled the legacy pages into a subdomain of judyparkins.com to keep her client galleries on-line. Then, this spring, Modwest announced they had sold out to Intertune, another web hosting provider headquartered in San Diego. Billing transferred to the new host, with the same pricing. Fine. But, eventually, they told us, the websites and mailboxes would be transferred to the Intertune servers. The Internet thrives on economies of scale—the bigger the organization, the fewer resources are needed for failover, backup, and support.

So, at an inconvenient time (we planned to be out of town for a week), they informed us that our parkins.org account would be migrated. We dutifully switched the collective domains to the new servers, whereupon our blogs disappeared, the other two domains disappeared entirely, and mail service went with them. Frantic exchanges by phone and email ensued:

Them: “Oh, we’re only migrating parkins.org at this time.”

Us: “But, they share a file set, database, and mailboxes, and have subdomains. It’s one account. And you didn’t migrate the subdomain content at all.”

Them: “Oh, gee, we’ve never seen anything like this. (Ed. note: almost all web hosting services support this.) Switch the others back to Modwest. We’ll get back to you.”

Us: “Unsatisfactory—they are all one and the same, just different names in DNS.”

Us: “Hello? Is anybody there?”

Us: “Our blogs still don’t work, and our mail is scattered across several mail servers.”

Them: “OK, we’ll do what you said, for judyparkins.com and the subdomains.”

Us: “You didn’t. The subdomains sort of work, but the WordPress installation doesn’t, because the three domains are intertwined.”

Them: “OK, try the judyparkins.com now.”

Us: “The blog works, sort of, for Judy’s, but mine doesn’t, and judyparkins.com isn’t receiving mail.”

Them: “Oh, wrong mail server. Try it now.”

Us: “OK, now do the same thing for info-engineering-svc.com.”

Us: “Hello? Is anybody there? It looks like your servers are pointed the right way, but the email and blogs still don’t work.”

Them: “Oh, wrong mail server. Try it now.”

Us: “OK, the mail works now, but the Larye blog is totally broken: I can’t see blogs.info-engineering-svc.com at all, and the admin page on blogs.parkins.org is still broken. I can’t publish anything or respond to comments, nada.”

Us: “Hello? Is anybody there?”

Us: “Hello? … Hello?”

Now, we could have decided early in this process not to trust them to migrate this successfully and moved everything to a different hosting service, but that would have involved a setup fee, transferring all of our files ourselves (which, for the web, isn’t a big deal, but migrating 20,000 emails sitting in hundreds of folders on an IMAP server is), and working through an unfamiliar web control panel, so we didn’t. We should have: the same level of service, with more space, is actually cheaper at the web hosting service we used before we moved to Montana in 1999.

But, meanwhile, we’re busy, so a few days passed, and they still hadn’t replied to my latest comm check. Rather than risk exposing my latent Tourette Syndrome with an expletive-laden outburst via email, I rechecked their servers to see if the info-engineering-svc.com site was working again. It was, so I switched the domain pointers at the registrar, and waited. The blog still didn’t work, and I sent another, unacknowledged message to say so. But, finally, the info-engineering-svc.com blog started working [no response from Intertune, but it’s not magic, so they did something], at least in display mode. The administrative pages still did not work.

In between guests and visiting relatives, I decided to troubleshoot the WordPress installation, as something was definitely amiss now that everything was on the same server. But changes to the configuration files seemed to have no effect, and a check of the error logs showed the same errors, which didn’t reflect the changes I had made. A comparison of the installed file set with the backup I had taken before the migration showed no differences. The light began to dawn: my blog installation didn’t match what Intertune was actually serving.

Sure enough, there was a ~/blogs folder, which I had instructed Intertune to configure as the blogs subdomain, and a ~/www/blogs folder containing a different version of the wp-admin file set. After a quick check to make sure that the dozen or two differing files in the wp-admin folder were the only differences among the several thousand files in the blog, I copied the version from my backup into the “live” folder, and voilà! The admin dashboard appeared, and here we are, composing the rest of the story on-line.

As it turns out, Intertune did not follow my instructions, did something different, and did not tell me (not for the first time, either). While the blogs were split between Modwest and Intertune, my blog installation got updated and Judy’s did not, so the update was only partial, and the installation broke when Intertune took it upon themselves to migrate files I had already installed manually, but to a different location.

So it goes. One clue was in the WordPress FAQ, which suggested that an HTTP 500 error might be corrected by re-installing the wp-admin directory; that turned out to be the case. Whether we stay with Intertune or not depends on whether we meet any more difficulty with a tech staff that seems incredibly inept, and on how much more work we want to do to move our Internet presence to yet another new hosting service, with new rules and setup fees. Our old webmail installation no longer works, having been reconfigured by Intertune to use their web mail client instead, but we can live with that; it’s less work for us, though with less versatility and customization. Now, to finish tweaking all the PHP scripts on the sites for compatibility with PHP 5.3.29.