On the way back from an outing to the beach yesterday, we passed the cooling towers for the ill-fated WPPSS #3 and #5 nuclear power plants near the village of Satsop. Having recently commented on a book about an ambitious software project, I reflected on a common theme when it comes to discussing software, “Why can’t software be engineered like bridges and power plants?” Well, perhaps it is. Like software projects, civil engineering projects also fail, sometimes spectacularly. Bridges and stadiums fall down, buildings and power plants remain uncompleted eyesores.
The misfortune of the Washington Public Power Supply System, with the unfortunate acronym pronounced “whoops,” is a prime example. Oft-cited accounts from Seattle newspapers at the time mostly point to cost overruns and delays as the major cause of the WPPSS default, which we don’t deny. But other factors certainly had a hand in pushing opinion against completion of the project. One was an inflated estimate of power demand when the projects were initially launched. Public engineering projects, like large software projects, take a long time to build, during which the original requirements may change radically. Rather than spurring demand for alternate energy sources, the 1973 oil crisis was so sudden and deep that it instead instigated conservation and frugality, reducing overall demand. At the same time, the question of alternate sources brought environmental concerns to the fore, and some of the delays were due to demands for stricter environmental accounting: the potential impact on salmon and other fisheries of dumping excess heat into the Satsop River, and the fact that two million people lived directly downwind, under the radiation plume in the event of an accidental release. The latter concern was amplified because part of the delay and cost overrun was due to rework of substandard assemblies. Plant construction was halted, and power customers in the Northwest were faced with a 2.25 billion dollar bond default, a spectacular failure indeed. Incidentally, the only plant in the project that was eventually completed does produce electricity at lower cost than the combined hydro, nuclear, and coal-powered grid, but the failures at Three Mile Island and Chernobyl thoroughly dampened enthusiasm for nuclear power for decades.
How does this relate to software? Well, we can certainly learn from the lessons here:
- Requirements change in unexpected ways
- Poor training and planning results in costly rework
- Failures in operation can be catastrophic to your business
- Small pilot projects can succeed, where large-scale efforts fail
The practice of software engineering has evolved a great deal in the 40-odd years since Edsger Dijkstra called for the evolution of structure in coding practice with his “GOTO Considered Harmful” essay, and, indeed, much has changed since my personal epiphany 20 years ago, when I discovered, on enrolling in a formal study of software engineering, that I had apparently spent the previous 20 years in another profession with the same name. Software is not like concrete and steel, which must be planned completely from the beginning and constructed stepwise from start to finish, with only incidental changes along the way, though the classic “waterfall” software methodology was modeled on just that kind of brick-and-mortar construction. A number of methodologies have evolved, including the spiral method, in which the concrete and steel are considered as plastic as the facade and details; and the rapid-prototyping method, which is akin to a false-front movie set, where the user interface and operations come first, using off-the-shelf or mock-up back-end components, with features that appear automated actually operated by stagehands behind the scenes. The spiral method essentially uses more frequent inspections, so that mistakes or misunderstandings in the foundation get bulldozed and rebuilt before the frame goes up. Newer methods, like extreme programming, merely address the issue of code verification and validation, organizing programming teams to reduce coding errors and incorrect or omitted implementations of the specification through continuous inspection.
A lot of my work falls into a sort of reverse rapid-prototyping mold: the problem gets solved in the back end, which is difficult but produces a useful result. Often, the project is a one-time run, especially when it is part of a scientific research project. The expensive, time-consuming, but not particularly difficult front end, the user interface, just never gets built. The user gives me a flat file or spreadsheet, or maybe just a list of data sources and a description of what is needed back, and I give him or her a spreadsheet or graph with the results. End of project. Some of these “one-off” projects end up becoming research tools valuable for a number of projects, so they get converted from a command-line script into a web CGI application. The point here is that cost is put in where it gets the best bang for the buck, as the requirements evolve. And, as a bonus, when the front end gets grafted on, I often retain the option of running the program in command-line mode, which greatly simplifies testing and debugging. It is not necessarily the overall cost of the project, but the cost per use, that dictates the tradeoff between doing a lot of manual steps (which are part of the design process anyway) and writing a program to do them automatically.
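For what it’s worth, that dual-mode arrangement is easy to set up in PHP (the same idea works in any CGI-capable language). The sketch below is a minimal illustration under assumed names (summarize_data, results.tsv), not code from any of the projects mentioned: a single back-end function serves both a command-line run and a web request, with php_sapi_name() deciding which wrapper to use.

```php
<?php
// Minimal sketch: one back end, two front ends. The function and file
// names are hypothetical, for illustration only.

// Back-end step: read a tab-delimited file and total the second column per key.
function summarize_data(string $path): array
{
    $totals = [];
    $lines = file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    foreach ($lines ?: [] as $line) {
        $parts = explode("\t", $line, 2);
        if (count($parts) < 2) {
            continue;  // skip malformed lines
        }
        [$key, $value] = $parts;
        $totals[$key] = ($totals[$key] ?? 0) + (float) $value;
    }
    return $totals;
}

if (php_sapi_name() === 'cli') {
    // Command-line mode: take the file name as an argument, print plain text.
    foreach (summarize_data($argv[1] ?? 'results.tsv') as $key => $sum) {
        echo "$key\t$sum\n";
    }
} else {
    // Web (CGI) mode: the same back end, wrapped in a bare HTML table.
    echo "<table>\n";
    foreach (summarize_data('results.tsv') as $key => $sum) {
        printf("<tr><td>%s</td><td>%s</td></tr>\n", htmlspecialchars($key), $sum);
    }
    echo "</table>\n";
}
```

Because the command-line branch never goes away, the script stays testable from a shell even after the web front end is grafted on.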
On the other side of the coin is web site design, a craft that evolved concurrently with the rise of rapid prototyping as a methodology and is well suited to that approach. For these, I design sections of the pages with the layout in CSS and the content management in PHP, for the most part. At issue here is the content itself. I usually start with simple PHP arrays or maybe tab-delimited flat files, especially for sites that change slowly or don’t have a lot of data initially. These have the advantage of being immensely portable, since the data is self-contained. But for sites where the data is collected from the user, a database is easy enough to drop in, if the data-management functions are sufficiently cohesive. Like the scientific research scripts, I usually like to handle the data updates myself at first, since the hard part to program is data validation. Until you have done a few of these, the number of variations in format the same data can take, while the user still considers it “the same,” is mind-boggling. Since a lot of my scripts acquire data from files rather than interactively, it becomes necessary to parse all possible variations and then validate the result, rather than ask the user to re-enter the data. Phone-number-handling routines do this all the time, removing parentheses, dashes, and white space, and checking for other non-numeric input as well as making sure the right number of characters is present, but each custom data format requires its own parser.
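The phone-number case is concrete enough to sketch. The routine below is a minimal, assumed example of the parse-then-validate pattern for ten-digit North American numbers; the function name and canonical output format are illustrative, and each custom data format in a real project needs its own version of this.

```php
<?php
// Hedged sketch of a parse-then-validate routine for phone numbers.
function normalize_phone(string $raw): ?string
{
    // Strip everything that is not a digit: parentheses, dashes,
    // dots, spaces, and any stray characters the user typed.
    $digits = preg_replace('/\D+/', '', $raw);

    // Tolerate a leading country code of 1.
    if (strlen($digits) === 11 && $digits[0] === '1') {
        $digits = substr($digits, 1);
    }

    // Validate: exactly ten digits must remain, or the input is rejected.
    if (strlen($digits) !== 10) {
        return null;
    }

    // Re-emit in one canonical format.
    return sprintf('(%s) %s-%s',
        substr($digits, 0, 3),
        substr($digits, 3, 3),
        substr($digits, 6));
}

// "(360) 555-1234", "360.555.1234", and "1 360 555 1234" all normalize to
// the same string; "555-1234" comes back null and gets flagged for review.
```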
This seems like a lot of work, but for a site undergoing rapid evolution through rapid prototyping and spiral development, coding a user-facing content-management system onto the back end is much more expensive to build and modify than simply having the developer add the data by hand until the site stabilizes. That is, of course, unless the site demands frequent large changes, in which case we design the input part first. When I develop a site with a database component, I code the schema into the application, so that all I have to do when I make major changes during development is run the script from the command line with a switch to delete the tables and rebuild them with the new schema. This option is not available from the web interface, so the application is protected from accidental erasure of all the data. So, even though software is seemingly constructed by trial and error, the template and methodology are engineered, though not as formally partitioned as in model-view-controller frameworks like Ruby on Rails and others.
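A bare-bones version of that command-line rebuild switch might look like the following sketch, assuming PDO over SQLite and a hypothetical articles table; the actual schema and storage engine are, of course, project-specific.

```php
<?php
// Minimal sketch of keeping the schema in the application, with a destructive
// rebuild reachable only from the command line. The SQLite file, table, and
// column names are assumptions for illustration.

$db = new PDO('sqlite:site.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The schema lives here, in code, so a schema change is an edit to this array.
$schema = [
    'CREATE TABLE IF NOT EXISTS articles (
         id     INTEGER PRIMARY KEY,
         title  TEXT NOT NULL,
         body   TEXT,
         posted TEXT
     )',
];

// Dropping tables only works from a CLI run with --rebuild; the web interface
// never reaches this branch, so the data cannot be erased by accident.
if (php_sapi_name() === 'cli' && in_array('--rebuild', $argv, true)) {
    $db->exec('DROP TABLE IF EXISTS articles');
    echo "Dropped old tables; rebuilding schema.\n";
}

// Creating any missing tables is safe in either mode.
foreach ($schema as $ddl) {
    $db->exec($ddl);
}
```

Run from a shell with the switch, it wipes and recreates the tables; included from a web page, it only creates whatever is missing.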
There is an old saying that you can have it fast, cheap, or good; pick any two. This means, of course, that fast and good isn’t cheap, because everything needs to be done all at once, and any changes or corrections will be more sweeping, and therefore more expensive. To get something cheap and good, it won’t be fast: it will evolve over time, with functionality added incrementally as it is needed, so that few large, expensive changes are required. Given a rapid prototype with a skeleton back end, the developer has a chance to work with the user and refine the requirements, adding code instead of replacing it. Using a canned content-management system or a model-view-controller development framework can produce a system quickly at low cost, but a custom look and feel and innovative or complex functionality won’t be there without a lot more work.
Change orders and redesign are factors in any engineering project, but software seems especially susceptible, probably because the customer doesn’t see its size and complexity. When we build a house and something comes out suboptimal, or just not quite what we expected, we are more likely to live with it than to rip out walls and pour more concrete, because we can see for ourselves what the change will cost. But, invariably, the first thing a customer says during the rollout of custom software built for him is, “Gee, that’s great, but now can you make it do [something completely different]?” Yes, we can, but sometimes we have to throw away everything and start over: the new functionality may call for a very different architecture. Attempts to graft on the changes will result in a product that is not fast, not cheap, and not good. Rapid prototyping, focus on the core mission (solve a problem, attract customers, etc.), and incremental development with some behind-the-scenes work can produce a useful product at a reasonable price, even if parts of it get ripped out or mothballed along the way.