A 1950 science fiction story by Damon Knight described an alien invasion in which communication between the aliens and humans was largely one-sided. The aliens’ intents and purposes were never clear, though they provided much advanced technology, seemingly for free. The humans obtained a copy of one of the aliens’ books. The title was translated first and rendered as “To Serve Man.” The general conclusion among the humans was that the aliens were here to solve humanity’s problems and make a better life, so the aliens’ every request was met with approval, including sending many people on extended visits to the aliens’ home world. Only when more of the book was translated did it become clear that the volume was a cookbook.
And so it is: we have been invaded by an alien species, the computer, that also “serves man.” In the early days of computing, the computer was just “the computer.” A person or team set up a program, prepared the input, and ran it. Later, when multitasking became popular, along with multiuser systems whose users were connected by terminals, the computer itself came to be called the “mainframe,” as opposed to the terminal concentrators and the terminals themselves. The term “mainframe” also distinguished large computers that ran an entire organization from “minicomputers,” which were still multitasking and sometimes even multiuser, but operated at a departmental or small-business scale. Even early networking connected collections of mainframes and minicomputers.
But the advent of the personal computer changed all that. Early on, a personal computer could be connected to a mainframe or a minicomputer by running a “terminal emulator” program, but soon true networking came to the desktop computer as well, with larger machines providing the shared disk space, memory, or processing power that was not economically feasible to put on every user’s desktop. These more powerful machines were called “servers,” and the machines they served were referred to as “clients.” As desktop machines became more powerful, there was superficially no difference between client machines and server machines, so it became fashionable to simply order the same systems, with more memory and disk, and use them as servers. The advent of compute clusters reinforced this thinking: the first clusters were actually built from obsolete client-class machines grouped together to provide processor pools for multiple applications, or even to implement parallel processing.
However, relying on machines designed for end-user applications to function as servers is as gross a misinterpretation of the meaning of “to serve” as in the cautionary tale of the first paragraph. Personal computers are designed primarily to process business data for a single user: word processing, spreadsheets, and presentation graphics, optimized for display on the console monitor and input from the keyboard, with some exchange of data over the network. The next step up, what we used to call workstation-class machines, are essentially personal computers optimized for high-quality graphics display and for repeated complex calculations requiring large amounts of memory and processing power. Machines designed to play video games are similar, but usually built from lower-quality components. Mobile computers range from business-class “notebook” and “laptop” models meant for travel or casual use down to “netbooks,” machines designed primarily to run web browsers and exchange data with the network. Netbooks resemble the thin clients found in organizations with large application servers or multiuser systems: little more than a keyboard, a display, and a network card. All of these machines are designed to operate in an environment that is comfortable for the user, around 20 degrees Celsius, in a roomy, well-ventilated space, without producing excessive noise.
With the exception of so-called desk-side servers, true server-class machines are designed to operate in a climate-controlled environment that may not be comfortable for humans for long periods: they are usually densely packed in racks with extra cooling, their fans run at high speed (and high noise levels) much of the time, and the air supply may be uncomfortably cold, moving at higher velocities and volumes than in human-occupied spaces. The machines themselves are designed to be operated without attached keyboards or monitors, or at least without dedicated console equipment, perhaps sharing one console among eight or more systems. Servers operate unattended, so the ideal machine has internal monitoring sensors and remote console services. A server’s role is to communicate with many clients, so it must have high input/output capacity, often with multiple network connections. Servers must be capable of switching among hundreds of processes a second, and they often have much larger processor memory caches than single-user machines. Because the high cost of high performance is shared among many users and processes, servers will have faster memory, more memory capacity, and disk arrays optimized for fail-safe operation, error correction, and high-speed throughput.
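The context-switch rates described above can actually be observed on a Linux system, where the kernel exposes a cumulative counter on the `ctxt` line of /proc/stat. A minimal sketch of measuring the rate from two such samples (the sample strings below are illustrative, not real measurements; on a real host you would read /proc/stat twice, an interval apart):

```python
def read_ctxt(stat_text: str) -> int:
    """Extract the cumulative context-switch count from /proc/stat-style text."""
    for line in stat_text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

def switch_rate(sample_a: str, sample_b: str, interval_s: float) -> float:
    """Context switches per second between two samples taken interval_s apart."""
    return (read_ctxt(sample_b) - read_ctxt(sample_a)) / interval_s

if __name__ == "__main__":
    # Illustrative samples standing in for two reads of /proc/stat one second apart.
    sample_a = "cpu  10 0 20 300\nctxt 1000000\nbtime 1700000000\n"
    sample_b = "cpu  12 0 21 305\nctxt 1000850\nbtime 1700000000\n"
    print(switch_rate(sample_a, sample_b, 1.0))  # prints 850.0
```

A busy server will show counts in the thousands or tens of thousands per second, which is why the larger caches and faster memory mentioned above matter: every switch evicts one process's working state in favor of another's.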
If your business requires network-based services, it would be wise to select servers that provide the performance, reliability, and availability the specific mission requires, with room for expansion to meet growing needs over the life of the system, and to choose between large symmetric multiprocessor systems and clusters of smaller machines based on the problem to be solved. Using even high-end personal workstations with similar CPU, memory, and disk capacities may well be a mistake: the difference between being served and being the next meal for the competition.