Taming the Office Network

We know a few folks who are still on dial-up Internet access at home, but most of us, at least those of us who are “in the business,” have DSL or cable, some sort of high-speed broadband connection. The service providers originally intended one connection, one computer. But many households now have more than one computer, and, in the case of Chaos Central, where we run two businesses, one of which builds software and web sites and manages other networks, we have “many,” most of which are virtualized but look, to the network, like individual computers.

The standard DSL or cable modem now comes configured as a network server appliance, with Network Address Translation (NAT), Domain Name Service (DNS), and Dynamic Host Configuration Protocol (DHCP). But, as an appliance, the little modem does a less-than-adequate job in each area, with limited control available to the user via a web menu. At Chaos Central, we have been gradually migrating these network functions to “real” (Linux/Unix, of course) servers, for both performance and control.

Back at the Rocky Mountain Nexus of Chaos Central, in Montana, we used a community-wide wireless network, established in the pre-DSL days. The radio connection was configured as a bridge, so we built a FreeBSD-based router to handle the NAT functions and put DNS on an internal server. A separate wireless bridge/router handled DHCP for laptops and such, but the rest of the network had static addresses.

When Chaos Central’s West Coast Nexus coalesced, both the FreeBSD router and the DNS server were casualties of the move, so we relied on the [new] DSL modem for network services, assigning static addresses outside the DHCP scope for servers and workstations that needed to be accessed through SSH.  But, as the stable of virtual servers proliferated, the shortcomings of the DSL modem as a network appliance became painfully obvious.

NAT works by mapping each client connection to a port on the modem’s public address, through which requests and replies are tunneled in and out of the network. The first DSL modem we had didn’t keep track of which client had requested what on which port for some protocols, so services like FTP, which listen on one port and move data on another, didn’t work unless the relevant ports were explicitly forwarded to the specific client that needed them. This wasn’t a big deal at the time, since the Unix side of the business uses mainly SSH and most public download services offer a choice of HTTP or FTP. So NAT, while not smart, works, most of the time…
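For comparison, doing that explicit forwarding yourself on a Linux router comes down to a few iptables rules. This is a sketch only; the external interface name (eth0), the internal FTP host (192.168.1.10), and the passive-port range are all illustrative, not our actual settings:

    # Masquerade outbound LAN traffic behind the public address
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # Explicitly forward the FTP control port and a passive data-port range
    # to the one client that needs them
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 21 \
        -j DNAT --to-destination 192.168.1.10
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 50000:50100 \
        -j DNAT --to-destination 192.168.1.10

A stateful firewall with an FTP connection-tracking helper (nf_conntrack_ftp on current Linux kernels) can avoid the static forwarding entirely, which is exactly what the little modem couldn’t manage.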

DNS became an issue once there were too many physical and virtual servers to keep track of in /etc/hosts (LMHOSTS for Windows clients). The “little appliance that almost could” uses dynamic DNS, by which each client offers its name to the server, so machines can find each other by name. But the user interface doesn’t allow many configuration options, so it had to go.
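For the record, the /etc/hosts approach is just a block like the one below on every machine (the names and addresses are made up for the example); keeping a copy in sync on each host is exactly what stops scaling once the virtual servers multiply:

    192.168.1.2   ns1
    192.168.1.3   files
    192.168.1.4   build-vm1
    192.168.1.5   build-vm2
    # ...and so on, repeated on every machine on the LAN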

When setting up a private LAN DNS zone, we like to use the form “company.lan” and generate the named.conf.local file and zone files accordingly. In these, we list the static addresses and server names, and also assign names like “dhcp-2” to enough addresses in the DHCP scope to cover the likely number of clients. Printers, wireless routers, and portable machines are easier to manage as DHCP clients, and, as we shall see later, DHCP with reserved addresses can simplify static assignments as well.
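As an illustration (the 192.168.1.x addresses and host names here are invented for the example, not our actual layout), the named.conf.local entry and a minimal forward zone file might look something like this:

    // /etc/bind/named.conf.local -- declare the private zone
    zone "company.lan" {
        type master;
        file "/etc/bind/db.company.lan";
    };

    ; /etc/bind/db.company.lan -- static servers plus names for the DHCP scope
    $TTL 86400
    @        IN  SOA  ns1.company.lan. hostmaster.company.lan. (
                      2011032101 ; serial
                      3600       ; refresh
                      900        ; retry
                      604800     ; expire
                      86400 )    ; negative-cache TTL
             IN  NS   ns1.company.lan.
    ns1      IN  A    192.168.1.2
    files    IN  A    192.168.1.3
    dhcp-1   IN  A    192.168.1.101
    dhcp-2   IN  A    192.168.1.102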

Which brings us to DHCP. The modem’s DHCP doesn’t allow for much configuration. Despite offering to do so through the interface, it just didn’t seem to “want” to substitute our LAN DNS server for its built-in and external DNS services. That meant manually editing /etc/resolv.conf every time the DHCP lease was renewed on a client, whenever we needed to address local machines that didn’t use dynamic DNS (a number of Linux distros register themselves that way, but some do not; I personally don’t like DDNS because of the potential for name conflicts when you allow servers to name themselves).
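The hand edit in question amounted to restoring a resolv.conf along these lines after every renewal (the domain and server address are the illustrative ones from the zone example above):

    # /etc/resolv.conf -- what we actually wanted the clients to use
    domain company.lan
    search company.lan
    nameserver 192.168.1.2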

So, the next step was to set up our own DHCP server, first turning off the DHCP service in the modem. Having our own service lets us specify the local DNS server, domain, and search domain, and, better yet, map the MAC addresses of various machines to addresses outside the dynamic range, providing reserved addresses for those machines. There is a definite advantage to having everything use DHCP: you no longer have to modify the network configuration on each machine when you move or add a service; you just change the records on the DHCP server and renew the leases on the clients. We’re using Ubuntu 10.10 Server Edition, running as a virtual server, with BIND9 for name service and DHCP3 for address assignment. The setup was quite easy, but, then, we’ve been doing this for 15 years and had the DNS zone templates from the old site archived. We’re used to hand-editing the files, but Webmin does a great job of guiding a new user through the setup process and managing the services afterwards.
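In DHCP3’s /etc/dhcp3/dhcpd.conf, the pieces that matter for this are the domain and DNS options plus the per-host reservations. A sketch, continuing the made-up addresses from the examples above (the MAC address is likewise invented):

    # /etc/dhcp3/dhcpd.conf -- illustrative addresses only
    option domain-name "company.lan";
    option domain-name-servers 192.168.1.2;

    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.101 192.168.1.150;   # dynamic pool (the dhcp-N names)
        option routers 192.168.1.1;          # the DSL modem
    }

    # A reserved address, tied to the machine's MAC, outside the dynamic range
    host files {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.1.3;
    }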

The next step in the process of taking charge of your own network is to configure the DSL or cable modem as a pass-through device, add a second network card to a spare machine, and build your own router, with more reliable NAT, a more configurable firewall, and a local NTP time server. But that’s a future project. Right now, we’re evaluating network storage solutions for $CLIENT, and took time out to clean up and fix things to make that easier.