Apologies in advance for the technical detail, but I’ve been trying to get the grandchildren interested in learning to code, if not as a career, then at least to make the world less magical and more driven by thoughtful application of disciplined skill, like driving a car or cooking a meal. And, above all, it can be fun. You know who you are. The rest of you, sit down, hang on, and share with the next generation. The other lesson here is that it is never too late to learn something new, nor too early.
Regular readers may know that I became more or less “full time” retired this fall, with the ending of the latest in a 14-year sequence of contracts supporting the NIH Rocky Mountain Laboratories. Oh, I have a few commercial web clients left, with the occasional rewrite, addition, or help request when a data entry goes awry, and a few pro bono web sites to manage, but, for the first time in 50 years, I actually have only a few minor updates in this last week of the year.
Typically, the systems support folk perform maintenance on systems when the majority of the staff has time off, which, in the United States, tends to occur only between Christmas and New Year’s Day. In the pre-Unix days, this often meant traveling from coast to coast to work on U.S. Navy ships in port for the holiday. In the past 20 years, it meant working late on days when the rest of the staff got early dismissal, or telecommuting on holidays.
So, having a bit of free time, I’ve decided to “play,” which, for the thoroughly addicted Unix Curmudgeon, means writing programs for fun and wiring up systems just to learn something new (or practice rusty skills). The geek toy of the year, or maybe decade, is the Raspberry Pi computer, the $35 Linux board (more like $50-$100, depending on what accessories are needed to set up and run it–and, once they are set up, you can run them “headless,” without keyboard, monitor, or mouse, and access them from your PC, Mac, iPad, or phone, with the proper app). Normally fairly frugal, I have somehow managed to acquire five of these little beasties, the last one because I thought one had died, but it was actually the SD card (like the one in your camera) that serves as the primary “disk drive,” so I bought an extra SD card and now have an extra box. I also bought one to get my hands on the new B+ model, the one with a 40-pin I/O header and four USB ports instead of the old style with a 26-pin header and two USB jacks. I did refrain from buying another protective case, instead choosing to bolt the “replacement” board to a piece of scrap hardwood so it won’t short out.
The purpose of the Raspberry Pi, as conceived by its British designers, is to promote learning and experimenting. Thus, the I/O port, with a number of digital input and output lines, and a special port for a miniature CMOS camera board the size of a postage stamp. Of course, I had to have one of those, too, along with a “daughterboard” for another Pi that provides convenient digital input/output lines with screw terminals and a couple of tiny relays.
The Raspberry Pi is, though small, a “real” Linux server, and can run almost any software. Having been a Unix/Linux system administrator for the past 25 years, I find them fun to play with and use to recreate small versions of big networks. I have one that provides network information and internet name service to the local network, another that is a router and cluster head node, connecting two networks together and managing one of them, including disk sharing, one that is a gateway to the Internet (providing secure access into our network) and also a web server, one that is a print server, and another that is a database server. And, of course, one that is a camera platform. (In case you are counting, some do several things at once–that’s what Linux and Unix are good at, even in a small machine.)
Lacking interesting ideas, I merely pointed the camera out my office window, to provide a “window on the world,” for which I have started to experiment with different programming. First, I simply put up a web site (using the Pi as the web server) that allows the user to take a picture of what the camera is looking at “right now.” The Raspberry Pi promotes the use of the Python programming language (named by its author, Guido van Rossum, after the British comedy troupe, “Monty Python”), a language that I had heretofore avoided, since it uses white space as a block delimiter, and white space has been the bane of Unix shell programmers (of which I am one) since the beginning of the epoch (1/1/1970). Nevertheless, it is a solid, well-constructed language, which is growing rapidly, both in academia as a beginning programming language and in industry and research as a powerful tool for rapid prototyping and robust permanent systems. Python is, unlike most scripting languages, strongly typed (meaning values are distinctly numbers, text, or other structures, and are not silently converted between types) and naturally object-oriented, which means data can inherit processing methods and structure, and details of structure can be hidden from other parts of the program, making programs easier to extend and debug, and making objects easy to import into other programs.
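For readers new to Python, here is a minimal stand-alone sketch (my own illustration, not part of the webcam code) of the two traits just mentioned: strong typing and white-space block structure.

```python
count = 3
label = "photos"

# Strong typing: Python refuses to silently mix text and numbers,
# unlike the shell, where everything is a string.
try:
    result = label + count               # raises TypeError
except TypeError:
    result = label + ": " + str(count)   # explicit conversion required

# Indentation, not braces or keywords, delimits the block inside this loop:
parities = []
for n in range(4):
    if n % 2 == 0:
        parities.append("even")
    else:
        parities.append("odd")

print(result)        # photos: 3
print(parities)      # ['even', 'odd', 'even', 'odd']
```

The indentation is not decoration: un-indenting a line moves it out of the loop, which is exactly what makes careless white space so dangerous to shell programmers and so central to Python programmers.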
So, with Python as a de facto system programming language, the libraries that operate the external ports and the camera are, of course, written in Python. Using the devices requires Python programming skills, incentive enough to finally learn the language.
The web camera project quickly evolved into a combined weather station and surveillance system, and, not surprisingly, expanded my Javascript programming skills as well. Javascript is at the core of most interactive web applications: it runs in the user’s browser, providing dynamic interaction with the server without the need to reload the current web page, and it can perform timed, automatic functions, such as refreshing an image. Since part of the project involves sewing a sequence of still images into a timelapse video, it also involved building a command-line video editor from source code and learning to manipulate video with it.
The web page: All this code does is display a photo, then update the time and the photo every 15 seconds (the rate at which the camera takes pictures). The Python code runs on the server; the Javascript code runs in the user’s browser. The image acquisition happens in another program, which follows.
#!/usr/bin/env python
import cgi
import time
import os
import sys
import urllib2
import json

os.environ['TZ'] = "PST8PDT"
print "Content-type: text/html"
print

Now = time.localtime(time.time())
DST = time.tzname[Now.tm_isdst]
Date = time.asctime(Now)

print """
<html>
<head><title>Raspberry PiCam</title></head>
<body>
<h1>The view from Chaos Central <span id="nowTime">%s</span></h1>
""" % Date

f = urllib2.urlopen('http://api.wunderground.com/api/79be9651d0e9e9fc/geolookup/conditions/q/WA/Shelton.json')
json_string = f.read()
parsed_json = json.loads(json_string)
location = parsed_json['location']['city']
title = parsed_json['current_observation']['image']['title']
link = parsed_json['current_observation']['image']['link']
weather = parsed_json['current_observation']['weather']
temp_f = parsed_json['current_observation']['temp_f']
temp_c = parsed_json['current_observation']['temp_c']
wind = parsed_json['current_observation']['wind_string']
updated = parsed_json['current_observation']['observation_time']

print """
<img src="/images/current.png" id="currentImage"/>
<script language="Javascript">
setInterval(function() {
    // Re-fetch the image; the random query string defeats the browser cache.
    var currentImageElement = document.getElementById('currentImage');
    currentImageElement.src = '/images/current.png?rand=' + Math.random();
    // Update the displayed clock time.
    var now = new Date();
    var h = now.getHours();
    var m = now.getMinutes();
    var s = now.getSeconds();
    formatnum = function( num ) {
        if ( num < 10 ) {
            return '0' + num;
        } else {
            return '' + num;
        }
    }
    var time = formatnum(h) + ':' + formatnum(m) + ':' + formatnum(s);
    var nowTimeElement = document.getElementById("nowTime");
    nowTimeElement.innerHTML = time;
}, 15000);
</script>
"""

print "<p>The temperature in %s is: %s F / %s C<br />" % (location, temp_f, temp_c)
print "and the weather is %s<br />" % weather
print "Wind: %s<br /><br />" % wind
print "Weather Data %s<br />" % updated
print "Reload page to update weather: image updates automatically every 15 seconds<br />"
print "Weather data provided by <a href=\"%s\">%s</a><br />" % (link, title)
print "Image realized on a <a href=\"http://www.raspberrypi.org\">Raspberry Pi</a></p>"
f.close()
print """
</body>
</html>
"""
So, confused yet? Below is the code that actually runs the camera. It takes a picture every 15 seconds, numbering the pictures so that a third program can sew them together into a timelapse video. The timezone hack in the shebang line (the first line of the script) sets the local timezone, which drives the timing in the script. The system starts this script each day before the earliest sunrise; the script waits until sunrise (obtained from the weather programming interface) and runs until sunset. We start 30 minutes before sunrise and stop 30 minutes after sunset, to capture twilight.
#!/usr/bin/env TZ=PST8PDT python
import time
import picamera
import os
import shutil
import urllib2
import json

interval = 15   # capture interval: one picture every 15 seconds

# Get today's sunrise and sunset times from the weather API.
riset = urllib2.urlopen('http://api.wunderground.com/api/...(api key here).../astronomy/q/WA/Shelton.json')
json_string = riset.read()
parsed_json = json.loads(json_string)

# Adjust the start time to 30 minutes before sunrise.
sunriseh = int(parsed_json['sun_phase']['sunrise']['hour'])
sunrisem = int(parsed_json['sun_phase']['sunrise']['minute'])
if ( sunrisem <= 30 ):
    sunrisem += 30
    sunriseh = sunriseh - 1
else:
    sunrisem = sunrisem - 30

# Sleep until the start time.
while (time.localtime().tm_hour < sunriseh):
    if ( time.localtime().tm_min < 30 ):
        time.sleep(1800)
    else:
        time.sleep((sunrisem * 60) + 1 )
while (time.localtime().tm_min < sunrisem):
    time.sleep(60 * (sunrisem - time.localtime().tm_min + 1))

# Adjust the stop time to 30 minutes after sunset.
sunseth = int(parsed_json['sun_phase']['sunset']['hour'])
sunsetm = int(parsed_json['sun_phase']['sunset']['minute'])
if ( sunsetm >= 30 ):
    sunsetm = sunsetm - 30
    sunseth += 1
else:
    sunsetm += 30

print 'Sunrise: ' + str(sunriseh) + ':' + str(sunrisem) + ', Sunset: ' + str(sunseth) + ':' + str(sunsetm)

# One directory per weekday: remove last week's files and start fresh.
wday = time.localtime().tm_wday
logdir = '/mnt/TIMELAPSE/' + str(wday)
shutil.rmtree(logdir)
os.mkdir(logdir)

# Grab the camera and start recording.
with picamera.PiCamera() as camera:
    camera.resolution = (320,240)
    camera.start_preview()
    time.sleep(2)
    # Loop: the capture method holds the camera device for the duration.
    for filename in camera.capture_continuous(logdir + '/img{counter:04d}.png'):
        shutil.copyfile(filename, '/var/www/images/current.png')
        print('Captured %s' % filename)
        time.sleep(interval)
        # Quit once we are past the adjusted sunset time.
        if ( time.localtime().tm_hour > sunseth or
             ( time.localtime().tm_hour == sunseth and
               time.localtime().tm_min >= sunsetm ) ):
            shutil.copyfile('/var/www/images/NoImage.png', '/var/www/images/current.png')
            exit()
So, there it is: a program that finds the hours of daylight, day after day, and records four photos a minute, which should catch at least a glimpse of anyone entering the property. Each picture is copied to the web site as “current.png” so that the Javascript in the web page can update it for anyone currently watching (or at least anyone who has a tab open to the site).
The next evolution is to make a timelapse movie, which, at 8 frames per second, displays an hour of observation every 30 seconds. A faster display rate would make the movie shorter, but moving objects wouldn’t appear long enough for the eye to recognize them, and slower internet connections/browsers may drop frames, missing data altogether.
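The arithmetic behind that “hour every 30 seconds” claim can be checked in a few lines (illustrative only, not part of the project code):

```python
capture_interval = 15                        # seconds between stills
frames_per_hour = 3600 // capture_interval   # 240 stills captured per hour
playback_fps = 8                             # timelapse playback rate

# 240 stills shown at 8 frames/second take 30 seconds to play back.
seconds_of_video_per_hour = frames_per_hour // playback_fps
print(seconds_of_video_per_hour)             # 30
```

Doubling the playback rate to 16 fps would halve that to 15 seconds per hour, at the cost of any moving object flashing past in a frame or two.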
This code runs in a virtual server on our main virtual host, which contains system images for the various versions and releases of Linux, Windows, and FreeBSD that we support. This one happens to run on CentOS 7, the latest free Red Hat clone. It could run on the Raspberry Pi, but video encoding requires a lot of time and processing power, so we chose to distribute parts of the process to a faster machine with more memory: the Pi has a 32-bit 700MHz ARM processor and 512 MB of memory; the virtual host has a quad-core 64-bit 2.4GHz Intel Xeon processor (more than 3 times the clock speed) and 8 GB of memory (16 times as much).
#!/bin/bash
# Stitch the day's stills ($1 = weekday number) into an mp4:
# read the stills at 8 frames/second, output at 30 frames/second,
# with a pixel format browsers can play.
cd ~/TIMELAPSE
~/bin/ffmpeg -framerate 8 -i ./images/${1}/img%04d.png -r 30 -pix_fmt yuv420p surveil${1}.mp4
Short, eh? A simple, essentially one-line shell script running the ffmpeg command with a lot of options set. But there’s more to this. For one, the still files live on an external hard drive attached to the Raspberry Pi. It takes much less processing power and time to copy files than to encode video, and we can copy them incrementally through the day, so we have a driver script on the Raspberry Pi to send the files to the main server, run the remote video processing script, and retrieve the resulting video file. In this case, the included ‘my_agent’ script sets up the environment needed to log in to the remote machine using a security agent’s pre-authorized key.
#!/bin/bash
# Copy the day's stills to the big server, build the video there,
# and fetch the result back ($1 = weekday number).
. my_agent
rsync -av /mnt/TIMELAPSE/* centos7:TIMELAPSE/images/
ssh centos7 ./TIMELAPSE/makevideo.sh $1
rsync -av centos7:TIMELAPSE/surveil*.mp4 /mnt/TIMELAPSE/videos/
Lastly, a web interface is needed to display the video in the user’s browser. This is still under development as an integrated part of the webcam application, but it relies on a snippet of HTML 5, the latest version of the HyperText Markup Language that Tim Berners-Lee spun off as a subset/variant of the 1986 Standard Generalized Markup Language (SGML) when he invented the World Wide Web 25 years ago, in 1989 (it didn’t get built until 1990). HTML 5 provides powerful tags that define multi-media constructs like video and audio, which previously required specialized streaming server software and browser plugins to implement. The code snippet below contains the Javascript hack needed to signal the browser to load the new version of the file, rather than replay the cached version. The final version will offer the option of displaying a timelapse video for the current day up to the current hour, or for any day in the past week (hence the external disk drive on the Raspberry Pi, to store a week’s worth of surveillance video and the still pictures from which it is built).
<video width="320" height="240" controls>
    <source src="/TIMELAPSE/videos/surveil0.mp4" type="video/mp4" id="vidFile">
</video>
<script language="Javascript">
    // The random query string forces the browser to fetch a fresh copy
    // of the video instead of replaying the cached version.
    var currentVidFile = document.getElementById('vidFile');
    currentVidFile.src = '/TIMELAPSE/videos/surveil0.mp4?rand=' + Math.random();
</script>
And, so, that’s what old programmers do for fun during Christmas break. Meanwhile, I’ve developed some familiarity and skill with Python programming, honed my Javascript skills, refreshed my skill at building software from source packages, and kept up to date on the latest system software from Red Hat. Jobs are out there… On the Internet, no one knows you’re a geezer, or simply doesn’t care, if you have the code skills they need.
If you want to see what’s outside our window, check http://www.parkins.org/webcam during the day (the camera is off at night). The picture will update every 15 seconds until you close the browser window or tab (or go to another site from this window). Of course, it is a work in progress, and we have recently made changes to our router, so it might not work at any given time.