This is my hundredth post at Griffonage-Dot-Com, and to mark the occasion I’ve pulled together new examples of some favorite techniques and subjects showcased here in the past. Think of it as a plate heaped up for you at the Griffonage Buffet and set down before you in place of the usual a la carte menu. My fiftieth post, back in July 2016, was similarly retrospective, but all the examples I provided in it were ones I’d already presented in previous posts, and this time around I wanted to do something different. The main change since then is that I’ve begun doing my own programming, mostly in MATLAB, rather than relying on existing tools (such as FotoMorph, Abrosoft FaceMixer, ImageToSound, and AudioPaint). Now when I think of some crazy new digital processing technique I’d like to try, I can usually figure out how to do it. Whether that’s a good thing or not, you’ll have to decide for yourself. Meanwhile, Griffonage-Dot-Com remains as before a home for my experiments in wringing insights and experiences from documents of all kinds and from all periods, by whatever means, with whatever degree of seriousness. That said, I find that it’s also been veering increasingly into algorithmic art lately. If you’re worried about how this connects to the other topics I’ve blogged about here, you could check out my remarks here about “aleatory forms”—but really you’d probably be better off just finding something more important to worry about.
Historical Sound Recordings
Historical sound recordings have long been a favorite subject of mine, and I hope to share more of them here in the future than I have in the past—I certainly have no shortage of material. Here, for example, is a catchy advertising jingle for Playtex girdles I was able to salvage from a delaminating lacquer disc some years ago only by playing it at half speed. It’s probably the “Playtex Living Girdles Song” copyrighted in 1956 by Nelson Ideas Inc., but whatever it is, it’s a compelling little earworm, and I don’t think it’s available anywhere else.
What I had wasn’t bad, but I couldn’t control it;
I tried many girdles and still wasn’t trim.
Wasn’t fat, no not that; I just took rearranging,
And as much as I paid, all my girdles gave in.
At last I found Playtex, the perfect solution:
The free, easy girdle that holds me in right.
I’m slimmer in Playtex, I’m trimmer in Playtex,
The hold-me-in girdle, so firm yet so light.
Now I’m sleek and I’m chic and I have a new freedom.
My Playtex controls me; it’s comfortable too.
Every way, every day, its control is unending:
Find out for yourself what a Playtex can do.
Playtex actually holds you in better than girdles costing twice as much. It keeps its shape and the same hold-in power even after six months. Try Playtex Lightweight: less weight, more hold-in power than you’ve dreamed possible. Only four ninety-five.
As challenging as it was to transfer that recording to a digital file—and I’m sensitive to an audible “bump” at the fifty-second mark which I might revisit if I didn’t think a second playback attempt might kill what’s left of the disc—the feat pales in comparison to some other audio-archaeological projects with which I’ve been associated. The First Sounds initiative is best known for its playback of the phonautograms of Édouard-Léon Scott de Martinville, who invented the principle of recording sounds out of the air over time with an eardrum-like membrane (although nobody pulled sound back out of such a record in turn until Thomas Edison in 1877). Scott’s phonautograms consist of waveforms scratched by a stylus into a thin layer of resinous soot deposited on paper. For some time, my colleagues and I relied on existing tools to convert them into playable audio, but I’ve since written my own software for doing this—called Picture Kymophone (note that the specific version I blogged about here three years ago has since been superseded by more recent ones as I’ve fine-tuned algorithms and interfaces). With this software in hand, I’ve been slowly working my way through the remaining “unplayed” Scott phonautograms and making them talk and sing. I haven’t wanted to release the results in dribs and drabs, but the present occasion strikes me as solemn enough to warrant a couple sneak previews.
Scott’s phonautograms fall into two major categories: the ones that have tuning-fork traces and the ones that don’t. The issue is that Scott rotated the drum of his phonautograph by hand, which resulted in severe fluctuations in speed. Starting probably in 1859, he recorded the vibrations of a tuning-fork alongside the trace of whatever main subject he was studying, and whenever one of these tuning-fork traces is present, we can use it today as a pilot tone for speed stabilization. Here’s speed-stabilized audio from one phonautogram with a tuning-fork trace, a record of “Au Clair de la Lune” dated April 17, 1860 (it’s leaf 4 in the Regnault dossier, and #37 in my discography).
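For the technically curious, the pilot-tone idea can be sketched in a few lines. My actual stabilization code is in MATLAB and considerably more elaborate, so treat this Python version (with invented names and parameters) as an illustration only: find the upward zero crossings of the tuning-fork trace, assume each cycle ought to have lasted exactly 1/f0 seconds, and resample the main trace onto that corrected time base.

```python
import numpy as np

def stabilize(audio, pilot, fs, f0):
    """Resample `audio` so that the tuning-fork `pilot` comes out at a
    constant f0 Hz. A sketch, not the code actually used on the
    phonautograms."""
    # positive-going zero crossings of the pilot trace
    neg = np.signbit(pilot)
    idx = np.where(neg[:-1] & ~neg[1:])[0]
    # sub-sample crossing times by linear interpolation
    frac = -pilot[idx] / (pilot[idx + 1] - pilot[idx])
    t_rec = (idx + frac) / fs            # when each crossing was recorded
    t_true = np.arange(len(idx)) / f0    # when it should have occurred
    # uniform grid in "true" time, mapped back into recorded time
    grid = np.arange(0.0, t_true[-1], 1.0 / fs)
    warped = np.interp(grid, t_true, t_rec)
    return np.interp(warped * fs, np.arange(len(audio)), audio)
```

The same linear-interpolation warp that straightens out the tuning fork straightens out everything recorded alongside it, which is the whole point of the pilot tone.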
This is the second of Scott’s three known phonautograms of “Au Clair de la Lune,” if we consider them chronologically by recording date, and it’s the last to have been played back. The one dated April 9, 1860, is the most famous: not only was it the first major phonautogram we released as audio back in March 2008, but it’s also the earliest complete phonautogram with a tuning-fork trace, making it the oldest substantial recording of the human voice we can hear with the rock-solid speed stabilization the tuning fork permits. Another, dated April 20, 1860, is the best-recorded of the three, as judged by ear. The one I’m unveiling here for the first time doesn’t have any equally compelling claim to fame, but taken in conjunction with the other two, it gives us our only opportunity to hear three different renditions by Scott of the same song.
Many phonautograms also survive from earlier phases of Scott’s work, mostly from 1857, but unfortunately they don’t have the tuning-fork pilot tones that we’ve been using to speed-stabilize phonautograms from 1860. Here’s an example of raw audio from one of these earlier phonautograms, with no attempt at speed stabilization: SEIN 8/54-28 (#13 in my discography, with a facsimile available here on page 33).
Just what are we listening to here? It sounds to me a bit like the pitch contours of speech, maybe played back a bit on the fast side. But it’s not, in fact, a recording of speech. Scott labeled this particular phonautogram as the first of nine numbered plates used to illustrate a presentation he gave to the Société d’encouragement pour l’industrie nationale (SEIN) of October 28, 1857. Here’s what he tells us about its contents and method of recording:
Pl. I. notes du médium tenues et vocalisant – La membrane est une mince baudruche. Elle est placée, comme dans le conduit acoustique, dans une position inclinée par rapport à l’axe du tuyau. Les silences se traduisent par une ligne droite, certains renflements du son par l’onde dite d’inflexion. Dans les sons voilés l’onde de condensation est subdivisée en deux dents, l’une grande, l’autre petite
[Pl. I. notes of medium pitch held and vocalized – The membrane is a thin goldbeater’s skin. It is placed, as in the acoustic conduit, in a position inclined with respect to the axis of the pipe. The silences are rendered by a straight line, certain swells of the voice by the wave called of inflection. In the muted sounds the wave of condensation is subdivided into two teeth, one large, the other small]
The content of the recording, then, should be “notes of medium pitch held and vocalized”; and by “vocalized,” Scott almost certainly meant that the notes were sung wordlessly on one or more vowel sounds. Jules Lissajous—who reviewed all the various phonautograms Scott submitted to the SEIN—wrote on the back of this one: “Cette planche preuve nettement que l’appareil peut servir à compter le nombre des vibrations,” i.e., “This plate proves nicely that the apparatus can serve to count the number of vibrations.” Maybe so. We just don’t know what their timing was, which kind of defeats the purpose.
And yet there may be hope. Simply stabilizing the pitch across each “note” might bring us closer to “held” notes if we could make educated guesses about where “notes” begin and end and what they are. Furthermore, I came up with a strategy back in 2017 for trying to stabilize the speed of tuning-forkless phonautograms even when we’re wholly at sea about their content, as described here. When I’ve applied this strategy to phonautograms with tuning-fork traces, so that I know ahead of time what they ought to sound like, the results have been promising about 50% of the time. But my efforts to apply the same strategy to phonautograms from 1857 have foundered so far on the difficulty of calculating reliable pitch contours for them.
Pictures → Sounds, Sounds → Pictures
What’s the world’s oldest sound recording? That depends mainly on your definition of a “sound recording.”
Scott’s phonautograms were the first time-based records of sound waves picked up out of the air with a membrane—the first “recordings” made according to the principle Thomas Edison would go on to exploit in his phonograph of 1877. But a lot of the things that we accept in practice as “sound recordings” today didn’t come into being that way; “phonograms” might be a more accurate and honest word for them. Think of electromagnetic guitar pickups or sound synthesizers: how many Top 40 hits don’t feature one or another of those? And “sound recordings” can manifest themselves not only as “waveforms” (graphs of amplitude over time) but also as spectrograms (graphs of frequency over time); open up a sound editing program of your choice, and chances are you can toggle back and forth between both types of display. If we choose a definition of “sound recording” that’s broad enough to accommodate all these phenomena, then we may also be able to find inscriptions that satisfy the same criteria dating back much earlier than the 1850s.
A few years ago, there were reports in the news about the discovery of an important piece of musical notation supposed to have been written in northwestern Germany about the year 900 (see e.g. here and here). As Giovanni Varelli observed, it’s the oldest known specimen of polyphonic musical notation that seems to represent “real” music rather than merely illustrating theoretical points as in the Musica enchiriadis and Scolica enchiriadis. The text begins “Sancte Bonifati,” which I suppose we can take as the name of the piece. Here it is.
There’s a modern performance of how the piece “might have sounded” available on YouTube, as sung by Quintin Beer and John Clapham, based, I assume, on Varelli’s well-informed reconstruction of it. But it also lends itself well to the technique I call paleospectrophony, which entails “playing” (or, better, educing) an inscription as audio just as though it were a sound spectrogram. After all, its horizontal axis corresponds to time, and its vertical axis corresponds to frequency, even if the left-to-right placement of notes might not represent timing precisely and the top-to-bottom placement of notes might not represent frequencies precisely.
So I spliced the two rows together, cropped them vertically to roughly an octave in height (with the top and bottom edge set roughly to G), and erased the lines that serve only to associate notes with other notes, as well as the unfortunately-placed British Museum stamp.
I then converted this image into audio via additive synthesis three times with the frequency range set to 200-400 Hz, 400-800 Hz, and 800-1600 Hz, and then combined the results.
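In case you'd like to try something similar yourself, here's a bare-bones Python sketch of the additive-synthesis step (my own implementation is in MATLAB, and every name and parameter here is invented for illustration): each image column becomes a slice of time, each row becomes an oscillator, and the rows are spaced logarithmically so that the image height spans the chosen frequency band.

```python
import numpy as np

def image_to_audio(img, fs=44100, col_dur=0.05, fmin=200.0, fmax=400.0):
    """Play a 2-D image as though it were a spectrogram: rows map to
    frequencies (top row = fmax), columns to successive moments in time."""
    n_rows, n_cols = img.shape
    n = int(fs * col_dur)                  # samples per image column
    t = np.arange(n_cols * n) / fs
    # log spacing keeps vertical distances proportional to musical intervals
    freqs = np.geomspace(fmax, fmin, n_rows)
    audio = np.zeros_like(t)
    for row, f in enumerate(freqs):
        env = np.repeat(img[row].astype(float), n)   # brightness -> amplitude
        if env.any():
            audio += env * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak else audio

# combining three octave bands, as described above:
# mix = sum(image_to_audio(img, fmin=lo, fmax=hi)
#           for lo, hi in [(200, 400), (400, 800), (800, 1600)])
```

Running the same image through three octave-wide bands and summing, as I did for "Sancte Bonifati," thickens the result with upper partials without changing which notes are present.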
The audio is rhythmically juddery, and the musical intervals sound rather off. Still, I’d argue that what we hear by this method sounds close enough to the scholarly musicological reconstruction of “Sancte Bonifati” to support a claim about long-term continuities in how sounds are represented spatially—continuities that arguably bridge medieval musical notation and modern sound spectrograms.
It’s also worth asking ourselves whether examples of this kind count as legitimate medieval “audio.” Even if we conclude that they don’t, the exercise might still sharpen our understanding of both “audio” and “musical notation,” and of their similarities and differences, which I believe we tend to take too much for granted.
“Sancte Bonifati” is an excellent example for present purposes because its logic is so purely “graphical,” in the sense that it relies on the meanings assigned to the left-to-right and top-to-bottom axes to do most of its work. Later Western musical notation draws more heavily on other conventions, ranging from neumatic ligatures to note heads and stems to sharps and flats, but it still follows the same “graphical” conventions to a point, and it can be informative to hear what it sounds like when handled spectrophonically. Here, for example, is a 1628 printing of the “Old Hundredth,” from Wikimedia. (You did recognize where the title of this “old hundredth” blog post came from, didn’t you?)
If we erase the lines of the staff and the note stems, we get this:
When I converted this into audio three times with the frequency scale set to 100-400, 200-800, and 400-1600 Hz (making no attempt to match absolute pitch values), here’s what I got.
[download, though why you’d want to is frankly beyond me]
The melody is recognizable, but the rhythm is surely much further removed from what the inscription’s creator intended than it was in the case of “Sancte Bonifati.” If we tried to do something like this with sheet music for the “Maple Leaf Rag” or “Für Elise,” the discrepancy would be greater still. But simply thinking about Western musical notation in these terms might be worthwhile if it points to a history in which music-writing began with the “graph,” became encumbered over time with other conventions aimed at legibility by humans, and then reverted to the “graph” again with the rise of mechanical musical instruments. That’s too simple and extreme a way of putting it, but hopefully you see what I’m getting at.
Meanwhile, I’ve been taking things in the other direction as well—that is, starting with audio and making images out of it. There’s a good deal of “sound wave art” to be found out there, but nearly all of it is less playable than phonautograms from the 1850s and 1860s, which strikes me as a step in the wrong direction—not because I think playability is necessarily important here per se, but because I see it as a measure of how meaningfully the images embody the audio they’re touted as representing. So I’ve been exploring ways of turning sound recordings into images that are both playable (at least in theory) and visually interesting. My favorite invention along these lines so far is the soundweft. This involves dividing a piece of audio into segments based on a rhythmic cycle and then displaying the segments as successive image scan lines with amplitude mapped to pixel intensity. My default color scheme puts positive velocity amplitudes in the green channel, negative velocity amplitudes in the blue channel, and displacement amplitudes in the red channel, but I’ve also experimented with color-coding by pitch class in an effort to draw out melodies and harmonies. But if that’s too technical for you, just enjoy the pretty pictures. After all, my goal has been to find a way of translating different sound recordings into pictures that are visually appealing, informative, and substantially different from one another, as an alternative to prevailing forms of “sound wave art.”
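For the technically inclined, the core of the soundweft idea fits in a few lines. This Python sketch (names invented; my working code is in MATLAB) treats the input waveform as velocity, integrates it to approximate displacement, and lays the rhythmic cycles out as successive scan lines of an RGB image:

```python
import numpy as np

def soundweft(audio, fs, cycle_sec):
    """Lay successive rhythmic cycles of a recording out as the scan
    lines of an RGB image: displacement -> red, positive velocity ->
    green, negative velocity -> blue."""
    seg = int(fs * cycle_sec)          # samples per rhythmic cycle
    n_rows = len(audio) // seg
    v = np.asarray(audio[:n_rows * seg], dtype=float)  # input as "velocity"
    d = np.cumsum(v) / fs              # crude running integral = displacement
    d -= d.mean()
    def norm(x):
        m = np.max(np.abs(x))
        return x / m if m else x
    v = norm(v).reshape(n_rows, seg)
    d = norm(d).reshape(n_rows, seg)
    img = np.zeros((n_rows, seg, 3))
    img[..., 0] = np.abs(d)            # red channel: displacement amplitude
    img[..., 1] = np.clip(v, 0, None)  # green channel: positive velocity
    img[..., 2] = np.clip(-v, 0, None) # blue channel: negative velocity
    return img
```

If the cycle length matches the music's actual rhythmic period, repeating events line up vertically from row to row, which is what gives soundwefts their woven look.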
Historical Photographs (and other images)
Along with the history of recorded sound, I’ve also been getting increasingly interested in the history of photography, with which it has a lot in common—both in the ways you might expect (to the extent that they’re both “recording and reproduction” technologies) and in other ways.
I’ve published just a few posts here so far about particular aspects of historical photography, built mainly around photographs in my own collection. I’ve examined the pioneering motion photography of James Ross of Edinburgh, as well as the pioneering aerial photography of James Wallace Black. I’ve written about the “ping pong,” a ubiquitous but widely misunderstood genre of early twentieth-century photograph, and about the “Rhode Island window,” a distinctive and ingenious studio prop used by a handful of photographers in Rhode Island during the 1860s.
Look for more posts of this kind here in the future.
I’ve been slowly collecting photographs in particular categories with an eye towards writing about them here—for example, photographs of people posing with telephones, studio photographs with interesting backdrops, and instantaneous stereoviews of subjects in motion. It’s always hard to decide when I’ve accumulated enough examples to move forward, and it’s always tempting to wait and see what else turns up over the course of another month or another year. But I’ll try to make it a New Year’s resolution to post more and wait less.
I’m also eager to do some further comparison of historical photographs and historical sound recordings, in the spirit of my post on “Artifices of Early Photography and Early Phonography” from back in 2014.
For the moment, I’ve picked a couple of photographs from my collection to share in this post that don’t really fit any larger themes I’m planning to explore. Hope you enjoy them.
I also like doing things with historical photographs to bring them to life sensorily in unexpected ways. So, for example, I’ve been collecting more and more pairs (and larger groupings) of tintypes taken simultaneously from side-by-side lenses—
—so that I can cobble them together into specimens of “accidental stereo.” You’ll need red-cyan glasses to appreciate the stereoscopic effect in the composite shown below, but please order some if you don’t already have a pair. They’re cheap, and you’ll want them for some other posts I have in mind for the near future.
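Once the two images are aligned, compositing a red-cyan anaglyph is simple channel shuffling. A Python sketch (with the hard part, the alignment, waved away entirely):

```python
import numpy as np

def anaglyph(left, right):
    """Combine two aligned grayscale images (values 0-1) into a
    red-cyan anaglyph: left eye -> red, right eye -> cyan."""
    img = np.zeros(left.shape + (3,))
    img[..., 0] = left    # red channel carries the left-hand view
    img[..., 1] = right   # green and blue together make cyan,
    img[..., 2] = right   # carrying the right-hand view
    return img
```

The red filter over your left eye passes only the red channel and the cyan filter over your right eye passes only green and blue, so each eye sees its own lens's view.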
I’ve also continued to experiment with animating sets of historical images. Below is an animation of two sets of four photographs taken simultaneously through four lenses, making eight images total on a single plate, with the two sets taken in rapid succession.

But there’s no need to limit ourselves to photographs for this kind of experiment—and as you know if you’ve been following this blog, I haven’t. Here’s an animation of a moon-phase diagram from a 1582 manuscript of Metaliʿü’s-saʿadet ve-yenabiʿü’s-siyadet (Seyyid Muhammed bin Emir Hasan el-Saudî). I’ve done nothing more here than treat the diagram as though it were a phenakistoscope disc. Does this count as sixteenth-century video?
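If you'd like to try the phenakistoscope trick on a circular diagram of your own, the animation frames are just successive rotations of the disc by one sector each. Here's a minimal nearest-neighbour rotation sketch in Python (invented names; no claim that this is how I actually did it):

```python
import numpy as np

def disc_frames(img, n_sectors):
    """Generate phenakistoscope animation frames by rotating a disc
    image one sector at a time (nearest-neighbour resampling)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2      # rotate about the image center
    ys, xs = np.mgrid[0:h, 0:w]
    frames = []
    for k in range(n_sectors):
        a = 2 * np.pi * k / n_sectors
        # sample the source at each output pixel's rotated position
        sx = np.cos(a) * (xs - cx) - np.sin(a) * (ys - cy) + cx
        sy = np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
        sx = np.clip(np.round(sx).astype(int), 0, w - 1)
        sy = np.clip(np.round(sy).astype(int), 0, h - 1)
        frames.append(img[sy, sx])
    return frames
```

Cycling through the frames in sequence does digitally what the spinning disc and viewing slits of a real phenakistoscope do mechanically.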
I’ve also been assembling more long chronological sequences of various types of document into video files. One of the most promising sources for this treatment is digitized historical newspapers, especially their front pages. Unfortunately this has proven to be something of a moving target, since digital repositories often undergo structural changes that cause techniques that used to work perfectly well suddenly to fail. Case in point: I’d written some code to cobble together video from the thumbnail images presented by the Library of Congress’s “Chronicling America” project, but when I went to generate a new example for this blog post, I found that the thumbnail URLs had changed into a frustratingly unpredictable form, so that I’ll have to redesign the script if I want to scrape them in the future. In the meantime, the old script still works with the full-scale jp2 files, but they take a lot longer to download. Here’s a video of the Alexandria Gazette and its predecessor titles for the period from 1811 through 1839.
I haven’t yet found a satisfactory way of aligning images for projects like this one, so that they don’t jump around from frame to frame. Does anyone have any suggestions?
The word griffonage originated in early modern France, and it literally means “careless or illegible handwriting.” Much of what I do on this blog deals with griffonage figuratively, taking source materials that are challenging to grasp for some reason and trying to breathe new life into them. But I’m also interested in griffonage in the literal sense of hard-to-read handwriting, whether it’s hard to read because of sheer sloppiness or merely because of unfamiliar writing styles, spellings, abbreviations, conventions, and so forth. Sometimes, of course, writing can be hard to read for both reasons at once. To illustrate, here’s a short excerpt from a typical early modern French legal document dated November 22, 1566:
I can decipher this myself mainly because the text is highly formulaic and I happen to have a number of other documents in the same notary’s hand for comparison. Some of the abbreviations could be expanded using multiple alternative spellings, since Middle French orthography wasn’t standardized, but a reasonable transcription would run something like this:
…regnant tres-chrestien prince henry par la grace de dieu roy
de france par-devant nous notaire royal soubsigne & les tesmoings
[…in the reign of the most Christian prince Henry, by the grace of God king of France,
before us, the royal notary undersigned and the witnesses hereinafter named…]
I’ve already published a couple of posts delving into the quirks of handwritten seventeenth-century English and Scottish documents, but you can look for more of this sort of thing in the future as I work through other unique manuscript materials in my collection, mostly in French, dating back as far as the thirteenth century and including some genres rarely found outside of archives, such as compoix and parish registers.
Then there’s the Voynich Manuscript, perhaps the world’s most mysterious book, written in an undeciphered and otherwise unknown script and dating apparently from the early fifteenth century. I’ve been playing around with a hypothesis that the most puzzling characteristics of the script would be consistent with a particular kind of cipher, and that it’s possible to mimic Voynichese patterns by enciphering a meaningful text using a cipher of the said kind. There is, I’ll concede, no evidence that the kind of cipher I had in mind was known in the fifteenth century. In fact, I have yet to see any description of such a cipher. It may be that I just haven’t come up with the right search term yet, and that it’s a well-known category of cipher among people who really know this stuff. Still, the principle seems too obvious to be missing from lists of “standard” ciphers, so I’ve begun exploring its possibilities as a worthwhile strategy in its own right.
The basic idea is to set up a grid containing letters, place a token on one square designated as its starting-point, and then encipher a text by writing down the moves needed to move the token to each of the squares containing its letters in turn. One simple version would be to write out the alphabet in a single row, put a token next to the letter A, and then write down the number of steps the token needs to move forwards or backwards to reach each letter in the plaintext, one after the other. If we permit “looping around” the beginning and end of the alphabet, then each letter could be reached from any other given letter in two ways: a number of steps backwards or a number of steps forwards. Now let’s say we want to disguise this system to look like an ordinary cipher that maps letters to numbers. First, let’s add 1 to the step count, so that “1” means the token stays in place, “2” means it moves one place, “3” means it moves two places, and so on. Now let’s say that a number representing a backwards movement is always preceded by a minus sign (-); that the first letter in any message needs to be encoded by a forward movement; and that the plaintext letter Q is substituted for a space between words (Q itself can be enciphered in some other way, for example as K). Got it?
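If a worked implementation helps, here's the scheme in Python. My description above is the only specification; the function names and the choose-the-shorter-move rule are my own illustrative choices here—a real encipherer could pick directions freely, which is part of the system's charm. (You'll notice the specimen that follows prints no minus signs; sorting out each number's direction is part of the puzzle.)

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encipher(text):
    pos, first, out = 0, True, []      # token starts beside the letter A
    for ch in text.lower():
        if ch == " ":
            ch = "q"                   # a space is written as the letter Q
        elif ch == "q":
            ch = "k"                   # Q itself borrows K's cell
        if ch not in ALPHABET:
            continue
        target = ALPHABET.index(ch)
        fwd = (target - pos) % 26      # steps forward, looping around
        back = (pos - target) % 26     # steps backward, looping around
        if first or fwd <= back:       # first move must be forward;
            out.append(str(fwd + 1))   # otherwise take the shorter hop
        else:
            out.append("-" + str(back + 1))
        pos, first = target, False
    return " ".join(out)

def decipher(numbers):
    pos, chars = 0, []
    for tok in numbers.split():
        step = abs(int(tok)) - 1       # "1" = stay put, "2" = one place, etc.
        pos = (pos - step) % 26 if tok.startswith("-") else (pos + step) % 26
        chars.append(" " if ALPHABET[pos] == "q" else ALPHABET[pos])
    return "".join(chars)
```

Because every number records a move relative to wherever the token happens to be, the same plaintext letter gets enciphered differently every time it occurs, which is exactly the sort of behavior that makes simple frequency analysis useless.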
13-13-3-24-24-15-24-13 24-15-22-8-17-12-6 21-22-18-5-13-14-20-3-17 16-10-20-23-4-21-23-9-6 7-4-13-18-12-21-20-9 24-14-14-2-14-23-3-11 6-8-17 11-11-25-21-7 2-16-19-23-6-4-25 21-15-24-13-26-17-3 25-2-10-15-16-8-2-4 19-4-16-9-16-4-10-8-24 14-16-24-1-18 4-9-19-14-16-21 8-9-4-17-14 12-11-2-10 13-21-9-21-10 19-14-10-8 22-18-15 13-12 10-25-19-7 26-14-9-16
I might also have more to share on the subject of shorthand, such as this 1885 advertisement for M. G. Kimmel’s Longhand Shorthand, a book outlining one of the systems I’ve been investigating that aimed to use an ordinary QWERTY keyboard for stenography, written by a professor at the institution that would evolve into my alma mater, Valparaiso University:
I haven’t been able to find a copy of Kimmel’s book anywhere, so the short “specimen” given above provides the only clues I have about how his system worked. The numeral 4 clearly represents the, though maybe only when followed by a space. Some words are reduced to their first letters; thus, H = he, i = immediately, n = understood / in, t = it, s = his, p = principle (in ps = principles). Other words are abbreviated in lengthier ways: uzd = used, wrk = work, t-f = therefore, aplid = applied, bzns = business. But here’s a puzzler: what does prlM mean? It doesn’t match anything in the “key” and seems to fall between the words his and business. Judging from other systems, the capitalized M probably represents a common word or affix beginning with m. And whatever prlM means, it would need to make sense in the surrounding context: He immediately understood the principles used in the work, therefore applied it in his SOMETHING-OR-OTHER business. My own guess as to its meaning will be found at the end of this post. What’s yours?
I first started teaching myself how to write code for audio processing because I wanted to solve problems associated with the playback of phonautograms, audio restoration, and other such areas in which accuracy, fidelity, and transparency are crucial. But sometimes an experiment along those lines that hasn’t panned out as I’d hoped has nevertheless led to something interesting. A while back, I started experimenting with harmonically sensitive noise reduction—that is, noise reduction that operates not just on isolated frequencies, but on whole harmonic sequences, since I thought this might eliminate some artifacts of traditional noise reduction. It turns out that this approach introduces new artifacts of its own, including “ghost tones” shaped out of noise corresponding to the upper harmonics of stronger signals at lower frequencies. But this gave me the idea in turn of using the same principle as a kind of melodic amplifier, taking unpitched sound and drawing out its strongest harmonic components. I call the technique melodization, and you can read more about it here.
Below is a new example of melodization based on a recording I made of a truck idling and then backing up near Ballantine Hall on the Bloomington campus of Indiana University. I’ve generated three different melodizations from it using different parameters, although they all apply a 30,000-sample analysis window (at 44.1 kHz), passing the three strongest notes that are at least three notes apart with a passwidth of 0.2, using impulse timing with no threshold. If you don’t listen to any other part of it, check out just the segment from 5:20 to 5:45, which gives me goosebumps every time I play it, and which is absolutely, 100%, nothing but an automatically processed field recording of a truck engine. You’ll hear:
- The original (0:00)
- A melodized version with an F major scale (3:17)
- A second melodized version with a chromatic scale (6:35)
- A third melodized version with a quarter-tone scale (9:54)
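For the curious, here's a drastically simplified sketch of the melodization idea in Python. My real implementation is in MATLAB and handles passwidth, impulse timing, and thresholds properly; everything here—names, window size, scale, the RMS loudness-following—is illustrative only. The gist: analyze successive windows, keep the few strongest scale notes in each, and resynthesize them as sine tones.

```python
import numpy as np

def melodize(audio, fs, win=30000, n_notes=3, min_sep=3):
    """Draw out the strongest pitched components of unpitched sound,
    window by window, as sine tones snapped to a chromatic scale."""
    midi = np.arange(36, 96)                       # C2 .. B6
    scale = 440.0 * 2.0 ** ((midi - 69) / 12.0)    # note frequencies in Hz
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    bins = np.searchsorted(freqs, scale).clip(0, win // 2)
    t = np.arange(win) / fs
    out = np.zeros(len(audio))
    for start in range(0, len(audio) - win + 1, win):
        seg = audio[start:start + win]
        spec = np.abs(np.fft.rfft(seg * np.hanning(win)))
        strength = spec[bins]                      # magnitude near each note
        chosen = []
        for i in np.argsort(strength)[::-1]:       # loudest notes first,
            if all(abs(i - j) >= min_sep for j in chosen):
                chosen.append(i)                   # at least min_sep notes apart
            if len(chosen) == n_notes:
                break
        level = np.sqrt(np.mean(seg ** 2))         # follow the window's loudness
        for i in chosen:
            out[start:start + win] += (strength[i] * level
                                       * np.sin(2 * np.pi * scale[i] * t))
    peak = np.max(np.abs(out))
    return out / peak if peak else out
```

Swapping in an F major or quarter-tone scale for the chromatic one is just a matter of changing which frequencies go into `scale`, which is essentially how the three versions above differ.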
Another uncanny audio trick I’ve been cultivating is octave inversion, which involves flipping the frequency spectrum of a recording upside down, not as a whole, but octave by octave. I believe the said technique was original with me, and this discussion at Stack Exchange seems to support that conclusion. The results are—um—interesting, shall we say, and you’ll probably really like them or really hate them, depending on how you feel about unfamiliar intervallic relationships. Either way, I’ve prepared a bunch of new examples for you, all based on source material drawn from the Great 78 Project at the Internet Archive. It’s practically a whole CD’s worth of material. Indeed, I’ve toyed around with the idea of making this the stuff of a hoax CD purporting to feature the lost recordings of some forgotten early-to-mid-twentieth-century ensemble specializing in nutty microtonal music.
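Here's the gist of octave inversion sketched in Python (illustrative names and parameters; my real code differs): every FFT bin above a chosen base frequency is mirrored, on a log scale, within its own octave, so low notes in each octave become high and vice versa while the overall register stays put.

```python
import numpy as np

def octave_invert(audio, fs, base=55.0):
    """Flip the frequency spectrum upside down octave by octave: within
    each octave above `base`, bins are mirrored on a log-frequency scale."""
    n = len(audio)
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    out = np.zeros_like(spec)
    out[0] = spec[0]                       # leave DC alone
    for i in range(1, len(freqs)):
        f = freqs[i]
        if f < base or f >= fs / 2:
            out[i] += spec[i]              # below base or at Nyquist: pass through
            continue
        lo = base * 2.0 ** np.floor(np.log2(f / base))   # bottom of f's octave
        target = 2.0 * lo * lo / f         # log-scale mirror within [lo, 2*lo)
        j = int(round(target * n / fs))
        if j < len(out):
            out[j] += spec[i]
    return np.fft.irfft(out, n)
```

Mirroring on a log scale is what preserves the sizes of musical intervals while reversing their directions, which is why the results sound like recognizable music turned intervallically inside out.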
Yet another of my forays out onto a limb is “Archivorithm Number One,” which I’ve described as an indeterminate work of phonography. It’s a computer-actualized composition that draws random segments from the Great 78 Project (or another source, if desired) and assembles them according to randomized but rule-governed patterns, with some melodization thrown in as well, because why not? Below are three specimens of it which I generated several months ago. The first was assembled from select “remix” segments created in the way I’ve described before. For the other two, I used a modified algorithm that transposes sections according to a few extra rules in an effort to mimic human-composed chord progressions, and I’ve also added some reverb.
Back in May 2015, I began regularly photographing various locations by repeatedly setting a handheld camera down in the same spot—such as the corner of a bench or the top of a post—day after day, month after month, with the idea of averaging the results as time-lapse stills or videos. At this point, I have over four years’ worth of images of some scenes, such as the view from my home office window. But I’ve run into a few obstacles when it comes to doing things with them. First, I haven’t yet sorted all the images I have on hand by scene, which is tedious and time-consuming. Second, one of my portable drives stopped working in late July 2019, placing all the photographs I’d taken since March in limbo; I suspect I could get them retrieved by a data recovery service, but that’s an expensive proposition. Finally, the methods I’d worked out back in 2016 for processing quantities of images in the neighborhood of six to seven hundred, at moderate resolution, began to choke when I tried applying them to quantities upwards of a thousand, and at higher resolution. However, I’ve recently come up with some alternative techniques, combining a Photoshop action with some MATLAB scripts, that seem to be working pretty well. Look for a post about them soon. But in the meantime, here’s an average of 1,630 photographs I took from the railing of my back porch between September 2015 and April 2018, usually twice a day—once in the morning, once in the evening—unless I was away from home or distracted.
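One trick that helps with the larger image sets is embarrassingly simple: a running (incremental) mean, which touches each photograph once and never holds more than one frame in memory at a time. A Python sketch—my actual pipeline is a Photoshop action plus MATLAB scripts, so this is illustrative only:

```python
import numpy as np

def running_average(frames):
    """Mean of an iterable of equal-sized image arrays, computed
    incrementally so the whole stack never has to fit in memory."""
    mean = None
    for n, frame in enumerate(frames, 1):
        f = np.asarray(frame, dtype=np.float64)
        # standard incremental-mean update: mean += (new - mean) / count
        mean = f if mean is None else mean + (f - mean) / n
    return mean
```

Because `frames` can be a generator that loads one file at a time, the count of images can climb well past a thousand at full resolution without the memory use climbing with it.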
And here’s a video of successive fourteen-image averages, in which each image represents about a week and each second of video represents about two days of elapsed time.
Sometimes I’ve also taken a whole sequence of images from two side-by-side spots so that the results can be displayed stereoscopically. Here’s an anaglyph I created from two sets of views of one of the Sample Gates at Indiana University in Bloomington taken from two sides of a concrete bench. You’ll need to pull out those red-cyan glasses again. Did I mention that you should order a pair if you don’t already have one?
It’s also possible to average images that don’t line up quite as well with each other as the photographs I’ve just been describing do. One way to do this involves the use of morphing software, such as FotoMorph, which lets you define corresponding points in two images and then generates a “morph sequence” by warping the images based on these control points and cross-fading. But it can be hard to make all corresponding points morph seamlessly into each other, and in the animation below I noticed belatedly that I’d messed up the earlobe, and some other things besides.
So here’s a new and improved version.
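For the curious, the morph step can be sketched in code: interpolate the control points between the two images, warp each image toward those intermediate positions, and cross-fade. Real morphing software such as FotoMorph triangulates the control points; this toy Python version substitutes a much cruder inverse-distance-weighted displacement field, with grayscale images as 2-D lists, so it illustrates the idea rather than any tool's actual algorithm.

```python
def warp_toward(img, pts_from, pts_to):
    """Warp img so that features at pts_from land at pts_to
    (inverse mapping with nearest-pixel sampling)."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Inverse-distance-weighted average of the control-point
            # displacements tells us where to sample the source image.
            wsum, dr, dc = 0.0, 0.0, 0.0
            for (fr, fc), (tr, tc) in zip(pts_from, pts_to):
                w = 1.0 / ((r - tr) ** 2 + (c - tc) ** 2 + 1e-6)
                wsum += w
                dr += w * (fr - tr)
                dc += w * (fc - tc)
            sr = min(max(int(round(r + dr / wsum)), 0), rows - 1)
            sc = min(max(int(round(c + dc / wsum)), 0), cols - 1)
            out[r][c] = img[sr][sc]
    return out

def morph_frame(img_a, pts_a, img_b, pts_b, t):
    """One frame of a morph sequence at blend fraction t in [0, 1]."""
    mid = [((1 - t) * ar + t * br, (1 - t) * ac + t * bc)
           for (ar, ac), (br, bc) in zip(pts_a, pts_b)]
    warped_a = warp_toward(img_a, pts_a, mid)
    warped_b = warp_toward(img_b, pts_b, mid)
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(warped_a, warped_b)]
```

The frame at `t = 0.5` is the "average" of the two images in the sense used below: both are warped halfway toward a shared intermediate geometry, then blended equally.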
My purpose in creating that animation was to grab the middle frame as the average of the two coin portraits, and then to average that average with the average of two other coin portraits, and then to average that average with the average of four other coin portraits, and so on—averaging by “cumulative morphing.” See here for details. But I’ve since written a program called Chronomorph that lets you mark up any number of images and then warps them for averaging all at once, which is a lot more efficient.
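The arithmetic behind cumulative morphing is worth spelling out: because each step pairs averages built from equally many source images, the final result weights every portrait equally, exactly as a flat mean would. A small numeric demonstration in Python (plain numbers stand in for the warped images):

```python
def cumulative_average(groups):
    """Fold the averages of successive groups together pairwise,
    as in averaging by cumulative morphing."""
    acc = sum(groups[0]) / len(groups[0])
    for g in groups[1:]:
        acc = (acc + sum(g) / len(g)) / 2
    return acc

portraits = [1.0, 3.0, 5.0, 7.0, 2.0, 4.0, 6.0, 8.0]
# Groups of two, two, then four, as in the coin-portrait example:
result = cumulative_average([portraits[0:2], portraits[2:4], portraits[4:8]])
flat_mean = sum(portraits) / len(portraits)
# result == flat_mean == 4.5
```

This equivalence is why Chronomorph's warp-everything-then-average-once approach gives the same weighting as the stepwise scheme, just far more efficiently.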
The easiest subject for image averaging is the non-profile human face, mainly because tools specific to it have been so thoroughly developed. I’ve posted here a lot about face-averaging, and in fact I wouldn’t be surprised if I’ve published more posts on that subject than any other, to the point that if you’re a regular reader you might be utterly sick of it. That would be a shame, though, since I’ve got even more posts planned about it.
Back in July 2014, I presented the following set of averaged faces of then-current United States Senators grouped by party and gender (Democrats on the left and Republicans on the right, excluding the two Independents).
A little over three years later, in October 2017, the BBC had Giuseppe Sollazzo re-create pretty much the same thing (except that he threw in the House of Representatives and chose different groupings: all congresspeople, all Democrats, all Republicans, all men, all women). Don’t the folks at the BBC follow this blog? Pfui! But since I promised to feature new examples in this post, rather than rehashing old ones, here’s a set of freshly generated senatorial face averages based on the same image dataset I used back in 2014, but processed with newer tools and techniques.
The images are a marked improvement, I think. Unfortunately, there’s not much I can do from this platform about the composition of the Senate itself.
One frustration I’ve had with face-averaging projects is that whenever I hit upon a new and improved technique, all the other averages I’ve published up until that point suddenly become obsolete—sometimes embarrassingly so. In November 2019, I released a complete set of averages of senior yearbook portraits from Valparaiso High School for the years 1913-2012. A month later, I began using a new set of tools that are so much better than the old ones that I now wish I’d waited. Here’s a comparison of my November 2019 average for the senior women of the Class of 1957 (left) and a corresponding average made using the newer tools (right).
The difference usually isn’t quite that stark, but you get the picture. I plan to describe the new tools in detail here pretty soon, but in the meantime I’ve been reprocessing the whole Valparaiso High School project, which mainly entails leaving my laptop running overnight.
That said, I also find myself left with a bunch of unpublished images created with the older tools, most notably a folderful of what I call algorithmic autoportraits, which involve funneling the raw results of a Google image search into a face averaging algorithm. And so I’m going to take this opportunity to unload them all like a dealer getting rid of the remaining stock of last year’s model of whatsit. One part of me feels they should all be redone, but another part of me suspects that their technical imperfections possess a little charm of their own. So here you go.
At the end of my fiftieth post, I provided a Table of Contents for all my posts up to that point. To conclude, then, here’s a follow-up Table of Contents for everything I’ve posted here since then.
50. My Fiftieth Griffonage-Dot-Com Post (July 28, 2016)
51. Averaging Faces in Profile, and Other Things (August 3, 2016)
52. Averaging Time-Lapse Imagery (August 20, 2016)
53. Long-Term Time Lapses in 3D (September 21, 2016)
54. Improvements in Face Averaging (September 24, 2016)
55. Time-Based Image Averaging (October 31, 2016)
56. New Software for Playing Pictures of Sound Waves (November 20, 2016)
57. Edison’s Phonographic Voice and the Aural Culture of Imitation (November 24, 2016)
58. A Graphic Look at Effects of the Electoral College, 1896-2016 (December 19, 2016)
59. See the Phonautograph Turn: Motion Capture With Sound (1860) (January 29, 2017)
60. The Wow Factor in Audio Restoration (February 16, 2017)
61. Early Motion Pictures of Eclipses (1639-1880) (March 30, 2017)
62. The Phonograph as Toastmaster (October 5, 1888) (April 5, 2017)
63. Daguerreotyping the Voice: Léon Scott’s Phonautographic Aspirations (April 23, 2017)
64. In Search of the World’s Oldest Digital Graphics (May 25, 2017)
65. Turning Audio Upside Down with Octave Inversion (June 8, 2017)
66. More Tricks for Playing With Audio (July 23, 2017)
67. George Hachenberg and His Electro-Music (1860-1897) (August 27, 2017)
68. A Song for Labor Day (September 4, 2017)
69. Time-Based Averaging of Indiana University Yearbook Portraits (September 23, 2017)
70. “A Greater Achievement than the Telephone”: Alexander Graham Bell’s Synthesizer (1884) (October 24, 2017)
71. Beyond #midiflip: Variations on MIDI Pitch Inversion (November 19, 2017)
72. Thanksgiving with the Tindles and their Phonograph (November 22, 2017)
73. Speed-Correcting Phonautograms Without Pilot Tones (December 10, 2017)
74. “Ping Pong” Photos: An Introduction (December 23, 2017)
75. Animating Mathew Brady: Civil War Era Photographs in Motion (January 3, 2018)
76. The Music of Snow Crystals (January 11, 2018)
77. Induction and Retroduction (May 21, 2018)
78. Archivorithm #1: Experiment in Indeterminacy (June 16, 2018)
79. Reading Secretary Hand (and Sound Recordings, Too) (July 1, 2018)
80. The Secret Military Origins of the Sound Spectrograph (July 26, 2018)
81. Speech Averaging: “A Visit From St. Nicholas” (July 29, 2018)
82. Displaying Historical Newspapers as Motion Pictures (August 5, 2018)
83. Archivorithm Number One, Second Edition (August 20, 2018)
84. Time-Lapse Video from National Park Service Webcams (August 21, 2018)
85. Face Averaging and Google Image Search Results (September 30, 2018)
86. 101 Algorithmic Autoportraits (October 28, 2018)
87. A Document From 1656 Linked To Scotland’s Most Haunted Pub (November 4, 2018)
88. The World’s Oldest Aerial Photographs (November 18, 2018)
89. Shorthand on the QWERTY Keyboard, 1875-1917 (November 20, 2018)
90. Of Dreams and Patent Medicine (December 12, 2018)
91. Automatic Caricatures and Antifaces (January 5, 2019)
92. Time-Lapsed National Park Webcams: Monitoring the Shutdown (January 15, 2019)
93. Sound Wave Art: Current Practices, Future Possibilities (February 28, 2019)
94. Griffoynich: A Real Cipher That Mimics Voynichese (April 10, 2019)
95. A Gallery of Averaged Newspaper Front Pages (April 14, 2019)
96. Chronomorph: A Tool for Image Averaging, Time-Based and Otherwise (June 12, 2019)
97. Musical Record Payments by the New York Phonograph Company (1892-93) (September 29, 2019)
98. Faces of Valparaiso High School: Averaging Yearbook Portraits (1905-2013) (November 24, 2019)
99. Miss America: Face Averages of Candidates by State and Year (1997-2020) (December 19, 2019)
PS. My guess for the meaning of prlM in the advertisement for Longhand Shorthand is parliamentary. Thus: “He immediately understood the principles used in the work, therefore applied it in his parliamentary business.” Got a better idea? Let me know!