I’ve been using the word eduction to refer to a concept that I originally put forward in connection with sound media, but that I now think applies to a much broader range of subjects. In this post, I’d like to explain where the concept came from and spell out my latest ideas about its scope and significance. I’m afraid it’s a lengthy and meandering post, written in an effort to bring order to a messy backlog of thoughts. The gist of it, though, is that all media objects need to be actualized to make their content physically accessible to our senses in ways we can process. For example, phonograph records are ordinarily “played,” and unless we play them, we can’t experience the sensory stimuli associated with playing them. Other media objects need to be actualized too—I can’t think of any exceptions—and the specific ways in which they’re actualized have implications for processing and interpretation. There’s no single “right” way to actualize a media object, just as there’s no single “right” way to interpret one; but in both cases, some approaches are more informative, more conventional, more innovative, or more historically and contextually informed than others. This step of actualization is what I mean by eduction. As a media archaeologist, I’m interested both in exploring past practices of eduction and in developing new ones to help make old media objects more meaningful and more enchanting.
I.
Based on its Latin roots, the verb educe means literally “to lead out,” and the Oxford English Dictionary defines it as “to bring out, elicit, develop, from a condition of latent, rudimentary, or merely potential existence.” It has also taken on more specific meanings in particular fields. In logic, eduction refers to inferences following the model: “Mars is a solar planet; the earth is a solar planet; the earth is inhabited; therefore, Mars is inhabited” (example cited by W. E. Johnson). In geology, it is instead “a process in which the Earth crust spreads sideways, exposing deep-seated rocks” (in the wise words of Wikipedia). And there are also “eduction pumps,” better known as injectors.
I started using the word eduction myself to get around a semantic problem I ran into around ten years ago. As I pointed out in my dissertation, the process by which modern sound media output sound has usually been called reproduction: a telephone receiver or phonograph reproduces a person’s voice, and the technical name for the component that transforms a phonogram into sound is the reproducer. But Alan Williams and others who have followed his lead hold that phonography never “reproduces” originary sounds but instead subjectively represents them, an objection that makes a more neutral term desirable—even though these same critics do continue to use the word “reproduce,” apparently for want of a viable alternative. A more practical problem is the ambiguity inherent in statements such as “Mr. Edison is reproducing the phonogram,” which could mean either (1) that he is playing it, reproducing the sounds it embodies; (2) that he is duplicating it, making extra copies of it; or even (3) that he is producing a new phonogram from scratch in imitation of an earlier one. Finally, in situations where several phonograms have been edited together into an “ideal event,” it would be misleading to speak of the results being reproduced when they may never have been produced as such before, quite apart from the question of whether phonographs ever reproduce sound at all.[1]
The argument about phonography being unable to “reproduce” originary sounds centers on the inherent loss of three-dimensional complexity. Rick Altman expresses the point nicely:
However much we might rotate our heads or change positions, we remain unable to make use of the directional information that was present when the sound was produced, but which is no longer available in the recording (unless it is in stereo, and even then the location of microphones and speakers plays just as important a role as the location of the original sound source). For listening to the sound pouring out of a loudspeaker is like hearing a lawn mower through an open window; wherever the lawn mower may actually be, it always appears to be located on the side of the house where the open window is.[2]
I found that argument pretty compelling, but my own motive for rethinking the usual terminology really lay more in wanting to write precisely and accurately about
- the actualization of heavily edited sound media, and of utterances such as “I’m not here right now; please leave a message after the tone”;
- instances where a “record” of one thing has been made to represent something entirely different (see here for some examples); and
- cases of sound synthesis via loudspeakers.
Treating these phenomena by default as “reproduction” seemed to me to miss the point and to impose a skewed and inaccurate perspective on what was going on. With all these issues in mind, I chose to use the words educe and eduction to refer to the process by which telephones and phonographs output sound; and although my main goal was to avoid pitfalls I associated with the term “reproduction,” I also sensed that there was a broader concept at stake. In fact, I listed some other examples of eduction, as I then understood it—the playing of a musical box, the projection of a motion picture film, and the running of a computer program—and confessed that I wasn’t “quite sure what the boundaries of this concept might turn out to be.”[3] As for the more specific kind of eduction which telephones and phonographs carried out, I called it tympanic eduction, by which I meant controlling the motions of a thin membrane or tympanum to impart a sound wave with desired characteristics to the surrounding air—in other words, the sort of thing loudspeakers do.
Sometimes I’ve used the term “eduction” loosely as shorthand for “tympanic eduction,” but even so, I’ve always assumed that there are other kinds of eduction as well.
A little while later, I was a principal in the First Sounds initiative that made headlines in March 2008 with its playback of a record of the song “Au Clair de la Lune” as sung into a phonautograph on April 9, 1860. The phonautograph was an instrument designed to record the rapid movements of a thin membrane under the influence of sound waves passing through the adjacent air. The wavy lines it scratched into lampblack on paper were originally intended for visual apprehension, so that people could see what a particular sound looked like, but my First Sounds colleagues and I succeeded in educing those same wavy lines as sound through tympanic eduction. The sung tune was easily recognizable by ear.
To keep up the momentum that fall, I launched a series of what I called “experimental eduction projects,” repurposing existing pieces of software to educe a variety of historical inscriptions tympanically. Some of these, including phonautograms, had been recorded from living reality, but others had instead been created by hand in formats that just happened to lend themselves to tympanic eduction: early nineteenth-century drawings of hypothetical audio waveforms, for instance, and medieval musical notation that can be interpreted as a graph of time versus frequency. You can read more about the techniques I used here and here.
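To give a concrete sense of the simplest case, here is a minimal sketch in Python of educing a scanned waveform drawing tympanically by reading the vertical position of the inked trace in each pixel column off as an audio sample. To be clear, this isn’t the software I actually repurposed, and the file names are hypothetical; inscriptions laid out as graphs of time versus frequency, like the medieval notation, call for a spectrographic approach instead.

```python
# A minimal sketch (not the software actually repurposed) of educing a
# scanned waveform drawing tympanically: the vertical position of the
# inked trace in each pixel column is read off as one audio sample.
import numpy as np
from PIL import Image
from scipy.io import wavfile

def educe_waveform_drawing(image_path, out_path, sample_rate=8000):
    # Load the scan as grayscale: 0 = black ink, 255 = white paper.
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    ink = 255.0 - img  # invert so ink carries the weight
    # For each column (one instant of time), take the ink-weighted mean
    # row as the vertical position of the trace.
    rows = np.arange(img.shape[0], dtype=float)
    trace = (ink * rows[:, None]).sum(axis=0) / np.maximum(ink.sum(axis=0), 1e-9)
    # Vertical displacement becomes amplitude: center and normalize.
    samples = trace - trace.mean()
    samples /= max(np.abs(samples).max(), 1e-9)
    wavfile.write(out_path, sample_rate, (samples * 32767).astype(np.int16))

# Hypothetical file names, for illustration only.
educe_waveform_drawing("waveform_drawing_scan.png", "educed.wav")
```

A real phonautogram demands far more than this, of course (cleaning and straightening the trace, compensating for irregularities in recording speed), but the underlying principle is the same.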
This work gave me a new reason to want to differentiate the eduction of phonograms—inscriptions “of” sound, however made—from the “reproduction” of past sounds. And I didn’t feel that I was unduly stretching the boundaries of phonography to include the disparate early material I was targeting. After all, many twenty-first-century electronic dance music tracks are no more recorded from life than my medieval musical notation, but we still tend to treat them as “recorded sound,” just as we tend to treat animated cartoons as “motion pictures.”
From this new vantage point, I defined eduction as
synonymous with output transduction. Educing a phonogram entails generating a sound wave based on microchronic patterns of amplitude fluctuation specified in it, much as educing a film would mean to project or display it—that is, to cause its latent program of moving images to unfold over time and become perceptible.
My point was that the term wasn’t supposed to imply anything about the origin of the displayed patterns; it only meant that they were being transduced somehow to make them meaningfully perceptible to the ears or eyes. In the same article, I went on to contrast eduction in turn with retroduction:
[A] phonograph retroduces (“brings back”) a sound if it educes a phonogram made by recording that sound and the educed sound has an audible similarity, however tenuous, to the originary sound. The phonograph can retroduce a person’s voice as a camera or a mirror can depict a person’s face, and some instances of retroduction might diverge quite sharply in effect and intent from what we generally understand as “reproduction,” analogous to trick photography or reflections in a fun-house mirror. In such cases, some parameters of an originary sound might be retroduced (such as timbre) while others are not (such as pitch). Educing a fully synthetic phonogram doesn’t retroduce anything at all.[4]
I’d coined this term to differentiate the playback of sounds from actual “reproduction,” bearing in mind the arguments of Alan Williams and Rick Altman mentioned above. To follow up on the parallel with cinema, though, retroduction would also encompass the display on a motion picture screen of movements captured by a motion picture camera. In both cases, the output transduction depends on a prior input transduction—or, to express things more pithily, its eduction depends on a prior induction. Audio and video technologies can induce subject matter from living reality, and by educing the resulting records or signals they can also retroduce that subject matter, but they can also educe content that was created in other ways, such as video animation or audio synthesis.
That was where my thoughts about eduction stood as of the publication in 2012 of my book/CD, Pictures of Sound: One Thousand Years of Educed Audio, an anthology of sonic inscriptions that had originally been created in various ways for visual apprehension, but that I had converted into sound through tympanic eduction, “playing” the images using the same process we routinely use to play LPs, WAV files, and mp3s. The back cover explains: “To educe audio from a picture is to ‘play’ it as you would a sound recording.” Inside, I emphasized my transparent treatment of the source material: “we aren’t ‘playing’ it as a live musician might play a composition from a piece of sheet music or even as an automatic musical instrument might play a given piece from a pinned barrel, a perforated sheet, or a MIDI file—we’re simply actualizing the raw aural data that lies latent within the image itself.”[5] I’ll grant that a player piano also educes its perforated roll, but I’d argue that the content being educed in that case isn’t audio but abstract musical notes: the specific sound of the notes, as rendered by the piano, isn’t derived from the media object itself as it is with my tympanic eductions. In his review of Pictures of Sound, David Suisman did a good job of articulating the rhetorical point I was now trying to hang on the term “eduction”:
This coinage is…useful because it emphasizes the distance between putting sound into an object and drawing sound out of an object. As Feaster’s work demonstrates, these are ontologically distinct phenomena, which can exist entirely apart from each other. Sound can go into an object and not come out, as with a sound spectrogram (barring Feaster’s intervention), and sound can come out of an object with nothing originally having gone in, as with a “recording” of computer music. Thus, here and in his work more broadly, Feaster problematizes a commonplace (and seemingly intuitive) understanding of what a sound recording is, namely, a thing that conveys a sonic event from the past into the present, which it does more or less well, depending on its degree of “fidelity.” Rather, he opens up space to appreciate that what we call sound recordings are particular kinds of constructions, produced by any number of means and methods, and in any number of media.[6]
I thought my term “eduction” was an improvement over the alternatives, but I also saw a couple of potential problems with it, which I’d like to point out before continuing. First, I’d found myself using the verb “educe” in two different ways:
- I educe the record (of sound).
- I educe sound (from the record).
A like difference can be seen between these groups of sentences:
- I perform the sheet music (of the song); I project the filmstrip (of the story); I read the page (containing the poem).
- I perform the song (from the sheet music); I project the story (from the filmstrip); I read the poem (on/from the page).
Those other usages don’t seem to cause confusion, so I suppose “educe” can probably work both ways too. However, there seems to be a consistent ambiguity in verbs relating to the actualization of media as to what the direct object should be: the thing, or the stuff it embodies or enables. It’s worth being aware of this.
Second, I’d been referring to my processing work as “eduction projects,” and to the sound files as “eductions,” but eduction was properly an outcome of the projects, and something done with the sound files. The projects themselves really entailed making material educible: taking information that couldn’t be educed into audio in its original form (such as images on paper), and converting it into another form that could. So there’s a distinction to be made between actually educing something and putting it into educible form, and I can’t help but think in retrospect that One Thousand Years of Educible Audio might have been a more technically accurate subtitle. It’s only educed audio while you’re playing it.
II.
It was in the spring of 2013 that I discovered the work of Roy Harris, the founder of integrationism. At the time, I was working on a couple invited chapters about “phonography” that are both slated to come out later this year.[7] One meaning of “phonography” is sound-based writing, so I was trying to educate myself about relevant theories of writing, most of which—frankly—didn’t do much for me. By contrast, I found Harris’s book Signs of Writing[8] to be interesting, provocative, and useful, starting with the basics of integrationism itself, which I’d never heard of before:
The view of human communication adopted here is integrational as opposed to telementational. That is to say, communication is envisaged not as a process of transferring thoughts or messages from one individual mind to another, but as consisting in the contextualized integration of human activities by means of signs. (p. 4)
Given my interest in practices such as playing dance records with calls to coordinate social dances,[9] this view appealed to me quite a lot—it certainly made more sense than thinking of the records as transferring thoughts between minds. I was also impressed by Harris’s nuanced arguments about the formation and processing of various kinds of sign, advanced in an effort to clarify what’s distinctive about written forms in particular. Among other things, he explicitly distinguishes “glottic writing,” or “forms of writing related specifically to spoken language” (p. 13), from other forms of writing, such as mathematical writing. Hardly anyone else does this. Overall, Harris’s theoretical work on writing strikes me as an oasis of sophistication and lucidity in a desert of naïveté and obfuscation. There’s no scholarship on the subject I would recommend more enthusiastically.
But when I encountered a passage in which Harris asserts that “auditory forms of writing” are inherently impossible, I found it a little disconcerting. After all, I’ve tended to think at least metaphorically of “auditory forms of writing” as one of my own research specialties, and Harris’s other observations about writing seemed to fit and even illuminate them. So I backtracked to review the links in his chain of reasoning to see whether I ought to reconsider my own ideas about the writtenness of recorded sound. In connection with writing, Harris states:
The difference between forming and processing partially corresponds to that implied by the contrast between the traditional terms writing and reading, but is of broader scope. Forming is to be taken to include any activity or sequence of activities by means of which a written form is produced, and processing to include any activity or sequence of activities by means of which the written form is then examined for purposes of interpretation. (pp. 64-5)
Thus, arranging children’s alphabet blocks to spell out a word would count as forming, while processing might entail perusing a table of numerical data. Writing needn’t be accessed visually, Harris stresses, pointing to the example of Braille and arguing “that the underlying formal substratum of writing is not visual but spatial” (p. 45). However, he puts forward another criterion for distinguishing written forms from gestural forms, which are also formed and processed, and equally spatial:
It is the kinetic criterion…that distinguishes written communication from gestural communication, as it likewise distinguishes any static art form (e.g. painting) from any kinetic art form (e.g. ballet). The written form as such has no kinetic dimension, even though its formation may require precisely trained movements of the pen, brush, stylus, etc.; whereas the gestural form is intrinsically kinetic. If A and B communicate by gestures, then each must watch what the other does, that is, the actual movements of formation. (p. 42)
This has implications in turn for the repeatability of processing:
[W]hen the form of the sign has a kinetic dimension, it cannot be reprocessed (i.e., any further processing by a human being requires replication of the original form, with all the problems that entails). But any non-kinetic form can, in principle, be processed and reprocessed as often as may be, and by as many people as have access to it, within the temporal limits determined by its own duration. (p. 43)
A written form can be reprocessed again and again without any additional acts of forming, the argument goes, but a gesture can only be reprocessed if somebody reenacts—that is, re-forms—the gesture. I’m not convinced that gestures can reliably be distinguished from non-kinetic forms on these terms. In the case of holding out a thumb to hitchhike, or holding up two fingers to indicate “two,” or making the “a-okay” sign (by placing thumb and forefinger in a circle and extending the other fingers straight out), the formation of the sign involves motion just as the use of a pen does, but the sign itself has no kinetic dimension: as long as the fingers are held statically in position to constitute the sign—that is, “within the temporal limits determined by its own duration”—the sign can be “processed and reprocessed as often as may be.” Keeping two fingers held up in the air might admittedly count as doing something, so we might say that the distinction at issue is really whether the person responsible for the formation of the sign is still actively engaged in maintaining it in existence when it’s processed—though that distinction too might founder on the hypothetical case of a corpse found still “pointing” at something. However, I’m less interested here in neatly differentiating the categories of writing and gesture than I am in Harris’s distinction between signs that don’t need to be re-formed in order to be processed and signs that do. The real-time act of waving good-bye to somebody would seem to be an unambiguous example of the second type of sign. The sign doesn’t outlast the act of its formation; it can’t be reprocessed without a repetition of that act.
According to Harris, spoken language likewise falls unambiguously into the second category: “Speech belongs to this world. Why is it a different world? Because we are not physiologically equipped to reprocess a spoken message auditorily unless the acoustic signal is replicated” (p. 43). The same is true, he claims, of all aural signs:
It should be noted…that according to the view proposed here writing is a form of communication with at least one reasonably clear biomechanical limitation. It is this limitation which, for instance, precludes the possibility of developing auditory forms of writing—and not an arbitrary decision by the theorist to erect the lay use of the terms spoken and written into a categorical distinction. (p. 44)
And, further on:
[I]s the gramophone record writing? …. The integrationist answer to the question is clear: such indentations would indeed be writing if only the human eye or some other organ were biomechanically capable of reading them. But if that were actually the case, then our whole concept of the relationship between forms of communication would be different. That is to say, if we could inspect a gramophone record and read the configurations of the wax surface just as we read the marks on this page, then there would be a good semiological argument for saying that Thomas Edison invented a new form of writing.
As it is, however, the surface of the record or the metal strip is not a writing surface, even though its semiological function is in some respects comparable to that of the page or the wax tablet. (pp. 115-116)
To summarize, Harris concludes that phonograms aren’t writings because we don’t have any sense organ we can use to process them repeatedly without actually re-forming the auditory signal, just as it would be necessary to repeat a gesture. This is the claim I was reluctant to accept. I’ll concede my bias here, and I know that people are adept at finding arguments in favor of positions they like while ignoring arguments in favor of contrary positions. But I think I’ve found a flaw in Harris’s argument. Specifically, I take issue with the assumption that we are biomechanically capable of processing those things he accepts as written forms without a signal being “replicated.” If we aren’t, then the key difference he posits between them and auditorily-accessed forms disappears.
My doubt centers on a distinction I would like to draw between the physical forms that endure from processing to processing—say, a piece of paper on which a pen has drawn marks in ink or the trail of exhaust from a sky-writing plane—and the “visible signals” we’re physiologically equipped to process. I’ll grant that while a motion (of pen or airplane) might be required to create a written character, that same motion doesn’t need to be physically reenacted every time someone reads it; the physical form of ink or exhaust doesn’t need to be re-formed. However, our eyes aren’t physiologically equipped to process ink marks or exhaust trails directly. If we can process them visually, it’s because of the continual intervention of light: photons rebounding from the paper or the exhaust trail (or maybe passing unobstructed around the latter). The “static” visible signal that impinges on the retina isn’t biomechanically perceptible except insofar as it’s continually replicated as a pattern of light waves. The word transduction is sometimes used to refer to the eye’s conversion of photons into electrical signals.[10] Since this word can refer to any carrying-across of data from one modality into another, however, we should also be able to use it for the conversion of material patterns (e.g. ink on paper, exhaust in the sky) into patterns of light. And this latter scenario is plainly a case of output transduction—or, in other words, of eduction, in case you were wondering when and how I was going to come back around to that.
What I mean to argue, then, is that the tympanic eduction of phonograms and the photoreflective eduction of things such as books are equivalent to each other. Unless these media objects are somehow being educed, they’re respectively inaudible and invisible; and if they’re imperceptible, they’re incapable of communication. So we might broadly distinguish the inscription that endures over time, but that we can’t access directly as such with our senses; the external stimulus that we can access with our senses, but that needs to be constantly refreshed as part of the dynamic sea of stimuli in which we’re immersed; and the percept that results from the stimulus through exteroception. Within this framework, eduction can refer to any process that bridges the gap between inscription and stimulus.
To educe a phonogram (= inscription) as sound (= stimulus), I need to do certain things to it, and doing those things tends to require specialized equipment such as a turntable or an iPod. People sometimes refer to media that require specialized equipment for eduction as “machine-readable” rather than “human-readable,” as though they were created with an avid readership of machines in mind. But to educe an ordinary book in the expected way, I also need to do certain things to it, or at least to have certain things done to it. If the book is initially closed, it needs to be opened; maybe it also needs to be turned the right way up. Then its pages need to be exposed to the light—in a particular sequence and at an amenable pace—which might require additional actions in turn. If I happen to be in my bedroom in the middle of a moonless night, I might first need to light a candle or (more likely these days) turn on an electric lamp. If I’m outside in broad daylight, perhaps reclining in a deck chair, I can probably rely on the sun as my source of light, but that might still entail holding the book in a particular position, or even relocating myself to a well-lit and unshadowed spot. If my eyesight is compromised, I might need to use a magnifying glass. And so on.
In any case, there’s nothing static about the process: even if the inscription (= ink on pages in book) doesn’t change, the stimulus (= pattern of photons) must constantly be re-formed. It’s said that “there is no such thing as a still sound,” but there’s arguably no such thing as a still sight either: people can see things only due to the motion of photons, and specifically to an appreciable photon flux, conventionally measured in photons per second per unit area. In the absence of that constant photonic barrage, they aren’t physiologically capable of seeing a book or a painting any more than they’re physiologically capable of reading each other’s minds. In order for such things to be perceived sensorily, they need somehow to be actualized, and that actualization necessarily entails dynamic movement, even if we don’t habitually think about it in those terms. Moreover, from the standpoint of eduction, the electric light isn’t a medium without content, as McLuhan would have it; rather, the content it educes as light is minimally the pattern of an electrical current—with alternating current, a cyclical flicker or pulse that’s often too rapid to notice visually, although a loudspeaker would educe the same signal as an audible hum. If I use an electric light to educe a book in turn, its content ends up superimposed on the book’s content. With a fluorescent bulb and 60 Hz AC, the patterns on the page would be educed as visible stimuli a hundred and twenty times a second, once per half-cycle of the current. What could be more dynamic than that?
Educing an ordinary book can take some doing, but in other cases a far more complex set of intermediary steps is needed. Thus, I can’t biomechanically process a word processor document in the form in which it exists on a computer hard drive; I need to have it displayed on a screen or printed out—processes I don’t even fully understand—in order to generate the stimulus necessary for me to see it in a meaningful way. But is the sign formed at the moment I do this, any more than it is when I expose a book to the light of a bedside lamp? We certainly wouldn’t say I write a document simply by virtue of pulling it up on a computer screen, or that a computer printer writes it by printing it—it’s already been written, and it would be disingenuous to claim that an electronic book manuscript isn’t a “writing” except for any part I happen to have onscreen at any given moment. Harris celebrates digital word processing as a potentially revolutionary development in the history of writing, so he apparently accepts that “writings” can exist, at least temporarily, in biomechanically inaccessible forms.
I agree that a writing can be formed once and then “processed and reprocessed as often as may be, and by as many people as have access to it, within the temporal limits determined by its own duration,” without needing to be re-formed. But in order for this to happen, the writing always needs to be educed, and the stimulus needs to undergo a constant process of formation and re-formation. If the bulb in my bedside lamp burns out, or if a fuse blows, I can’t read my book any more. When the light waves stop, the reading stops—there’s no more optical signal to process. Even when the light is good, I’m still moving my eyes forward in jumps called saccades to facilitate parallel letter recognition while I read. Every time I want to go back and re-read something, I need to repeat that process, refocusing, reforming the signs on my retina, regardless of whether the stimuli for a whole page of text are simultaneously “there” and theoretically available. In the case of Braille, on the other hand, the stimuli are not all accessible at once. The raised dots can be “processed and reprocessed” without being re-formed, but the relevant somatosensory stimuli are the movement, pressure, and vibration produced by the act of running a finger over them. In this case, reprocessing—re-reading—means re-forming the stimulus itself. And we can imagine comparable situations arising with visual reading: consider an inscription scrawled on the wall of a dark and cramped cave where I can only shine my flashlight on a letter or two at once.
Similarly, a phonograph record can be formed once and then “processed and reprocessed as often as may be, and by as many people as have access to it, within the temporal limits determined by its own duration,” without needing to be re-formed. That is, the record doesn’t need to be remade in order for us to hear it again. But it still needs to be educed every time somebody hears and processes it, and the stimulus needs to undergo constant formation or re-formation in order for auditory perception to take place. Once the sound waves stop, the record can no longer be heard.
When Harris rules out “auditory forms of writing” on the grounds that “we are not physiologically equipped to reprocess a spoken message auditorily unless the acoustic signal is replicated,” I believe he’s applying a double standard according to which “writings” are assessed as inscriptions (which don’t require re-formation for reprocessing), but sound recordings are assessed as stimuli (which always require re-formation for reprocessing). In doing so, he may have been influenced by the widespread assumption that phonography “reproduces” originary phenomena, such that “reproducing” speech phonographically is an act comparable to someone physically “reproducing” a hand gesture. But it’s really more like projecting a film of someone making a hand gesture.
One of Harris’s favorite test cases for a semiology of writing is the signature, the subject of chapter seven in his book Rethinking Writing.[11] If playing a sound recording were to be regarded as forming a sign rather than processing one, though, it would be difficult to account for specimens such as this (see here for information about the audio transfer):
This record has been made by Alexander Graham Bell in the presence of Dr. Chichester A. Bell on the 15th of April 1885 at the Volta Laboratory, 1221 Connecticut Avenue, Washington D. C. In witness whereof, hear my voice: Alexander Graham Bell.
The whole point of this oral signature (I don’t know what else to call it) is that Bell himself created it—that it is a sign he himself personally formed. The fact that such a thing is conceivable gives us yet more cause to assimilate the playback of sound recordings not to forming written signs (“any activity or sequence of activities by means of which a written form is produced”), but to processing them (“any activity or sequence of activities by means of which the written form is then examined for purposes of interpretation”). We certainly don’t think of holding a written signature up to the light so that it can be seen as “forming” the signature.
Moreover, Harris seems to assume we can neatly differentiate media intended for auditory perception from writings intended for “reading” (whether visual or tactile, as with Braille). The former aren’t “writings,” in his analysis, but the latter are. In reality, the division isn’t so clear-cut. When Harris states that “the surface of the record…is not a writing surface,” I assume this is just his roundabout way of saying that records aren’t writings; of course, the ungrooved surface of a gramophone disc often contains raised or recessed numbers and letters which are unambiguously “writings.” But sound recordings in certain forms designed for playback can themselves also be deciphered visually, up to a point. Consider the famous case of Arthur Lintgen, as related by Wikipedia:
Based on the physical construction and the grooves and contours on the record, he can recognize sections where music is loud or quiet, the length of each movement and so on. Then he uses his extensive knowledge of European classical music to recognize the music…. However, his ability is strictly limited to classical orchestral music by and after Beethoven. He says instrumental and chamber music creates unrecognizable patterns, and that pre-Beethoven orchestral pieces are usually too alike in structure to identify. When given an Alice Cooper recording as a control, he said it looked “disorganized” and “[like] gibberish”.
This is admittedly an unusual skill for someone to have, but at least one aspect of the grooving on an LP is expressly designed for visual apprehension—the brief widening of groove pitch that separates tracks, which enables users to find a desired track and drop the stylus onto the beginning of it—and even I can tell the difference visually between louder and softer passages on an LP, or between segments with a more or less pronounced beat. Those are macro-level observations, but micro-level observations are possible too, and they furnish even more compelling evidence of visual legibility. If I look closely at individual sections of the groove of a gramophone record (ideally through a microscope or a magnifying glass), I can’t recognize words or melodies, but I can still “read” features such as volume, frequency, and timbral complexity. In fact, the groove is technically a two-dimensional graph plotting amplitude against time, and it permits “reading” just as a graph on paper does, according to the same semiological protocols.
On the other hand, writings in forms intended for visual apprehension have sometimes been accessed auditorily. This past November, I was on a panel with Mara Mills at the wonderful Sonic Boom Conference held at Northwestern University, and in her talk “Acoustic Alphabets and Designs for the Optophone, 1913-1972,” she discussed some historical examples of handheld devices which blind users have been able to pass across ordinary printed texts to educe the marks optically as sound. She played a few examples to show what they sound like, and you can find the sound files attached to her recent blog post on the subject: “Optophones and Musical Print.” I thought it would be interesting to try to simulate the process paleospectrophonically as well, so here’s the printed word optophone educed with different frequencies assigned to seven narrow vertical bands chosen to pick up the characteristic features of the letters.
These aren’t the same frequencies any real optophone would have used, but it’s my understanding that the original goal was to form chords at least roughly comparable to what I’ve produced. The audio is rendered at three different speeds in succession, but with the optophone the actual speed of “reading” was left up to the user, just as with the saccades of conventional visual reading. Users of the optophone have been able to learn to recognize the tones generated by different letters and to process written texts by ear in this way (or through subsequent modifications of it). But the material form of the “writing” as a physical object remains the same regardless of whether someone accesses it visually or via optophone. The same is true with the use of more recent text-to-speech devices, such as this one, but the early optophone was impressively transparent in its eduction from written mark to audible tone.
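For anyone curious to tinker, here is a rough Python sketch of optophone-style eduction along the same lines. The band count, scanning pace, and frequencies are illustrative guesses rather than historical values, and the file names are hypothetical: each narrow band across the height of the type gets a tone of its own, and as the scan proceeds from left to right, a band’s tone sounds whenever it contains ink.

```python
# A rough sketch of optophone-style eduction (band count, scanning pace,
# and frequencies are illustrative guesses, not historical values):
# each band across the height of the type is assigned a tone, and as we
# scan the image column by column, a band sounds whenever it holds ink.
import numpy as np
from PIL import Image
from scipy.io import wavfile

def optophone_sim(image_path, out_path, n_bands=7, col_dur=0.02, rate=44100):
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    ink = img < 128                               # True wherever there is ink
    bands = np.array_split(ink, n_bands, axis=0)  # horizontal strips, top first
    # Assign higher frequencies to higher strips (an arbitrary chord).
    freqs = 440.0 * 2.0 ** (np.arange(n_bands)[::-1] / 4.0)
    t = np.arange(int(col_dur * rate)) / rate
    chunks = []
    for col in range(ink.shape[1]):               # scan left to right
        chunk = np.zeros_like(t)
        for band, freq in zip(bands, freqs):
            if band[:, col].any():                # ink present in this band?
                chunk = chunk + np.sin(2 * np.pi * freq * t)
        chunks.append(chunk)
    audio = np.concatenate(chunks)
    audio /= max(np.abs(audio).max(), 1e-9)
    wavfile.write(out_path, rate, (audio * 32767).astype(np.int16))

# Hypothetical file names, for illustration only.
optophone_sim("optophone_word.png", "optophone_sim.wav")
```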
Even more common are forms of electronic data that can be actualized equally well for visual or aural apprehension. For example, a WAV file can be played through a speaker as audio, or it can be displayed onscreen as a waveform—that is, as a graph—with no change in the form of the underlying digital inscription itself, the enduring object that can be “processed and reprocessed as often as may be…within the temporal limits determined by its own duration.” Both forms of display can present the data with equal completeness and accuracy, and both can be equally useful, depending on what a person wants to do with the data.
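In code, the equivalence is plain to see. Here is a minimal sketch (the file name is hypothetical): the same array of samples is actualized once as sound and once as a graph, and nothing about the underlying inscription changes in either case.

```python
# One and the same inscription, two eductions (the file name is
# hypothetical): the WAV file's samples never change; only the mode
# of actualization does.
import matplotlib.pyplot as plt
import sounddevice as sd
from scipy.io import wavfile

rate, samples = wavfile.read("recording.wav")  # the enduring inscription

# Aural eduction: actualize the samples as a sound wave.
sd.play(samples, rate)
sd.wait()

# Visual eduction: actualize the very same samples as a graph.
plt.plot(samples)
plt.xlabel("sample index (time)")
plt.ylabel("amplitude")
plt.show()
```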
So my response to Harris’s position on the status of sound recordings as “writings” has two parts to it:
- We can’t process any kind of inscription—ink on paper, Braille, phonograph record—without it being actualized in some way that creates stimuli we can sense, a step I call eduction. Inscriptions can endure from one processing to another, but eduction needs to happen every time processing occurs; stimuli need to be generated anew. This is a characteristic all inscriptions share, and not something that is distinctive about sound recordings. Therefore, it isn’t a criterion we can use to differentiate sound recordings from “writings.” We might still hypothesize that change over time in the form of the stimulus is essential to sound recordings but not to “writings.” This, I think, is what Harris really means by “kinetic forms.”
- But the distinction between visual and aural media objects ultimately appears to lie not in the inscriptions themselves, but in how they’re educed. Thus, if a book is educed photoreflectively, it’s a visually-accessed inscription, but if it’s educed with an optophone, it’s an aurally-accessed inscription (and the signs take on a kinetic dimension). For that reason, it seems misleading to label any inscription as inherently visual or aural, even if it may have been created with a given mode of eduction in mind. If a “writing” is defined partly in terms of the kind of processing it enables, then an object can be a “writing” only within specific contexts of eduction, stimulus, perception, and interpretation. Bearing devices such as computer screens in mind, I would propose that a “writing” might be anything that generates a stimulus to which we apply the distinctive approach to perception and interpretation we call “reading.” If we need to specify more precisely what “writings” are, I suspect it would be more fruitful to do so in terms of stimulus, perception, and interpretation than to try to characterize the inscriptions further themselves.
After all that, I have to admit that I haven’t formed any strong opinion as to whether sound recordings are properly “writings,” or whether “sound-writing” is just a metaphor for them. In fact, I’m less confident than I was at the outset that I know what a “writing” is, which isn’t necessarily a bad thing. But I’ve gone into my response to Harris at some length here because it was in working through these issues that I found my understanding of eduction starting to expand into its current form. It was Harris’s position that first prompted me to object: “Hey, books need to be educed too!”
III.
Communication requires signs, and signs can’t function as signs unless they’re perceived, so they need to manifest themselves as sensory stimuli. There aren’t a lot of options for that: barring ESP or the insertion of electrodes into the nervous system, the list is limited to sound waves, light waves, the chemicals of smell and taste, tactile pressure, temperature, and a few other bodily conditions that reveal more about us than about our surroundings. We perceive things around us in the world only insofar as they manifest themselves in one or more of these ways. I can see things because they emit or reflect photons, hear things because they impart vibrations to the air, smell and taste things because they scatter chemicals around and in me, and feel things (in a tactile sense) because they press against parts of me. Sometimes things manifest themselves via sensory stimuli due to natural processes: the periodic rise and fall of sun and moon, the happenstance saturation of the environment with sound waves and odors. But sometimes people need to go out of their way to generate suitable stimuli from things: shining a flashlight on them, putting them under a magnifying glass or ultraviolet light, tasting white granules to find out whether they’re salt or sugar, holding one’s nose to a bottle to find out whether it contains wine or ammonia. These are all processes of eduction, processes by which things yield up sensory stimuli—or are made to do so—so that we can sense them, discover them, learn about them, experience them.
The “things” in question are of two major kinds. Out in the world, objects exist and phenomena happen; both situations can generate perceptible sensory stimuli. I can see that a tree is there, and I can see an apple fall out of it. If nothing happens to the tree, I can look at it again and again as many times as I want as long as the sun keeps shining on it, and after the sun sets I can still feel my way around it; tomorrow, when the sun rises, I can look at it again. But for me to see an apple fall again, another apple would have to fall. Of course, some things that happen cause enduring physical traces of themselves to exist. Maybe there’s an apple on the ground now, so I can infer that it fell regardless of whether I saw it fall. Or maybe a passing animal ate the apple, and I can see where it went because it left behind a trail of footprints and feces. I can study such things to my heart’s content for as long as they continue to exist (under perceptible conditions, of course).
The McLuhanism that media are “extensions of man”—meaning extensions of the human body or mind—has been a productive metaphor, but it strikes me as rather like asserting that home mail delivery is an architectural extension of my house, or that air travel is a geographical extension of places that have airports. Eductionism, if it rises to the level of an “ism,” assumes a contrary model: that our senses are finite and that media can only diversify the stimuli available to them.
Some communicative signs are like the falling of the apple, such as the signs of spoken language. I speak and you hear me; for you to hear me speak again (as opposed to a record of me speaking), I have to speak again. Such signs are better understood as happening than as existing; sometimes we call them “ephemeral” or “evanescent.” If the sign’s meaning depends on timing, it’s tied to the moment of its composition, the moment when someone or something sets it up and determines its shape, because it is only during that one moment that it can manifest itself through sensory stimuli.
But other communicative signs are like the tree or the trail of footprints: it makes sense to think of them as existing rather than just happening. “Writings” (whatever they are), drawings, paintings, sculptures, photographs, sound recordings, and motion picture films all fall into this category. Someone or something still sets these signs up and determines their specific shapes (perhaps over an extended period of time), but they outlast that activity for at least a little while. Unlike the strictly “ephemeral” signs, a sign of this type has a continued potential to manifest itself through sensory stimuli after it has been set up, perhaps indefinitely. However, that potential can be realized only through eduction. If the sign’s meaning depends on timing, it might be tied to the moment of composition or the moment of eduction, and agency might similarly be understood as resting with the composer, the educer, or both at once. There are many different processes, strategies, and traditions of eduction available to us, some of which we tend to take for granted as “natural” (such as the photoreflective eduction of paintings), and others of which we don’t (such as the projection of motion picture films); but eduction of some kind is always necessary for these things to function as signs, and they can function as signs only to the extent that they’re educed, via the sensory stimuli so generated. Moreover, the generation of sensory stimuli is always dynamic—photons travel, air molecules vibrate, chemicals react—regardless of whether the patterns are static or ever-changing. Many processes of eduction can be applied both to enduring media objects (e.g., sound recordings, videocassettes) and to ephemeral data transmitted in real time (e.g., telephone and television signals), which underscores their dynamism, but even an ordinary printed page is arguably equivalent in intent to this strip of motion picture film:
Eduction can also occur serially if an “original” inscription or signal isn’t educed directly but has its content induced into some other form from which it is educed in turn. In fact, content might undergo any number of successive transductions—including duplication, reformatting, adjustment of levels, restoration, compression, decompression, and transmission—before it manifests itself in a particular eduction. What’s essential is an unbroken chain of mechanical or automatic causality. There can’t be any point in the process where someone senses the content, mentally processes it, and then subjectively re-forms it from scratch, like a medieval monk copying a prayer-book. Rather, the content must be transduced passively, without anyone needing to attend consciously to each detail, even if there’s still some agency and room for intervention involved: think of a pictorial impression made with a cylinder seal, a photographic print made from a photographic negative, a pirated copy of a DVD. In a chain of this sort, eduction is not only “of” its immediate object, but also “of” prior objects in the chain, albeit often with some degree of generation loss. Projecting a film print educes the print, but also (less directly) the negative from which it was printed. A loudspeaker might educe an electrical signal, but also (less directly) the motions of a stylus in a groove and (even less directly) an LP on a turntable and (less directly yet) the master recording from which the LP was cut.
Since I’ve previously defined eduction as “synonymous with output transduction,” every case of output transduction within a chain of serial eduction is itself technically an eduction of sorts, even when the output takes a biomechanically imperceptible form, as with the electrical output from a sound card (which might be the input for a pair of earbuds). We can call these intermediary eductions, but I don’t have much to say about them, other than to observe that they need to be input into some other external system to become actionable or meaningful. What I’m interested in here are terminal eductions, outputs that don’t need to be input into anything but a physiological sensorium.
There’s every reason to suppose that people have been experimenting creatively with strategies for educing the inscriptions they’ve made for as long as they’ve been making them. Consider Paleolithic cave drawings. “In the absence of natural light,” we read, “these works could only have been created with the aid of torches and stone lamps filled with animal fat.”[12] True enough; but they also require an artificial source of light for viewing, as is vividly apparent from this reminiscence by Ralph Morse, who photographed the paintings at Lascaux Cave in 1947, just a few years after their rediscovery:
“We were the first people to light up the paintings so that we could see those beautiful colors on the wall,” Morse said. “Some people had been down there before us—but with flashlights, at best. We were the first to haul in professional gear and bring those spectacular paintings to life. It was a challenging project—getting the generator, running wires down into the cave, lowering all the camera equipment down on ropes. But once the lights were turned on. . . . Wow!“[13]
“Today, when you light the whole cave, it is very stupid because you kill the staging,” says Jean-Michel Geneste, Lascaux’s curator…. Worse yet, most people only see cave paintings in cropped photographs that are evenly lit with lights that are strong and white. According to Geneste, this removes the images from the context of the story they were meant to tell and makes the colors in the paintings colder, or bluer, than Paleolithic people would have seen them.
Reconstructions of the original grease lamps produce a circle of light about 10 feet in diameter, which is not much larger than many images in the cave. Geneste believes that early artists used this small area of light as a story-telling device. “It is very important: the presence of the darkness, the spot of yellow light, and inside it one, two, three animals, no more,” Geneste says. “That’s a tool in a narrative structure,” he explains.[14]
High on the Nave’s right wall, an early artist had used charcoal to draw a row of five deer heads. The images are almost identical, but each is positioned at a slightly different angle. Viewed one at a time with a small circle of light moving right to left, the images seem to illustrate a single deer raising and lowering its head as in a short flipbook animation.[14]
If the Paleolithic use of cave paintings sometimes involved educing them as animations through the selective illumination of fortuitously-shaped walls, that would explain why they turn up in such hard-to-access spots—away from both natural light and everyday living quarters—as neatly as any other hypothesis I’ve seen. Does this mean that shining modern electric lights on all the Lascaux paintings at once doesn’t count as “educing” them? Of course not. Eduction isn’t an all-or-nothing affair. The same things can be educed legitimately in different ways and to different degrees.
There are also several kinds of action that cluster around eduction without being eduction proper, and that seem worth distinguishing from it by name:
- Pre-eduction: actions taken preparatory to eduction in order to set it up, such as mounting a film on a projector, organizing a playlist, or aligning a pair of separate tintypes side by side for stereoscopic viewing.
- Circum-eduction: actions carried out in parallel with eduction to complement it, such as live musical accompaniments or narrations for “silent” films.
- Post-eduction: actions by which educed signs undergo further actualization after mental processing, as with reading aloud from books or performing from sheet music.
- Eductio-navigation: actions conventionally taken to shift positional “focus” within a media object being educed, such as turning pages in a book, running one’s eyes along a line of text, or skipping through tracks on a CD, with reliably predictable outcomes.
- Eductio-manipulation: actions that alter the parameters of eduction in real time as the basis for new expressive forms, as in turntablism, or for interactively unfolding experiences, as in video games.
As for the form of the stimulus itself, an inscription might be educed
- as light proceeding from a surface to the retina (paintings, books, “screen practices”);
- as aerial vibrations proceeding from a diaphragm to the eardrum (telephones, phonographs, loudspeakers);
- or in other ways, such as those associated with 4D film or the Visotactor (a variant on the optophone with a vibro-tactile output rather than an auditory one).
Each type of educed stimulus might consist of a single stream intended to engage sensory organs indiscriminately, or else of a pair of streams (e.g., stereoscopy, stereophony) or a greater number of streams (e.g., quadraphonic sound) intended to engage sensory organs differentially. One or more parameters of an inscription might also be keyed to directionality in time, either with the simplicity of a timeline or the complexity of a flowchart with conditional elements. If so, it might be educed
- as an unchanging stimulus, with one parameter representing time statically, as in a graph with a time axis; or
- as a changing stimulus, with the directionality-in-time parameter determining the sequence and/or pace at which patterns are educed.
Factors that might lead someone to choose to educe a given inscription in one or another way include:
- Convention: there’s a cultural expectation that a particular kind of inscription will be educed in a particular way, or that a particular kind of content will be sensed in a particular way.
- Intention: the creator of the inscription meant for it to be educed in a certain way, or wanted a certain aspect of it to be perceptible and processable.
- Retroduction: the stimuli generated through a given mode of eduction can resemble the stimuli originally induced: records of forms changing over time might be actualized as forms changing over time; records of sound might be actualized as sound; photographic traces left by brighter light might be actualized as brighter light.
- Practicality: a particular mode of eduction is feasible and available: the user has access to the necessary devices and the necessary senses, and the process won’t unacceptably harm the inscription.
- Modality bias: patterns in the inscription can best be perceived as manifested and sensed through a particular kind of stimulus: for example, human beings can reportedly follow rhythms better by ear (e.g., a drumbeat) than by eye (e.g., a flashing light).
I don’t want to say there are wrong ways to educe an inscription, but for something to be a good example of eduction, I think it needs to facilitate making sense of the thing being educed. The link between the parameters of an inscription and the parameters of a stimulus shouldn’t be wholly random. I’m not sure exactly where to draw the line, though, and I suspect it’s properly a question of aesthetics or ethics (what eduction should be) rather than a question of ontology (what eduction is). When I’ve touched on this dilemma before, it’s been in an effort to distinguish my own eduction of “pictures of sound” from sonification. Granted, formal definitions of sonification often equate it simply with making data audible. In practice, however, the word “sonification” has typically been used to refer only to cases in which sound is conspicuously being made to represent subjects other than sound, and formal definitions sometimes also restrict it explicitly to “non-speech audio” (e.g., Wikipedia). On those grounds, applying it to the tympanic eduction of media such as phonautograms seems to me to imply—wrongly—that we’re doing something fundamentally different with them than we do with, say, LPs or mp3 files. After all, nobody ever talks about “sonifying” a record on a turntable. Jacob Smith alludes to this concern of mine in a thoughtful account of various kinds of auditory display:
Patrick Feaster makes a useful distinction between “the sonification of non-aural data,” such as the spatiotemporal relationships between Olympic athletes or the movement of atomic particles, and the sonification of “aural data stored non-aurally,” as with Scott’s phonautograms or the grooves of a phonograph record.[20]
The latter was the kind of data I’d been “educing,” but my own inclination was actually to contrast this work with sonification, rather than treating it as a subcategory of sonification. I wrote in Pictures of Sound that
we can use reverse Fourier transform software to turn any image whatsoever into sound if we want to, including a photograph of a tree’s branches or the Mona Lisa, but it would be absurd to conclude from this fact that every image in the world is ipso facto also a phonogram. The important distinction here, I believe, lies in whether or not an image was originally created to function as a representation of sound. The photograph of the tree’s branches and the Mona Lisa were presumably created as representations of phenomena in the visual world, not the auditory world. There’s a word for taking non-auditory data of this sort and expressing it aurally: sonification. On the other hand, images like Linyova’s [i.e., graph-based musical notations] were originally created as representations of phenomena in the auditory world, not the visual world. When we educe them, we’re not linking visual parameters to auditory ones arbitrarily at our own whim. Instead, those correlations are already present, inherent in the logic of the inscriptions themselves: “right” represents motion forward in time, “up” means an increase in frequency, “down” means a decrease in frequency, and so forth. The inscriptions themselves embody a phonographically meaningful data structure, just as a synthetically generated mp3 file does. We can legitimately educe them as sound because the sound is already implicit as such within them.[21]
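The reverse-Fourier-transform trick mentioned in that passage is easy to demonstrate. In the following minimal sketch (the file names, frame size, and phase handling are all arbitrary choices of mine), an image is treated as a spectrogram, with dark pixels standing for acoustic energy at a given frequency and moment, and each column is resynthesized as a slice of sound:

```python
# A minimal sketch of turning any image whatsoever into sound: treat the
# image as a spectrogram (dark = energy, bottom row = lowest frequency)
# and resynthesize each column with an inverse FFT. Frame size and the
# random-phase choice are arbitrary.
import numpy as np
from PIL import Image
from scipy.io import wavfile

def image_to_sound(image_path, out_path, rate=22050, n_bins=513, n_frames=400):
    img = Image.open(image_path).convert("L").resize((n_frames, n_bins))
    spec = 255.0 - np.asarray(img, dtype=float)  # dark pixels = energy
    spec = np.flipud(spec)                       # row 0 = lowest frequency
    frames = []
    for col in spec.T:                           # each column = one spectrum
        phase = np.random.uniform(0, 2 * np.pi, len(col))
        frames.append(np.fft.irfft(col * np.exp(1j * phase)))
    audio = np.concatenate(frames)
    audio /= max(np.abs(audio).max(), 1e-9)
    wavfile.write(out_path, rate, (audio * 32767).astype(np.int16))

# Hypothetical file names: this will happily "sonify" the Mona Lisa.
image_to_sound("mona_lisa.png", "mona_lisa.wav")
```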
On further reflection, I’d like to suggest that there are really two different phenomena at issue here:
- Tympanic eduction, the technique of actualizing data as sound through the controlled rapid movement of a membrane.
- Sonification, the strategy of using a sonic parameter to represent something other than itself.
If we draw this distinction, then playing back a phonogram per se isn’t sonification because it lacks the right kind of strategic leap: the connection between the parameters of the inscription and the parameters of the sound is already implicit, bound up in the traditions of phonography or the indexicality of recording or both at once. I might adjust the playback speed, but I don’t choose which parameter represents time. I might adjust the volume, but I don’t choose which parameter represents amplitude. I might turn the data into sound physically, but I’m not responsible for connecting it with sound in the first place. Playing a waltz on the piano from a piece of sheet music isn’t sonification either, for much the same reason, even though it’s obviously a case of making non-sonic data audible.
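To make that fixity concrete, here’s a hedged sketch of eduction where the inscription’s conventions already settle the mapping: a scanned waveform trace in which “right” already means time and vertical displacement already means amplitude. The file name, sample rate, and darkest-pixel heuristic are my assumptions, not a description of any particular historical workflow:

```python
# A hedged sketch of educing a scanned waveform trace: the inscription's
# own conventions fix the parameter mapping, so the code only actualizes
# it. "waveform_scan.png", the sample rate, and the darkest-pixel
# heuristic are illustrative assumptions.
import numpy as np
import wave
from PIL import Image

RATE = 8000   # assumed time scale: one pixel column per 1/8000 s
img = np.asarray(Image.open("waveform_scan.png").convert("L"), dtype=float)

# Follow the trace: in each column, take the darkest pixel as the stylus
# position, measured from the vertical center of the image.
trace_rows = img.argmin(axis=0).astype(float)
center = img.shape[0] / 2.0
amplitude = (center - trace_rows) / center   # up = positive, in [-1, 1]

with wave.open("educed_trace.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes((amplitude * 32767).astype(np.int16).tobytes())
```

The only decisions left to the educer here are calibrations like the sample rate (in effect, playback speed); which parameter means time and which means amplitude is settled by the inscription itself.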
Sonification, on the other hand, entails using aspects of sound to represent information that doesn’t itself pertain to them as such; it’s an auditory equivalent to data visualization with its scatterplots and bar charts and spectrograms, which nobody would confuse with simply displaying a photograph. Every restaurant in the Long John Silver’s chain has a so-called “Captain’s Bell” mounted near its exit, with a sign reading: “If we did well… RING the Bell!” That’s sonification of a rudimentary sort: a distinctive clang has been assigned to represent and display customer satisfaction. In 2010, the New York Times website published “Fractions of a Second: An Olympic Musical,” a presentation that used sound—a synthesized piano note—to display the differences in finishing time between the gold medalists and other competitors in several Olympic racing events. That was sonification too: time represented actual time, but piano notes had been chosen purposefully to represent crossings of the finish line. The work Dario Robleto and I have done converting pulse records into “lub-dub” sounds falls into the same category: the rhythms are authentically retroduced, but the linkage of blood pressure data to specific sonic parameters—however indexically it represents the actual marks on the page—is something we came up with ourselves. The same goes for my eduction of early Morse code records as 1000 Hz tones.

And here’s yet another personal example. I find it easier to remember melodies than numbers, so if I need to memorize something like a PIN or bank account number and I’m worried I’ll forget it or garble it, I’ve sometimes converted its digits into musical notes: 0 (B), 1 (C), 2 (D), 3 (E), 4 (F), 5 (G), 6 (a), 7 (b), 8 (c), 9 (d). For example, my wife and I bought a new house last year, and the street number is 3519, which I originally memorized as a musical sequence: by that scheme, E-G-C-d.
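For what it’s worth, here’s a rough sketch of that digit-to-note scheme in code. The specific octaves (0 as the B below middle C, the rest climbing diatonically from middle C) are my own reading of the lowercase letters; only the digit-to-letter assignments come from the scheme above:

```python
# A rough sketch of the digit-to-note mnemonic: each digit maps to a
# pitch in an ascending diatonic run. The octave choices are an
# assumption; the digit-to-letter assignments come from the text.
import numpy as np
import wave

RATE = 44100
NOTE_HZ = [246.94, 261.63, 293.66, 329.63, 349.23,   # 0=B, 1=C, 2=D, 3=E, 4=F
           392.00, 440.00, 493.88, 523.25, 587.33]   # 5=G, 6=a, 7=b, 8=c, 9=d

def sonify_digits(digits, seconds_per_note=0.4):
    """Render a digit string as a sequence of pure sine tones."""
    t = np.arange(int(RATE * seconds_per_note)) / RATE
    tones = [0.8 * np.sin(2 * np.pi * NOTE_HZ[int(d)] * t) for d in digits]
    return np.concatenate(tones)

audio = sonify_digits("3519")   # the street number from the example: E-G-C-d
with wave.open("3519.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```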
I first made the conversion strictly in my head—there wasn’t any actual “auditory display” involved—and yet I’d argue that this was nevertheless already an act of sonification, an act in which a sonic parameter (notes of a musical scale) was used strategically to represent something else (digits in a street number). Indeed, it’s the subjective step of assigning a sonic parameter to represent something else, and using that parameter in turn as a means of grasping something it wouldn’t ordinarily be used to grasp, that strikes me as the essence of sonification. And that doesn’t really happen with the playback of records of sound. Playing an LP or a phonautogram doesn’t involve choosing sound to represent information about the unrelated parameter of waveform shape; rather, it uses sound to actualize the sound which the waveform shape conventionally represents.
To me, this all suggests that we should be careful to distinguish brute modes of eduction (e.g., photoreflective or tympanic) from the culturally informed strategies of representation in which they’re implicated (e.g., the graphing of sound; data visualization; sonification), and those two things in turn from combinations of both at once (e.g., the practice of phonographic “playback”).
A few closing thoughts:
- Sound media tend to get theorized in one of two ways: either scholars borrow theory from the study of visual media and try to make it fit sound media, or they generate new theory to accommodate or contest ways in which sound media seem different from visual media. I think eduction theory has the potential to break out of that pattern and to move in the opposite direction: originating in the study of sound culture, but proceeding from there to help illuminate visual culture as well.
- If it’s been productive to think in terms of “reading” films and sound recordings, then perhaps we ought to try thinking in terms of “playing” books and paintings as well, just to see what happens.
- A great deal of what’s “new” about new media seems to center on eduction, as opposed to content or processing or anything else. Isolating it as a factor might help expose important historical continuities and turning-points. What have the most revolutionary developments in eduction been in the past twenty years?
- This blog post ended up a lot longer than I thought it would be. If you made it this far, I probably owe you a beer or something.
Endnotes
1. Patrick Feaster, “‘The Following Record’: Making Sense of Phonographic Performance, 1877-1908,” Ph.D. thesis, Indiana University Bloomington, 2007, available online here, at p. 31.
2. Rick Altman, “The Material Heterogeneity of Recorded Sound,” in Sound Theory Sound Practice, edited by Rick Altman (New York and London: Routledge, 1992), 15-31, at p. 29. An earlier article presenting a version of this argument, as referenced in the passage from my dissertation, was Alan Williams, “Is Sound Recording Like a Language?,” Yale French Studies 60 (1980): 51-66.
3. Feaster, “Following Record,” 32.
4. Patrick Feaster, “‘A Compass of Extraordinary Range’: The Forgotten Origins of Phonomanipulation,” ARSC Journal 42:2 (Fall 2011): 163-203, available online here, at pp. 164-165. Originally I had defined retroduction somewhat differently; see page 197, note 6.
5. Patrick Feaster, Pictures of Sound: One Thousand Years of Educed Audio, 980-1980 (Atlanta: Dust-to-Digital, 2012), 49, 52.
6. David Suisman, “A Thousand Years of Audio Recording: Patrick Feaster’s Pictures of Sound,” American History Now, March 20, 2014, online here.
7. “Phonography,” in Keywords in Sound Studies: Towards a Conceptual Lexicon, ed. Matt Sakakeeny and David Novak (Duke University Press, forthcoming 2015), 139-150; “Phonography and the Recording in Popular Music,” in Handbook of Popular Music, ed. Andy Bennett and Steve Waksman (SAGE Publications, forthcoming 2015), 511-529.
8. Roy Harris, Signs of Writing (London and New York: Routledge, 1995); specific page citations given inline.
9. Feaster, “Following Record,” 461-486.
10. See e.g., Wikipedia (here); or Denis Baylor, “How Photons Start Vision,” Proceedings of the National Academy of Sciences 93 (January 1996), 560-565, online here.
11. Roy Harris, Rethinking Writing (London and New York: Continuum, 2001).
12. Laura Anne Tedesco, “Lascaux (ca. 15,000 B.C.),” Heilbrunn Timeline of Art History, Metropolitan Museum of Art, online here.
13. Ben Cosgrove, “Life at Lascaux: First Color Photos From Another World,” LIFE.com, online here.
14. Zach Zorich, “Early Humans Made Animated Art,” Nautilus, March 27, 2014, online here.
15. Marc Azéma and Florent Rivère, “Animation in Palaeolithic Art: A Pre-Echo of Cinema,” Antiquity 86:332 (June 1, 2012), 316-322, at p. 319.
16. Azéma and Rivère, “Animation,” 320.
17. Feaster, “Following Record,” 42.
18. Paul Saenger, Space Between Words: The Origins of Silent Reading (Stanford, California: Stanford University Press, 1997), 9.
19. Carole A. Myscofski, “Against the Grain: Learning and Teaching” (2001), Honorees for Teaching Excellence, Paper 4, Illinois Wesleyan University Digital Commons, p. 2, online here.
20. Jacob Smith, “Explorations in Cultureson,” in Carol Vernallis, Amy Herzog, and John Richardson, eds., The Oxford Handbook of Sound and Image in Digital Media (Oxford: Oxford University Press, 2013), 279-286, at p. 280.
21. Feaster, Pictures of Sound.