Hear the Surface of Mars Played Like a Phonograph Record

If we could somehow drop an enormous phonograph stylus down on the surface of Mars and speed up the planet’s rotation enough, would we hear something like music—if music of an eerie and peculiar kind?  This trick wouldn’t work well with our own planet, since its topographical features aren’t regular enough, at least on a global scale.  Playing Earth like a phonograph record would just produce something resembling surface noise with no content underneath.  The Moon and Mercury are even worse candidates than Earth, riddled with craters as they are.  I’m not sure about Venus.  However, natural forces on Mars would seem to have shaped much of that planet’s surface into repetitive patterns not unlike the pits that encode a sound signal on the surface of an old wax phonograph cylinder.

Representative detail of “Elevation” view from Google Mars.

Of course, we can’t really use a physical needle to trace the surface of Mars and listen to it as directly as we’d listen to an LP on a turntable—merely colonizing Mars would be a piece of cake by comparison.  But we can digitally simulate what would happen if we did.

[Click here if you just want to skip ahead and listen to the good stuff—no hard feelings!]

Optical playback systems for grooved sound media, such as Saphir and IRENE, are sometimes described as creating a “map” of a record surface and playing it afterwards with a “virtual stylus.”  But we can also use a “virtual stylus” to play elevation maps of much larger-scale topography, including the entire surfaces of planets.  It might be tricky to play a planet as though it were a gramophone disc with zig-zag lateral modulation, unless we found a nice river somewhere to guide our stylus.  But the audio signal on a phonograph cylinder is cut vertically, modulated up and down, and any planetary surface topography can be read in those terms.  Most cylinder phonographs use a feedscrew to guide a reproducer at a steady rate from one end of the record to the other, rather than letting the groove itself guide the tonearm, and we can make a “virtual stylus” track a planet’s surface in much the same way.  The idea is to set the planet rotating virtually beneath our stylus, beginning with a “needle drop” at the north pole, and then to move that stylus gradually downwards until we reach the south pole.  Of course, if we rotate the planet at the same rate throughout this process, our linear speed will increase as we approach the equator and will then decrease again upon passing it.  But much the same thing happens as a stylus moves from the outer rotations of an LP to the inner ones, so we should be able to live with it.

The idea for this project came out of a morning conversation over coffee with my brilliant wife, Ronda L. Sewald, who works as a map cataloger for the Indiana University Libraries and accordingly spends much of her time immersed in the world of geographic data.  She came up with the idea of treating the contour lines of elevation maps as wave crests and playing them as audio, maybe following a path spiraling out over time from a central point.  The prospect of needing to interpolate values between contour lines suggested that raw elevation data might be easier to process, and grayscale elevation maps seemed like particularly low-hanging fruit.  That was the morning of February 27, 2021.  By that evening, I was able to treat our son to audio extracted from the surface features of Mars.

My first experiment wasn’t on Mars, though, but on our own planet, Earth.  This involved combining two datasets: a topographic dataset for everything above water (white = +6400 m, black = 0 m) and a bathymetric dataset for everything below water (white = 0 m, black = -8000 m), with a zero crossing at sea level.  One rotation around the planet spans 21,600 pixels, or ~0.49 second at a 44.1 kHz sample rate, so we can conveniently set our playback speed to 122.5 rpm, which works out to a theoretical linear speed at the equator of approximately 3,050,429 miles per minute (roughly 50,840 miles per second, or better than a quarter of the speed of light).  If we convert each row of the image into sound at that rate, we’ll end up with 88 minutes of audio.  That’s a bit much, so I condensed the vertical dimension of the image by a factor of twenty to bring the playtime down to roughly the duration of a four-minute Edison Blue Amberol cylinder.  To match the counterclockwise direction of the planet’s rotation, we also want to play each row from right to left rather than from left to right.  It’s true that the groove path of a phonograph cylinder is a continuous helix, whereas I’m just presenting each image row corresponding to a line of latitude in sequence from pole to pole, but adjacent rows should be similar enough for the jump between them not to be noticeable.
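Roughly speaking, the conversion might look something like the following Python sketch.  The filenames are hypothetical, and the rule for merging the two datasets at sea level is just one plausible choice, not necessarily an exact record of my own script.

    # A minimal sketch of the "virtual stylus" conversion, assuming two 8-bit
    # grayscale equirectangular images: "land.png" (white = +6400 m, black = 0 m)
    # and "ocean.png" (white = 0 m, black = -8000 m).  Filenames are hypothetical.
    import numpy as np
    from PIL import Image
    from scipy.io import wavfile

    land = np.asarray(Image.open("land.png").convert("L"), dtype=np.float64)
    ocean = np.asarray(Image.open("ocean.png").convert("L"), dtype=np.float64)

    # Convert pixel values to metres and merge the two datasets at sea level.
    elev = np.where(land > 0, land / 255.0 * 6400.0, (ocean - 255.0) / 255.0 * 8000.0)

    # Condense the vertical (latitude) dimension by a factor of twenty by
    # averaging groups of adjacent rows, trimming any leftover rows at the end.
    rows = (elev.shape[0] // 20) * 20
    elev = elev[:rows].reshape(-1, 20, elev.shape[1]).mean(axis=1)

    # Play each row from right to left (to match the planet's rotation), string
    # the rows together into one long signal, and normalize to 16-bit range.
    signal = elev[:, ::-1].ravel()
    signal -= signal.mean()
    signal = np.int16(signal / np.abs(signal).max() * 32767)
    wavfile.write("earth.wav", 44100, signal)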

With that, I’m pleased to give you Planet Earth played as though with a giant stylus scraping terrestrial life into oblivion at better than a quarter of lightspeed.  Fortunately, our “virtual stylus” doesn’t disturb even a single grain of sand.  Sometimes non-contact approaches can be advantageous!


[download]


The result sounds a lot like playing a pitted and effloresced phonograph cylinder; if you don’t know what that sounds like, here’s an example.  I’ve played the Moon like a phonograph cylinder too, based on this dataset, to the same specifications and to similar effect.  (You might also check out this YouTube video, in which someone tries something similar but reduces each rotation to a tiny fraction of a second.)


[download]


But it was when I tackled the planet Mars that things really got interesting.  The Martian dataset I used—a digital elevation model based primarily on data from the Mars Orbiter Laser Altimeter, and downloadable here as of this writing—is larger than the others, and I had some trouble with out-of-memory errors at first, but if we downscale the vertical dimension by a factor of eighty, we get something like this.
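For reference, the same row-averaging trick can be done strip by strip, so that the roughly two-gigabyte source array never has to be duplicated at floating-point precision.  Here is a hedged sketch; the filename is a placeholder, and tifffile is just one convenient way to read a 16-bit TIFF.

    # A sketch of the Mars conversion, assuming the 128-pixel-per-degree DEM as
    # a single 16-bit TIFF (hypothetical filename).  Averaging the latitude rows
    # in blocks of eighty, one strip at a time, keeps the working set small.
    import numpy as np
    import tifffile
    from scipy.io import wavfile

    dem = tifffile.imread("mola_dem_128ppd.tif")      # ~23,040 rows x 46,080 columns
    factor = 80
    out_rows = dem.shape[0] // factor

    condensed = np.empty((out_rows, dem.shape[1]), dtype=np.float32)
    for i in range(out_rows):                         # average each 80-row strip
        condensed[i] = dem[i * factor:(i + 1) * factor].mean(axis=0)

    signal = condensed[:, ::-1].ravel()               # right to left, row by row
    signal -= signal.mean()
    signal = np.int16(signal / np.abs(signal).max() * 32767)
    wavfile.write("mars.wav", 44100, signal)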


[download]


What caught my attention right away was those squealy pitched tones at the beginning and end.  My first instinct was to assume these were artifacts of the imaging process.  After all, that sort of thing sometimes happens with optical imaging systems for grooved sound media, for example if the AC flicker of room lighting and the rate of image capture sync up in just the wrong way to contaminate the audio with an audible hum.  But when I took a closer look at the Martian elevation image itself—which is slightly more involved than just opening it in Photoshop, since it’s a 16-bit TIFF—I quickly spotted features with extremely regular spacing that nevertheless seemed to fit seamlessly into the surrounding topography, with gentle gradients around them.  Were these for real?
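For anyone who wants to take that closer look themselves, one quick way to inspect a patch of a 16-bit TIFF with stretched contrast is something like this, again with a hypothetical filename and an arbitrary crop near the top (north) edge of the image.

    # Inspect a crop of the 16-bit DEM with its own min-max range stretched to
    # full contrast for display.  Filename and crop bounds are placeholders.
    import tifffile
    import matplotlib.pyplot as plt

    dem = tifffile.imread("mola_dem_128ppd.tif")
    crop = dem[:2000, :4000].astype(float)            # a patch near the north edge
    crop = (crop - crop.min()) / (crop.max() - crop.min())
    plt.imshow(crop, cmap="gray")
    plt.title("MOLA DEM detail (contrast enhanced)")
    plt.show()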

Details of elevation near the Martian north pole in cylindrical projection, according to Mars Orbiter Laser Altimeter digital elevation model (contrast enhanced)

Now, I’m still not sure exactly what we’re seeing here, but please don’t jump to the conclusion that we’ve stumbled upon the ruins of a lost Martian civilization.  I believe we can safely assume that if this represents something legitimately present on the surface of Mars, it would be due to some sort of natural process like those that produce “patterned ground” as seen in photographs here, here, here, and here, or “rhythmic” sedimentary rocks as described here.  The fact that the Martian landscape contains periodically repetitive features is pretty well established.  So maybe it shouldn’t have come as a surprise to see patterns like these, or to hear musical pitches.  But it did come as a surprise, to me at least—a rather provocative surprise.

I proceeded to tweak my parameters to draw out the tones more effectively.

Here’s the Martian north polar region played at the original source resolution for both longitude (128 pixels per degree times 360 degrees = 46080 pixels) and latitude (23040 pixels for the whole planet, but only excerpted here), with a 44.1 kHz sample rate, or about 344.5 degrees per second.


[download]


Details of elevation near the Martian south pole in cylindrical projection, according to Mars Orbiter Laser Altimeter digital elevation model (contrast enhanced)

And here’s the Martian south polar region, played similarly, but starting at the pole and moving north towards the equator.


[download]


I swear that I’m doing nothing more here than playing Martian elevation data as though it were on the surface of a phonograph cylinder.  Take the published digital elevation model and convert it into audio yourself if you don’t believe me.

Of course, many thoughts began rushing through my head as I listened to the results of these experiments.  As we move away from either pole towards the equator, for example, the pitched sounds increase in frequency until they eventually exceed the audible range; that could be because our linear speed is steadily increasing, such that it takes less and less time to pass between the crests of whatever those undulations represent.   If I were to get my hands on elevation data for the whole planet at the same linear resolution I have near the poles, I supposed, maybe I’d be able to hear pitches in many other places as well.  Indeed, I imagined that the whole of the planet Mars might be playable from pole to pole as a kind of natural phonogram embodying a vast symphony of synthetic tones that await discovery.  Or if we thought of it as a musical composition rather than as extraterrestrial “found sound,” I decided we could call it Areophonia (areo-, Martian; -phonia, sound).
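As a rough sanity check on that explanation: at a constant angular playback speed, crests with a fixed physical spacing should play back at a frequency proportional to the cosine of the latitude.  A little arithmetic, using an entirely hypothetical one-kilometre crest spacing, bears out the general behavior.

    # Rough sanity check: at a constant angular playback speed, an undulation
    # with a fixed crest-to-crest spacing plays back at a frequency proportional
    # to the cosine of the latitude.  The spacing value is purely illustrative.
    import math

    deg_per_sec = 344.5        # playback rate of the full-resolution examples
    radius_km = 3389.5         # mean radius of Mars
    spacing_km = 1.0           # hypothetical crest-to-crest spacing

    for lat in (85, 75, 60, 30, 0):
        v = math.radians(deg_per_sec) * radius_km * math.cos(math.radians(lat))
        print(f"lat {lat:2d} deg: {v:8.1f} km/s -> {v / spacing_km:8.1f} Hz")

With those made-up numbers, a tone of under two kilohertz near 85 degrees latitude climbs by more than a factor of ten on the way to the equator, which is at least consistent with pitches rising out of the audible range.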

Slowing the speed at which we play the same data I’ve been using results in lower audio quality, but it makes certain details easier to hear.  Here’s the first six and a half minutes of audio from each pole played at half the speed of the previous examples, corresponding to a sample rate of 22,050 Hz, or about 172 degrees per second.  (Actually, I upsampled at the whole-image level, keeping the audio sample rate at 44.1 kHz and applying a rough anti-alias filter afterwards.)
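Concretely, that stretching step might look something like the following sketch, assuming a two-dimensional array of the polar rows already pulled out of the DEM; the fourth-order Butterworth low-pass is just one rough way to suppress the artifacts that the pixel-doubling introduces.

    # Half-speed playback by stretching the image 2x along the longitude axis,
    # keeping the 44.1 kHz output rate, and low-passing below the new effective
    # Nyquist frequency (11,025 Hz).  `excerpt` is assumed to be a 2-D float
    # array of DEM rows; the filter choice is a rough stand-in.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.io import wavfile

    def play_half_speed(excerpt, out_path="half_speed.wav", rate=44100):
        stretched = np.repeat(excerpt, 2, axis=1)      # twice the samples per rotation
        signal = stretched[:, ::-1].ravel().astype(np.float64)
        signal -= signal.mean()
        b, a = butter(4, 11025 / (rate / 2))           # crude anti-alias filter
        signal = filtfilt(b, a, signal)
        signal = np.int16(signal / np.abs(signal).max() * 32767)
        wavfile.write(out_path, rate, signal)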


North Polar Region [download]


South Polar Region [download]


And here’s some audio presented at a quarter of the speed (i.e., with the speed halved once again) corresponding to a sample rate of 11,025 Hz, or about 86 degrees per second, starting at 5:00 and running through the 14:00 mark.


North Polar Region [download]


South Polar Region [download]


Does some of that sound a bit like a bee buzzing in a bottle?

I’ve used the word “play” in this blog post to refer to what I like to call eduction whenever I’m not making an effort to avoid specialized jargon—that is, the act of “drawing out” stimuli from a latent or potential condition and thereby making them perceptible to our senses.  The experiments I’ve presented here also involve sonification, which I understand as the strategy of using a sonic parameter to represent something other than itself, and specifically a subcategory of sonification sometimes referred to as audification, or the “direct translation of a data waveform to the audible domain” (as Roger Dean puts it in The Oxford Handbook of Computer Music).  That is, I’m taking data that isn’t intrinsically “about” sound, but that can still be educed as an audio signal with informative results.  Still, some forms of audification are more conventional than others.  For example, transducing electromagnetic waves into sound waves is extremely conventional—so much so that critics sometimes seem to forget there’s even a distinction between radio and sound.  I’d argue that treating sequences of variations in depth as an audio signal is likewise extremely conventional, such that playing the surface of Mars like a phonograph record isn’t all that much more whimsical or arbitrary than playing whistlers like radio broadcasts.

Just a few days before I carried out the experiments shared above, NASA announced that the “first audio recording” of the Red Planet had been secured by a microphone aboard the Mars 2020 Perseverance Rover, including not only the rover’s own noises but the sound of a gust of wind in the Martian atmosphere. This sounds much like the rattly “wind noise” recordists try to avoid when capturing audio on Earth, but under the circumstances, it’s conceptually stunning—a remarkable first. Even so, this wasn’t the first time something purporting to be “sound” from Mars has made the rounds. Among other cases I could mention, back in 2018, Domenico Vicinanza and Genevieve Williams sonified an image of a Martian sunrise to produce a two-minute musical work entitled Mars Soundscapes, and headlines about their achievement included “Experience the sound of a martian sunrise” and “What Does Mars Sound Like?” The piece is pleasant to consume from a popular aesthetic standpoint, but its algorithmic logic of assigning brightness, color, and elevation to sonic parameters is a bit opaque; none of several articles I’ve read on the subject really explains it. As a result, I’m unsure how much meaningful information it conveys to us through our ears about what sunrises on Mars are like.

Considering these two other examples of Mars-related audio, I think Areophonia could fall somewhere in between them.  It’s not a full-fledged audio recording in the sense that the Mars 2020 Perseverance Rover’s wind-noise recording is, but its approach is still straightforward and conventional—no more arbitrary than the process of audifying radio signals.  Meanwhile, it may not be as musically consumable as Mars Soundscapes, but I’d say it still makes for an interesting listening experience, and what it sounds like follows from features that are at least reportedly out there on the surface of Mars, with no special measures taken to make it sound more musical.

That isn’t to say I know quite what to make of it yet myself.  But now that it’s out there, maybe others will help me figure that out.


Postscript (March 1, 2021): Today I took a look at some alternative sources of Martian elevation data, and I’m now more uncertain than I was before as to whether those periodic undulations near the poles are real.  If someone better versed in Martian affairs than I am could offer some insight, I’d be grateful!

The digital elevation model I’d been using has an edition date of March 21, 2003 (let’s call this Source A).  But I now see that a “final version” of the MOLA Mission Experiment Gridded Data Records was released on May 7, 2003, and that it’s supposed to “supersede all earlier releases.”  The images associated with the final version are in an IMG format that I found even more troublesome to view and play with than 16-bit TIFFs, requiring me to futz around with byte reversal and such.  The 128-pixels-per-degree dataset equivalent to the other elevation model (let’s call this Source B) is also broken rather inconveniently into sixteen tiled parts.  Still, I eventually managed to get Source B to cooperate well enough to get a good look at it.
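In case it saves anyone else the futzing: each MEGDR topography tile appears to be a raw array of big-endian 16-bit signed integers, with the row and column counts spelled out in its accompanying .lbl label file.  Something along these lines should do the trick; the filename and the dimensions shown are examples for a 44-by-90-degree tile at 128 pixels per degree, so check the label.

    # Load one MEGDR topography tile as a NumPy array.  '>i2' means big-endian
    # 16-bit signed integers, which is where the byte reversal comes in on
    # little-endian machines.  Filename and dimensions are examples; the real
    # values come from the tile's .lbl label file.
    import numpy as np

    def load_megdr_tile(path, lines=5632, samples=11520):
        tile = np.fromfile(path, dtype=">i2").reshape(lines, samples)
        return tile.astype(np.int16)                   # convert to native byte order

    tile = load_megdr_tile("megt00n270hb.img")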

It turns out that most, though not all, of the extremely conspicuous periodic patterns in Source A, as illustrated above, are in areas beyond the borders of Source B, which cuts off further away from the poles.  The documentation for Source A notes: “The polar gaps from 88 to 90 north and south have been filled by reprojecting polar PDS releases.”  It also states: “Data are very sparse near the two poles (above 87° north and below 87° south latitude) because these areas were sampled by only a few off-nadir altimetry tracks. Gaps between tracks of 1–2 km are common, and some gaps of up to 12 km occur near the equator. DEM [Digital Elevation Map] points located in these gaps in MOLA data were filled by interpolation.”

The final version also provides a polar projection of each circumpolar region at 512 pixels per degree of longitude, corresponding, I assume, to the “polar PDS releases” used by Source A to fill the polar gaps.  If I return the data from Source A to a polar projection, I get a set of radiating striations around each pole, corresponding roughly to the region with those periodic patterns that translate into tones when we play them.
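A simple nearest-neighbor re-projection along the lines of this sketch is one way to get such a polar view from the cylindrical data; the 80-degree cutoff, the output size, and the orientation of longitude zero are all arbitrary choices here.

    # Re-project the cylindrical DEM into a crude polar azimuthal view of the
    # north polar cap by nearest-neighbor lookup.  Assumes the north pole at
    # row 0 and longitude 0 at column 0; cutoff latitude and size are arbitrary.
    import numpy as np

    def to_polar(dem, ppd=128, edge_lat=80.0, size=2560):
        rows, cols = dem.shape
        c = (size - 1) / 2.0
        y, x = np.mgrid[0:size, 0:size]
        r = np.hypot(x - c, y - c) / c * (90.0 - edge_lat)    # degrees from the pole
        lon = np.degrees(np.arctan2(y - c, x - c)) % 360.0
        row = np.clip((r * ppd).astype(int), 0, rows - 1)     # r deg = r*ppd rows down
        col = np.clip((lon * ppd).astype(int), 0, cols - 1)
        out = dem[row, col]
        out[r > 90.0 - edge_lat] = out.min()                  # blank outside the cap
        return out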

But if I compare these results against the actual polar releases, I find that they show the same areas blurred out.  Here’s a comparison of representations of the Martian south pole with approximately the same level of contrast; my re-projection is on the left, and the official release is on the right.

I suspect these areas were probably blurred out because the people who prepared the polar releases considered the data especially unreliable.  Maybe they had been sampled at long intervals, with polar ice growing or shrinking between passes.  And maybe the sampled points were staggered such that once the data was combined, elevations recorded during one pass ended up interleaved with elevations recorded during other passes.  Add some interpolation into the mix to blur the boundaries, and I could imagine this process creating realistic-looking phantom surface features something like the ones we see.  Maybe.

But whether they’re all real or not, vertical striations are definitely part of the leading resources available for exploring Martian topography today.  If you navigate as far north as you can in Google Mars, and zoom in as close as you can, they’re about the only thing you’ll see.

Even some distance away from the poles, vertical striations continue to pervade the landscape, superimposed on other features, although if we switch to “Visible” display, they disappear.

To whatever extent any of these north-south striations are artifacts of the imaging process, the resulting “Martian tones” could be chalked up to the same cause.  But even if this proves to be a decisive factor in what we’re hearing, I’d like to think that Areophonia would still be worthwhile as audio of superbly exotic origin.  And if this isn’t a nice little case of the results of sonification stimulating new questions, what is?


Post-postscript (March 2, 2021): Here are some anaglyphs I made to show what some of the periodic patterns in Source A would look like if viewed three-dimensionally, although the vertical scale is arbitrary and exaggerated.  The idea is to gauge whether these are plausible topographical features or not.  Get out your red-cyan glasses if you’ve got ’em.  First, some examples from the north polar region.

And then some examples from the south polar region.
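For anyone who wants to try this at home, a crude way to generate such anaglyphs is to shift a normalized view of each elevation patch left and right in proportion to its (exaggerated) height and route the two shifted views into the red and cyan channels.  The shift scale in this sketch is as arbitrary as the vertical exaggeration mentioned above.

    # Crude red-cyan anaglyph from an elevation patch: shift a normalized view
    # left and right in proportion to height and put the two shifted views in
    # the red and cyan channels.  The shift scale (parallax) is arbitrary.
    import numpy as np
    from PIL import Image

    def anaglyph(patch, shift_scale=5.0):
        norm = (patch - patch.min()) / (patch.max() - patch.min())
        gray = np.uint8(norm * 255)
        shift = np.int32(norm * shift_scale)           # per-pixel parallax, in pixels
        h, w = gray.shape
        cols = np.arange(w)
        left = np.empty_like(gray)
        right = np.empty_like(gray)
        for i in range(h):
            left[i] = gray[i, np.clip(cols + shift[i], 0, w - 1)]
            right[i] = gray[i, np.clip(cols - shift[i], 0, w - 1)]
        return Image.fromarray(np.dstack([left, right, right]))  # red = left eye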


Post-post-postscript (March 7, 2021): The audio from the north and south polar regions follows strikingly similar patterns, so I combined the 172-degrees-per-second sound files for the two areas in stereo to make their resemblance easier to hear.  Either the same glitch in the imaging process is producing similar artifacts in the far north and the far south, or the topography of the two polar regions has oddly symmetrical characteristics.  So which is it?
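The stereo file is nothing more than the two mono files laid side by side, along these lines (filenames hypothetical).

    # Combine two equal-length mono WAV files into one stereo file, with the
    # north polar region on the left channel and the south on the right.
    # Filenames are hypothetical.
    import numpy as np
    from scipy.io import wavfile

    rate, north = wavfile.read("north_pole_172dps.wav")
    _, south = wavfile.read("south_pole_172dps.wav")
    n = min(len(north), len(south))
    stereo = np.column_stack([north[:n], south[:n]])
    wavfile.write("poles_stereo.wav", rate, stereo)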


Thanks for stopping by!  If you’ve enjoyed this post as much as you’d enjoy a cup of coffee from your coffeehouse of choice, please consider supporting the work of Griffonage-Dot-Com by leaving a proportionate tip.


3 thoughts on “Hear the Surface of Mars Played Like a Phonograph Record”

  1. Great article — I love your site. As an artist who has done some work based on Reis’ work, I found that article fascinating, and your site overall delightful. I thought you might be interested in my friend Jens Brand’s e-player, from about 2010. He built a device which uses the positions of various satellites as needles playing across the surface of the earth. https://vimeo.com/2758871.

    • Many thanks for sharing the link to Jens Brand’s video description of his G-Player! I like his idea of using satellite positions to control the “stylus.” I see that his G-Turns storefront is no longer online, but the Wayback Machine has it archived at various dates, including https://web.archive.org/web/20120110045011/http://www.g-turns.com/ — which reveals that it even offered custom cuttings to vinyl! Unfortunately, the associated podcast series seems to have vanished more permanently, so I’m not sure if any listening options remain to satisfy curious ears. Brand writes (https://web.archive.org/web/20110112070236/http://www.g-turns.com/pages/en/home/6.how_does_it_work_.htm) about “scratching across” topographic features “in the same way a record player plays records,” with oceans rendered as silence, but then describes generating the sound “such that the shape of the topography equals the shape of the audio wave, in the same way that one would look at the latter with the aid of a 3D audio program (spectral analysis),” which implies a spectral approach based on something like additive synthesis, not directly analogous to what a stylus would do. That may have been his only option, since even the fastest satellites move far more slowly than the “stylus” in my examples — around 17,000 mph as opposed to roughly 183,000,000 mph (at the equator), hence upwards of 10,000 times slower. I suspect that playing topographic features as actual oscillations at such a reduced speed wouldn’t sound like much.

  2. I don’t know of any recordings of the G-Player available online, but having spent some time with one, I can tell you it tended to be dead silent (over the ocean), or some variety of colored noise over more mountainous areas. I see what you mean about the speed of the ‘needle’ — all but the roughest terrain would probably fall into the sub-audio realm, and the vast majority of what one heard was silence.
