101 Algorithmic Autoportraits

Here for your consideration is an online gallery of 101 algorithmic autoportraits—pictures generated by computer based on nothing more than simple search expressions.  Each one was made by relaying the unvetted results of a Google image search to a program I wrote that hunts for faces in them, extracts whatever it finds, aligns the positions of eyes, noses, and mouths, and outputs a median average.  I introduced this process last month in a previous post, but it’s only since then that I’ve really had time to apply it in earnest.
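For the technically inclined, here’s a minimal sketch of what such a pipeline can look like. It isn’t my actual program; the libraries (the open-source face_recognition package, OpenCV, and NumPy), the landmark choices, and the canonical eye and mouth positions are all illustrative assumptions. But the steps are the same: detect facial landmarks, warp each face so its eyes and mouth land on fixed positions, then take a per-pixel median.

```python
# A minimal, illustrative sketch of the averaging pipeline (not my production code):
# detect facial landmarks, align eyes and mouth to fixed positions, take a per-pixel median.
# Assumes the third-party face_recognition, opencv-python, and numpy packages.
import cv2
import numpy as np
import face_recognition

SIZE = 1000  # output canvas, matching the 1000x1000 gallery images
# Canonical positions for left eye, right eye, and mouth (illustrative values).
CANONICAL = np.float32([[350, 420], [650, 420], [500, 720]])

def aligned_faces(paths):
    """Yield every detected face, warped so its eyes and mouth hit CANONICAL."""
    for path in paths:
        image = face_recognition.load_image_file(path)  # RGB numpy array
        for marks in face_recognition.face_landmarks(image):
            left_eye = np.mean(marks["left_eye"], axis=0)
            right_eye = np.mean(marks["right_eye"], axis=0)
            mouth = np.mean(marks["top_lip"] + marks["bottom_lip"], axis=0)
            src = np.float32([left_eye, right_eye, mouth])
            # Similarity transform (rotation + scale + translation) onto the canvas.
            matrix, _ = cv2.estimateAffinePartial2D(src, CANONICAL)
            if matrix is not None:
                yield cv2.warpAffine(image, matrix, (SIZE, SIZE))

def median_portrait(paths):
    """Stack all aligned faces and take the per-pixel median."""
    faces = list(aligned_faces(paths))
    return np.median(np.stack(faces), axis=0).astype(np.uint8)

# Example: average every image previously downloaded for one search term, e.g.
#   portrait = median_portrait(glob.glob("downloads/jimmy_carter/*.jpg"))
#   cv2.imwrite("jimmy_carter_median.png", cv2.cvtColor(portrait, cv2.COLOR_RGB2BGR))
```

Using the median rather than the mean is what keeps outliers—stray backgrounds, odd lighting, the occasional misdetected face—from dominating the result.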

Some of the averages shown below represent searches on the names of well-known people of the past and present, including politicians, actors and actresses, musicians, authors, athletes, and even fictional characters who’ve come to be closely associated with particular performers.  In each of these cases, I’d say my technique has achieved a good likeness with a subjective effect that vacillates between flattery and caricature.  These are unambiguously portraits, in the sense of pictures emphasizing the distinctive features of particular individuals.  Of course, there are many other pictures available of each of these people, which is what makes it possible for my technique to work in the first place; but even so, I believe portraits created through algorithmic averaging offer something unique.  Like photographs, they’re automatic in the sense that they operate on material already extant out there in the world at (more or less) the click of a button, even if there’s some human intervention involved in setting up the process—mainly in choosing search terms, analogous to pointing a camera—and some optional postproduction, sparingly applied.  But like paintings or drawings, they also have the power to generalize and abstract.  I’m sure my algorithm is no substitute for what a virtuosic portrait artist does, but the results aren’t wholly dissimilar in spirit.  One curious point: even when these images are based overwhelmingly on photographs, they don’t look like photographs.  They look more like something a human being might have painted or drawn.

In other cases, I’ve searched on a keyword or key phrase that corresponds to some broad category, in which case what we get is less a “likeness” than a kind of visual archetype.  Nevertheless, I’d argue that these results, too, qualify as portraits.  After all, face averages have been called “composite portraits” since Galton’s day, and art history is replete with images bearing such consciously generic titles as “Portrait of a Girl”—if you search on that phrase, you’ll turn up plenty of examples.  I did, as you’ll see.

Click on any picture below to access it in higher resolution (1000×1000 pixels per subject).  Boldface titles represent search terms, with quotation marks included wherever I used them for the actual search (though I’ve converted double to single); non-boldface titles are only explanatory.

Please check out my previous post for further discussion of techniques, parallels, antecedents, et cetera—as well as more pictures.


‘Portrait of a Girl’

Buddha

‘Marble Bust’

Headdress

‘Jimmy Carter’

‘Fan Bingbing’

‘Dalai Lama’

Abaya

‘Portrait of Prince’, ‘Portrait of Princess’

Man With Turban

‘Got Milk’

‘Portrait of an Old Man’

‘Victorian Lady’

Refugee

‘Hugh Laurie’

‘Malala Yousafzai’

‘Blue Haired’, ‘Pink Haired’, ‘Green Haired’, ‘Purple Haired’

Androgynous

‘Queen Victoria’

‘Queen Elizabeth II’

‘Hassan Rouhani’

‘Mohammed bin Salman’

‘Oprah Winfrey’

‘Mark Zuckerberg’

‘Angela Merkel’

The Beatles: ‘John Lennon’, ‘George Harrison’, ‘Paul McCartney’, ‘Ringo Starr’

‘Elizabeth Warren’

‘Michael Moore’

Headband

‘Portrait of King’, ‘Portrait of Queen’

‘John Cleese’

‘Marilyn Monroe’

‘Marilyn Manson’

Madonna Icon

‘Nicolas Cage’

‘Old Lady’

Saraswati

‘Gibson Girl’

‘J. K. Rowling’

‘George R. R. Martin’

Characters from Game of Thrones: ‘Daenerys Targaryen’, ‘Tyrion Lannister’, ‘Arya Stark’

Klingon

Star Trek cast members: ‘Nichelle Nichols’, ‘William Shatner’, ‘George Takei’, ‘DeForest Kelly’, ‘Leonard Nimoy’, ‘James Doohan’

Noh

‘Elizabeth Taylor’

‘Jeff Bezos’

‘Sarah Michelle Gellar’

‘David Boreanaz’

‘Emily Deschanel’

Mohammed

‘Beautiful Face’, ‘Pretty Face’, ‘Cute Face’, ‘Handsome Face’

Violinist

Harpist

‘Portrait of Mr’, ‘Portrait of Mrs’

‘Aly Raisman’

‘Colin Kaepernick’

‘Katy Perry’

‘George Clooney’

‘Nicki Minaj’

‘Pope Francis’

‘Jennifer Morrison’

‘David Bowie’

‘Jackie Chan’

‘Tatiana Maslany’

‘Andre the Giant’

‘Uma Thurman’

‘Elvis Presley’

Elvis Impersonator

Pinup

‘Rodrigo Duterte’

‘Mitch McConnell’

‘Sarah Huckabee Sanders’

‘Kellyanne Conway’

‘Mike Pence’

‘Tomi Lahren’

‘Steve Bannon’

Clown

Supermodel

‘Poet Laureate’

‘Green Eyed’

‘Colonel Sanders’

Flowers in Hair

Just a few days ago, while I was in the midst of pulling these examples together, an algorithmically generated painting called Portrait of Edmond de Belamy sold at auction for $432,500.  It was created using a generative adversarial network, which is an approach entirely different from mine, and it’s far more uncanny than even the strangest-looking of my autoportraits, which probably helped it fetch the sum it did.  But if we’re interested in exploring whether algorithms can convincingly mimic the products of human artistic creativity, I can’t help but see some common ground.  In fact, I’m pretty sure many of my algorithmic autoportraits could pass among human viewers for human-fashioned portraits (in a distinctive style, admittedly) to the point that they could substitute for them—on a magazine or book cover, say, or illustrating an editorial about some political figure or celebrity.  Anyone want to try?


Technical Postscript.  Quantities of source face images detected and used above range from 109 (“clown”) to 785 (“refugee”), with a typical case falling between 250 and 500.  For source quantities above 600, I’ve begun adding an extra unsharp mask step at 150%; only “refugee” is affected here.  In a few instances where enough of the source images were monochrome to yield a washed-out result, I’ve boosted vibrance/saturation to bring out distinctions of color (by 35% for Victorian Lady, by 50% for Elvis Presley, by 75% for Marilyn Monroe, and by 100% for John Lennon, George Harrison, Gibson Girl, and Queen Victoria).  I’ve also brightened or darkened parts of images using burn and dodge tools, always brightening both eyes and their immediate surroundings (as described in my previous post), sometimes brightening exposed teeth (Elizabeth Warren, Sarah Huckabee Sanders, Kellyanne Conway, Tomi Lahren, Nicki Minaj, Tatiana Maslany, Pinup), often adjusting contrast around face edges to conceal “ghosts” of alternative boundaries (Poet Laureate, Colin Kaepernick, Leonard Nimoy, Pinup, Sarah Michelle Gellar, Headdress, Old Lady, Marble Bust, Jennifer Morrison, Queen Elizabeth II, Portrait of a Girl, Portrait of Queen, Green Eyed, Beautiful Face, Pretty Face, Cute Face, Victorian Lady, Gibson Girl, and all four multi-colored hair images), rarely adjusting the background to draw out muted features by darkening (Headdress) or brightening (Noh; Flowers in Hair; Portrait of King, Queen, Prince, Princess, Mrs).  I’ve obviously left some other artifacts alone, such as smeared ears, since I think of those as hallmarks of the technique.  And I’ve had some misgivings about doing even such limited subjective postproduction as I’ve done.  But the adjustments I’ve made are, I think, no more intrusive than run-of-the-mill photo editing, so I don’t feel they seriously compromise the computational origin of the pictures.
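For anyone reproducing that sharpening pass, any editor’s unsharp mask will do; here’s a tiny sketch using Pillow’s built-in filter. Only the 150% amount is specified above, so the radius and threshold shown are assumptions, not my exact settings.

```python
# Sketch of the extra sharpening applied to averages built from many source faces.
# The 150% amount matches the postscript; radius and threshold are assumptions.
from PIL import Image, ImageFilter

def finish_average(in_path, out_path, source_count, cutoff=600):
    """Apply an unsharp mask at 150% only when the average drew on more than `cutoff` faces."""
    image = Image.open(in_path)
    if source_count > cutoff:
        image = image.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
    image.save(out_path)

# e.g. finish_average("refugee_median.png", "refugee_final.png", source_count=785)
```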


2 thoughts on “101 Algorithmic Autoportraits”

  1. Hey Patrick– Very cool, and a great update on the pre-digital version of this procedure from the 1890s. Why not animate these portraits, arranging the component images chronologically?

    • Thanks for the suggestion! That’s exactly what I’m hoping to do in the long run, as long as I can confirm that the process is successful and streamlined enough to make it feasible. As for Google image results specifically, those can be filtered by date back to 2008, which gives us ten years’ worth of source material to play around with. The question is whether most of the images which Google dates to, say, 2011, are actually pictures from 2011, rather than earlier ones that just happen to have been posted online, or indexed, during 2011. Early experiments suggest they mostly are, at least for “active” celebrities who have remained continuously in the news. Of course, the averaging algorithm can be applied to images from other sources too, which might be more reliable as to date. A rough sketch of the per-year pass appears below.
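Roughly, that pass would just bucket the collected images by year and hand each bucket to the same averaging routine, writing one frame per year. The helper below is purely illustrative: it takes whatever averaging function is supplied (for instance, the median_portrait sketch in the post body) and assumes the images have already been downloaded and dated by whatever source is used.

```python
# Purely illustrative: one averaged portrait per year, saved as chronological animation frames.
# `dated_images` maps year -> list of image paths, however they were collected and dated;
# `average_fn` is any face-averaging function returning an RGB array (e.g. median_portrait above).
import cv2

def yearly_frames(dated_images, average_fn, out_prefix="frame"):
    """Write one averaged portrait per year, in chronological order."""
    for year in sorted(dated_images):
        portrait = average_fn(dated_images[year])
        cv2.imwrite(f"{out_prefix}_{year}.png", cv2.cvtColor(portrait, cv2.COLOR_RGB2BGR))
```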
