Automatic Caricatures and Antifaces

The art of drawing caricatures might seem quintessentially human, but computers too can exaggerate the differences between any given face and a “normal” face, and they can also work out the opposite of a given face—sometimes called its “antiface”—which is less intuitive, and not a feat I think human artists have ever attempted freehand.  In this post, I’ll explore both possibilities by building on the algorithmic autoportraits I’ve been making by averaging the results of Google image searches (see here and here).  The word “autoportrait” is most often used as a synonym for self-portrait, and in a sense that’s what my autoportraits are as well—portraits of aggregated image search results created by those image search results—but I also mean to highlight their automatic character: the possibility of producing them simply by entering a search term and waiting for the algorithmic magic to run its course.  For present purposes, what’s valuable about these algorithmic autoportraits is that they do such a good job of distilling the most typical appearance of a subject, factoring out the idiosyncrasies from hundreds of individual source images.  That makes them ideal raw material for generating caricatures and antifaces.

But there are also some hazards involved.  Caricatures are often unflattering and can be downright offensive, and it’s impossible to construct a caricature or antiface without also making an implicit assertion about what a “normal” face is.  I don’t find much discussion of ethical points in past work along similar lines, but I’ve tried to be sensitive to them myself.  You can decide whether or not I’ve been successful.

I begin by taking an algorithmic autoportrait and defining forty-one control points in it by manually clicking on each of a sequence of locations in turn: four per eye, three per eyebrow, five for the nose tip and nostrils, four for the rest of the nose, eight for the mouth, nine for the perimeter of the face, and one for the top of the head.  It’s possible that some other set of control points would work better—this is just what I came up with off the top of my head.  Ears come out as a blur in the autoportraits themselves, so I haven’t done anything with those, even though exaggerated ear size is admittedly a hallmark of traditional caricatures.
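For concreteness, a template like this amounts to little more than an ordered list of point labels.  Here’s a minimal sketch in Python; the names are purely illustrative and aren’t taken from my actual software:

```python
# Purely illustrative labels for the forty-one control points described above.
TEMPLATE = (
    [f"right_eye_{i}" for i in range(1, 5)] +         # four per eye
    [f"left_eye_{i}" for i in range(1, 5)] +
    [f"right_brow_{i}" for i in range(1, 4)] +         # three per eyebrow
    [f"left_brow_{i}" for i in range(1, 4)] +
    [f"nose_tip_nostril_{i}" for i in range(1, 6)] +   # nose tip and nostrils
    [f"nose_{i}" for i in range(1, 5)] +               # rest of the nose
    [f"mouth_{i}" for i in range(1, 9)] +
    [f"face_perimeter_{i}" for i in range(1, 10)] +
    ["head_top"]
)
assert len(TEMPLATE) == 41
```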

As for the mechanics of the markup, I use a piece of software I’ve been developing based on some ideas I put forward a couple of years ago in a post on “Averaging Faces in Profile, and Other Things.”  I still plan to blog about the program itself separately at some point, but for now I’ll just say that it lets the user define a template (like the one with forty-one control points per face), mark up a bunch of source images based on it, warp the images to the average locations of the control points across a whole group of images, and average the results.  Unlike commercially available alternatives, it works with subject matter other than forward-facing faces and lends itself conveniently and efficiently to the kinds of batch processing I want for large-scale time-based image averaging.  On the other hand, the scale of the markup screen is rather limited, which is one reason why I haven’t tried to mark the top and bottom edges of the eyebrows and other such fine points.  I’ll also have to beg your indulgence for occasional skewed nostrils, off-kilter eyes, and so on—not necessarily a problem for caricatures, but more so for antifaces.  I’m sure many of the early efforts I’ll be sharing here could be improved through more accurate placement of control points.
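The warping step itself isn’t exotic: given a set of control points and a set of target locations, any standard mesh-based warp will do.  Here’s a rough sketch using scikit-image’s piecewise-affine transform; it isn’t necessarily what my program does internally, but it captures the idea:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_to_targets(image, src_points, dst_points):
    """Warp image so that control points at src_points land on dst_points.
    Points are (x, y) pairs; warp() returns a float image scaled to [0, 1]."""
    tform = PiecewiseAffineTransform()
    # warp() expects a mapping from output coordinates back to input
    # coordinates, so the destination points are passed first here.
    tform.estimate(np.asarray(dst_points, float), np.asarray(src_points, float))
    return warp(image, tform)
```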

For the next step—creating a reference image—I generated a median average of 104 assorted algorithmic autoportraits, abstracted from maybe thirty or forty thousand total source images.
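Mechanically, that reference is just a per-pixel median over an aligned stack of images.  A minimal sketch, assuming the autoportraits have already been warped to shared control-point locations and are all the same size:

```python
import glob
import numpy as np
from PIL import Image

# Hypothetical folder of already-aligned autoportraits, all the same dimensions.
paths = sorted(glob.glob("autoportraits/*.png"))
stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
                  for p in paths])
reference = np.median(stack, axis=0)        # per-pixel, per-channel median
Image.fromarray(reference.astype(np.uint8)).save("reference.png")
```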

In what follows, other faces will be compared and contrasted with this face.  It’s important to appreciate that it’s not an objective point of reference, since it’s obviously skewed as to ethnicity, among other things.  However, it does represent a pervasive cultural baseline in which whiteness, youth, and so forth are normalized, and against which artists tend—whether consciously or not—to construct their caricatures.  So as you look at the algorithmically generated caricatures and antifaces below, keep in mind that they’re predicated on the assumption that this face is the “normal” one.  At the end of this post, I’ll show what happens if we swap in a different face as “normal.”


Caricatures

To create an automatic caricature, we now want an algorithm that will exaggerate the differences of any given source image from our designated reference image.  Wikipedia traces the relevant computational principle back to Susan Brennan’s 1982 master’s thesis, The Caricature Generator, but notes:

The results produced by computer graphic systems are arguably not yet of the same quality as those produced by human artists. For example, most systems are restricted to exactly frontal poses, whereas many or even most manually produced caricatures (and face portraits in general) choose an off-center “three-quarters” view.

That’s not to say there haven’t already been some impressive attempts at automatic caricature, one of the most recent being WarpGAN.  But I wanted to try my hand at it too, and I’m pretty pleased with the results, which I believe stand up well to the competition.  One distinction to bear in mind is that I’m focusing here strictly on exaggeration per se, and not on other stylistic features associated with caricatures, such as cartoon-like line drawing.  Mathematically, I calculate warping targets for the control points as s-(f*(r-s)) where s is the source, r is the reference, and f is an exaggeration factor, set arbitrarily to 1.15 for the examples I’ll be presenting here.  After warping, I also exaggerate detail or texture by calculating pixel values likewise as s-(f*(r-s)), with f set to 1.  By way of illustration, the figure below shows an algorithmic autoportrait of Steve Bannon; a warped caricature; and finally a caricature with both warping and texture exaggeration.
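In code, both steps reduce to a couple of lines.  A minimal sketch, assuming the reference has already been warped into register with the source before the pixel arithmetic, and leaving the image-warping routine itself aside:

```python
import numpy as np

def caricature_points(s_pts, r_pts, f=1.15):
    """Warping targets: push each source control point away from the
    reference by a factor f, i.e. s - (f*(r - s))."""
    s, r = np.asarray(s_pts, float), np.asarray(r_pts, float)
    return s - f * (r - s)

def exaggerate_texture(s_img, r_img, f=1.0):
    """Pixel-level exaggeration after warping, using the same formula;
    with f = 1 this works out to 2s - r."""
    s, r = s_img.astype(float), r_img.astype(float)
    return np.clip(s - f * (r - s), 0, 255).astype(np.uint8)
```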

Below is a gallery of other caricatures generated in the same way.  In each case, I started with an algorithmic autoportrait generated by downloading and averaging Google Image search results, occasionally with a little manual retouching around the edges of the face.  Otherwise, the only significant human input has been the manual assignment of control points—something I’m sure a computer could have done.  And yet I think these would all pass muster as culturally acceptable caricatures.

Mike Pence

Hayden Panettiere

Quentin Tarantino

Cara Delevingne

Hugh Laurie

Warren Buffett

Paul McCartney

Ringo Starr

Daenerys Targaryen (Emilia Clarke)

Jimmy Carter

Elon Musk

Daniel Radcliffe

Rudy Giuliani

Kellyanne Conway

William Shatner

One nice thing about this approach is that the amount of exaggeration can easily be varied.  Below is a video that presents ten different caricatures with progressively greater exaggeration over the course of a few seconds, beginning with unaltered algorithmic autoportraits and ending with a level of distortion somewhat greater than in the examples shown above.  (It’s worth observing that the more a given face initially departs from the norm, the more any given distortion factor will exaggerate it.  One idea I haven’t tried would be to calculate the sum of all differences between subject and reference control points and to base the amount of exaggeration on that figure, ensuring that all faces will be exaggerated to the same degree.)
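That untried idea would amount to something like the following sketch, where target_total is an arbitrary distortion budget I’m inventing purely for illustration:

```python
import numpy as np

def adaptive_factor(s_pts, r_pts, target_total=500.0):
    """Scale the exaggeration factor by the total control-point displacement,
    so that every face receives roughly the same absolute amount of distortion.
    target_total is an arbitrary pixel budget chosen for illustration."""
    s, r = np.asarray(s_pts, float), np.asarray(r_pts, float)
    total = np.linalg.norm(s - r, axis=1).sum()   # summed point-wise distances
    return target_total / max(total, 1e-9)
```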


Antifaces

Not only can we generate caricatures by exaggerating the differences between source images and a reference image, but we can also generate antifaces by inverting those differences.  This second principle has been explored by researchers including Rob Jenkins and A. Mike Burton, who observe (here, on page 19 of the pdf):

This antiface has some interesting characteristics.  First, it looks like a plausible photographic face.  It was not obvious in advance that this would be the case.  Second, psychologically relevant dimensions such as sex and emotional expression are reversed by this process (female becomes male; sullen becomes cheery), even though these dimensions are not explicitly coded at any stage.  In addition, all aspects of the physical appearance of the face take on the opposite valence, so that dark complexion becomes light complexion, upturned nose becomes downturned nose, etc.

Anthony C. Little et al. have further distinguished between the inversion of shape and the inversion of color, since either of these parameters can be inverted independently.  Unlike caricatures, antifaces have no real precedent in the visual arts; so far they’ve lived almost entirely within computer vision research.  But that doesn’t mean they can’t be culturally interesting or provocative.  As commentary on appearance and identity, I’d think presenting an opposite—an inverse, an antithesis—could be just as suggestive as presenting an exaggeration.

And it’s easy to adapt the caricaturing technique described above for creating antifaces. To select control points for warping and to calculate pixel values for inverting texture/detail/color, we simply reverse the variables in the equation: instead of s-(f*(r-s)), we take r-(f*(s-r)), with f set here to 1.  The result should theoretically be a face that differs from the reference precisely as much as the source does in its shape and detail, but in exactly the opposite direction.
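A minimal sketch, mirroring the caricature functions shown earlier:

```python
import numpy as np

def antiface_points(s_pts, r_pts, f=1.0):
    """Warping targets for the antiface: reflect the source control points
    about the reference, r - (f*(s - r))."""
    s, r = np.asarray(s_pts, float), np.asarray(r_pts, float)
    return r - f * (s - r)

def invert_texture(s_img, r_img, f=1.0):
    """Pixel-level inversion after warping; with f = 1 this works out to 2r - s."""
    s, r = s_img.astype(float), r_img.astype(float)
    return np.clip(r - f * (s - r), 0, 255).astype(np.uint8)
```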

Let’s begin with a few examples of algorithmic autoportraits shown side by side with their “raw” antifaces.  First, here’s George Takei:

And Pope Francis:

And Tomi Lahren:

And Gandalf (Ian McKellen):

Now, let’s be clear: this isn’t an exact science.  My placement of control points isn’t as precise as it could be, which surely affects the outcome a bit.  The process is also not losslessly reversible, although the antiface of an antiface generally looks pretty similar to the original face.  There are other grounds for skepticism besides, which I’ll go into momentarily.  But caricatures aren’t an exact science either, and as long as we’re willing to treat antifaces as a similarly playful kind of art, I think we’re on defensible ground.

It’s also worth observing that features get inverted only when they vary along a continuum that every face occupies somewhere.  Hair, for example, can be light or dark, but also present or absent, and only the first of those dimensions inverts: as far as these antifaces go, the “opposite” of long hair isn’t short hair; the “opposite” of being clean-shaven isn’t having a beard; and the “opposite” of not wearing glasses isn’t wearing glasses.

Unfortunately, the “raw” antiface doesn’t usually turn out quite as well as it did in these carefully chosen examples.  Slight misplacement of control points can produce ghastly artifacts, and funny things can happen with coloration when hairlines and so forth don’t coincide during texture inversion.  With that in mind, I began “taming” my results by warping the reference image to the same target points (second image below), placing it as a layer under the “raw” antiface in Photoshop, setting the blend mode to Screen, flattening the result, duplicating it as a new layer, setting the blend mode to Multiply, and again flattening the result (third image below).  This reinforces the most typical features of the antiface while concealing the most atypical ones.
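Those layering steps translate into simple arithmetic if you’d rather skip Photoshop.  Here’s my own reading of them as a sketch, with Screen and Multiply implemented as the standard blend-mode formulas on images scaled to the 0–1 range:

```python
import numpy as np

def screen(a, b):
    """Screen blend: 1 - (1 - a)*(1 - b), for images in [0, 1]."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def tame(raw_antiface, warped_reference):
    """Screen the warped reference under the raw antiface, then duplicate the
    flattened result as a Multiply layer over itself (i.e. square it)."""
    a = raw_antiface.astype(float) / 255.0
    r = warped_reference.astype(float) / 255.0
    screened = screen(a, r)
    tamed = screened * screened            # the duplicated Multiply layer
    return np.clip(tamed * 255.0, 0, 255).astype(np.uint8)
```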

The “tamed” results can sometimes also benefit from a little judicious retouching (which I’ve also taken the liberty of doing with a few of the caricatures)—for example, we might opt to erase the diagonal line above the eye on the viewer’s right, or the subject’s left, in the antiface shown above.  Of course, every step along these lines makes the process even less reversible than it was before, and distinctions of coloration in particular tend to get leveled out; witness the hair, eyes, and complexion in the above example.  But once again, this isn’t an exact science, and the “raw” results might actually be misleading in that their precision outpaces their accuracy.

Below you’ll find a gallery of additional antifaces made in the way I’ve just described.  In spite of what Jenkins and Burton claim, these antifaces aren’t all equally “plausible.”  I suppose that could be due to some flaw in my technique, but I also suspect that the range of variation in features isn’t distributed symmetrically around the median, such that inverting an extreme in one direction might yield an out-of-bounds result in the other direction.  Consider too that my goal here isn’t impeccable accuracy or photorealism, but rather the game of imagining what opposites might look like, however hazily, imperfectly, or cartoonishly glimpsed.

With those caveats in place, I invite you to think about what these antifaces might mean, if anything.  To whatever extent we read facets of character into facial features that span a continuum, an antiface should arguably invoke a diametrically opposed set of associations.  So what inferences would you draw from each of the faces below based on its appearance?  And how do those inferences compare with your perception of the subject of the antiface?  Does this exercise help reveal something about the subjects by putting a face to their “opposites,” and if so, in what sense (identity, character, background, values, politics, history)?  Or does it instead challenge the very premises that underlie our reading of faces by forcing us to contend with examples that lie so far outside our ordinary experience, both as to appearance and as to origin?

Taylor Swift

Rodrigo Duterte

John Cleese

Mark Zuckerberg

Angela Merkel

Katy Perry

Brad Pitt

Uma Thurman

Mike Pence

Mitch McConnell

Elon Musk

Kellyanne Conway

Nicolas Cage

David Boreanaz

Sarah Huckabee Sanders

Leonard Nimoy

Melania Trump

Quentin Tarantino

Stephen Colbert


Shifting the Point of Reference

Caricatures exaggerate differences, and antifaces invert them, but either way the differences always need to be relative to something.  So far this has been the average I showed at the beginning of my post—the one generated from 104 algorithmic autoportraits—but choosing a different reference image can produce significantly different results.  Here, for example, is an alternative reference image I generated from ten algorithmic autoportraits created with search terms chosen expressly to favor Black faces (e.g., Black celebrity and ‘African American’):

Substituting this as our “normal” face allows us to generate alternative caricatures in which Blackness, and not whiteness, becomes the reference point.  To illustrate the difference that can make, here’s an algorithmic autoportrait of Oprah Winfrey; a caricature relative to the earlier model; and a caricature relative to the newer model.

And here’s the same treatment given to Tomi Lahren:

In each case, does one caricature more closely match what we’d expect from an “ordinary” caricature than the other?  For that matter, shouldn’t it be possible to work backwards from any given caricature to infer the “normal” face against which it was constructed—an anti-caricature?
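If we know the un-exaggerated source and the exaggeration factor, the algebra for that last question is at least straightforward: since the caricature is c = s - (f*(r - s)), the implied reference is r = s - (c - s)/f.  A sketch:

```python
import numpy as np

def infer_reference(s_pts, c_pts, f=1.15):
    """Work backwards from a caricature: given source points s and caricature
    points c = s - f*(r - s), the implied reference is r = s - (c - s)/f.
    (Assumes both the un-exaggerated source and the factor f are known.)"""
    s, c = np.asarray(s_pts, float), np.asarray(c_pts, float)
    return s - (c - s) / f
```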

The effect of different reference images on antifaces is even greater than it is on caricatures, since the reference image gets weighted more heavily in them.  Below are a few contrastive examples of pairs of antifaces made using the same two reference models applied above.  The faces obviously differ, but at the same time they also share some striking similarities, suggesting that antifaces aren’t wholly contingent or arbitrary.

Colin Kaepernick

Kim Kardashian

Elizabeth Warren

Nicki Minaj

Ivanka Trump


In Conclusion

It’s evident that algorithmic caricatures and antifaces always need to be constructed in relation to a “normal” face, such that they’re only valid in relation to a specific universe of faces that might correspond to a specific population or a specific target community.  But of course this is also true of “ordinary” caricatures.  It’s just that when we make computers do the work instead of human artists, we need to make the underlying assumptions explicit; they’re not ensconced irretrievably in someone’s brain as they usually are.  If there’s a problem here, it’s one shared by every caricature that anyone has ever drawn.

That said, the techniques I’ve laid out above appear to be on the right track—or at least on a right track—when it comes to the computational synthesis of effective caricatures and antifaces, despite their relative lack of sophistication (and ears).  My caricatures seem more consistently recognizable than other computer-generated caricatures I’ve reviewed, and my antifaces, which are based on exactly the same logic, seem more fully developed than other antifaces I’ve seen.  Of course, there’s still plenty of room for refinement.  For example, here’s the “raw” antiface of George Takei overlaid in Photoshop on an antiface generated in a different way (by generating separate antifaces relative to each of 100+ algorithmic autoportraits and then averaging the results), with the blend mode set to Screen, then the layers flattened, then Brightness reduced to taste.

Is it an improvement?  Maybe, but the possible permutations are endless.

As far as I’m aware, antifaces have been treated in the past only as scientific exercises in computer vision or psychology.  But I want to suggest that they have aesthetic and expressive potential as well, once we cross some hard-to-pinpoint threshold in the state of the art.  Just imagine the cultural debates certain antifaces could provoke.  Donald Trump routinely gets caricatured in political cartoons and elsewhere, but more than that, aspects of his personal appearance such as his comb-over and orangeness are often ridiculed as though they weren’t merely incidental details—what he happens by chance to look like—but actually revealed something deeper about who and what he is.

Caricatures exaggerate these supposedly meaningful features.  But an antiface reverses them; it can put a specific face to the “opposite” of Trump’s distinctive appearance.  So if it’s appropriate to dwell on Trump’s face as an index of his character and identity, then is Trump’s antiface similarly significant: an image, somehow, of what he’s not?  Could a well-constructed antiface become a potent symbol or rallying point, the abstract face of #resist?  Or does this whole experiment instead help expose the vacuity of fixating on such appearances in the first place?

 
