Check out this picture of a faceless man. Can you tell what he is feeling? Is he happy? Or sad? Is he upset? Or angry? You just can’t tell, right? If I showed you his facial features, however, you would know in an instant.
Let’s look to art for some examples. It is impossible to miss the horror in Edvard Munch’s famous painting, “The Scream.” You can almost hear it. The man’s wide eyes, flaring nostrils and wide-open mouth tell the story.
But consider for a moment Leonardo da Vinci’s “Mona Lisa.” Is her smile friendly and warm, or is it shrewd, as if telling the viewer, “I know what you are thinking”? This enigmatic smile has inspired many thousands of interpretations and, to borrow from Marlowe, is “the face that launched a thousand careers” in art history.
What is missing for us to make a more informed decision about her smile is motion. If we could just see her facial muscles move, we could tell if her smile is genuine, or just polite, or maybe even cynical. The human body conveys an abundance of information necessary for mediating socio-emotional communication. Bodily movements, facial expressions and eye gaze shifts allow us to extract information from others. We can then use this to understand their thoughts, intentions and moods. Without the ability to perceive this information, social interaction would be difficult.
Even bees do it
Social animals need to be able to recognize the members of their society. But, what do I mean when I say, “recognize”?
I have written in the past about a fascinating clue provided by social insects. It turns out that social insects like ants, bees or wasps recognize their own species’ brothers and sisters. They are able to distinguish between nest-mates and those from other nests.
Paper wasps of the species Polistes fuscatus not only recognize the faces of individuals of their own species, they are also experts in face discrimination. In other words, they can recognize differences in the relationships between facial features, like the distance between the eyes. Imagine that. They can recognize individuals!
Human facial recognition
We humans, with our complex social structure, could not possibly navigate our lives without being able to “read” other individuals. We do this by reading faces.
Is that person smiling? Is the smile genuine? Or, is it a receptionist smile that sends a professional message: “Can I help you?” Is the person frowning? Angry? Hostile? Should I be afraid? Amazingly, we are able to distinguish between a menacing frown and a frown made in jest. How do we do it? The answer is by reading facial muscles and their movements.
As Paul Ekman’s studies have shown, intense, genuine smiling, known as Duchenne smiling, involves both the muscles lifting the corners of the mouth and those orbiting the eyes. Non-Duchenne smiling (also known as social, deceptive, or standard smiling) involves only the muscles that lift the corners of the mouth. This type of smile is less often related to genuine feelings of happiness or enjoyment.
An excellent example of the difference between Duchenne and non-Duchenne smiling can be found in this photo from U.S. News & World Report’s tongue-in-cheek story, “Who is Happier: Liberals or Conservatives?”
Of course, there is more to facial expression than a mere smile. A shifting gaze of a “shifty” person, a tightening of the lips of the severely judgmental school marm, a tilt of the head of “c’mon, are you serious?” These movements also convey important clues to someone’s state of mind.
What’s going on in the brain?
Given its important role in functioning in the world, you’d expect face recognition to occupy dedicated space in the brain. In fact, more than one brain area is implicated.
As you might expect, the visual cortex, located in the occipital lobe of the brain, is involved. The occipital face area recognizes the parts of the face during the early stages of facial recognition. For instance, it recognizes a nose or a mouth. But it doesn’t recognize the face as a whole.
By contrast, the fusiform face area, located in the ventral (lowermost) part of the temporal lobe, shows no preference for single features. It is responsible for “holistic/configural” information, meaning that in later processing it puts all of the processed pieces of the face together.
Do we need to wait for a detailed processing of all the features of a face to decide on its mood, potential threat or friendliness? We don’t, because if we did, we wouldn’t survive as a species.
It turns out that cognitive processes are activated by “face-like” objects, which alert the observer to both the emotional state and identity of the subject – even before the conscious mind begins to process—or even receive—the information. Even a “stick figure face,” despite its simplicity, conveys mood information.
This robust and subtle capability is hypothesized to be the result of eons of natural selection favoring people most able to quickly identify the mental state of humans they encounter, for example, threatening or hostile people. This allows the individual an opportunity to flee or attack pre-emptively. In other words, processing this information subcortically (and therefore subconsciously)—before it is passed on to the rest of the brain for detailed processing—accelerates judgment and decision making when alacrity is paramount.
It is no wonder then, that one of the areas the fusiform face area and the occipital visual area connect to is the amygdala, the area that is responsible for emotions of fear and rage, or the “fight or flight” response.
You probably noticed that so far I have described recognition of static faces. But other than statues, no face, not even a poker face, remains completely frozen. Sooner or later motion occurs, however subtle. It could be as slight as an ephemeral muscle twitch, or as obvious as blinking. Those movements convey much vital information. Just ask Sherlock Holmes.
Atop the temporal lobes, both the right and the left, there is a deep groove called the superior temporal sulcus, or STS. In each of these grooves, there are three clusters, or patches, of neurons dedicated to visual recognition. What they recognize is facial motion.
One indication of how important face recognition is: facial motion has its own processing area in the brain, while all other body motions are processed elsewhere. All these areas then feed their output to the anterior cingulate, the area just behind the prefrontal cortex that coordinates social interactions. And thence to the prefrontal cortex, the executive center that processes and integrates all these inputs and reacts.
Why is all this important?
Clinically, it has been demonstrated that people with Autism Spectrum Disorder (ASD) suffer a deficit in facial motion processing. In addition, some patients with ASD have a more severe face perception deficit. Let me explain. The neurons in the fusiform face area need to be finely tuned to detect what differs from one face to another. Using a specialized fMRI procedure, investigators at George Washington University found that among 15 adult participants with autism, those with more severe face perception deficits had more broadly tuned neurons, so that one face looks much like another, compared with the fine tuning seen in the fusiform face area of typical adults. In other words, much of autism’s hallmark deficit in social interaction may be attributable to a deficit in recognizing fine facial anatomy and motion.
More broadly, there is another aspect of facial recognition that we see daily in the courtroom: Eyewitness testimony. How? Remember that face recognition passes through several way stations, each contributing to the complete picture. And, all of this is then fed into the executive center of the brain, the prefrontal cortex, for filtering and editing.
From there, the prefrontal cortex feeds the processed information to the hippocampus, where memories are formed. Research has shown that the “original memory,” even before prosecutors and lawyers have had a chance to distort it to their ends, is altered by the mere act of recalling and retelling. And descriptions of facial expressions (“he was crying”) and motion (“and grimacing in pain”) are highly effective in evoking feelings of empathy.
How many people have languished in jail, or been executed, because of faulty eyewitness testimony? The Innocence Project has been uncovering hundreds of them. Eyewitness misidentification is the greatest contributing factor to wrongful convictions proven by DNA testing, playing a role in more than 70% of convictions overturned through DNA testing nationwide.
A window to the soul
If the eyes are a window to the soul, then how we perceive faces is a key to understanding the souls of our fellow human beings. We are all individuals with different physical and psychological makeup. We all have different agendas, aspirations, and dreams. Yet we do function, however improbably, as a society, thanks in large part to our ability to read the faces of our fellow human beings.