How our brains recognise faces

For humans, the ability to recognise, process and memorise faces is arguably one of our most important skills.
01 December 2020

We are an inherently social species that relies heavily on visual input; without facial recognition we would undoubtedly struggle to forge relationships with one another and form societies. Multiple parts of the brain play a role in this incredibly complex process, reflecting just how demanding a face is to decode. Not only do we try to extract emotion and intent, but the task of identifying individual faces is in itself immense.

Before we can begin to recognise faces and interpret the social cues they convey, we must address the first basic challenge of facial perception: detecting the presence of a face, as opposed to some other object. This requires the identification of a basic 'T-shaped' configuration (a pair of eyes, a nose and a mouth). Since this arrangement is shared by virtually all faces, it is likely that detection is achieved by comparison with a 'template'; some basic artificial face detection systems also work this way. Following detection, which essentially groups all faces together, a finer level of processing kicks in to tell different faces apart. The giveaways are subtle spatial differences in the arrangement and appearance of specific features, such as the eyes, within the general template.
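For the computationally curious, the sketch below shows what such an artificial detector can look like in practice. It uses OpenCV's pretrained Haar-cascade classifier, which scans an image for characteristic light/dark contrast patterns (darker eye regions above brighter cheeks and nose), loosely analogous to the 'template' idea above; the image filenames here are purely illustrative.

```python
# A minimal sketch of template-style face detection, assuming the
# opencv-python package is installed. Filenames are illustrative.
import cv2

# Load the pretrained frontal-face Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_photo.jpg")           # hypothetical input image
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection runs on greyscale

# Each detection is a bounding box (x, y, width, height) where the
# learned eyes-nose-mouth contrast pattern was found.
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("group_photo_detected.jpg", image)
print(f"Detected {len(faces)} face(s)")
```

Detection of this kind only answers the question 'is there a face here?'; telling individual faces apart requires the finer-grained comparison described above.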

Distinct processing for detection and identification provides a checkpoint that ensures only visual stimuli fitting the basic criteria for a face are subject to the more complex mechanisms used for facial identification. Theoretically, these pathways are separate, but in reality both are automatically and rapidly integrated by what is known as 'holistic processing'. According to this idea, faces are perceived as wholes rather than being decomposed into their constituent elements. The widely used 'part/whole task' supports this: individuals are more accurate at recalling and identifying a specific person's facial features when those features are presented in the context of a whole face rather than in isolation. The notion of so-called 'grandmother cells', neurones that fire in response to stimuli associated with a specific person, seems to fit with the idea of holistic processing. However, there is considerable evidence that such neurones are largely theoretical and an oversimplification of what really occurs.

Facial recognition appears to be disproportionately reliant on holistic processing compared with the visual processing of non-face objects. This is demonstrated by the face inversion effect, in which participants are asked to study both facial and non-facial stimuli presented first the right way up and then upside down. Inversion impairs recognition in both cases, but far more so for faces than for non-face objects. The widely accepted explanation is that the abnormal orientation disrupts the ability to correctly integrate different components and perceive a whole face, whereas inanimate objects tend to be processed as separate parts and are therefore less affected by an overall change in orientation.

As with any processing pathway involving many neural networks, there are ample ways in which holistic face processing can fail. Prosopagnosia, also known as face blindness, is a cognitive disorder in which individuals fail to recognise familiar faces (including their own), even though other aspects of visual and intellectual functioning remain unaffected. Prosopagnosia is associated with damage to the brain's fusiform gyrus, which normally responds strongly to facial stimuli. Head trauma accounts for most cases, although, in some rare instances, the disorder may also be inherited. The late Oliver Sacks, a globally renowned neurologist and author, suffered from the condition. He wrote about his experiences, which ranged from failing to recognise individuals with whom he interacted regularly to mistaking his own reflection for someone else. Though he described his prosopagnosia as 'moderate', its effects seem to have had a notable impact throughout his life.

At the other end of the scale, many individuals demonstrate a tendency to 'see faces' in inanimate objects or random patterns, known as pareidolia. Functional neuroimaging suggests that this occurs when non-face objects activate face-selective regions of the fusiform gyrus, giving rise to the perception of a face. The high prevalence of this tendency is often argued to reflect the evolutionary advantage of being able to rapidly identify danger: if a face-like object can quickly activate the cognitive processes of facial perception, the observer can begin to interpret the underlying emotion and identity, potentially leading to quicker responses to threats from other humans.

There is clearly much to interpret from a face beyond its basic detection and identification. Emotional recognition is a key factor driving prosocial behaviour. Numerous studies have broken down facial expressions into seven basic emotions: happiness, sadness, surprise, anger, disgust, contempt and fear, though clearly the reactions and responses to many situations may show some overlap. The basic ability to distinguish facial emotions begins to present itself early in infancy, and perception normally develops throughout childhood and adolescence. However, in some cases, normal perception of emotions via facial expression (as well as body language and vocal intonation) can be disrupted. This is known as social-emotional agnosia, caused by malfunction of the amygdala in the forebrain, typically due to bilateral damage of the temporal lobe. This malfunction causes an inability to select appropriate responses to various social cues. Those affected often isolate themselves from others, highlighting how important emotional perception is to ensure inclusion within society.

As with many of our cognitive abilities, our facial perception capabilities can be replicated by technology. Facial recognition systems are becoming increasingly commonplace, with uses ranging from smartphone identity verification to robotic security systems. Put simply, these programs extract basic information about the positioning of various facial features and compare it against a database of faces. Whilst there are significant concerns surrounding the use of such technology, notably the risk to privacy and the possibility of poor accuracy, there is no doubt that such systems are extremely useful, perhaps particularly during the current COVID-19 pandemic. In August 2020, the Los Angeles Football Club announced their intention to trial facial recognition software at their stadium, the goal being to make entry to matches as contact-free as possible whilst maximising security.
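As a rough illustration of the 'extract features and compare against a database' step, the sketch below assumes each face has already been reduced to a numeric feature vector (an embedding) by some upstream extractor; it is a conceptual example rather than any specific product's pipeline, and the names, vector length and matching threshold are all hypothetical.

```python
# Conceptual sketch: matching a probe face embedding against an enrolled
# database by nearest-neighbour distance. All values are placeholders.
import numpy as np

# Hypothetical enrolled database: identity -> 128-dimensional feature vector.
database = {
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}

def identify(probe: np.ndarray, threshold: float = 0.6) -> str:
    """Return the closest enrolled identity, or 'unknown' if nothing is near enough."""
    best_name, best_dist = "unknown", float("inf")
    for name, enrolled in database.items():
        dist = np.linalg.norm(probe - enrolled)  # Euclidean distance between embeddings
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"

# In a real system the probe vector would come from a camera frame passed
# through the same feature extractor used at enrolment.
probe_vector = np.random.rand(128)
print(identify(probe_vector))
```

The threshold is the key design choice: set it too loosely and the system misidentifies strangers, too strictly and it fails to recognise enrolled users, which is where the accuracy concerns mentioned above come from.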

Humans demonstrate impressive facial perception capabilities, whether in the context of recognition, basic communication or judging physical attractiveness. Though we might be spending most of our time behind face masks these days, the cognitive processes underlying how we interpret and respond to facial stimuli have undoubtedly been, and will likely remain, hugely important in allowing us to build our society.
