Person Perception Program
The Person Perception Program investigates how we extract, process and use information about other people. These abilities are critical to guiding everyday social interactions. Subtle cues to identity, gender, ethnicity, age, attractiveness, emotional state and focus of attention are effortlessly read from the face, body and voice. The focus of our research is on understanding the perceptual, cognitive, neural and evolutionary mechanisms underlying this impressive expertise, how these mechanisms emerge through development, and how they might develop and function differently in people with neurodevelopmental disorders.
- The development of person perception during childhood
- Coding ensemble information for face groups
- Perception of emotional genuineness from facial expressions
- First impressions from faces
- How does race affect face processing?
- Diagnosis and investigations into congenital prosopagnosia
- Adaptive processes in person perception
- Watching the brain recalibrate: Neural correlates of reading social signals from faces
Faces convey rich social information that guides our social interactions. Adults have little difficulty reading this information from thousands of faces, despite their apparent similarity as visual patterns. This exquisite expertise with faces emerges slowly during development, with performance on many face perception tasks improving throughout childhood. However, the source of this improvement is controversial. Improvement in face perception could reflect changes in visual processing mechanisms that are specific to face perception, or it could be due to more general cognitive development that leads to improvement on all kinds of cognitive and perceptual tasks. To determine whether changes in both face-specific visual mechanisms and general cognitive abilities contribute to the development of face perception, we are conducting a unique longitudinal study with children aged 6 to 12 years. This study measures children's face recognition and expression recognition skills and some general cognitive abilities, and then re-assesses these abilities over two subsequent years. We also measure visual processing mechanisms that are thought to be crucial for face perception, to determine whether these mechanisms strengthen with age. Intriguing cross-sectional findings from our first year suggest that the strength of these face perception mechanisms does not increase between ages 6 and 9, consistent with arguments that these mechanisms may already be mature by age four. However, just as in adults, individual variation in children's face recognition skills is related to the strength of these mechanisms. General cognitive abilities do contribute to age-related differences in face recognition performance but do not contribute to individual differences when age is controlled for. These results suggest that early experience with faces may determine the individual strength of these face perception mechanisms.
Our results are also consistent with the view that age-related improvements in face recognition performance may largely reflect general cognitive development rather than changes in the visual mechanisms of face perception.
Humans are often confronted with many faces at once, for example when interacting with a group of people. Compared to seeing a single face, processing groups of faces can evoke a different style of information processing in which people efficiently encode the average 'ensemble' information for that group. This project examines how such ensemble information can be coded for different properties of a face group, such as expression and identity. We examined whether ensemble representations for expressions can be extracted when the faces in the group belong to different people. We found that participants systematically misremembered each expression in the direction of the mean expression of the group. This indicates that participants had extracted an ensemble expression for these more naturalistic groups of different people, and that this ensemble expression had influenced their memory for individual facial expressions. We have also been investigating the neural correlates of ensemble coding of face identity. We briefly presented groups of faces and then recorded event-related potentials evoked by one of the individual faces or by the average identity of the group. Brain responses around 250 ms after stimulus onset were sensitive to the average face but not the individual face, indicating a neural correlate of ensemble coding of identity.
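The memory bias described above can be captured by a minimal sketch, under purely illustrative assumptions: each face's expression is a point on a single intensity axis, the ensemble representation is simply the group mean, and memory for an individual face is pulled towards that mean by a fixed proportion (the 0.3 pull strength is a made-up parameter, not an estimate from our data).

```python
def ensemble_mean(expressions):
    """Average 'ensemble' expression of the group."""
    return sum(expressions) / len(expressions)

def remembered(individual, group, pull=0.3):
    """Memory for one face's expression, biased towards the
    group mean by a fixed (hypothetical) proportion."""
    mean = ensemble_mean(group)
    return individual + pull * (mean - individual)

group = [0.2, 0.4, 0.9, 0.5]   # expression intensities of four faces
print(ensemble_mean(group))    # 0.5
# A face more extreme than the group (0.9) is misremembered as
# closer to the ensemble mean:
print(round(remembered(0.9, group), 2))   # 0.78
```

The direction of the bias is the key point: whatever the pull strength, any face above the group mean is remembered as less extreme, matching the pattern of misremembering towards the mean.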
To date, most research on facial expressions has focused on what emotion a face is displaying. An equally important skill in everyday life is being able to tell whether or not someone is genuinely feeling the emotion they are displaying. To address this question, we need stimuli displaying both genuine and pretend expressions. However, most of the available stimulus sets contain posed expressions; genuinely-felt expressions have rarely been used, except in the case of happy expressions using real 'Duchenne smiles'. Our work shows that these widely used sets of posed expressions are often perceived as showing pretend emotion, and that quite different patterns of results can be observed when genuinely-felt emotions are presented to participants. As such, our project initially focused on developing matched sets of genuinely-felt and pretend emotional expressions for a full range of emotions beyond happiness, including anger, disgust, fear, and sadness. We are now using these sets to test theoretical questions about how mood and personality traits, such as social anxiety, depression, and psychopathic traits, affect the perception of emotion for both genuinely-felt and posed expressions. Initial results show that higher levels of psychopathic traits are associated with decreased self-reported arousal to genuinely-felt expressions of distress, such as other people's fear and sadness. This pattern was not evident for pretend versions of these expressions, supporting the idea that results from genuinely-felt versus pretend expressions can differ in important ways.
Despite the common adage 'Don't judge a book by its cover', when we see a stranger for the first time, we often make extensive subjective judgements about their character and personality simply from their face. For example, people spontaneously and rapidly judge the approachability, intelligence, or even untrustworthiness of unfamiliar faces. Some facial impressions may actually be accurate. For example, previous research from the Person Perception Program has shown that impressions of sexual infidelity contain a kernel of truth. However, regardless of their accuracy, these judgements can have surprisingly important consequences in the real world. We aim to determine the mechanisms underlying these judgements, how they might vary depending on the person making them, and to what extent they are accurate. Previous research has examined the contribution of visual cues within the face itself. For example, a strong jaw looks dominant, big eyes look attractive and a smile looks trustworthy. However, these judgements may also depend on extra-facial information, such as the social and cultural group of the face or perceiver, or individual perceiver differences. In particular, we are examining how the race and gender of the perceiver relative to the face might affect the cues used, and the consistency and accuracy of the impressions formed. We have found that people are more consistent and more accurate in their impressions of own-race than other-race faces, which may be related either to having more perceptual experience with own-race faces or to being more motivated to form impressions of these individuals. Finally, we are interested in understanding more generally which individual characteristics of the perceiver can predict their idiosyncratic facial impressions and the accuracy of those impressions.
Anecdotal evidence suggests that people can have trouble recognising faces when those faces come from a different ethnic group. This 'other-race effect' can be considerable, with people failing to recognise even highly familiar people of another race, and the consequences can be catastrophic in a legal setting, where inaccurate eyewitness testimony can result in wrongful conviction, incarceration and even execution. Yet in the laboratory the other-race effect, while long established and widely replicated, is consistently quite small (< 10% difference in accuracy). To explore this inconsistency, we took an individual differences approach, testing over 500 Caucasian and Asian participants. We found that while some people were quite good at recognising other-race faces, others were so poor as to be clinically impaired or 'face-blind'. These results are consistent with our previous studies supporting a perceptual-experience basis, rather than a social-motivational basis, for the other-race effect. We have also been investigating whether the race of a face can affect the perception of other social information. We found that sensitivity to gaze direction was impaired for other-race faces in both Caucasian and Asian participants. In another study we found that Caucasian females could reliably judge the sexual faithfulness of Caucasian male faces whereas Asian females could not. These findings suggest that, on top of the well-established other-race recognition difficulties, it may also be more challenging to successfully interpret subtle social signals from the face in interactions with people from other ethnic backgrounds.
People with congenital (developmental) prosopagnosia have failed to develop adequate face identity recognition mechanisms and often report severe, recurring, everyday face recognition difficulties. Our research program depends on being able to reliably diagnose congenital prosopagnosia. If people have very good insight into their face recognition abilities, diagnosis would be relatively simple. However, our recent studies with nearly 300 people across three countries suggest that people do not have a great deal of insight into their face recognition abilities. This means that scores on tests of face identity recognition are crucial for diagnosis. To assist in this process, we developed a brief test of face memory. We are also interested in examining the specificity of congenital prosopagnosia. Although some people only have difficulty discriminating between faces, in a recent study we found that many people with congenital prosopagnosia also have difficulty discriminating between human bodies. We are currently examining whether people with congenital prosopagnosia have difficulty processing other information from the face, such as expression. Initial studies suggest that many people with prosopagnosia can judge facial expression, and we plan to investigate whether this is because the mechanisms used to recognise expression differ from those used to recognise identity.
In everyday life we use a wealth of social cues from faces to guide our interactions with others. This research investigates the perceptual foundations of our ability to 'read' these cues and to distinguish among thousands of faces despite their perceptual similarity. Our work with face aftereffects suggests that faces are coded relative to perceptual norms, or averages, that are adaptively tuned by experience. Exposure to a face updates the norm, shifting it temporarily towards the characteristics of that face and selectively biasing perception towards an identity with opposite characteristics. Norm-based coding may allow us to see past the shared structure of all faces, to the characteristics that define individuals and the variations in their appearance associated with different emotional and attentional states. Our work with face aftereffects suggests that face dimensions related to identity, expression, gender and age are norm-based coded. These dimensions appear to be coded by pairs of neural populations that are tuned to opposite (low, high) dimension values. Equal activation in the two pools signals a neutral point, or norm, and the coding is 'norm-based' because the channels are tuned to deviations from this point. These adaptive norm-based face-coding mechanisms are in place by 4 years of age (the earliest age tested). They require early visual experience for normal development, with reduced adaptive updating of face norms in adults who suffered early visual deprivation due to congenital cataracts. We have found that face adaptation is linked to face recognition ability in typical adults and that it is reduced in several populations with face recognition difficulties (individuals with autism, relatives of individuals with autism, neurotypical men with high levels of autistic traits, congenital-cataract-reversal patients). Taken together, these results suggest that the adaptive coding of faces plays an important functional role in our ability to recognise faces.
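The two-pool opponent scheme described above can be illustrated with a minimal sketch, under simplified assumptions of our own choosing: a face dimension runs from -1 to +1 with the norm at 0, the two channels respond linearly to opposite ends of the dimension, perceived position is read out from their relative activation, and adaptation is modelled as a simple gain reduction proportional to how strongly a channel responded to the adaptor (the 0.3 adaptation factor is hypothetical).

```python
def channel_responses(value, gain_low=1.0, gain_high=1.0):
    """Responses of the 'low' and 'high' channels to a face value
    on a -1..+1 dimension (0 = the current norm)."""
    low = gain_low * (1 - value) / 2    # fires most for low extremes
    high = gain_high * (1 + value) / 2  # fires most for high extremes
    return low, high

def perceived(value, gain_low=1.0, gain_high=1.0):
    """Read out perceived position as the normalised difference of
    the two pools; equal activation signals the norm (0)."""
    low, high = channel_responses(value, gain_low, gain_high)
    return (high - low) / (high + low)

# Before adaptation, a physically average face (0) looks average:
print(perceived(0.0))   # 0.0

# Adapting to a 'high' face (+0.8) reduces the high channel's gain
# in proportion to its response (hypothetical 0.3 factor):
low, high = channel_responses(0.8)
adapted_gain_high = 1.0 - 0.3 * high
# The same average face now activates the two pools unequally and
# appears shifted towards 'low' values, an aftereffect opposite to
# the adaptor:
print(perceived(0.0, 1.0, adapted_gain_high) < 0)   # True
```

The sketch captures the two signatures of norm-based coding mentioned above: equal activation of the pools defines the norm, and adapting to one extreme shifts the norm towards the adaptor, biasing perception of subsequent faces in the opposite direction.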
Faces are very important social stimuli. They contain a variety of cues that allow us to identify familiar people and to determine the sex, ethnicity, age, emotional expression, and gaze direction even of people we have never seen before. Perceiving these signals quickly and accurately can help us to successfully navigate social interactions. Recent research suggests that perceptual adaptation, the flexible calibration of brain responses to our current visual input, is of central importance for the perception of all face signals. This project uses event-related potentials (ERPs) to investigate how previous visual experience affects our current perception. We have developed a novel ERP paradigm that allows direct insight into the way the visual system responds to, and neurally represents, face information. We obtained evidence that some face signals, such as the general structural composition of the face, seem to be processed relative to a norm, or prototype. The processing of other face signals, such as gaze direction, seems to occur in a qualitatively different manner, independent of a prototype.