Imagine you’re a hiring manager at a tech company, looking through applications for a computer programmer position. One of the applicants is an Asian woman. How might the fact that she’s Asian and a woman affect your evaluation of her? And would the way you see her make a difference to whether or not you hire her? Not at all? Are you sure?
Gender bias in the fields of science, technology, engineering and maths (STEM) is a well-documented fact. In one study, STEM faculty themselves rated a female applicant as less hireable and offered her less pay than a male applicant – even though they had the exact same CV. But, interestingly, Asian people are positively stereotyped in the STEM area. So what happens when two conflicting stereotypes come into play? This is what Aneeta Rattan, Assistant Professor of Organisational Behaviour at London Business School (LBS), was intrigued to investigate.
“We socially categorise people almost the instant we come across them, often unconsciously and without even realising we’re doing it,” Dr Rattan explains. “Interpersonal perception can depend on which aspects of someone’s identity are salient.” Can a differential emphasis on aspects of the same person’s identity affect how much male evaluators discriminate in a hiring context? Astonishingly, the answer is yes.
Together with Jennifer Steele, Associate Professor and Undergraduate Program Director, Department of Psychology, York University in Canada and the late Nalini Ambady, Professor of Psychology at Stanford University, Dr Rattan designed three experiments to illuminate this under-examined area of hiring behaviour. The hypothesis for all three studies carried out was that men would evaluate an applicant for a tech job more positively if her racial identity was emphasised, rather than her gender.
What prompted the research? “I thought of this as a very first step in trying to understand some of the intersections of identity that might meaningfully impact the outcomes that people experience. Finding that these intersections matter for evaluations emphasises how much more research needs to be done to look across different identities – gender, race, sexual orientation, disability status, age, and social class – to name just a few,” says Dr Rattan. “Secondly, we were interested in the question of whether an Asian-American woman might experience a more moderate level of discrimination because of the positive discrimination to do with her race.
“Finally, what is it that’s driving gender-biased employment discrimination? If we find that participants looking at the same person with the same exact qualification rate her less well when they’re thinking in terms of her gender, we can show very specifically and unequivocally that it’s the stereotypes associated with gender in the minds of perceivers that is driving the discrimination.”
Many people are shocked to imagine that the same person could be viewed as more or less hireable depending on which aspect of their identity is at the fore in the hirer’s perception. This makes it even more interesting and worthwhile, Dr Rattan believes. “People really vary in their intuitions about whether this research would yield different ratings of the candidate. Many think, ‘No, it’s the same person – how can the way I think about her affect my ratings, given both her race and gender are visible?’ That’s why it’s meaningful and interesting research to be doing, and impactful – if we can all have different gut instincts, then it means that we really do need the science to tell us.”
The researchers recruited 307 men via the online work marketplace Mechanical Turk to take part in a carefully constructed experiment. These participants were told to imagine that they were the hiring manager for “a tutoring company that works with high school students”. Their task was to review applicants’ materials to hire a tutor for intermediate to advanced computer programming in HTML, Java and C++. The participants were randomly assigned to see one of three versions of the applicant’s profile.
In one version, the applicant was referred to as “a woman” named “Christine”. In another, she was referred to as “an Asian American” called “Chang”. In the control version, she was referred to simply as “the applicant”. The rest of the information, including the “fact” that this person was born and raised in Fremont, California and had nine months’ work experience as a peer counsellor at the university career centre, was identical across all three versions of the profile. She had a solid level of tech skills and familiarity with the requisite coding languages and platforms.
The study participants were asked to review this person’s profile alongside her purported profile picture (again, identical across all three versions) and rate her overall (on a scale of 1–7 where 1 meant “not very skilled” and 7 “very skilled”). They were also asked to rate their willingness to hire the candidate and how much she should be paid if she were hired.
The results were striking: the evaluators who had been presented with the version that emphasised the applicant’s Asian background evaluated her most positively, while those given the version that stressed the fact that she was female evaluated her the least positively. “Chang”, the Asian-American applicant, fared better than the woman called Christine.
“Chang” was viewed as more highly skilled and more hireable. “Christine” was perceived to have fewer skills and to be less hireable – this despite the fact that their stated skills were absolutely identical.
In two other studies, making gender or race salient even had a direct effect on how much participants recommended the applicant be paid. When male evaluators considered how much to pay the applicant for the tech position, they offered her over US$1 per hour less pay when they had been primed with her gender rather than race. If it were a full-time job (40 hours per week), this would amount to a difference of US$2,080 (£1,584.54) per year.
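The annual figure follows from simple full-time arithmetic. A minimal sketch, taking the hourly gap at exactly US$1 (an assumption, since the study reports “over US$1 per hour”):

```python
# Hypothetical sketch of the pay-gap arithmetic described above.
hourly_gap_usd = 1.00   # assumption: "over US$1 per hour" taken at exactly $1
hours_per_week = 40     # full-time, as stated in the article
weeks_per_year = 52

annual_gap_usd = hourly_gap_usd * hours_per_week * weeks_per_year
print(annual_gap_usd)   # 2080.0 – matching the US$2,080 per year cited
```

Any larger hourly gap would scale the annual difference proportionally.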
One of the studies Dr Rattan and her colleagues conducted included both women and men, who evaluated an Asian female applicant for a tutoring post in either a stereotypically male field (computer programming) or a stereotypically female field (English literature), based on her LinkedIn profile. Overall, men (but not women) rated the candidate’s abilities lower, were less willing to hire her, and recommended paying her significantly less when she had applied for a tutoring job in computer programming rather than in English literature – even though her skills were the same.
What are the implications – how can companies become fairer in their hiring decisions? Dr Rattan has three recommendations.
1. Accept the reality. “Companies must recognise that, through no fault of any specific individual, these stereotypes are in the minds of people in organisations today because they grew up in the broader social context that reinforces them. If you’re not doing something to offset those stereotypes, you’re likely allowing space for them to affect decision-making. Acknowledge that these things exist. Whether or not you have a specific pay disparity that you’re trying to address, you have to contend with the potential influence of stereotypes in your hiring.”
2. Judge on specific criteria. “When the dimensions of judgement are very specific, we do not find differences in how people rate the candidate based on whether they’re thinking about her in terms of her gender or race. So when getting content experts’ feedback on potential job candidates, make sure you’re asking about facts – ‘How much technical skill does this candidate possess?’ – not how they feel about this person in general. If you say, ‘Overall, how much did you like this candidate?’ you’re opening up room for bias to intrude.”
3. Call out bias. “Virtually no one wants to shift their ratings based on irrelevant factors, and it’s hard for us to believe we’re susceptible to being influenced by stereotypes. HR departments can do much more to call it out when it’s happening. If, on all the individual criteria, you’ve rated a candidate highly and yet with your overall ratings there’s not a match, how do you explain that? Push people into more thoughtful processes driven by the hard data. Ask yourself if men’s gender or race is giving them a boost relative to their actual qualifications.”
What can real-life job applicants do to get around all of this? Very little, unfortunately. “It’s really not up to you,” says Dr Rattan. “You could be a woman in tech and you’re doing a lot of things to signal your competence. Maybe you’re using your initials or shortening your name. But if a man reviews your application and you’re the only woman in a pool of 20 men, they’re going to notice your gender. And if you’re only one of two Asians in a pool of White people, the reviewer is going to notice your race.
“Organisations ought to have been addressing issues of bias in hiring and pay for many years. By not doing so, they have allowed meaningful disparities to emerge. It’s therefore something they need to take decisive, compensatory action to fix as soon as possible.”