
Racial discrimination in hiring remains a persistent problem

Despite new laws and changing attitudes, little has changed in 25 years

Media Information

  • Release Date: January 31, 2023

Media Contacts

Max Witynski

  • (847) 467-6105

Journal: Proceedings of the National Academy of Sciences

EVANSTON, Ill. --- Decades after hiring discrimination was made illegal in many Western countries, experts predicted it would gradually disappear. But according to a major new meta-analysis from Northwestern University, discrimination in hiring has remained a persistent problem.

In fact, with few exceptions, rates of hiring discrimination have changed little since the 1990s, according to a new paper published Jan. 31 in the Proceedings of the National Academy of Sciences. Lincoln Quillian, a professor of sociology at Northwestern, co-authored the work with his former student John J. Lee, a recent graduate of the university’s doctoral program in sociology.

Quillian and Lee analyzed 90 studies involving 174,000 total job applications from Canada, France, Germany, Great Britain, the Netherlands and the United States to study trends in hiring discrimination among four racial-ethnic origin groups: African or Black, Middle Eastern or North African, Latin or Hispanic, and Asian. The oldest study in the analysis was a British study from 1969, and the most recent was a U.S. study from 2019. 

“The biggest takeaway was that on average, there has been no change in hiring discrimination when aggregating all six countries together,” Quillian said, despite laws passed in the European Union during the study period that aimed to reduce hiring discrimination. 

In four of the six countries and for three of the four racial-ethnic groups examined, discrimination roughly held stable. The researchers did find a few significant trends, however, that were both positive and negative.

France was the only country with a significant decline in discrimination, from very high levels in the 2000s to what are still high levels today, but in line with those of peer nations. There was a slight trend toward higher discrimination rates in all other countries except Canada, though the upward trend was only statistically significant in the Netherlands. 

“Several countries had a slight upward trend, so it was not unique to the Netherlands. It’s possible that more broadly, this increase is tied to things like the growth of right-wing politics and anti-immigrant sentiment,” Quillian said.

Among the racial-ethnic origin groups studied, most saw a constant rate of discrimination, except for Middle Eastern/North African job applicants. That group saw an uptick in hiring discrimination in the 2000s and 2010s as compared to the 1990s, which the researchers said may be attributable to rising bias against this group after terrorist attacks such as 9/11, which occurred during this period.

Other groups for which hiring trends were analyzed included African/Black, Asian and Latin American/Hispanic applicants. Relative to white applicants, applicants of color from all backgrounds in the study had to submit about 50% more applications per callback on average, Quillian said, with some variation between countries and groups. Callbacks are defined as employers expressing interest in interviewing candidates.

This means that if a white applicant must apply to 20 jobs on average to get a callback, an applicant of color would need to apply to 30. Further discrimination can occur later in the hiring process, but was not studied in this case, according to Quillian.
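As a rough sketch of that arithmetic (the callback rates below are the illustrative figures from the example above, not estimates from the meta-analysis):

```python
# Illustrative arithmetic only: callback rates chosen to match the example in the text.
white_callback_rate = 1 / 20        # one callback per 20 applications
minority_callback_rate = 1 / 30     # one callback per 30 applications

apps_per_callback_white = 1 / white_callback_rate        # 20
apps_per_callback_minority = 1 / minority_callback_rate  # 30

# Relative disadvantage: how many more applications are needed per callback.
extra = apps_per_callback_minority / apps_per_callback_white - 1
print(f"Extra applications needed per callback: {extra:.0%}")  # 50%
```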

The 90 studies in the analysis were conducted in a similar manner, with minor differences. In most cases, researchers submitted fake application materials to real job openings, tweaking the materials slightly to include racial indicators along with otherwise similar credentials to ensure that differences in callback rates could be attributed to discrimination, rather than candidate qualifications.

Most studies analyzed (about 75%) were conducted since the 1990s, though trends extend back to the 1970s in France, Great Britain and the Netherlands.

Overall, Quillian said, it’s disappointing to see little progress despite anti-discrimination legislation, changing attitudes against open discrimination since the 1970s, and corporate and government policies that have sought to improve workforce diversity. 

According to the authors, further efforts are needed to have a real impact on hiring discrimination. Quillian believes that progress is possible if anti-discrimination policies are enforced, employers are held accountable, and mentorship programs support employees of color who are seeking promotion and advancement in particular fields. 

“Policies that require employers to keep track of and make publicly available the race or ethnicity of the people they're hiring make a lot of sense,” Quillian said. Such policies, he noted, can also encourage companies to take a second look at their own numbers. If their hiring patterns show a preference for white candidates, there is a risk of both bad publicity and discrimination lawsuits. 

Though there has been generational change over the last 50 years, with younger generations reporting less conservative racial attitudes than older ones, that change hasn’t been reflected in reduced hiring discrimination, Quillian said. 

“To make hiring discrimination a thing of the past, we need to be thoughtful and committed to enforcing the law and making changes in hiring practices to promote diversity,” he said.


Global HR Lawyers


Discrimination and bias in AI recruitment: a case study

31 October 2023

Barely a day goes by without the media reporting the potential benefits of or threats from AI. AI is being used more and more in workplace decisions: to make remuneration and promotion decisions, allocate work, award bonuses, manage performance and make dismissal decisions. One of the common concerns is the propensity of AI systems to return biased or discriminatory outcomes. By working through a case study about the use of AI in recruitment, we examine the risks of unlawful discrimination and how that might be challenged in the employment tribunal.

Our case study begins with candidates submitting job applications which are to be reviewed and “profiled” by an AI system (the automated processing of personal data to analyse or evaluate people, including to predict their performance at work). We follow this through to the disposal of the resulting employment tribunal claims brought by the unsuccessful candidates, and examine the risks of unlawful discrimination in using these systems. What emerges are the practical and procedural challenges for claimants and respondents (defendants) alike: litigation procedures are ill-equipped for an automated world.

Bias and discrimination

Before looking at the facts, we consider the concepts of bias and discrimination in automated decision-making.

The Discussion Paper published for the AI Safety Summit, organised by the UK government and held at Bletchley Park on 1 and 2 November 2023, highlighted the risks of bias and discrimination and commented:

“Frontier AI models can contain and magnify biases ingrained in the data they are trained on, reflecting societal and historical inequalities and stereotypes. These biases, often subtle and deeply embedded, compromise the equitable and ethical use of AI systems, making it difficult for AI to improve fairness in decisions. Removing attributes like race and gender from training data has generally proven ineffective as a remedy for algorithmic bias, as models can infer these attributes from other information such as names, locations, and other seemingly unrelated factors.”

What is bias and what is discrimination?

Much attention has been paid to the potential for bias and discrimination in automated decision-making. Bias and discrimination are not synonymous but often overlap. Not all bias amounts to discrimination and not all discrimination reflects bias.

A solution can be biased if it leads to inaccurate or unfair outcomes. A solution can be discriminatory if it disadvantages certain groups. A solution is unlawfully discriminatory if it disadvantages protected groups in breach of equality law.

How can bias and discrimination taint automated decision-making?

Bias can creep into an AI selection tool in a number of ways. For example, there can be: historical bias; sampling bias; measurement bias; evaluation bias; aggregation bias; and deployment bias.

To give a recent example, the shortlist of six titles for the 2023 Booker Prize included three titles by authors with the first name “Paul”. An AI programme asked to predict works to be shortlisted for this prize is likely to identify being called “Paul” as a key factor. Of course, being called Paul will not have contributed to the shortlisting; the tool would be picking up a correlating factor that played no part in the judges’ decisions, and its prediction would therefore be biased because it would be inaccurate and unfair. In this case the bias is also potentially discriminatory, as Paul is generally a male name, and possibly discriminatory on grounds of ethnicity and religion too.
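A toy sketch of how this happens, using invented data rather than any real prize or recruitment system: a scorer that simply measures how over-represented a feature is among past “successes” will treat a first name as predictive, even though the name played no part in the decisions.

```python
# All data below is invented purely to illustrate spurious correlation.
pool = [
    # (first_name, years_experience_band, shortlisted)
    ("Paul", "10+", True), ("Paul", "5-10", True), ("Paul", "10+", True),
    ("Sarah", "10+", True), ("Chinaza", "5-10", True), ("Tan", "10+", True),
    ("Maria", "0-5", False), ("Aisha", "0-5", False), ("Omar", "5-10", False),
    ("James", "0-5", False), ("Priya", "0-5", False), ("Paul", "0-5", False),
]

def lift(value, index):
    """How over-represented a feature value is among shortlisted candidates
    compared with the pool overall (a crude 'predictiveness' score)."""
    overall = sum(1 for row in pool if row[index] == value) / len(pool)
    shortlisted = [row for row in pool if row[2]]
    among_shortlisted = sum(1 for row in shortlisted if row[index] == value) / len(shortlisted)
    return among_shortlisted / overall

print("lift('Paul'):", round(lift("Paul", 0), 2))   # > 1: looks 'predictive', but is spurious
print("lift('10+'): ", round(lift("10+", 1), 2))    # > 1: plausibly a genuine factor
```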

An algorithm can be tainted by historical bias or discrimination. AI algorithms are trained using past data. A recruitment algorithm takes data from past candidates, and there will always be a risk of under-representation of particular groups in that training data. Bias and discrimination are even more likely to arise from the definition of success which the algorithm seeks to replicate, based on successful recruitment in the past. There is an obvious risk of past discrimination being embedded in any algorithm.

This process presents the risk of random correlations being identified by the AI algorithm, and there are several reported examples of this happening. One example from several years ago is an algorithm which identified being called Jared as one of the strongest correlates of success in a job. Correlation is not always causation.

An outcome may potentially be discriminatory but not unfair or inaccurate, and so not biased. If, say, a recruitment application concluded that a factor in selecting the best candidates was having at least ten years’ relevant experience, this would disadvantage younger candidates, and a younger candidate might be excluded even if, in all other respects, they would be a strong candidate. This would be unlawful indirect age discrimination unless it could be justified on the facts. It would not, however, necessarily be a biased outcome.

There has been much academic debate on the effectiveness of AI in eliminating the sub-conscious bias of human subjectivity. Supporters argue that any conscious or sub-conscious bias is much reduced by AI. Critics argue that AI merely embeds and exaggerates historic bias.

Currently in the UK there are no AI-specific laws regulating the use of AI in employment. The key relevant provisions at present are equality laws and data privacy laws. We have written about these in detail here. This case study focuses on discrimination claims under the Equality Act 2010.

The case study

Acquiring a shortlisting tool

Money Bank gets many hundreds of applicants every year for its annual recruitment of 20 financial analysts to be based in its offices in the City of London. Shortlisting takes time and costly HR resources. Further, Money Bank is not satisfied with the suitability of the candidates shortlisted each year.

Money Bank, therefore, acquires an AI shortlisting tool, GetBestTalent, from a leading provider, CaliforniaAI, to incorporate into its shortlisting process.

CaliforniaAI is based in Silicon Valley in California and has no business presence in the UK. Money Bank is attracted by CaliforniaAI’s promises that GetBestTalent will identify better candidates, more quickly and more cheaply than relying on human decision-makers. Money Bank is also reassured that CaliforniaAI’s publicity material states that GetBestTalent has been audited to ensure that it is bias- and discrimination-free.

Money Bank was sued recently by an unsuccessful job applicant claiming that they were unlawfully discriminated against when rejected for a post. This case was settled but proved costly and time-consuming to defend. Money Bank wants, at all costs, to avoid further claims.

Data protection impact assessment

Money Bank’s Data Protection Officer (DPO) conducts a data protection impact assessment (DPIA) into the proposed use by Money Bank of GetBestTalent given the presence of various high-risk indicators, including the innovative nature of the technology and profiling. Proposed mitigations following this assessment include bolstering transparency around the use of automation by explaining clearly that it will form part of the shortlisting process; ensuring that an HR professional will review all successful applications; and confirming with CaliforniaAI that the system is audited for bias and discrimination. On that basis, the DPO considers that the shortlisting decisions are not “solely automated” and is satisfied that Money Bank’s proposed use of the system complies with UK data protection laws (this case study does not consider the extent to which the DPO is correct in considering Money Bank’s GDPR obligations to have been satisfied in this case).

Money Bank enters into a data processing agreement with CaliforniaAI that complies with UK GDPR requirements. Money Bank also notes that CaliforniaAI is self-certified as compliant with the UK extension to the EU-US Data Privacy Framework.

AI and recruitment

GetBestTalent is an off-the-shelf product and CaliforniaAI’s best seller. It has been developed for markets globally and used for many years though it is updated by the developers periodically. The use of algorithms, and the use of AI in HR systems specifically, is not new but has been growing rapidly in recent years. It is being used at different stages of the recruitment process but one of the most common applications of AI by HR is to shortlist vast numbers of candidates down to a manageable number.

AI shortlisting tools can be bespoke (developed specifically for the client); off-the-shelf; or based on an off-the-shelf system but adapted for the client. The GetBestTalent algorithm is based on “supervised learning”, where the input data and desired output are known and the machine learning method identifies the best way of achieving the output from the inputted data. The application is “static” in that it only changes when CaliforniaAI’s developers make changes to the algorithm. Other systems, known as dynamic systems, can be more sophisticated and continuously learn how to make the algorithm more effective at achieving its purpose.
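A minimal sketch of what “supervised learning” on past shortlisting decisions looks like in practice. The features, data and model choice below are hypothetical stand-ins, not details of GetBestTalent; the point is simply that the model is fitted once on historical human decisions and then applied unchanged (“static”) until the vendor retrains it.

```python
# Hypothetical illustration of a static, supervised shortlisting model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical applications: [years_experience, num_career_breaks, degree_grade (0-4)]
X_train = np.array([
    [12, 0, 3], [9, 0, 4], [7, 1, 3], [3, 0, 2],
    [10, 2, 4], [2, 1, 1], [6, 0, 2], [1, 0, 3],
])
# Whether each was shortlisted by human recruiters in the past (the "desired output").
y_train = np.array([1, 1, 1, 0, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)   # trained once: a static model

# Scoring this year's applicants; the shortlisting threshold is a deployment choice.
X_new = np.array([[8, 2, 4], [4, 0, 3]])
print(model.predict_proba(X_new)[:, 1])  # probability of being shortlisted
```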

Sifting applicants

This year 800 candidates apply for the 20 financial analyst positions at Money Bank. Candidates are all advised that Money Bank will be using automated profiling as part of the recruitment process.

Alice, Frank and James are unsuccessful, and all considered themselves strong candidates with the qualifications and experience advertised for the role. Alice is female, Frank is black, and James is 61 years old. Each is perplexed at their rejection and concerned that it was unlawfully discriminatory. All three are suspicious of automated decision-making and have read or heard about concerns over these systems.

Discrimination claims in the employment tribunal

Alice, Frank and James each contact Money Bank challenging their rejection. Money Bank asks one of its HR professionals, Nadine, to look at each of the applications. There is little obvious to differentiate these applications from those of the shortlisted candidates, and Nadine cannot see that they are obviously stronger, so she confirms the results of the shortlisting process.

The Bank responds to Alice, Frank and James saying that it has reviewed the rejections and that it uses a reputable AI system which it has been reassured does not discriminate unlawfully, but that it does not have any more information because the criteria used are developed by the algorithm and are not visible to Money Bank. The data processing agreement between Money Bank and CaliforniaAI requires CaliforniaAI (as processor) to assist Money Bank to fulfil its obligation (as controller) to respond to rights requests, but does not specifically require CaliforniaAI to provide detailed information on the logic behind the profiling nor its application to individual candidates.

Alice, Frank and James all start employment tribunal proceedings in the UK claiming, respectively, sex, race and age discrimination in breach of the UK’s Equality Act. They:

  • claim direct and indirect discrimination against Money Bank; and
  • sue CaliforniaAI for inducing and/or causing Money Bank to discriminate against them.

Despite CaliforniaAI having no business presence in the UK and despite the process being more complicated, the claimants can bring proceedings against an overseas party in the UK employment tribunal.

Unless the claimants are aware of each other’s cases, in reality, these cases are likely to proceed independently. However, for the purposes of this case study, all three approach the same lawyer who successfully applies for the cases to be joined and heard together.

Alice, Frank and James recognise that, despite their suspicions, they will need more evidence to back up their claims. They, therefore, contact Money Bank and CaliforniaAI asking for disclosure of documents with the data and information relevant to their rejections.

They also write to Money Bank and CaliforniaAI with data subject access requests (DSARs) making similar requests for data. These requests are made under their rights in UK data protection law, over which the employment tribunal has no jurisdiction, and so are independent of their employment tribunal claims.

Seeking disparate impact data

In order to seek to establish discrimination, each candidate requests data:

  • Alice asks Money Bank for documents showing the data on both the total proportion of candidates, and the proportion of successful candidates, who were women. This is needed to establish her claim of indirect sex discrimination.
  • Frank asks for the same in respect of the Black, Black British, Caribbean or African ethnic group.
  • James asks for the data for both over 60-year-olds and over 50-year-olds.

They also ask CaliforniaAI for the same data from all exercises in which GetBestTalent has been used globally.

Would a tribunal order a disclosure request of this nature? In considering applications for the provision of information or the disclosure of documents or data, an employment tribunal must consider the proportionality of the request. It is more likely to grant applications which require extensive disclosure or significant time or cost to provide the requested information where the sums claimed are significant.

In this case, Money Bank has the information sought about the sex, ethnicity and age of all candidates and of those who were successful, which it records as part of its equality monitoring procedures. Providing it, therefore, would not be burdensome. In other cases, the employer may not have this data. CaliforniaAI has the means to extract the data sought, at least from many of the uses of GetBestTalent. However, it would be a time-consuming and costly exercise to do this.

Both respondents refuse to provide any of the data sought. Money Bank argues that this is merely a fishing exercise, as none of the claimants has any evidence to support a discrimination claim. It also argues that the system has been audited for discrimination and, therefore, the claims are vexatious. CaliforniaAI also regards the information sought as a trade secret (of both itself and its clients) and relies on the time and cost involved in gathering it.

In response the claimants apply to the employment tribunal for an order requiring the respondents to provide the data and information requested.

The tribunal orders Money Bank to provide the claimants with the requested documents. It declines, however, to make the order requested against CaliforniaAI.

In theory, the tribunal has the power to make the requested order against CaliforniaAI. Although it cannot make such an order against an overseas person which is not a party to the litigation, in this case CaliforniaAI is a party. However, the tribunal regards the request as manifestly disproportionate and gives it short shrift.

The disparate impact data does not amount to the individuals’ personal data so is not relevant to their DSARs.

Seeking equality data

The claimants also request from Money Bank documents showing details of: a) the gender, ethnic and age breakdown (as the case may be) of the Bank’s workforce in the UK; b) the equality training of the managers connected with the decision to use the GetBestTalent solution; and c) any discrimination complaints made against Money Bank in the last five years and their outcome.

Money Bank refuses all requests as it argues that the claim relates to the discriminatory impact of CaliforniaAI’s recruitment solution so that all these other issues are irrelevant. It could provide the information relatively easily but is mindful that the Bank has faced many discrimination claims in recent years and has settled or lost a number so does not want to highlight this.

The tribunal refuses to grant the requests for the equality data as it considers it unnecessary for the claimants to prove their case. The claimants will, however, still be able to point to Money Bank’s failure to provide this information in seeking to draw inferences. The tribunal also refuses the request for details of past complaints (though details of tribunal claims which proceeded to a hearing are available from a public register).

The tribunal does ask Money Bank to provide details of the equality training provided to the relevant managers as it was persuaded that this is relevant to the issues to be decided.

This information does not amount to the individuals’ personal data so is not relevant to their DSARs.

Disclosing the algorithm and audit

The claimants also ask CaliforniaAI to provide them with:

  • a copy of the algorithm used in the shortlisting programme;
  • the logic and factors used by the algorithm in achieving its output (i.e. explainability information relating to their individual decisions); and
  • the results of the discrimination audit.

In this case, CaliforniaAI has the information to explain the decisions, but this is not auto-generated (as it can be with some systems) or provided to Money Bank. Money Bank’s contract with CaliforniaAI does not explicitly require it to provide this information.

CaliforniaAI refuses to provide any of the requested information on the basis that these amount to trade secrets and also that the code would be meaningless to the claimants. The claimants counter that expert witnesses should be able to consider the code as medical experts would where complex medical evidence is relevant to tribunal proceedings.

The tribunal judge is not persuaded by the trade secret argument. If disclosed, the code would be in the bundle of documents to which observers from the general public would have access (though they could not copy or remove it). The tribunal has wide powers to regulate its own procedure and, in theory, could take steps in exceptional cases to limit public access to trade secrets.

However, the tribunal decides not to order disclosure of the code on the grounds of proportionality. It spends more time deliberating over the “explainability” information and the details of the auditing of the system.

Ultimately, it decides not to require disclosure of either. It considers that, in so far as the direct discrimination claims are concerned, it requires more than the claimants’ assertion that they have been directly discriminated against to make the requested order proportionate. If the sums likely to be awarded had been greater, it may well have reached a different decision here. In so far as Alice’s indirect claim is concerned, the explainability information and audit are more likely to be relevant to Money Bank’s defence than to Alice’s claim, so the tribunal leaves it to Money Bank to decide whether or not to disclose them.

Arguably, UK GDPR requires Money Bank to provide the explainability information in response to the data subject access request, and requires Money Bank’s data processing agreement with CaliforniaAI to oblige the American company to provide it. However, both respond to the DSARs refusing to provide this information (this case study does not consider the extent to which they might be justified in doing so under UK GDPR).

What did the data show?

The data provided by Money Bank shows that of the 800 job applicants: 320 were women (40%) and 480 were men (60%); 80 described their ethnicity as Black, Black British, Caribbean or African (10%); and James was the only applicant over the age of 50.

Of the 320 women, only four were shortlisted (20% of those shortlisted), whereas 16 men were shortlisted (80%). Of the 80 applicants from Frank’s ethnic group, three were shortlisted (15% of those shortlisted). The data therefore shows that the system had a disparate impact against women but not against Black, Black British, Caribbean or African candidates. There was no data to help James with an indirect discrimination claim.

   
Group | Applicants | % of applicants | Shortlisted | % of shortlisted
Women | 320 | 40% | 4 | 20%
Black, Black British, Caribbean or African | 80 | 10% | 3 | 15%
Over 50 | 1 | <1% | 0 | -
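A sketch of the selection-rate arithmetic behind the table, using the case study's figures (the "men" and "other ethnic groups" rows are derived by subtraction from the totals). UK tribunals ask whether a group suffers a particular disadvantage rather than applying a fixed numeric threshold, so the ratios below are only a descriptive summary.

```python
# Figures from the case study; "men" and "other ethnic groups" derived by subtraction.
groups = {
    "women": (320, 4),
    "men": (480, 16),
    "Frank's ethnic group": (80, 3),
    "other ethnic groups": (720, 17),
}

rates = {name: shortlisted / applicants for name, (applicants, shortlisted) in groups.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.2%} shortlisted")

print("women vs men selection-rate ratio:",
      round(rates["women"] / rates["men"], 2))                                   # ~0.38: adverse impact
print("Frank's group vs other groups ratio:",
      round(rates["Frank's ethnic group"] / rates["other ethnic groups"], 2))    # ~1.59: none on this measure
```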

After consideration of the data, Frank and James abandon their indirect discrimination claims.

Establishing discrimination: who needs to prove what?

Indirect discrimination

Alice needs to establish:

  • a provision, criterion or practice (PCP);
  • that the PCP has a disparate impact on women;
  • that she is disadvantaged by the application of the PCP; and
  • that the PCP is not objectively justifiable.

1. Provision, criterion or practice

Alice relies on the AI application used by Money Bank as her PCP.

If the decision to reject her had been “explainable” then, as is the case with most human decisions, the PCP could also be the actual factor which disadvantaged her.

Putting this into practice, let’s say it could have been established from the explainability information that the algorithm had identified career breaks as a negative factor. Alice has had two such breaks and might, in such circumstances, allege that this was unlawfully indirectly discriminatory. A tribunal may well accept that such a factor disadvantages women without needing data to substantiate this. Money Bank would then need to show either that this had not disadvantaged Alice or that such a factor was objectively justifiable.

Neither defence would be easy in this case. It is possible that the respondents could rely on a counterfactual to show that Alice had not been disadvantaged by her career breaks. This would mean running the tool against an alternative set of facts: here, Alice’s application without the career breaks, to show that she would not have been shortlisted in any event.
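A sketch of that counterfactual exercise with a hypothetical stand-in for the scoring model (the real GetBestTalent logic is not visible to Money Bank, which is precisely the problem): score the application as submitted, then again with the career-break feature removed, and compare the outcomes.

```python
# Hypothetical scoring function and threshold, invented purely to illustrate the mechanics.
def shortlist_score(app: dict) -> float:
    score = 0.2
    score += 0.05 * app["years_experience"]
    score -= 0.15 * app["career_breaks"]   # the contested factor
    score += 0.10 * app["degree_grade"]
    return score

THRESHOLD = 0.9  # assumed shortlisting cut-off

alice = {"years_experience": 8, "career_breaks": 2, "degree_grade": 4}
alice_no_breaks = {**alice, "career_breaks": 0}

for label, app in [("as submitted", alice), ("counterfactual, no breaks", alice_no_breaks)]:
    s = shortlist_score(app)
    print(f"{label}: score {s:.2f}, shortlisted: {s >= THRESHOLD}")

# If the counterfactual is also below the threshold, the respondents can argue the career
# breaks did not disadvantage Alice; if (as here) it clears the threshold, the factor did
# make the difference and the justification question becomes central.
```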

In our case, however, Money Bank does not have an explanation for Alice’s failure to be shortlisted.

2. Disparate impact

Alice relies on the data to show a disparate impact.

The respondents could seek to argue that there is no disparate impact, relying on the auditing undertaken by CaliforniaAI across larger aggregate numbers, and that Money Bank’s data reflects a random outcome. A tribunal will certainly not accept this argument at face value. Further, the legal tests in the UK and the US are not the same, so any auditing done in the US will be of reduced value.

In such a case, the respondents could seek to introduce data from its verification testing or the use of the platform by other employers. This may then require expert evidence on the conclusions to be drawn from the audit data.

In our case, neither the audit nor evidence of the impact of GetBestTalent on larger numbers is before the tribunal. Indeed, here, CaliforniaAI refused to disclose them.

3. Disadvantages protected group and claimant

Alice does not have to prove why a PCP disadvantages a particular group. The Supreme Court in Essop v Home Office (2017) considered a case where black candidates had a lower pass rate than other candidates under a skills assessment test. The claimants were unable to explain why the test disadvantaged that group, but this was not a bar to establishing indirect discrimination.

The PCP (the GetBestTalent solution) clearly disadvantages Alice personally as her score was the reason she was not shortlisted.

4. Justification

Alice satisfies the first three steps in proving her case.

The burden will then pass to Money Bank to show that the use of this particular application was justified – that it was a proportionate means of achieving a legitimate aim.

What aims could Money Bank rely on? Money Bank argues that its legitimate aim is decision-making which is quicker, cheaper, results in better candidates and discriminates less than with human-made decisions.

Saving money is tricky: it cannot be a justification to discriminate in order to save money, but cost can be relevant alongside other aims. Nonetheless, Money Bank is likely to establish a legitimate aim for the introduction of automation in its recruitment process based on the need to make better and quicker decisions and avoid subconscious bias. The greater challenge will be showing that the use of this particular solution was a proportionate means of achieving those aims.

In terms of the objective of recruiting better candidates, Money Bank would have to do more than merely assert that the use of GetBestTalent meant higher quality short-listed candidates. It might, for example, point to historic problems with the quality of successful candidates. This would help justify automation, but Money Bank would still have to justify the use of this particular system.

Money Bank seeks to justify its use of GetBestTalent, and to satisfy the proportionality test, by relying on its due diligence. However, it did no more than put the question to CaliforniaAI, which reassured Money Bank that the system had been audited.

It also points to the human oversight under which an HR professional reviews all candidates the system proposes to shortlist, to verify the decision. The tribunal is unimpressed with this oversight, as it did not extend to the unsuccessful applications.

Pulling this together, would a tribunal accept the use of this platform satisfied the objective justification test? This is unlikely. In all likelihood, Alice would succeed, and the matter would proceed to a remedies hearing to determine her compensation.

Direct discrimination

Alice is also pursuing a direct sex discrimination claim and Frank and James, not deterred by the failure to get their indirect discrimination claims off the ground, have also continued their direct race and age discrimination claims respectively. The advantage for Alice in pursuing a direct discrimination claim is that this discrimination (unlike indirect discrimination) cannot be justified, and the fact of direct discrimination is enough to win her case.

Each applicant has to show that they were treated less favourably (i.e., not shortlisted) because of their protected characteristic (sex, race, age respectively). To do this, the reason for the decision not to shortlist must be established.

They have no evidence of the reason, but this does not necessarily defeat their claims. Under UK equality law, the burden of proof can, in some circumstances, transfer so that it is for the employer to prove that it did not discriminate. To prove this, the employer would then have to establish the reason and show that it was not the protected characteristic of the claimant in question. In this case, this would be very difficult for Money Bank as it does not know why the candidates were not shortlisted.

What is required for the burden of proof to transfer? The burden of proof will transfer if there are facts from which the court could decide that discrimination occurred. This is generally paraphrased as the drawing of inferences of discrimination from the facts. If inferences can be drawn, the employer will need to show that there was not discrimination.

Prospects of success

Looking at each claimant in turn:

  • Frank: he will struggle to establish inferences, as there is no disparate impact from which to infer less favourable treatment. The absence of any disparate impact does not mean that Frank could not have been directly discriminated against, but without more his claim is unlikely to get anywhere. He does not have an explanation for the basis of the decision or the ethnic breakdown of Money Bank’s current workforce, and has limited information about Money Bank’s approach to equality. He cannot prove facts which, in the absence of an explanation, show prima facie discrimination, so his claim fails.
  • James: his claim is unlikely to be rejected as quickly as Frank’s, as the data neither proves nor disproves his claim. James could try to rely on the absence of older workers in the workforce, any lack of training or monitoring, and past claims if he had that information, as well as the absence of an explanation for his rejection, but, in reality, his claim becomes pretty hopeless.
  • Alice: she may be on stronger ground. She can point to the disparate impact data as a ground for inferences, but this will not normally be enough on its own to shift the burden of proof. Alice can also point to the opaque decision-making. Money Bank could rebut this if the decision were sufficiently “explainable” that the reason for Alice’s rejection could be identified. However, it cannot do so here. The dangers of inexplicable decisions are obvious.

Would the disparate impact and opaqueness be enough to draw inferences? Probably not, particularly if Alice does not have any of the equality data or information about past discrimination claims referred to above, and the equality training information does not show a total disregard for equality. She could try to get information about these in cross-examination of witnesses, and could point to Money Bank’s failure to provide the equality data as grounds for drawing inferences and reversing the burden of proof. However, after carefully balancing the arguments, the tribunal decides in our case that Alice cannot prove facts which, in the absence of an explanation, show prima facie discrimination. This means that her direct discrimination claim fails.

If inferences had been drawn and Money Bank had been required to demonstrate that the protected characteristic in question had not been the reason for its decision, Money Bank would have argued that it anonymises the candidate data and ensures that the age, sex and ethnicity of candidates are omitted, and that, therefore, the protected characteristic could not have informed the decision. However, as studies have shown how difficult it is to suppress this information, the tribunal would give this argument short shrift. If inferences had been drawn, Alice would, in all likelihood, have succeeded with her direct discrimination claim as well as her indirect discrimination claim.

Causing or inducing discrimination

If Money Bank is liable then CaliforniaAI is likely to also be liable for causing/inducing this unlawful discrimination by supplying the system on which Money Bank based its decision. CaliforniaAI cannot be liable if Money Bank is not liable.

The case of Alice, Frank and James highlights the real challenges claimants face in winning discrimination claims where AI solutions have been used in employment decision-making. The case also illustrates the risks and pitfalls for employers using such solutions, and how both existing data protection and equality laws are ill-suited to regulating automated employment decisions.

Looking forward, the UK and other countries are debating the appropriate level of regulation of AI in areas such as employment. It is to be hoped that any regulation recognises and embraces the inevitability of increased automation but, at the same time, ensures that individuals’ rights are protected effectively.



  • Review Article
  • Open access
  • Published: 13 September 2023

Ethics and discrimination in artificial intelligence-enabled recruitment practices

Zhisheng Chen (ORCID: orcid.org/0000-0002-0854-2547)

Humanities and Social Sciences Communications, volume 10, Article number: 567 (2023)


  • Business and management
  • Science, technology and society

This study aims to address the research gap on algorithmic discrimination caused by AI-enabled recruitment and explore technical and managerial solutions. The primary research approach used is a literature review. The findings suggest that AI-enabled recruitment has the potential to enhance recruitment quality, increase efficiency, and reduce transactional work. However, algorithmic bias results in discriminatory hiring practices based on gender, race, color, and personality traits. The study indicates that algorithmic bias stems from limited raw data sets and biased algorithm designers. To mitigate this issue, it is recommended to implement technical measures, such as unbiased dataset frameworks and improved algorithmic transparency, as well as management measures like internal corporate ethical governance and external oversight. Employing Grounded Theory, the study conducted survey analysis to collect firsthand data on respondents’ experiences and perceptions of AI-driven recruitment applications and discrimination.


Introduction

Technological innovation has revolutionized work across the first through fourth industrial revolutions. The fourth industrial revolution introduced disruptive technologies like big data and artificial intelligence (Zhang and Chen, 2023 ). The advancement of data processing and big data analytics, along with developments in artificial intelligence, has improved information processing capabilities, including problem-solving and decision-making (Raveendra et al., 2020 ). With the increasing normalization and timely usage of digital technologies, there is a potential for future higher-level implementation of AI systems (Beneduce, 2020 ).

AI can provide faster and more extensive data analysis than humans, achieving remarkable accuracy and establishing itself as a reliable tool (Chen, 2022 ). It can collect and evaluate large amounts of data that may exceed human analytical capacities, enabling AI to provide decision recommendations (Shaw, 2019 ).

Modern technologies, including artificial intelligence solutions, have revolutionized work and contributed to developing human resources management (HRM) for improved outcomes (Hmoud and Laszlo, 2019 ). One significant area where their impact is felt is in the recruitment process, where AI implementation can potentially provide a competitive advantage by enabling a better understanding of talent compared to competitors, thereby enhancing the company’s competitiveness (Johansson and Herranen, 2019 ).

AI receives commands and data input through algorithms. While AI developers believe their algorithmic procedures simplify hiring and mitigate bias, Miasato and Silva (2019) argue that algorithms alone cannot eliminate discrimination. The decisions made by AI are shaped by the initial data it receives. If the underlying data is unfair, the resulting algorithms can perpetuate bias, incompleteness, or discrimination, creating the potential for widespread inequality (Bornstein, 2018). Many professionals assert that AI and algorithms reinforce socioeconomic divisions and expose disparities. Paraphrasing Immanuel Kant, “In the bentwood of these data sets, none of them is straight” (Raub, 2018). This undermines the principle of social justice, causing moral and economic harm to those affected by discrimination and reducing overall economic efficiency, leading to decreased production of goods and services.

AI recruitment tools have a concerning aspect that cannot be overlooked, highlighting the need to address these challenges through technical or managerial means (Raub, 2018). Increasing evidence suggests that AI is not as impartial as commonly believed: algorithms and AI can result in unfair employment opportunities and the potential for discrimination without accountability. To harness the benefits of AI in recruiting, organizations should exercise care in selecting their programs, promote the adoption of accountable algorithms, and advocate for improvements in racial and gender diversity within high-tech companies.

The general construct of this study is, first, an extension of statistical discrimination theory in the context of the algorithmic economy; second, a synthesis of the current literature on the benefits of algorithmic hiring and the roots and classification of algorithmic discrimination; third, initiatives to eliminate algorithmic hiring discrimination; and fourth, a survey of respondents, analysed using Grounded Theory, whose primary data supports the study.

The contributions of this study are as follows:

First, we discuss job market discrimination theories in the context of the digital age. When considering statistical discrimination theories, we should take current circumstances into account. It is necessary to apply these discrimination theories to evaluate the issues that arise from the use of technology in the digital age, particularly with the widespread adoption of artificial intelligence, big data, and blockchain across various industries.

Secondly, a literature review approach was employed to examine the factors contributing to discrimination in algorithmic hiring. Our goal with this analysis is to help managers and researchers better understand the limitations of AI algorithms in the hiring process. We conducted a thorough review of 49 papers published between 2007 and 2023 and found that there is currently a fragmented understanding of discrimination in algorithmic hiring. Building on this literature review, our study aims to offer a comprehensive and systematic examination of the sources, categorization, and possible solutions for discriminatory practices in algorithmic recruitment.

Thirdly, we take a comprehensive approach that considers technical and managerial aspects to tackle discrimination in algorithmic hiring. This study contends that resolving algorithmic discrimination in recruitment requires technical solutions and the implementation of internal ethical governance and external regulations.

The subsequent study is structured into five parts. The first section provides the theoretical background for this research. The following section outlines the research methodology employed in the literature review and identifies four key themes. The third section delves into a detailed discussion of these four themes: applications and benefits of AI-based recruitment, factors contributing to algorithmic recruitment discrimination, types of discrimination in algorithmic recruitment, and measures to mitigate algorithmic hiring discrimination. The fourth section involves conducting a survey among respondents and analyzing the primary data collected to support our study. The final section concludes by suggesting future directions for research.

Theory background

Discrimination theory

Discrimination in the labor market is defined by the ILO’s Convention 111, which encompasses any unfavorable treatment based on race, ethnicity, color, and gender that undermines employment equality (Ruwanpura, 2008 ). Economist Samuelson ( 1952 ) offers a similar definition, indicating that discrimination involves differential treatment based on personal characteristics, such as ethnic origin, gender, skin color, and age.

Various perspectives on the causes and manifestations of discrimination can be broadly categorized into four theoretical groups. The first is the competitive market theory, which explains discriminatory practices within an equilibrium of perfect competition (Lundberg and Startz, 1983). This view attributes discrimination primarily to personal prejudice. The second is the monopoly model of discrimination, which posits that monopolistic power leads to discriminatory behavior (Cain, 1986). The third is the statistical theory of discrimination, which suggests that nonobjective variables, such as inadequate information, contribute to biased outcomes (Dickinson and Oaxaca, 2009). The fourth is the antecedent market discrimination hypothesis.

Statistical discrimination theory

Statistical discrimination refers to prejudice arising from assessment criteria that generalize group characteristics to individuals (Tilcsik, 2021). It arises from limitations in employers’ research techniques and from the cost of obtaining information under the information asymmetry between employers and job seekers. Even without monopolistic power, statistical discrimination can occur in the labor market because of how information is gathered. Employers are primarily interested in assessing candidates’ competitiveness when making recruitment decisions. However, obtaining this information directly is challenging, so employers rely on various indirect techniques.
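A stylised way to express this is the standard signal-extraction formulation often associated with statistical discrimination models (the notation below is added for illustration; it is not from the paper):

$$\hat{q}_i = (1-\beta)\,\bar{q}_{g(i)} + \beta\, s_i, \qquad \beta = \frac{\sigma_q^2}{\sigma_q^2 + \sigma_\varepsilon^2},$$

where $s_i = q_i + \varepsilon_i$ is the noisy signal the employer observes for candidate $i$ (a test score or interview impression), $\bar{q}_{g(i)}$ is the average productivity of the candidate's group, and $\sigma_q^2$ and $\sigma_\varepsilon^2$ are the variances of true productivity and of the signal noise. The noisier the signal, the more weight falls on the group average, so two candidates with identical signals but from different groups receive different estimates; this is the group-to-individual generalization described above.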

Discrimination carries both individual and societal economic costs. The social cost arises from the decrease in overall economic output caused by discrimination. However, this is still deemed efficient under imperfect information and aligns with the employer’s profit maximization goal. Therefore, it is likely that statistical discrimination in employment will persist.

Extension of statistical discrimination theory in the digital age

The digital economy has witnessed the application of various artificial intelligence technologies in the job market. Consequently, the issue of algorithmic hiring discrimination has emerged, shifting the focus of statistical discrimination theory from traditional hiring to intelligent hiring. The mechanisms that give rise to hiring discrimination problems remain similar, as both rely on historical data of specific populations to predict future hiring outcomes.

While AI recruiting offers numerous benefits, it is also susceptible to algorithmic bias. Algorithmic bias refers to systematic and replicable errors in computer systems that lead to inequality and discrimination based on legally protected characteristics, such as race and gender (Jackson, 2021). When assessments consistently overestimate or underestimate a particular group’s scores, they produce “predictive bias” (Raghavan et al., 2020). Unfortunately, these discriminatory results are often overlooked or disregarded due to the misconception that AI processes are inherently “objective” and “neutral.”
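A minimal sketch of how "predictive bias" can be checked in practice, using invented numbers: compare mean predicted scores against mean observed outcomes within each group; a consistently signed gap for one group is the over- or under-estimation described above.

```python
# Invented data solely to illustrate the calculation of group-level predictive bias.
from statistics import mean

# (group, predicted_score, actual_performance) on a common 0-1 scale
records = [
    ("group_a", 0.80, 0.70), ("group_a", 0.75, 0.68), ("group_a", 0.90, 0.80),
    ("group_b", 0.55, 0.70), ("group_b", 0.50, 0.66), ("group_b", 0.60, 0.72),
]

for group in sorted({g for g, _, _ in records}):
    preds = [p for g, p, _ in records if g == group]
    actuals = [a for g, _, a in records if g == group]
    gap = mean(preds) - mean(actuals)
    # A consistently positive gap means the tool overestimates the group;
    # a consistently negative gap means it underestimates the group.
    print(f"{group}: mean predicted {mean(preds):.2f}, mean actual {mean(actuals):.2f}, gap {gap:+.2f}")
```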

Despite algorithms aiming for objectivity and clarity in their procedures, they can become biased when they receive partial input data from humans. Modern algorithms may appear neutral but can disproportionately harm protected class members, posing the risk of “agentic discrimination” (Prince and Schwarcz, 2019 ). If mishandled, algorithms can exacerbate inequalities and perpetuate discrimination against minority groups (Lloyd, 2018 ).

Within the recruitment process, algorithmic bias can manifest concerning gender, race, color, and personality.

Research methodology

The primary research strategy was a literature review approach. This review aimed to assess current research on recruitment supported by artificial intelligence algorithms. The systematic review process included gathering and evaluating the selected studies’ literature and topics. Driven by the direction of the research, studies focusing on algorithmic discrimination in recruitment over the past 10 years were included, unless older literature was worth reviewing, because this is a relatively new phenomenon that has become prominent over the past decade. In defining the “algorithmic and hiring discrimination” literature, a fairly broad approach was taken based on article keywords rather than publication sources. Depending on the focus, keywords related to algorithms and hiring discrimination were included in the search string. The keyword searches for this review were as follows: (“artificial intelligence” and “hiring discrimination”), (“algorithms” and “recruitment discrimination”), (“artificial intelligence” and “recruitment discrimination”), and (“algorithms” and “hiring discrimination”). SCOPUS, Google Scholar, and Web of Science are three well-known search engines frequently used by the academic community and meet the criteria for technology-related topics in this review; Web of Science was used as a starting point for high-quality peer-reviewed scholarly articles. The study searched these three databases and restricted results to the past ten years. After an initial screening of titles, keywords, and abstracts, the literature was selected based on its relevance to the research topic.

The obtained literature was studied in depth to reveal the emerging themes. Several systematic research themes were identified: AI-based recruitment applications and benefits, the causes of algorithmic discrimination, the types of discrimination that arise in algorithmic recruitment, and measures to resolve algorithmic recruitment discrimination.

The process applied for the review is depicted in Fig. 1. After excluding duplicates and less relevant and outdated literature, only 45 articles could be used as references for this study (see Table 1). The literature review shows that most of the research on algorithmic hiring discrimination has occurred in recent years. The research trend indicates that algorithmic hiring discrimination will be a hot research topic in the coming period.

The first theme is the application of artificial intelligence support to various aspects of recruitment and its benefits. Bogen and Rieke (2018), Ahmed (2018), Hmoud and Laszlo (2019), Albert (2019), van Esch et al. (2019), Köchling et al. (2022), and Chen (2023) consider the recruitment process as a set of tasks that may be divided into four steps: sourcing, screening, interviewing, and selection. Each step includes different activities, and AI algorithms can change how each stage is executed. Some studies point out that AI-supported recruitment has benefits. Beattie et al. (2012), Newell (2015), Raub (2018), Miasato and Silva (2019), Beneduce (2020), and Johnson et al. (2020) state that it can reduce costs; Hmoud and Laszlo (2019), Johansson and Herranen (2019), Raveendra et al. (2020), Black and van Esch (2020), and Allal-Chérif et al. (2021) suggest that it saves time; and Upadhyay and Khandelwal (2018) and Johansson and Herranen (2019) report that it reduces transactional workload.

The second theme is the causes of algorithmic discrimination. McFarland and McFarland (2015), Mayson (2018), Raso et al. (2018), Raub (2018), Raghavan et al. (2020), Njoto (2020), Zixun (2020), and Jackson (2021) suggest that algorithmic discrimination is related to data selection. Data collection tends to favor accessible, “mainstream” organizations that are unequally dispersed by race and gender. Inadequate data will screen out groups that have been historically under-represented in the recruitment process. Predicting future hiring outcomes by observing historical data can amplify future hiring inequalities. Yarger et al. (2019), Miasato and Silva (2019), and Njoto (2020) propose that discrimination is due to the designer-induced selection of data features.

The third theme is the types of discrimination that arise in algorithmic recruitment. According to Correll et al. (2007), Kay et al. (2015), O’Neil (2016), Raso et al. (2018), Miasato and Silva (2019), Langenkamp et al. (2019), Faragher (2019), Ong (2019), Fernández and Fernández (2019), Beneduce (2020), Jackson (2021), Yarger et al. (2023), and Avery et al. (2023), when partial human data is provided to a machine and the algorithm is therefore biased, it will eventually lead to the risk of “agentic discrimination.” In recruitment, algorithmic bias can manifest in gender, race, skin color, and personality.

The fourth theme is the resolution of algorithmic recruitment discrimination. Kitchin and Lauriault (2015), Bornstein (2018), Raso et al. (2018), Xie et al. (2018), Raub (2018), Grabovskyi and Martynovych (2019), Amini et al. (2019), Shin and Park (2019), Yarger et al. (2019), Gulzar et al. (2019), Kessing (2021), Jackson (2021), and Mishra (2022) argue that fair data sets need to be constructed and algorithmic transparency needs to be improved. Moreover, Smith and Shum (2018), Mitchell et al. (2019), Ong (2019), Zuiderveen Borgesius (2020), Peña et al. (2020), Kim et al. (2021), Yang et al. (2021), and Jackson (2021) propose that, from a management perspective, data governance needs to be strengthened, including internal ethical governance and external ethical oversight.

Figure 1: Procedures used in the literature review to reveal emerging themes.

Through this review, we have created an overarching conceptual framework to visualize how AI and AI-based technologies impact recruitment. This framework is illustrated in Fig. 2 and aligns with the four research themes identified.

Figure 2: An overarching conceptual framework to visualize how AI and AI-based technologies can impact recruitment efforts.

Theme I. AI-based recruitment applications and benefits

Artificial intelligence and algorithms

The idea of machine intelligence dates back to 1950, when Turing, the father of computer science, asked, “Can machines think?” (Ong, 2019). The term “artificial intelligence” was coined by John McCarthy. Although early scientists made outstanding contributions, artificial intelligence only became an industry after the 1980s, as hardware developed. Initial applications of artificial intelligence were seen in the automation of repetitive and complicated work assignments, such as industrial robot production, which displaced human work in some plants. After the mid-1990s, artificial intelligence software saw significant advances. In today’s digital economy, AI is commonly used across various industries (Hmoud and Laszlo, 2019).

Artificial intelligence is defined as the ability of something like a machine to understand, learn, and interpret on its own in a human-like manner (Johansson and Herranen, 2019 ). Artificial intelligence aims “to understand and simulate human thought processes and to design machines that mimic this behavior.” It is designed to be a thinking machine with a level of human intelligence (Jackson, 2021 ). However, large amounts of data must be combined with fast and iterative intelligent algorithms to handle this process, allowing ML systems to learn from patterns or features in the data automatically.

A set of instructions or commands used to carry out a particular operation is known as an algorithm. This digital process makes decisions automatically depending on the data entered into the program (Miasato and Silva, 2019). The algorithm analyzes massive data patterns through data mining and search, and uses them to make predictions, much like a point of view encoded in code. It explores the dataset using agents representing various traits, such as race, sexual orientation, and political opinions (Njoto, 2020). Algorithms frequently contain these biases due to the lengthy history of racial and gender prejudices, both intentional and unconscious. When biases exist in algorithmic data, AI may replicate these prejudices in its decision-making, a mistake known as algorithmic bias (Jackson, 2021).

AI-based recruitment stage

The recruitment procedure is a series of events that may be divided into four significant steps: searching, screening, interviewing, and selection (Bogen and Rieke, 2018 ). Each phase includes various activities, and artificial intelligence technology can influence the execution of each stage.

The searching phase relies on a system for searching web content. It screens passive job applicants online through social media and recruitment platforms, analyzing their profiles against predefined job descriptions. The search engine recognizes the meaning of the searched content and performs a web-based search that matches candidates’ profiles to job postings based on semantic annotations of both (Hmoud and Laszlo, 2019).

The screening phase involves evaluating and assessing candidates’ qualifications, with AI technology assisting recruiters in scoring candidates and evaluating their competencies (Bogen and Rieke, 2018). Resumes are screened for how well they match the job description, and the system can rank candidates according to the relevance of their qualification metrics.
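To make the ranking logic of this step concrete, the minimal Python sketch below scores each resume by simple keyword overlap with the job description and sorts candidates by that score. The function and field names are hypothetical; commercial screening tools use far richer features, but the basic match-and-rank idea is analogous.

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase a text and count its word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def overlap_score(resume_text: str, job_description: str) -> float:
    """Score a resume by the fraction of job-description terms it covers."""
    job_terms = tokenize(job_description)
    resume_terms = tokenize(resume_text)
    if not job_terms:
        return 0.0
    covered = sum(min(cnt, resume_terms[term]) for term, cnt in job_terms.items())
    return covered / sum(job_terms.values())

def rank_candidates(resumes: dict[str, str], job_description: str) -> list[tuple[str, float]]:
    """Return candidates sorted by descending match score."""
    scored = {name: overlap_score(text, job_description) for name, text in resumes.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage
job = "Python developer with machine learning and SQL experience"
resumes = {
    "A": "Experienced Python developer, strong SQL and machine learning background",
    "B": "Graphic designer with branding experience",
}
print(rank_candidates(resumes, job))
```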

The following phase is the interview. It is probably the most individual stage of the selection process and, thus, unlikely to be fully automated by artificial intelligence. However, some AI tools enable recruiters to conduct video interviews and analyze candidates’ reactions, voice tones, and facial expressions (Ahmed, 2018).

The final stage is selection, in which the employer makes the final employment decision. In this stage, AI systems can calculate remuneration and benefits for companies and anticipate the risk that candidates will violate workplace rules (Bogen and Rieke, 2018).

AI-based recruitment benefits

Recruitment quality.

Beattie et al. (2012) found that some large companies believe unconscious bias affects recruitment quality, and organizations need to hire well-qualified people to avoid financial losses (Newell, 2015). Artificial intelligence has become part of the recruitment industry to automate the recruiting and selection process, which can remove the unconscious human bias that affects hiring (Raub, 2018). One of the ideas behind using AI in candidate selection is to bring higher standards to the selection process, independent of the thoughts and beliefs of the interviewer (Miasato and Silva, 2019). Artificial intelligence tools can start with accurate job descriptions and targeted advertisements that match a candidate’s skills and abilities to job performance and create a profile of every candidate that indicates who is best suited for the job (Johnson et al., 2020). In addition, automated resume screening allows recruiters to consider candidates who would otherwise be overlooked (Beneduce, 2020). With advances in AI technology, candidate selection becomes more impersonal, based on data shared with the company and available on the Internet.

Recruitment efficiency

HR departments may receive many applications for every position. Traditional screening and selection, which depends on human evaluation of candidate information, is the most expensive and discouraging part of the hiring process (Hmoud and Laszlo, 2019). Artificial intelligence can accelerate the hiring procedure, produce an outstanding candidate experience, and reduce costs (Johansson and Herranen, 2019). It can bring job information to applicants faster, allowing them to make informed decisions about their interests early in the hiring process. Artificial intelligence can also screen out uninterested applicants and remove them from the applicant pool, reducing the number of applicants recruiters need to assess later. It is even possible to source reticent candidates with the help of artificial intelligence, leaving more time to concentrate on the best matches. Artificial intelligence can not only evaluate hundreds of resumes at scale in a short period, but it can also automatically classify candidates based on the job description provided. Moreover, the final results after the hiring decision can be fed back to candidates more easily (Raveendra et al., 2020).

Transactional workload

The application of AI in recruitment can be described as a “new era in human resources” because artificial intelligence replaces the routine tasks performed by human recruiters, thus changing the traditional practices of the recruitment industry (Upadhyay and Khandelwal, 2018 ). Most professionals believe that AI is beneficial to recruiters in terms of reducing routine and administrative tasks (Johansson and Herranen, 2019 ). Recruiters will hand over time-consuming administrative tasks like recruiting, screening, and interviewing to AI, allowing more scope for recruiters to concentrate on strategic affairs (Upadhyay and Khandelwal, 2018 ).

Theme II. Why algorithmic recruitment discrimination arises

Algorithms are not inherently discriminatory, and engineers rarely introduce bias into them intentionally. Nevertheless, bias can still arise in algorithmic recruitment. This issue is closely linked to the fundamental technology behind AI and ML. The ML process can be simplified into several stages, and three key components contribute to algorithmic bias: dataset construction, the engineer’s formulation of the target, and feature selection (36KE, 2020). Bias may be introduced when the dataset lacks diverse representation, when engineers develop the algorithmic rules, and when annotators handle unstructured data (Zixun, 2020).

Datasets: bias soil

Datasets serve as the foundation of machine learning (ML). If an algorithm’s training data lack quantity and quality, the algorithm will fail to represent reality objectively, making bias in its decisions inevitable. Researchers commonly work at a 95% confidence level, which provides 95% certainty but still leaves a one-in-twenty chance of error (Raub, 2018). Nearly every ML algorithm relies on biased databases.

One issue arises when datasets are skewed towards accessible and more “mainstream” groups due to the ease of data collection. Consequently, there is an imbalance in the distribution concerning gender and race dimensions (36KE, 2020 ). If the collected data inadequately represent a particular race or gender, the resulting system will inevitably overlook or mistreat them in its performance. In the hiring process, insufficient data may exclude historically underrepresented groups (Jackson, 2021 ). Assessing the success of potential employees based on existing employees perpetuates a bias toward candidates who resemble those already employed (Raghavan et al., 2020 ).
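One simple way to surface this kind of imbalance before training is to count how often each demographic group appears in the data and flag groups that fall below a chosen share. The sketch below illustrates the idea; the field name, the threshold, and the toy records are assumptions made purely for the example.

```python
from collections import Counter

def representation_report(records: list[dict], group_field: str, min_share: float = 0.1) -> dict:
    """Compute each group's share of the dataset and flag underrepresented groups."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"share": round(share, 3), "underrepresented": share < min_share}
    return report

# Hypothetical training records drawn from past hiring decisions
past_hires = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]
print(representation_report(past_hires, "gender", min_share=0.3))
```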

Without careful planning, most datasets consist of unstructured data acquired through observational measures, lacking rigorous methods in controlled environments (McFarland and McFarland, 2015). This can lead to significant issues with misreporting. When algorithms play a role in decision-making, underrepresented individuals are unequally positioned. Furthermore, as the algorithm is optimized, the model accommodates the lack of representation and becomes less sensitive to underrepresented groups. The algorithm favors the well-represented group and operates less effectively for others (Njoto, 2020).

Existing social biases are introduced into the dataset. The raw data already reflects social prejudices, and the algorithm also incorporates biased relationships, leading to the “bias in and bias out” phenomenon (36KE, 2020 ). This phenomenon means that discrimination and disparities exist, just like in forecasting, where historical inequalities are projected into the future and may even be amplified (Mayson, 2018 ).

A research team at Princeton University discovered that algorithms lack access to the absolute truth. The machine corpus contains biases that closely resemble the implicit biases observed in the human brain. Artificial intelligence has the potential to perpetuate existing patterns of bias and discrimination because these systems are typically trained to replicate the outcomes achieved by human decision-makers (Raso et al. 2018 ). What is worse, the perception of objectivity surrounding high-tech systems obscures this fact.

In summary, if an algorithmic system is trained on biased and unrepresentative data, it runs the risk of replicating that bias.

Data feature selection: designer bias

The introduction of bias is sometimes not immediately apparent during model construction because computer scientists are often not trained to consider social issues in context. It is crucial to make them aware of the impact that attribute selection has on the algorithm (Yarger et al., 2019).

The algorithm engineer plays a crucial role in the entire system, from setting goals for machine learning to selecting the appropriate model and determining data characteristics such as labels. If inappropriate goals are set, bias may be introduced from the outset (36KE, 2020 ).

An engineer is responsible for developing the algorithmic model. If they hold certain beliefs and preconceptions, those personal biases can be transmitted to the machine (Njoto, 2020). Although the machine selects employee resumes, it operates based on its underlying programming. The programmer guides the AI in deciding which candidate is best, which can still result in discrimination (Miasato and Silva, 2019).

Furthermore, personal biases can manifest in the selection of data characteristics. For example, engineers may prioritize specific features or variables based on how they want the machine to behave (Miasato and Silva, 2019). The Amazon hiring case illustrates this: engineers considered education, occupation, and gender when assigning labels to the algorithm, and when gender is treated as a crucial criterion, it influences how the algorithm responds to the data.

Theme III. What forms of algorithmic recruitment discrimination exist

In the recruitment process, algorithmic bias can be manifested in terms of gender, race, color, and personality.

Gender stereotypes have infiltrated the “lexical embedding framework” used in natural language processing (NLP) and machine learning (ML). Munson’s research indicates that “occupational picture search outcomes slightly exaggerate gender stereotypes, portraying minority-gender occupations as less professional” (Avery et al., 2023; Kay et al., 2015).

The impact of gender stereotypes on AI hiring poses genuine risks (Beneduce, 2020). In 2014, Amazon developed an ML-based hiring tool, but it exhibited gender bias: the system did not classify candidates in a gender-neutral way (Miasato and Silva, 2019). The bias stemmed from training the AI system on CVs submitted predominantly by male employees (Beneduce, 2020). Accordingly, the recruitment algorithm perceived this biased model as indicative of success, resulting in discrimination against female applicants (Langenkamp et al., 2019). The algorithm even downgraded applications containing keywords such as “female” (Faragher, 2019). These findings compelled Amazon to withdraw the tool and develop a new, unbiased algorithm. The discrimination was inadvertent, revealing how algorithmic bias perpetuates existing gender inequalities and social biases (O’Neil, 2016).

Microsoft’s chatbot Tay learned to produce sexist and racist remarks on Twitter. By interacting with users on the platform, Tay absorbed the natural form of human language, using human tweets as its training data. Unfortunately, the innocent chatbot quickly adopted hate speech targeting women and black individuals. As a result, Microsoft shut down Tay within hours of its release. Research has indicated that when machines passively absorb human biases, they can reflect subconscious biases (Fernández and Fernández, 2019 ; Ong, 2019 ). For instance, searches for names associated with Black individuals were more likely to be accompanied by advertisements featuring arrest records, even when no actual records existed. Conversely, searches for names associated with white individuals did not prompt such advertisements (Correll et al., 2007 ). A study on racial discrimination revealed that candidates with white names received 50% more interview offers than those with African-American names.

In 2015, Google’s photo application algorithm erroneously labeled a photo of two black people as gorillas (Jackson, 2021 ). The algorithm was insufficiently trained to recognize images with dark skin tones (Yarger et al., 2023 ). The company publicly apologized and committed to immediately preventing such errors. However, three years later, Google discontinued its facial identification service, citing the need to address significant technical and policy issues before resuming this service. Similarly, in 2017, an algorithm used for a contactless soap dispenser failed to correctly identify shades of skin color, resulting in the dispenser only responding to white hands and not detecting black and brown ones. These cases serve as examples of algorithmic bias (Jackson, 2021 ).

Personality

The algorithm assesses word choice, tone shifts, and facial expressions (using facial recognition) to determine the candidate’s “personality” and alignment with the company culture (Raso et al., 2018). Notable examples include correlating longer tenure in a specific job with “high creativity” and linking a stronger inclination towards curiosity to a higher likelihood of seeking other opportunities (O’Neil, 2016). Additionally, sentiment analysis models are employed to gauge the level of positive or negative emotions conveyed in sentences.
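As a rough illustration of the sentiment-analysis component mentioned above, the sketch below scores how positive or negative an interview answer sounds using a tiny, hand-made word lexicon. The word lists are illustrative assumptions only; production systems rely on trained models rather than such lists.

```python
# Minimal lexicon-based sentiment sketch; the word lists are illustrative only.
POSITIVE = {"enjoy", "achieved", "improved", "team", "success", "motivated"}
NEGATIVE = {"failed", "conflict", "quit", "problem", "stress"}

def sentiment_score(answer: str) -> float:
    """Return a score in [-1, 1]: positive words add, negative words subtract."""
    words = answer.lower().split()
    pos = sum(w.strip(".,") in POSITIVE for w in words)
    neg = sum(w.strip(".,") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I enjoy working in a team and improved our success rate."))
print(sentiment_score("I quit after a conflict caused too much stress."))
```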

Theme IV. How to decrease algorithmic recruitment discrimination

Changes should be made at the technical and regulatory levels to ensure that AI algorithms do not replicate existing biases or introduce new ones based on the provided data (Raub, 2018 ).

Building fair algorithms from a technical perspective

Constructing a more unbiased dataset.

Unfair datasets are the root cause of bias. Therefore, a direct approach to addressing algorithmic bias is reconfiguring unbalanced datasets. Using multiple data points can yield more accurate results while carefully eliminating data points that reflect past biases. However, this approach incurs significant costs (Bornstein, 2018 ).

Another method is to correct data imbalances by using more equitable data sources to ensure fair decision-making (36KE, 2020 ). Understanding the underlying structure of training data and adjusting the significance of specific data points during training based on known latent distributions makes it possible to uncover hidden biases and remove them automatically. For example, Microsoft revised their dataset for training the Face API, resulting in a 20-fold reduction in the recognition error ratio between men and women with darker skin tones and a 9-fold reduction for women by balancing factors such as skin color, age, and gender (Grabovskyi and Martynovych, 2019 ).
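A simple version of “adjusting the significance of specific data points during training” is to give each example a weight inversely proportional to how common its group is, so rare groups contribute as much to the loss as common ones. The sketch below computes such weights; the group labels are hypothetical, and many libraries accept per-example weights of this kind (for instance through a sample_weight argument in scikit-learn estimators).

```python
from collections import Counter

def balanced_sample_weights(groups: list[str]) -> list[float]:
    """Weight each example inversely to its group's frequency so that all
    groups contribute equally to the training loss."""
    counts = Counter(groups)
    n_groups = len(counts)
    n_total = len(groups)
    # Weight = n_total / (n_groups * group_count), as in common 'balanced' schemes.
    return [n_total / (n_groups * counts[g]) for g in groups]

groups = ["light-skin"] * 8 + ["dark-skin"] * 2  # hypothetical, imbalanced sample
weights = balanced_sample_weights(groups)
print(weights[0], weights[-1])  # 0.625 vs 2.5: the rare group gets a larger weight
```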

Integrating “small data” and “big data” can enhance accuracy (36KE, 2020 ). Data should not solely rely on extensive collections but also focus on precision. While big data analysis tends to emphasize correlations, which can lead to errors when inferring causation, small data, which is more user-specific, offers detailed information and helps avoid such mistakes. Combining the vastness of big data with the precision of small data can help somewhat mitigate hiring errors (Kitchin and Lauriault, 2015 ).

Biases in datasets can be identified through autonomous testing. The inaccuracies stemming from incomplete past data can be addressed through “oversampling” (Bornstein, 2018 ). Researchers from MIT demonstrated how an AI system called DB-VEA (unsupervised learning) can automatically reduce bias by re-sampling data. This approach allows the model to learn facial features such as skin color and gender while significantly reducing categorization biases related to race and gender (Amini et al., 2019 ).
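The oversampling idea itself can be sketched without any special library: resample the smaller groups with replacement until every group matches the size of the largest one. This is a simplified stand-in for learned approaches such as DB-VEA, which decide which examples to re-sample; the field name and toy data below are assumptions.

```python
import random

def oversample_minority(records: list[dict], group_field: str, seed: int = 0) -> list[dict]:
    """Resample every group (with replacement) up to the size of the largest group."""
    random.seed(seed)
    by_group: dict[str, list[dict]] = {}
    for r in records:
        by_group.setdefault(r[group_field], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "A"}] * 6 + [{"group": "B"}] * 2  # hypothetical, imbalanced
print(len(oversample_minority(data, "group")))  # 12: both groups now have 6 records
```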

Therefore, constructing a more unbiased dataset is one of the methods that can be employed to tackle algorithmic bias.

Enhancing algorithmic transparency

Engineers write algorithmic models, but they often cannot fully understand the processes an AI goes through to produce a specific outcome. Many algorithmic biases are difficult to understand because the underlying techniques and methods are not easily visible; this leaves many people unaware of why or how they are discriminated against and undermines public accountability (Jackson, 2021). This is the “algorithmic black box” problem in ML. Transparency would therefore facilitate remediation when deviant algorithms are discovered and help resolve the current “black box” dilemma (Shin and Park, 2019).

Technological tools against bias

Data blending. Blendoor is inclusive recruiting and staffing analytics software that mitigates unconscious bias. It takes candidate profiles from existing online job boards and applicant tracking systems and “blends” them by removing names, photos, and dates (Yarger et al., 2019). Thus, Blendoor promotes fairness by design, assisting underrepresented job seekers and encoding equal opportunity in the algorithm.
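The blending step can be pictured as a redaction pass that strips identity-revealing fields before a profile reaches the reviewer or the ranking model. The sketch below uses hypothetical field names that are not Blendoor’s actual schema; it only shows the general idea.

```python
# Fields assumed to reveal identity; a hypothetical stand-in for what blind-screening
# tools remove from candidate profiles before review.
IDENTITY_FIELDS = {"name", "photo_url", "birth_date", "graduation_year"}

def blind_profile(profile: dict) -> dict:
    """Return a copy of the profile with identity-revealing fields removed."""
    return {k: v for k, v in profile.items() if k not in IDENTITY_FIELDS}

candidate = {
    "name": "Jane Doe",
    "photo_url": "https://example.org/jane.jpg",
    "birth_date": "1990-04-01",
    "skills": ["Python", "SQL"],
    "experience_years": 7,
}
print(blind_profile(candidate))  # only skills and experience remain
```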

Decoupling technique. In resume screening, this technique allows the algorithm to identify the best candidates by considering variables optimized separately for specific categories, such as gender or race, rather than for the entire applicant pool (Raso et al., 2018). This means that the characteristics used to evaluate minority or female applicants are determined by the trends among other minority or female applicants, which may differ from the features identified as markers of success in the pool as a whole.
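A minimal sketch of the decoupling idea, under the assumption that candidates are ranked by a single numeric score: instead of one global cutoff, candidates are compared only within their own group, so each group’s shortlist is drawn from its own score distribution. The names, groups, and scores below are invented for illustration.

```python
def decoupled_shortlist(candidates: list[dict], top_k: int = 2) -> list[dict]:
    """Shortlist the top_k highest-scoring candidates *within each group*,
    rather than applying one global cutoff across the whole pool."""
    by_group: dict[str, list[dict]] = {}
    for c in candidates:
        by_group.setdefault(c["group"], []).append(c)
    shortlisted = []
    for members in by_group.values():
        members.sort(key=lambda c: c["score"], reverse=True)
        shortlisted.extend(members[:top_k])
    return shortlisted

pool = [
    {"name": "A", "group": "men", "score": 0.9},
    {"name": "B", "group": "men", "score": 0.8},
    {"name": "C", "group": "men", "score": 0.7},
    {"name": "D", "group": "women", "score": 0.6},
    {"name": "E", "group": "women", "score": 0.5},
]
print([c["name"] for c in decoupled_shortlist(pool)])  # ['A', 'B', 'D', 'E']
```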

Word embedding. Microsoft researchers found that words exhibit distinct associations in news and web data. For instance, words like “fashion” and “knitting” are more closely related to females, while “hero” and “genius” are more closely related to males (36KE, 2020 ). Microsoft suggests a simple solution by removing the gender-specific measures in word embedding to reduce “presentation bias.”
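Removing “gender-specific measures” in a word embedding corresponds to subtracting each word vector’s component along a gender direction, for example the vector from “she” to “he”. The sketch below does this with toy two-dimensional vectors whose values are made up for illustration; real systems operate on learned embeddings with hundreds of dimensions.

```python
import numpy as np

def remove_direction(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project out the component of `vec` that lies along `direction`."""
    d = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, d) * d

# Toy 2-D embeddings (illustrative values, not a real trained embedding).
he, she = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
genius = np.array([0.8, 0.6])          # leans toward the "he" side
gender_direction = he - she

debiased = remove_direction(genius, gender_direction)
print(debiased)  # ~[0.0, 0.6]: the gender component is removed, the rest is kept
```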

Differential testing. Scientists at Columbia University developed DeepXplore, software that highlights vulnerabilities in algorithmic neural networks by “coaxing” the system into making mistakes (Xie et al., 2018). DeepXplore uses discrepancy testing, which compares several systems and observes the differences in their outputs. An input exposes a potential flaw when all other models agree on its prediction while a single model predicts it differently (Gulzar et al., 2019).
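The comparison logic behind differential testing can be sketched as running the same inputs through several independently built models and flagging the inputs on which exactly one model disagrees with all the others. The three “models” below are trivial stand-in functions invented purely to show that logic, not any real screening system.

```python
def differential_test(models: list, inputs: list) -> list:
    """Return (input, model_index) pairs where exactly one model disagrees
    with the unanimous prediction of the others."""
    flagged = []
    for x in inputs:
        outputs = [m(x) for m in models]
        for i, out in enumerate(outputs):
            others = outputs[:i] + outputs[i + 1:]
            if len(set(others)) == 1 and out != others[0]:
                flagged.append((x, i))  # input x exposes model i
                break
    return flagged

# Three hypothetical screening models that label a score as "pass"/"fail".
model_a = lambda x: "pass" if x >= 50 else "fail"
model_b = lambda x: "pass" if x >= 50 else "fail"
model_c = lambda x: "pass" if x >= 70 else "fail"  # stricter, potentially deviant

print(differential_test([model_a, model_b, model_c], [40, 60, 80]))
# [(60, 2)]: input 60 makes only model_c deviate from the consensus
```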

Bias detection tool. In September 2018, Google introduced the innovative What-If tool for detecting bias (Mishra, 2022 ). It assists designers in identifying the causes of misclassification, determining decision boundaries, and detecting algorithmic fairness through interactive visual interfaces. Additionally, Facebook has developed Fairness Flow, an emerging tool for correcting algorithmic bias. Fairness Flow automatically notifies developers if an algorithm makes unfair judgments based on race, gender, or age (Kessing, 2021 ).

Improving the algorithm’s ethics from a management perspective

Internal ethics governance.

Several major technology companies have published AI principles addressing bias governance, signaling the start of self-regulation (36KE, 2020 ). Microsoft has formed an AI and ethical standards committee to enforce these principles, subjecting all future AI products to ethics scrutiny (Smith and Shum, 2018 ). Google has responded by introducing a Model Card function, similar to an algorithm manual, that explains the employed algorithm, highlights strengths and weaknesses, and even shares operational results from various datasets (Mitchell et al., 2019 ).

Algorithmic systems undergo audits to prevent unintended discrimination and make necessary adjustments to ensure fairness (Kim et al., 2021 ). Regular internal audits allow companies to monitor, identify, and correct biased algorithms. Increased involvement from multiple parties in the data collection process and continuous algorithm monitoring are essential to reduce or eliminate bias (Jackson, 2021 ). Some companies have introduced AI-HR audits, similar to traditional HR audits, to review employee selection and assess the reliability of AI algorithms and ML data (Yang et al., 2021 ). Companies should also stay updated on recruitment laws and regulations and ensure compliance with legal requirements.
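One concrete statistic often computed in such audits is the adverse-impact (“four-fifths”) ratio: each group’s selection rate divided by the selection rate of the most-selected group, with values below 0.8 commonly treated as a warning sign. The sketch below computes it from hypothetical callback records; the data and field names are assumptions for the example.

```python
from collections import defaultdict

def adverse_impact_ratios(decisions: list[dict]) -> dict[str, float]:
    """For each group, compute selection rate / highest group selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += int(d["selected"])
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

# Hypothetical screening outcomes
decisions = (
    [{"group": "white", "selected": True}] * 30 + [{"group": "white", "selected": False}] * 70
    + [{"group": "black", "selected": True}] * 18 + [{"group": "black", "selected": False}] * 82
)
print(adverse_impact_ratios(decisions))  # {'white': 1.0, 'black': 0.6} -- below the 0.8 rule of thumb
```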

Considering the programmers behind these algorithms, diversity in the high-tech industry is crucial. Algorithms often reflect the opinions of those who create them. The persistent underrepresentation of women, African-Americans, and Latino professionals in the IT workforce leads to biased algorithms. For instance, a study in 2019 found that only 2.5% of Google’s employees were black, while Microsoft and Facebook had only 4% representation. Another study revealed that 80% of AI professors in 2018 were male. Involving diverse individuals in data collection and training can regulate and eliminate human bias rooted in algorithms (Jackson, 2021 ).

Although self-regulation can help reduce discrimination and influence lawmakers, it has potential drawbacks. Self-regulation lacks binding power, necessitating external oversight through third-party testing and the development of AI principles, laws, and regulations by external agencies.

External supervision

To ensure transparency and accountability in recruitment, third-party certification and testing of AI products can help mitigate the negative impacts of unreliability. At the “Ethics and Artificial Intelligence” technical conference held at Carnegie Mellon University, the director of Microsoft Research Institute proposed a solution to ensure consistent standards and transparency in AI. Microsoft’s proposal, “Allowing third-party testing and comparison,” aims to uphold the integrity of AI technology in the market (Ong, 2019 ).

Various organizations have issued principles promoting equity, ethics, and responsibility in AI (Zuiderveen Borgesius, 2020 ). The Organization for Economic Cooperation and Development (OECD) has provided recommendations on AI, while the European Commission has drafted proposals regarding the influence of algorithmic systems on human rights. In 2019, the European Commission established a high-level expert group on AI, which proposed ethical guidelines and self-regulatory measures regarding AI and ethics.

Public organizations have played a role in establishing mechanisms to safeguard algorithmic fairness. The Algorithm Justice League (AJL) has outlined vital behaviors companies should follow in a signable agreement. Holding accountable those who design and deploy algorithms improves existing algorithms in practice (36KE, 2020 ). After evaluating IBM’s algorithm, AJL provided feedback, and IBM responded promptly, stating that they would address the identified issue. As a result, IBM significantly improved the accuracy of its algorithm in minority facial identification.

Data protection and non-discrimination laws safeguard against discriminatory practices in algorithmic decision-making. In the EU, Article 14 of the European Convention on Human Rights (ECHR) guarantees the rights and freedoms outlined in the Convention and prohibits direct and indirect discrimination (Zuiderveen Borgesius, 2020). Non-discrimination laws, particularly those concerning indirect discrimination, serve as a means to prevent various forms of algorithmic discrimination. The EU General Data Protection Regulation (GDPR), implemented in May 2018, addresses the impact of ML algorithms and offers a “right to explanation” (e.g., Articles 13–15) (Peña et al., 2020), enabling individuals to request explanations for algorithmic decisions and to demand measures against discriminatory influences when sensitive data are handled. The GDPR requires organizations to conduct a Data Protection Impact Assessment (DPIA), and each EU member state must maintain an independent data protection authority with investigative powers. Under the GDPR, a data protection authority can access the premises and computers of an organization that uses personal data (Zuiderveen Borgesius, 2020).

Investigation and analysis

Based on Grounded Theory, this section uses a qualitative research approach to explore AI-supported recruitment applications and discrimination.

Sources and methods

The study is based on Grounded Theory and qualitative analysis of interview data. Glaser and Strauss (1965, 1968) proposed this theory, whose basic idea is to construct theory from empirical data (Charmaz and Thornberg, 2021). Researchers generally do not make theoretical assumptions before starting the research; instead, they start from a realistic point of view, summarize empirical concepts from the primary data, and then raise them to systematic theoretical knowledge. Grounded Theory must be supported by empirical evidence, but its most significant characteristic is not its practical nature but the extraction of new ideas from existing facts.

Grounded Theory is a qualitative approach that emphasizes the importance of “primary sources” (Timmermans and Tavory, 2012). In studying AI-driven hiring discrimination, data are systematically collected and analyzed to uncover intrinsic patterns, construct relevant concepts, and refine theoretical models, rather than starting from theoretical assumptions. Current research on the factors influencing AI-driven recruitment discrimination and the measures against it is not yet intensive and lacks corresponding theoretical support. Grounded Theory, by contrast, works from “primary data” to construct a theoretical model for studying AI-driven recruitment applications and discrimination.

Interviewees and content

Participants in the interview.

Interviews were conducted in June 2023, face-to-face, by video, or by telephone. Interviewees were selected for representativeness, authority, and operability, and ten people with experience in recruiting or interviewing with the help of intelligent tools were ultimately selected for the study. Their basic information is shown in Table 2. The study was conducted with the interviewees’ consent. Each interview lasted about 30 min, and notes were taken during the interview. The number of interviewees was determined based on the principle of information saturation.

Before conducting the interviews, a large amount of data was collected to understand AI-driven hiring discrimination and to propose appropriate improvement strategies. The study used “dynamic sampling” and the “information saturation” criterion.

Interview outline

The interview outline was set in advance around the core objectives of this study, including the following six questions: “Do you know about AI-driven recruitment,” “How do you think about AI-driven recruitment discrimination,” “What do you think is the cause of AI-driven recruitment discrimination,” “Types of AI-driven hiring discrimination,” “Strategies to solve AI-driven hiring discrimination,” and “What other suggestions do you have.” Based on the predefined interview outline, appropriate adjustments were made as the interview progressed.

Interview ethics

The interview process is based on three main principles. First, the right-to-know principle: interviewees fully understand the purpose, content, and use of the interview before being interviewed. Second, the principle of objectivity: the researcher guides respondents and clarifies any questions they cannot understand, and respondents state their views objectively, uninfluenced by external factors. Third, the principle of confidentiality: interviews are conducted anonymously, the interviewees’ personal information is not disclosed, and their privacy is fully respected. Names in the original data are replaced by codes, and the data are kept appropriately by the interviewer, used only for reference and analysis in this study and for no other purpose.

Interview tools

NVivo 12 Plus qualitative analysis software was used as an auxiliary tool to clarify ideas and improve work efficiency.

Data organization after the interview

The interview data were analyzed and organized within two working days of completing the interviews. Using NVivo 12 Plus, the data were coded in three layers from the bottom up, with the content as the center.

The first layer was open coding. The interview data of the 10 interviewees were imported and parsed word by word in the software to clarify the meaning of words and sentences, interpret the data, and obtain free nodes. The data from each section were summarized and generalized to organize the interviewees’ perceptions of AI-driven recruitment, and each node was named, yielding the first-level nodes.

The second layer was spindle (axial) coding. The researchers grouped the interrelated nodes formed by open coding into categories, constructed the relationships between concepts and categories, and coded spindle concepts, which after spindle coding formed the second-level nodes.

The third layer was core coding. A core category was selected based on the second-level spindle codes, and the third-level codes were developed.

Interview quality control

To ensure the credibility of the interview results, the same way of asking questions was used with all interviewees. Each round of interviews was kept between 20 and 30 min, since too long a session reduces the effectiveness of feedback on the questions. In addition, the interviews did not span more than one month, to ensure the timeliness of the information obtained.

Applying the NVivo 12 Plus qualitative analysis method, 182 free nodes were obtained through three-level coding; 31 first-level nodes were formed after comparison and grouping, and 11 second-level nodes were derived inductively. Finally, five core categories of third-level nodes were identified (see Table 3).

Open level 1 coding

The interviews with 10 respondents resulted in 182 words and sentences related to AI-driven recruitment applications and discrimination, which were conceptualized and merged to form 31 open-ended Level 1 codes.

Main-axis second-level coding

Cluster analysis was used to examine the correlations and logical order among the first-level open codes and to form more generalized categories. Eleven second-level spindle codes were extracted and summarized.

Core-type tertiary coding

The Grounded Theory steps resulted in 31 open primary and 11 secondary spindle codes. Further categorization and analysis revealed that when “AI-driven recruitment applications and discrimination” is used as the core category, the five main categories are AI-driven recruitment applications, AI-driven recruitment effects, causes of AI-driven recruitment discrimination, types of AI-driven recruitment discrimination, and AI-driven recruitment discrimination measures.

The coding process described above is exemplified by the interview with researcher F2, who had taught information science at a university for 2 years and is now employed at an intelligent technology R&D company. After the interview, F2’s information was analyzed and explored through the three-level coding process.

Under the three-level node AI-driven recruitment application, F2 suggested that the AI tools currently developed could assist companies with simple recruitment tasks, including online profile retrieval, analysis, and evaluation. However, this technical engineer suggested that candidate assessment for high-level positions suits human-machine collaboration, although machines have an advantage in candidate profile searches.

Under the third-level node AI-driven recruiting effectiveness, F2 suggests that machine applications in recruiting can relieve humans of transactional workload and that chatbot Q&A services improve recruiting efficiency.

In the context of the causes of AI-driven hiring discrimination at the third level, F2 suggests that some job seekers are unfamiliar with the hiring interface and how to use it, leading to unfair interviews. She suggested the need for organizers to prepare usage guidelines or mock interview exercises. She argues that much of the data needed for intelligent machine learning comes from internal companies or external market supplies and that this data lacks fair scrutiny. It is even possible that the source data fed into the machines is compromised.

Under the tertiary node AI-driven hiring discrimination, F2 is concerned that the machines may misevaluate candidates due to individual differences, such as intelligence, or external characteristics, such as skin color. Moreover, some discriminatory judgments are difficult to resolve under current technology.

Under the tertiary node AI-driven hiring discrimination measures, F2 proposes utilizing technical tools, such as training on impartial historical data, or non-technical tools, such as anti-AI-discrimination laws. She argues that in the future humans will use AI tools for more complex decisions, not just hiring, and that humans therefore need to embrace and accept the widespread use of machines.

Synthesizing the above analysis, the final overview of the AI-driven recruitment application and discrimination framework is obtained (see Fig. 3 ). After the conceptual model was constructed, the remaining original information was coded and comparatively analyzed, and no new codes were generated, indicating that this study was saturated.

Figure 3: Overview of the AI-driven recruitment application and discrimination framework.

An analysis of interview results conducted using Grounded Theory indicates that AI-supported hiring discrimination should be approached from five perspectives. These perspectives align with the thematic directions identified through our literature review.

Firstly, AI-driven hiring applications impact various aspects, such as reviewing applicant profiles online, analyzing applicant information, scoring assessments based on hiring criteria, and generating preliminary rankings automatically.

Secondly, interviewees perceive benefits in AI-driven recruitment for job seekers. It eliminates subjective human bias, facilitates automated matching between individuals and positions, and provides automated response services. Moreover, AI reduces the workload on humans and enhances efficiency.

Thirdly, concerns are raised about potential hiring discrimination perpetrated by machines. This can arise from the AI tools themselves, for example from partial (biased) source data, or from users unfamiliar with the interfaces and operations.

Fourthly, intrinsic factors like personality and IQ, as well as extrinsic factors like gender and nationality, have been observed to influence the accurate identification and judgment of AI systems concerning hiring discrimination.

Fifthly, respondents offer recommendations for combating discrimination by machines, including technical and non-technical approaches.

Recommendations for future studies

This study conducted a literature review to analyze algorithmic recruitment discrimination’s causes, types, and solutions. Future research on algorithmic recruitment discrimination could explore quantitative analysis or experimental methods across different countries and cultures. Additionally, future studies could examine the mechanics of algorithmic recruitment and the technical rules that impact the hiring process. It would be interesting to analyze the psychological effects of applying this algorithmic recruitment technique on various populations (gender, age, education level) from an organizational behavior perspective. While recent studies have primarily discussed discrimination theory in the traditional economy’s hiring market, future theoretical research should consider how advanced technology affects equity in hiring within the digital economy.

The study concludes that the fourth industrial revolution introduced technological innovations significantly affecting the recruitment industry. It extends the analysis of statistical discrimination theory in the digital age and adopts a literature review approach to explore four themes related to AI-based recruitment. The study argues that algorithmic bias remains an issue while AI recruitment tools offer benefits such as improved recruitment quality, cost reduction, and increased efficiency. Recruitment algorithms’ bias is evident in gender, race, color, and personality. The primary source of algorithmic bias lies in partial historical data. The personal preferences of algorithm engineers also contribute to algorithmic bias. Technical measures like constructing unbiased datasets and enhancing algorithm transparency can be implemented to tackle algorithmic hiring discrimination. However, strengthening management measures, such as corporate ethics and external oversight, is equally important.

Data availability

The study is still ongoing, and the results of subsequent analyses will continue to be applied to valuable and critical projects. Relevant data are currently available only to scholars conducting similar research, subject to signing a confidentiality agreement. The corresponding author can be contacted with any requests.

36KE (2020) From sexism to recruitment injustice, how to make AI fair to treat. https://baijiahao.baidu.com/s?id=1663381718970013977&wfr=spider&for=pc

Ahmed O (2018) Artificial intelligence in HR. Int J Res Anal Rev 5(4):971–978


Albert ET (2019) AI in talent acquisition: a review of AI-applications used in recruitment and selection. Strateg HR Rev 18(5):215–221. https://doi.org/10.1108/SHR-04-2019-0024


Allal-Chérif O, Yela Aránega A, Castaño Sánchez R (2021) Intelligent recruitment: how to identify, select, and retain talents from around the world using artificial intelligence. Technol Forecast Soc Change 169:120822. https://doi.org/10.1016/j.techfore.2021.120822

Amini A, Soleimany AP, Schwarting W, Bhatia SN, Rus D (2019) Uncovering and mitigating algorithmic bias through learned latent structure. In: Conitzer V, Hadfield G, Vallor S (eds) Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society. Association for Computing Machinery

Avery M, Leibbrandt A, Vecci J (2023) Does artificial intelligence help or hurt gender diversity? Evidence from two field experiments on recruitment in tech. 14 February 2023

Beattie G, Johnson P (2012) Possible unconscious bias in recruitment and promotion and the need to promote equality. Perspect: Policy Pract High Educ 16(1):7–13

Beneduce G (2020) Artificial intelligence in recruitment: just because it’s biased, does it mean it’s bad? NOVA—School of Business and Economics

Black JS, van Esch P (2020) AI-enabled recruiting: what is it and how should a manager use it? Bus Horiz 63(2):215–226. https://doi.org/10.1016/j.bushor.2019.12.001

Bogen M, Rieke A (2018) Help wanted: an examination of hiring algorithms, equity, and bias. Upturn

Bornstein S (2018) Antidiscriminatory algorithms. Alabama Law Rev 70:519

Cain GG (1986) The economic analysis of labor market discrimination: a survey. Handb Labor Econ 1:693–785

Charmaz K, Thornberg R (2021) The pursuit of quality in grounded theory. Qual Res Psychol 18(3):305–327. https://doi.org/10.1080/14780887.2020.1780357

Chen Z (2022) Artificial intelligence-virtual trainer: innovative didactics aimed at personalized training needs. J Knowl Econ. https://doi.org/10.1007/s13132-022-00985-0

Chen Z (2023) Collaboration among recruiters and artificial intelligence: removing human prejudices in employment. Cogn Technol Work 25(1):135–149


Correll SJ, Benard S, Paik I (2007) Getting a job: is there a motherhood penalty? Am J Sociol 112(5):1297–1338

Dickinson DL, Oaxaca RL (2009) Statistical discrimination in labor markets: an experimental analysis. South Econ J 76(1):16–31

Faragher J (2019) Is AI the enemy of diversity? People Management

Fernández C, Fernández A (2019) Ethical and legal implications of AI recruiting software. Ercim News 116:22–23

Grabovskyi V, Martynovych O (2019) Facial recognition with using of the microsoft face API Service. Electron Inf Technol 12:36–48

Gulzar MA, Zhu Y, Han X (2019) Perception and practices of differential testing. In: Conitzer V, Hadfield G, Vallor S (eds) 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). Association for Computing Machinery

Hmoud B, Laszlo V (2019) Will artificial intelligence take over human resources recruitment and selection? Netw Intell Stud 7(13):21–30

Jackson MC (2021) Artificial intelligence & algorithmic bias: the issues with technology reflecting history & humans. J Bus Technol Law 16:299

Johansson J, Herranen S (2019) The application of artificial intelligence (AI) in human resource management: current state of AI and its impact on the traditional recruitment process. Bachelor thesis, Jonkoping University

Johnson RD, Stone DL, Lukaszewski KM (2020) The benefits of eHRM and AI for talent acquisition. J Tour Futur 7(1):40–52

Kay M, Matuszek C, Munson SA (2015) Unequal representation and gender stereotypes in image search results for occupations. In: Conitzer V, Hadfield G, Vallor S (eds) Proceedings of the 33rd annual ACM conference on human factors in computing systems. Association for Computing Machinery

Kessing M (2021) Fairness in AI: discussion of a unified approach to ensure responsible AI development. Independent thesis Advanced level, KTH, School of Electrical Engineering and Computer Science (EECS)

Kim PT, Bodie MT (2021) Artificial intelligence and the challenges of workplace discrimination and privacy. J Labor Employ Law 35(2):289–315

Kitchin R, Lauriault TP (2015) Small data in the era of big data. GeoJournal 80(4):463–475

Köchling A, Wehner MC, Warkocz J (2022) Can I show my skills? Affective responses to artificial intelligence in the recruitment process. Rev Manag Sci. https://doi.org/10.1007/s11846-021-00514-4

Langenkamp M, Costa A, Cheung C (2019) Hiring fairly in the age of algorithms. Available at: SSRN 3723046

Lloyd K (2018) Bias amplification in artificial intelligence systems. arXiv preprint arXiv07842

Lundberg SJ, Startz R (1983) Private discrimination and social intervention in competitive labor market. Am Econ Rev 73(3):340–347

Mayson SG (2018) Bias in, bias out. Yale Law J 128(8):2218–2300

McFarland DA, McFarland HR (2015) Big data and the danger of being precisely inaccurate. Big Data Soc 2(2):2053951715602495

Miasato A, Silva FR (2019) Artificial intelligence as an instrument of discrimination in workforce recruitment. Acta Univ Sapientiae: Legal Stud 8(2):191–212

Mishra P (2022) AI model fairness using a what-if scenario. In: Practical explainable AI using Python. Springer, pp. 229–242

Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, … Gebru T (2019) Model cards for model reporting. In: Conitzer V, Hadfield G, Vallor S (eds) Proceedings of the conference on fairness, accountability, and transparency. Association for Computing Machinery

Newell S (2015) Recruitment and selection. Managing human resources: personnel management in transition. Blackwell Publishing, Oxford

Njoto S (2020) Gendered bots? Bias in the use of artificial intelligence in recruitment. Research paper

O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown

Ong JH (2019) Ethics of artificial intelligence recruitment systems at the United States Secret Service. https://jscholarship.library.jhu.edu/handle/1774.2/61791

Peña A, Serna I, Morales A, Fierrez J (2020) Bias in multimodal AI: testbed for fair automatic recruitment. In: Conitzer V, Hadfield G, Vallor S (eds) Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. Association for Computing Machinery

Prince AE, Schwarcz D (2019) Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Rev 105:1257

Raghavan M, Barocas S, Kleinberg J, Levy K (2020) Mitigating bias in algorithmic hiring: evaluating claims and practices. In: Conitzer V, Hadfield G, Vallor S (eds) Proceedings of the 2020 conference on fairness, accountability, and transparency. Association for Computing Machinery

Raso FA, Hilligoss H, Krishnamurthy V, Bavitz C, Kim L (2018) Artificial intelligence & human rights: opportunities & risks. Available at SSRN: 3259344

Raub M (2018) Bots, bias and big data: artificial intelligence, algorithmic bias and disparate impact liability in hiring practices. Ark Law Rev 71:529

Raveendra P, Satish Y, Singh P (2020) Changing landscape of recruitment industry: a study on the impact of artificial intelligence on eliminating hiring bias from recruitment and selection process. J Comput Theor Nanosci 17(9):4404–4407


Ruwanpura KN (2008) Multiple identities, multiple-discrimination: a critical review. Fem Econ 14(3):77–105

Samuelson PA (1952) Spatial price equilibrium and linear programming. Am Econ Rev 42(3):283–303

Shaw J (2019) Artificial intelligence and ethics. Perspect: Policy Pract High Educ 30, 1–11

Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98:277–284

Smith B, Shum H (2018). The future computed. Microsoft

Tilcsik A (2021) Statistical discrimination and the rationalization of stereotypes. Am Sociol Rev 86(1):93–122

Timmermans S, Tavory I (2012) Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol Theory 30(3):167–186. https://doi.org/10.1177/0735275112457914

Upadhyay AK, Khandelwal K (2018) Applying artificial intelligence: implications for recruitment. Strateg HR Rev 17(5):255–258

van Esch P, Black JS, Ferolie J (2019) Marketing AI recruitment: the next phase in job application and selection. Comput Hum Behav 90:215–222. https://doi.org/10.1016/j.chb.2018.09.009

Xie X, Ma L, Juefei-Xu F, Chen H, Xue M, Li B, … See S (2018) Deephunter: Hunting deep neural network defects via coverage-guided fuzzing. Available at: arXiv preprint arXiv:.01266

Yang J, Im M, Choi S, Kim J, Ko DH (2021) Artificial intelligence-based hiring: an exploratory study of hiring market reactions. Japan Labor Issues 5(32):41–55

Yarger L, Payton FC, Neupane B (2019) Algorithmic equity in the hiring of underrepresented IT job candidates. Online Inf Rev 44(2):383–395

Yarger L, Smith C, Nedd A (2023) We cannot build equitable artificial intelligence hiring systems without the inclusion of minoritized technology workers. In: Handbook of gender and technology: environment, identity, individual, p 200

Zhang J, Chen Z (2023) Exploring human resource management digital transformation in the Digital Age. J Knowl Econ. https://doi.org/10.1007/s13132-023-01214-y

Zixun L (2020) From sexism to unfair hiring, how can AI treat people fairly? https://baijiahao.baidu.com/s?id=1662393382040525886&wfr=spider&for=pc

Zuiderveen Borgesius FJ (2020) Strengthening legal protection against discrimination by algorithms and artificial intelligence. Int J Hum Rights 24(10):1572–1593


Author information

Authors and affiliations.

College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, China

Zhisheng Chen


Contributions

ZSC conceived, wrote, and approved the manuscript.

Corresponding author

Correspondence to Zhisheng Chen .

Ethics declarations

Competing interests.

The author declares no competing interests.

Ethical approval

Approval was obtained from the ethics committee of Nanjing University of Aeronautics and Astronautics (NUAA). The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Informed consent

Each participant in this study willingly provided informed consent after being fully informed of the study’s purpose, methods, participants’ rights, and possible risks, thereby ensuring their comprehension and agreement.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Source of data and analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Chen, Z. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanit Soc Sci Commun 10 , 567 (2023). https://doi.org/10.1057/s41599-023-02079-x


Received : 18 February 2023

Accepted : 29 August 2023

Published : 13 September 2023

DOI : https://doi.org/10.1057/s41599-023-02079-x


This article is cited by

Strong and weak alignment of large language models with human values.

  • Mehdi Khamassi
  • Marceau Nahon
  • Raja Chatila

Scientific Reports (2024)

How AI hype impacts the LGBTQ + community

  • Dawn McAra-Hunter

AI and Ethics (2024)

The Promise and Challenges of AI Integration in Ovarian Cancer Screenings

  • Sierra Silverwood
  • Margo Harrison

Reproductive Sciences (2024)



Gender Discrimination in Hiring: Evidence from a Cross-National Harmonized Field Experiment


Gunn Elisabeth Birkelund, Bram Lancee, Edvard Nergård Larsen, Javier G Polavieja, Jonas Radl, Ruta Yemane, Gender Discrimination in Hiring: Evidence from a Cross-National Harmonized Field Experiment, European Sociological Review , Volume 38, Issue 3, June 2022, Pages 337–354, https://doi.org/10.1093/esr/jcab043


Gender discrimination is often regarded as an important driver of women’s disadvantage in the labour market, yet earlier studies show mixed results. However, because different studies employ different research designs, the estimates of discrimination cannot be compared across countries. By utilizing data from the first harmonized comparative field experiment on gender discrimination in hiring in six countries, we can directly compare employers’ callbacks to fictitious male and female applicants. The countries included vary in a number of key institutional, economic, and cultural dimensions, yet we found no sign of discrimination against women. This cross-national finding constitutes an important and robust piece of evidence. Second, we found discrimination against men in Germany, the Netherlands, Spain, and the UK, and no discrimination against men in Norway and the United States. However, in the pooled data the gender gradient hardly differs across countries. Our findings suggest that although employers operate in quite different institutional contexts, they regard female applicants as more suitable for jobs in female-dominated occupations, ceteris paribus , while we find no evidence that they regard male applicants as more suitable anywhere.

Women have traditionally been disadvantaged in the labour market, and much scholarship has documented patterns of and trends in gender inequalities (e.g. Weichselbaumer and Winter-Ebmer, 2005 ; Carlsson, 2011 ). However, women’s and men’s working lives have changed considerably since the mid-20th century ( Goldin, 2014 ). In nearly all OECD countries, women now have higher educational attainment than men ( OECD, 2015 ). In many countries, women comprise more than 40 per cent of the labour force ( Pew Research Center, 2017 ), and, although the process is slow, there is some evidence that the gender gap in earnings is converging ( Jacobsen, Khamis and Yuksel, 2015 ; Blau and Kahn, 2017 ; Neumark, 2018 ). People’s attitudes have also changed; in particular, we have seen decreasing support for traditional gender norms and increasing support for women’s employment ( Fernández, 2013 ).

All trends towards equalization notwithstanding, gender inequalities in the labour market still exist. Broadly construed, there are two explanations for why this is the case. First, women are treated differently from men within the same jobs, and second, women and men are sorted into different jobs, with lower earnings and fewer promotion prospects in typically female-dominated jobs. Studies have, however, shown that when men and women work in the same jobs in the same firms, gender differences in earnings are significantly diminished or even eradicated (e.g. Petersen and Morgan, 1995 ). This gives more credibility to the sorting explanation. Indeed, we know that occupational sex segregation is widespread ( Chang, 2004 ), and that men and women work in jobs with unequal compensation ( Levanon and Grusky, 2016 ). Scholars have therefore argued for the exigency to better understand the sorting process of men and women into different jobs ( Petersen and Saporta, 2004 ). We can think of two competing explanations. First, the supply side argument addresses educational and occupational choices: men and women choose different occupations and therefore apply for different jobs. Alternatively, men and women apply for the same jobs, but women are discriminated against when they apply for jobs with higher earnings, more responsibilities, etc. This demand side argument is related to employers’ hiring decisions, and this study aims to make a contribution to the literature by testing the discrimination explanation.

Hiring processes are contingent on employers’ decision-making, and crucial elements of their decisions usually remain opaque to researchers. Thus, measuring discrimination is difficult. Supply-side data can reveal gender gaps in labour market outcomes, but we can never rule out the possibility that observed gender gaps are driven by unobserved factors pertaining to the supply side rather than by employers’ discriminatory practices on the demand side. Therefore, experimental designs are more suitable for detecting discrimination ( Azmat and Petrongolo, 2014 ; Gaddis, 2018 ). While a weakness of laboratory experiments is external validity, field experiments can, through manipulation of one (or more) treatment variable(s), e.g. the applicant’s gender, provide real-world causal estimates of treatment effects on employers’ hiring decisions.

Previous Research

Social scientists have conducted randomized field experiments to detect hiring discrimination since the 1970s ( Riach and Rich, 2002 ). Perhaps surprisingly, previous studies on hiring discrimination of male and female job applications show very mixed findings. Table 1 gives an overview of the most relevant field experiments on gender discrimination in hiring, and we comment on the most important findings below.

Table 1 Previous field experiments on gender discrimination in hiring

Authors | Applicant ages | Country | No. of occupations | Blue/white collar | Qualifications | Occupations
… | 28 | Sweden | 15 | BW | Lo-Med-Hi | Store clerk, vehicle mechanic, cleaner, enrolled nurse, waitstaff, chef, truck/delivery driver, warehouse worker, preschool teacher, IT developer, B2B sales, accounting clerk, customer service, telemarketing, childcare
… | 24; 28; 38 | Spain | 6 | W | Med-Hi | Sales representatives, marketing technicians, accountant’s assistants, accountants, administrative assistants/receptionists, executive secretaries
…; Baert, De Pauw and Deschacht (2016) | NA | Belgium | 2 | W | High | Business administration for BA and business economics for MA
… | 20 | France | 1 | W | Low | Cashier works in retail stores
… | NA | Australia | 4 | W | Low | Waitstaff, data-entry, customer service, sales
… | …; 31 | Sweden | 18 | W | Med-Hi | Accountant/auditor, assistant nurse, chef, cleaner, elementary school teacher, computer specialist, engineer, financial assistant, high school teacher, nurse, preschool teacher, receptionist, salesperson, store personnel, or cashier
… (2012) | 23; 35; 47; 53 | Belgium | 12 | BW | Lo-Med-Hi | Industry and manufacturing; commerce, transport, and catering; communication, administration, and financial services; public sector, health care, non-profit, and other services
… (2014) | NA | Sweden | 11 | BW | Lo-Med-Hi | Cleaners, restaurant workers, accountants, nurses, primary school teachers, shop sales assistants, high school teachers, business sales assistants, construction workers, motor-vehicle drivers, and computer professionals
… | 35–70 | Sweden | 7 | BW | Low/Medium | Administrative assistants, chefs, cleaners, food serving and waitstaff, retail sales persons and cashiers, sales representatives, truck drivers
… | 24–29 | Sweden | 13 | BW | Lo-Med-Hi | Construction, motor-vehicle drivers, nurses, secondary school teachers (math, science, language), shop sales assistants, computer professionals, preschool teachers, business sales assistants, cleaners, accountants, restaurant workers
… (2012) | 25 | France | 1 | W | High | Software developers
… | 23–24 | France | 3 | B | Medium | Construction (masonry, plumbing, and electricity)
… | 37–39 | Spain | 18 | BW | Lo-Med-Hi | Delivery, waitstaff, sales clerks, computer technician, estate agents, office clerks, industrial engineers, tax advisors, physiotherapists, foremen/women, head chefs, store managers, heads of logistics, warehouse managers, supervising clerks, marketing directors, senior lawyers, senior nurses
… | NA | UK | NA | W | High | Professional and managerial positions
… | NA | US | 1 | W | Low | Waitstaff
… | 25; 37 | France | 12 | W | Lo-Med-Hi | Administrative technician, administrative clerk, accounting clerk, executive manager, portfolio manager, recovery manager, accounting manager; receptionist, counter clerk, customer consultant, sales manager, customer assistant
… | NA | Australia | 7 | BW | Med-Hi | Computer analyst programmer, computer operator, computer programmer, gardener, industrial relations officer, management accountant, payroll clerk
… | NA | UK | 4 | W | Med-Hi | Computer analyst, electrical and mechanical engineer, secretary, trainee chartered accountant
… | NA | US | 1 | W | High | Summer associate positions of large law firms (interpreted as quasi full-time job offer due to sectoral characteristics of summer associate positions as job entry into the law sector)
… | NA | Austria | 4 | W | Med-Hi | Network technicians, computer programmers, accountants, secretaries
… | NA | US | 1 | W | High | Tenure-track assistant professorships
… | NA | US | 8 | BW | Lo-Med-Hi | Administrative support, human resource associate, financial analyst, sales representative; housekeeping, customer service, manufacturing, maintenance/janitor
… | 25; 28 | China | 4 | W | Med-Hi | Engineers, accountants, secretaries, and marketing professionals

Note: B = blue collar; W = white collar.

Source: own elaboration.

Some experiments found advantages for men over women (Neumark, Bank and Van Nort, 1996; Petit, 2007; Zhou, Zhang and Song, 2013; Duguet, Loïc and Petit, 2017; González, Cortina and Rodríguez, 2019), whereas other experiments found advantages for women over men (Jackson, 2009; Carlsson, 2011; Carlsson and Eriksson, 2017). Some studies found hiring discrimination against both men and women, depending on parental status (Correll, Benard and Paik, 2007) or on the gender composition and type of job (Weichselbaumer, 2004; Yavorsky, 2019), while other studies found no gender discrimination at all (Albert, Escot and Fernández-Cornejo, 2011; Capéau et al., 2012; Carlsson et al., 2014; Carlsson and Eriksson, 2017; Bygren, Erlandsson and Gähler, 2017). Some studies found evidence of hiring discrimination against women in high-level jobs (Riach and Rich, 2002; Baert, De Pauw and Deschacht, 2016), while others did not (Williams and Ceci, 2015). These inconsistencies in findings might reflect true cross-national differences in gender discrimination. If institutional contexts, such as labour market policies, affect employers' hiring decisions, they might, all else equal, behave differently in different national contexts (Gangl and Ziefle, 2009). However, as these experiments are adapted to national contexts, and the included occupations vary considerably, inconsistencies in findings might also be an artefact of the heterogeneity of research designs.

More consistently across contexts, field experiments on gender discrimination show that men are discriminated against when they apply for female occupations, and women when they apply for male occupations (Riach and Rich, 2002, 2006; Booth and Leigh, 2010; Carlsson, 2011; Rich, 2014). ‘However, discrimination against men in “female” occupations was always much higher than that against women in “male” occupations’ (Riach and Rich, 2002: pp. F504–505). One study also found discrimination against men in female-dominated occupations, and no gender differences in hiring in mixed or male-dominated occupations (Ahmed, Granberg and Khanna, 2021). Thus, despite the obvious temptation, we cannot directly compare field-experimental evidence on gender discrimination across countries, due to heterogeneity in research designs across countries and time periods.

To address this limitation, we make use of a harmonized cross-national field experiment in six countries: Germany, the Netherlands, Norway, Spain, the United Kingdom, and the United States [The Growth, Equal Opportunities, Migration and Markets (GEMM) study, conducted by Lancee et al. , 2019b ]. 1 To our knowledge, the GEMM study is the first randomized field experiment with a deliberate cross-national comparative design ( Di Stasio and Lancee, 2019 ). These data allow us to provide new and rigorous evidence on gender discrimination in the first phase of the hiring process in six occupations in six countries. We contribute to the literature by analysing hiring discrimination within and across countries with different institutional characteristics.

Gender Discrimination: Theoretical Considerations

Hiring new employees always involves an element of risk-taking, as employers cannot know beforehand how an individual will perform. Employers rely on the information available in the cover letter and CV but may still be uncertain about the applicants’ skills. If employers believe members of a particular group are more productive than others, they might regard group membership as an informative cue. Obviously, employers’ expectations might be wrong, as they may rely on unfounded stereotypes about certain groups. In addition, even if employers’ beliefs are correct in terms of average group-level characteristics, individual job applicants may deviate substantially from a given group characteristic. 2

Discrimination against Women

Several perspectives explain why employers discriminate against women. We have grouped the relevant theoretical approaches into two broader categories: (i) cultural perspectives focusing on social norms and gender stereotypes, and (ii) the economic-rational perspective addressing statistical discrimination.

According to cultural perspectives, employers rely on gender stereotypes and gender-differentiated work expectations. In Joan Acker's seminal work on gendered organizations, gender inequality is an inbuilt characteristic of work organizations (Acker, 1990; Rudman and Phelan, 2008; Williams, Muller and Kilanski, 2012). Of particular importance is the norm of the 'ideal worker', who works full-time and has no family obligations. As women's work has traditionally been confined to the domestic sphere, this norm would disadvantage women in hiring situations (Acker, 1990). Even in large, modern organizations, there is evidence that women are held to different standards than men, which might explain the persistence of the glass ceiling in career promotion. The so-called 'paradox of meritocracy' (Castilla and Benard, 2010) implies that top-down directives oriented towards fairness and efficiency seem incapable of neutralizing discriminatory gender attitudes and may even reinforce the adverse effects of unconscious bias. Thus, despite societal trends towards gender convergence, theories about gendered organizations lead us to expect that men have an advantage over women in virtually all hiring processes.

The theory of statistical discrimination builds on the assumption that employers engage in cost-benefit calculations (Arrow, 1972; Phelps, 1972). This economic-rational perspective leads us to expect that employers assess the potential productivity of job applicants by their observable characteristics, such as human capital, and attribute average group characteristics to them to assess their unobservable characteristics (Fang and Moro, 2011). Because of productivity gains and because hiring is in itself costly, employers can be expected to look for stable workers. Given that women are more likely to be absent due to family responsibilities, employers would assess men's expected productivity as higher and discriminate against women, all else equal.

To summarize, both cultural and economic-rational perspectives lead us to expect discrimination against female applicants, primarily due to employers' beliefs about women's higher level of absence associated with childcare.

Discrimination against Men and Women

As noted above, previous experiments show differential gender discrimination across male- and female-dominated occupations. The cultural perspectives might explain why. Psychologists have developed the stereotype content model, which proposes that people tend to perceive men as competent but not warm, and women as warm but not competent (Glick and Fiske, 1996). People also perceive male-dominated jobs as requiring more competence and female-dominated jobs as requiring more warmth (Cuddy, Fiske and Glick, 2008). As these stereotypes are associated with both individuals and jobs, it is highly plausible that employers discriminate against applicants with the 'wrong' gender (Bobbitt-Zeher, 2011). Thus, 'if a caregiving job is thought to require warmth and men are thought to not possess much warmth, individuals may expect that a man will not be successful at a caregiving job' (Halper, Cowgill and Rios, 2019: p. 2). By the same logic, employers would form negative performance expectations of women in, for instance, technical jobs. Thus, employers' gender stereotypes might steer the process of matching jobs and job applicants. Theoretically, this argument is captured by the concept of sex typing of jobs (Bielby and Baron, 1986; Glick, Zion and Nelson, 1988; Reskin and Roos, 1990), the role congruency model (Cejka and Eagly, 1999), and the theory of gender categorization within work organizations (Ridgeway, 1997).

The theory of statistical discrimination can also explain differential gender discrimination across male- and female-dominated occupations. As noted, most employers are looking for stable employees, and studies have documented that workers' employment duration is sensitive to the sex typing of the job, so that women who enter a male-dominated occupation and men who enter a female-dominated occupation have disproportionately higher exit risks (Torre, 2014, 2018). Employers might be aware of this association and act accordingly. On closer inspection, therefore, the differences between the cultural and the economic-rational perspectives are rather subtle, as both perspectives are compatible with the assumption that gender stereotypes are exogenously given and that employers are looking for the best match between an applicant and a job. 3 Both perspectives, therefore, lead us to expect discrimination against the minority sex in sex-typed jobs, and no discrimination in gender-balanced jobs, ceteris paribus. The norm of the 'ideal worker', however, leads us to the generic expectation that women are discriminated against, independently of the sex typing of the job.

Theories on discrimination are primarily concerned with individual-level explanations, largely ignoring the role of country-level institutional contexts ( Reskin, 2000 ). However, the ‘opportunity structure for discrimination’ ( Petersen and Saporta, 2004 ) is likely to differ by macro-level factors, which we explain below.

The GEMM study is a fully harmonized field experiment on job hiring across six advanced economies that differ in a number of relevant macro-level characteristics. Because the number of policy and institutional characteristics varying across these countries is larger than the number of countries analysed, and because these characteristics are highly endogenous, it is not possible to identify the effect of a single policy or institutional dimension. Our goal is therefore more modest: we want to test whether estimates of hiring discrimination against male and female applicants are robust across different policy and institutional contexts. If they are, we conclude that, despite their institutional differences, there is a common pattern across these societies. If they are not, we interpret cross-national variation by considering country-specific characteristics that may affect employers' propensity to discriminate. We consider three macro dimensions: (i) general labour market regulations and conditions, (ii) family policies, and (iii) cultural norms.

First, labour market regulations can influence employers' hiring decisions by affecting the costs of job mismatch. When these costs are high, employers are likely to be more risk averse and to draw on statistical discrimination to reduce contractual hazards. If employment contracts with low termination costs are available to employers and if such contracts can be used for long time-periods, the match-or-miss pressure for employers will wane, thus reducing the impact of risk aversion on hiring decisions. The included countries differ markedly in the extent of labour market regulation (see Table 2), and we expect more gender discrimination related to the sex typing of jobs in countries with higher dismissal costs, such as Germany and the Netherlands. Another potential factor affecting the costs of discriminating is labour market tightness. If employers have a large pool of potential candidates, they are more prone to discriminate, even if only as a heuristic strategy to simplify the screening procedure (Birkelund, 2016), than when they have a restricted supply of workers (Baert, De Pauw and Deschacht, 2016). Spain is an outlier, with a high unemployment rate, which could fuel hiring discrimination.

Table 2 Societal factors potentially associated with gender discrimination propensities

Country | (a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) | (i) | (j)
Germany | 2.5 | 58 | 0.6 | 3.75% | 0.91 | 36.6% | 29.6 | 53.6% | 66 | 0.778
Netherlands | 2.8 | 42 | 0.6 | 4.89% | 0.89 | 58% | 29.9 | 70.5% | 14 | 0.737
Spain | 2.0 | 16 | 0.5 | 17.37% | 0.84 | 21.6% | 30.9 | 61% | 42 | 0.746
Norway | 2.2 | 87 | 1.3 | 4.21% | 0.96 | 27.7% | 29.3 | 90.2% | 8 | 0.83
United Kingdom | 1.2 | 39 | 0.6 | 4.38% | 0.89 | 36.4% | 28.9 | 61.9% | 66 | 0.77
United States | 0.50 | 0 | 0.3 | 4.37% | 0.86 | 17.2% | 26.8 | 61.4% | 62 | 0.718

Notes (column definitions):

(a) OECD index of regulation on individual dismissal of workers with regular contracts; 0 = very loose, 5 = very strict. The index refers to the year 2013 (OECD, 2020a).

(b) Total duration for which mothers can be on paid leave; data from the OECD for 2013 (OECD, 2020b).

(c) Includes public spending on early childhood education and care; OECD Family Database for 2015 or latest available year (OECD, 2020c).

(d) Data from the OECD for 2019 (OECD, 2019).

(e) OECD Short-Term Labour Market Statistics 2017 (OECD, 2017).

(f) Data from the OECD, referring to 2018 (OECD, 2020d).

(g) Data from the OECD Family Database for 2015 or latest available year (OECD, 2020c).

(h) Source: own calculations. 'When jobs are scarce, men should have more right to a job than women', per cent (strongly) disagree minus per cent (strongly) agree. Averages based on available data, European Values Study 2008 and 2017, as well as World Values Survey Waves 5 (2005–2009) and 6 (2011–2015).

(i) Numbers provided by Hofstede Insights, comparing countries' scores on the Masculinity Index (see Hofstede Insights, 2020).

(j) The World Economic Forum: The Global Gender Gap Report 2017; Global Gender Gap Index (The World Economic Forum, 2017).

Family policies can potentially influence employers' hiring decisions by affecting the costs associated with childbirth. Although often considered mutually complementary interventions, public support for childcare (through direct provision or subsidies) and parental leave policies actually have very different implications. Childcare support policies likely reduce the duration of post-birth work interruptions, and, because they are funded through general taxes, their costs are not borne by employers in particular. In contrast, generous maternity leave policies that establish mandatory job retention over a specified period around childbirth impose significant nonwage costs on employers, which will be greater for tasks where interruptions provoke severe human capital depreciation (Stier, Lewin-Epstein and Braun, 2001; Mandel and Semyonov, 2006; Gangl and Ziefle, 2009). The probability that employers discriminate against women should thus be greater in contexts where maternity leave arrangements are generous, such as Norway, and in contexts with less public provision of childcare, such as the United States (see Table 2).

Our countries of study also differ with respect to gender norms, which are associated with labour market and family policies (see Table 2). There is a close association between female employment rates and support for gender stereotypes (Fortin, 2005; Polavieja, 2015), and we expect more hiring discrimination against women in countries with higher support for traditional gender attitudes, such as Germany. Notably, such norms go beyond mere attitudinal indicators and include sex-typical behaviours that can shape expectations (Polavieja, 2012). Relevant behaviours with a normative dimension include fertility behaviour (e.g. average age at first birth) and gender differences in employment rates and working hours that can 'inform' employers about the 'risks' of employing women (Bygren, Erlandsson and Gähler, 2017; Becker, Fernandes and Weichselbaumer, 2019). The selected countries differ in both gender attitudes and behaviours potentially affecting employers' hiring decisions.

Table 2 summarizes the indicators that characterize the countries included in the study. The list of indicators is not exhaustive, but the table illustrates the degree of variation across these countries. In accordance with the above theories, we expect the probability of observing gender discrimination in hiring to be higher in macro-level contexts where the costs of job mismatch are high due to labour-market regulations or conditions, and where traditional gender norms prevail, as expressed through attitudes and values or through gendered behaviours. These arguments, based on a small selection of the contextual measures that could have been included, are tentative. Moreover, contextual factors are only relevant if employers know about them or act upon related beliefs. Both assumptions are disputable (Birkelund et al., 2019). Hence, our aim is not to identify the effect of any single dimension, which would be impossible given the small sample of countries, but to determine whether our findings hold across different country contexts, and, in the event they do not, whether we can meaningfully interpret national variation by accounting for these institutional, cultural, and economic dimensions.

From 2016 to 2018, we sent fictitious cover letters and CVs to 21,318 vacant jobs advertised on national online platforms, and gathered and coded all responses from the employers (for an overview of the data, see Lancee et al., 2019a,b). The experiment was primarily designed to measure hiring discrimination against immigrants and their descendants. 4 To compare their callbacks with those received by the majority population, 25 per cent of the applications in each country carried a majority identity, 4,279 in total; these are the data used here. The fictitious job applicants, hereafter applicants, were given education levels that matched the (average) job requirements, which varied from a high school diploma to a bachelor's degree. All applicants had CVs with four years of occupation-specific work experience at two different employers, 5 and we varied their age between 22 and 26 years. 6 The design is unmatched, which means that one application was sent to each vacancy. Some field experiments send two (or more) applications per vacancy, allowing the researchers to measure individual employer behaviour in addition to average employer behaviour within occupations and countries, which is what we measure here. Although both matched and unmatched designs have distinct advantages, the strength of the unmatched design is that one can easily implement multiple treatments. Furthermore, the risk of detection is minimal. There is also evidence that unmatched designs provide the most comparable and externally valid estimates of hiring discrimination, by avoiding potential issues of induced competition (see Vuolo, Uggen and Lageson, 2018; Lancee, 2019; Larsen, 2020 for discussions), and they minimize harm to employers by reducing the time spent reading fictitious applications. Applications were sent to nationally advertised job vacancies within each country, which means that, although limited by occupational constraints (six occupations), the study covers national labour markets.
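To make the unmatched design concrete, the following minimal sketch shows how a single application with a randomly assigned gender could be generated for each vacancy. It is an illustration only, not the GEMM implementation; the function and field names are hypothetical.

```python
import random

random.seed(42)  # reproducibility for the illustration only

def draw_application(vacancy_id: str) -> dict:
    """Create one fictitious application for a vacancy (unmatched design:
    exactly one application per vacancy, with gender assigned at random)."""
    return {
        "vacancy_id": vacancy_id,
        "female": int(random.random() < 0.5),  # treatment: 1 = female, 0 = male
        "age": random.randint(22, 26),         # age range used in the GEMM study
        "experience_years": 4,                 # occupation-specific experience
    }

# One application per vacancy; printing a few rows illustrates the idea.
applications = [draw_application(f"vacancy-{i:05d}") for i in range(5)]
for application in applications:
    print(application)
```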

Occupations

The occupations included are as comparable across the six countries as possible. The selected occupations have different levels of customer contact and different educational requirements. We were looking for occupations that were available on job search platforms within each country, and for which there were sufficient numbers of vacant jobs within a time limit of at most two years. To decide which occupations to choose, we discussed a range of occupational covariates that one might not need to worry about in national studies, but which could be highly relevant in a cross-national design. We decided to exclude jobs in the public sector, which often have their own recruitment organizations. This implies that many female-dominated occupations, such as nurses and teachers, are not included in our data, since they are mostly found in the public sector. We also decided to avoid occupations that often rely on informal recruitment of workers. This implies that many male-dominated occupations, such as mechanics or plumbers, are not included in our data, since they seem to rely on informal networks when recruiting new workers. Because we needed the same occupations across all countries, it was enough for one of these considerations to apply in a single country to influence the selection for all of them.

Following these discussions, we carefully considered the comparability of job tasks and content, and we decided to include four occupations with low or middle qualifications (cook, receptionist, store assistant, and payroll clerk), and two occupations that require education up to a bachelor's degree (software developer and sales representative). Three of these occupations involve relatively little customer contact (software developer, payroll clerk, and cook), whereas the other three involve more customer contact (sales representative, receptionist, and store assistant). The following occupations are included (ISCO codes in parentheses): cook (512), payroll clerk (2411, 3313, 411, 412), receptionist (422), sales representative (3322), software developer (252), and store assistant (522). These occupations cover approximately 15–20 per cent of the work force within each country.
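For reference, the occupation set described above can be represented as a simple lookup, for example when tagging vacancies during data collection or when adding occupation controls later. The structure below is purely illustrative and not part of the study's materials.

```python
# The six GEMM occupations with the ISCO codes listed above; the dictionary
# itself is an illustrative convenience structure, not part of the study.
OCCUPATIONS = {
    "cook": ["512"],
    "payroll clerk": ["2411", "3313", "411", "412"],
    "receptionist": ["422"],
    "sales representative": ["3322"],
    "software developer": ["252"],
    "store assistant": ["522"],
}

HIGH_CUSTOMER_CONTACT = {"sales representative", "receptionist", "store assistant"}

for occupation, isco_codes in OCCUPATIONS.items():
    contact = "high" if occupation in HIGH_CUSTOMER_CONTACT else "low"
    print(f"{occupation:20s} ISCO {', '.join(isco_codes):22s} customer contact: {contact}")
```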

Many occupations are likely to comprise different sex-typed jobs, and the occupations included here vary in their gender profiles. 7 Supplementary Table S1 provides an overview of the gender distribution in each country within each occupational category, based on national statistics from the year before the field experiment took place (Lancee et al., 2019b). We note that receptionists and payroll clerks are female dominated, in particular in the Netherlands, Norway, and the United States, whereas software developers are clearly male dominated in all countries.

The size of the labour market differs across these countries, and as the data collection took place within a limited time, the availability of job vacancies varied. This implies that, for some countries, some occupations are under-represented in the data. For instance, Norway has a low share of receptionists (4 per cent), whereas Spain has a low share of software developers (6 per cent) and sales representatives (7 per cent). We therefore add occupational controls in all our analyses.

Treatment Variable

Gender, our main treatment variable, was randomly assigned to the job applications and is coded '1' for female and '0' for male applicants. 8 The experiment also included other treatments (see Lancee et al., 2019a). As these treatments are orthogonal to gender, there is no need to control for them.

Dependent Variable: Employer Response

Our main dependent variable is employer callback, which includes an invitation to an interview, an invitation to a pre-interview, and/or a request for more information. In the Supplementary Information, we include analyses using only 'invitation to an interview', a stricter measurement of callback. As there are cross-national differences in the likelihood that employers ask job applicants for an interview (see Lancee et al., 2019a), we prefer the broader definition of callbacks, which includes an invitation to a pre-interview and/or a request for more information. A callback rate of 0.49 means that 49 per cent of the applicants received a callback. We also calculate gender ratios, dividing female by male callback rates. A gender ratio above 1 means that male applicants are discriminated against, whereas a gender ratio below 1 means that female applicants are discriminated against.
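As a worked illustration of these quantities, the sketch below computes callback rates by gender and the corresponding gender ratio from a tiny, invented response table; the column names are hypothetical and do not correspond to the GEMM data files.

```python
import pandas as pd

# Invented example data: one row per application (unmatched design).
# 'callback' follows the broad definition used in the text (interview,
# pre-interview, or request for more information).
responses = pd.DataFrame({
    "country":    ["Germany"] * 6,
    "occupation": ["Store assistant"] * 6,
    "female":     [0, 0, 0, 1, 1, 1],
    "callback":   [0, 1, 0, 1, 1, 0],
})

rates = responses.groupby("female")["callback"].mean()
gender_ratio = rates.loc[1] / rates.loc[0]  # female rate divided by male rate

print(rates)         # callback rate for men (0) and women (1)
print(gender_ratio)  # ratio > 1: men receive relatively fewer callbacks
```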

Estimation Strategy

To examine cross-country variation in hiring discrimination, we start by documenting callback ratios for each occupation in each country (see Table 3). We then estimate country-specific linear probability regression models, regressing callbacks on gender (see Supplementary Table S2 and Figure 1). 9 The gender coefficient provides an estimate of gender discrimination in hiring within each country, with its associated standard error.
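A minimal sketch of such a country-specific linear probability model is given below, assuming a data frame with illustrative column names ('callback', 'female', 'occupation', 'country'). It sketches the estimation strategy described above and is not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def country_lpm(responses: pd.DataFrame, country: str):
    """Linear probability model of callback on gender for one country,
    with occupation fixed effects as controls (cf. the country-specific
    models described in the text)."""
    subset = responses[responses["country"] == country]
    model = smf.ols("callback ~ female + C(occupation)", data=subset)
    # Heteroscedasticity-robust standard errors are advisable for an LPM.
    return model.fit(cov_type="HC1")

# Example usage (assuming a 'responses' data frame as sketched earlier):
# result = country_lpm(responses, "Germany")
# print(result.params["female"], result.bse["female"])
```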

Figure 1 Effect of gender on callback probability. Note: Coefficients with 95 per cent confidence intervals from linear probability models estimated for each country, including occupation controls (Supplementary Table S2, Models 1–6).

Table 3 Callback ratios by country, occupation, and gender

Country | Occupation | N (male/female) | Callback rate, male | Callback rate, female | Callback gender ratio | P-value
Germany | Cook | 66/55 | 0.77 | 0.67 | 0.87 | 0.36
Germany | Payroll clerk | 61/62 | 0.16 | 0.29 | 1.77 | 0.13
Germany | Receptionist | 61/66 | 0.57 | 0.79 | 1.37 | 0.01
Germany | Sales representative | 49/72 | 0.47 | 0.42 | 0.89 | 0.79
Germany | Software developer | 58/54 | 0.67 | 0.81 | 1.21 | 0.16
Germany | Store assistant | 51/62 | 0.25 | 0.48 | 1.90 | 0.01
Netherlands | Cook | 113/133 | 0.80 | 0.76 | 0.95 | 0.71
Netherlands | Payroll clerk | 97/89 | 0.26 | 0.35 | 1.35 | 0.29
Netherlands | Receptionist | 62/50 | 0.27 | 0.46 | 1.68 | 0.06
Netherlands | Sales representative | 83/68 | 0.37 | 0.47 | 1.26 | 0.39
Netherlands | Software developer | 82/72 | 0.83 | 0.78 | 0.94 | 0.65
Netherlands | Store assistant | 65/68 | 0.20 | 0.44 | 2.21 | 0.00
Norway | Cook | 36/41 | 0.33 | 0.34 | 1.02 | 1.00
Norway | Payroll clerk | 46/43 | 0.33 | 0.26 | 0.78 | 0.71
Norway | Receptionist | 9/11 | 0.44 | 0.18 | 0.41 | 0.35
Norway | Sales representative | 91/84 | 0.25 | 0.32 | 1.27 | 0.51
Norway | Software developer | 59/53 | 0.46 | 0.51 | 1.11 | 0.82
Norway | Store assistant | 35/39 | 0.09 | 0.21 | 2.39 | 0.20
Spain | Cook | 175/189 | 0.22 | 0.23 | 1.05 | 0.96
Spain | Payroll clerk | 86/81 | 0.14 | 0.26 | 1.86 | 0.07
Spain | Receptionist | 76/51 | 0.05 | 0.24 | 4.47 | 0.00
Spain | Sales representative | 34/35 | 0.38 | 0.31 | 0.82 | 0.79
Spain | Software developer | 28/23 | 0.57 | 0.52 | 0.91 | 0.92
Spain | Store assistant | 105/76 | 0.10 | 0.17 | 1.80 | 0.21
United Kingdom | Cook | 61/49 | 0.41 | 0.45 | 1.10 | 0.90
United Kingdom | Payroll clerk | 115/93 | 0.06 | 0.29 | 4.77 | 0.00
United Kingdom | Receptionist | 53/51 | 0.19 | 0.12 | 0.62 | 0.53
United Kingdom | Sales representative | 67/71 | 0.18 | 0.21 | 1.18 | 0.86
United Kingdom | Software developer | 64/50 | 0.30 | 0.38 | 1.28 | 0.57
United Kingdom | Store assistant | 49/63 | 0.33 | 0.17 | 0.53 | 0.10
United States | Cook | 37/40 | 0.54 | 0.45 | 0.83 | 0.65
United States | Payroll clerk | 55/34 | 0.13 | 0.15 | 1.16 | 0.96
United States | Receptionist | 46/38 | 0.15 | 0.21 | 1.38 | 0.72
United States | Sales representative | 37/39 | 0.38 | 0.28 | 0.75 | 0.59
United States | Software developer | 36/46 | 0.36 | 0.35 | 0.96 | 0.99
United States | Store assistant | 43/51 | 0.26 | 0.33 | 1.30 | 0.62

Table 3 shows the callback rates and related gender ratios by country and occupation. We first note that out of 36 possible outcomes, 23 favour females, as indicated by callback gender ratios above 1. This is interesting, but due to the small sample for each occupation within each country, most of these outcomes are not significant by conventional standards (see the right-hand column). In Germany, we find statistically significant hiring discrimination against male applicants for receptionist and store assistant jobs, with callback ratios of 1.4 and 1.9, respectively. In the Netherlands, we find evidence of hiring discrimination against male applicants for store assistant jobs, with a callback ratio of 2.2. In Spain, we find clear evidence of hiring discrimination against males in two occupations, with callback ratios of 1.9 (payroll clerk) and 4.5 (receptionist). In the United Kingdom, we find strong evidence of hiring discrimination against males in payroll clerk jobs (callback ratio of 4.8, the highest of all). Interestingly, in the data, we find no evidence of gender discrimination in hiring in Norway or the United States. Thus, the evidence shows hiring discrimination against male, not female, job applicants in one to three occupations within four of the six countries.

Based on country-specific regression models, Figure 1 (and Supplementary Table S2) shows the probability of receiving a callback separately for each country. According to these estimates, we find evidence of hiring discrimination against male applicants in the United Kingdom, Spain, Germany, and the Netherlands. The gender differences range from 0 percentage points in the United States to 9 percentage points in Germany. Thus, we observe gender discrimination in hiring against men in four out of six countries. 10

As shown in Supplementary Table S3, only one of the contrasts is significant, namely, that between the United States and Germany, the countries with the lowest and highest gender coefficients, respectively. However, given that there are 30 contrasts in this comparison, we would expect to observe one to two significant outcomes (5 per cent) by chance.
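The expected number of chance findings can be checked with a quick back-of-the-envelope calculation. The sketch below assumes independent tests, which pairwise contrasts are not, so it should be read only as a rough illustration of the reasoning.

```python
from scipy.stats import binom

n_contrasts = 30   # number of contrasts referred to in the text
alpha = 0.05       # conventional significance level

expected_false_positives = n_contrasts * alpha            # 30 * 0.05 = 1.5
prob_at_least_one = 1 - binom.pmf(0, n_contrasts, alpha)  # about 0.79 under independence

print(expected_false_positives, round(prob_at_least_one, 2))
```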

Thus far, the field experiment has revealed, first, that employers discriminate against male but not female applicants. Second, although the gender coefficients are statistically significant in four out of six countries (the United Kingdom, Germany, the Netherlands, and Spain), we find no convincing evidence of cross-national differences in gender discrimination. 11 Given the widespread evidence of female labour market disadvantage and the large cross-national variation in structural, institutional, and cultural dimensions documented in Table 2, our finding of no cross-national differences in hiring discrimination is surprising. However, no previous study has examined this topic in a rigorous comparative way.

When using invitation to an interview, the stricter definition of callbacks, as the dependent variable, we find smaller country differences in gender discrimination in hiring (compare Figure 1 with Supplementary Figure S1). As the stricter version of callback (invitation to an interview) is less frequent than the wider version, the standard errors for these estimates are slightly larger. This means that, for the interview variable, the 95 per cent confidence intervals are slightly wider, and that only in Spain is the estimate statistically significant.

Despite recent changes, on average, women still have lower earnings and worse career prospects than men. These well-known facts are documented in reliable and nationally representative data, such as labour force surveys and register data. The key question is why. Broadly speaking, two explanations have been provided. First, women and men might sort into different jobs because of their different educational and occupational choices, and their different work–life balance preferences and constraints, all of which add up to different employment trajectories and outcomes. This is the supply-side story. Second, men and women might sort into different jobs because employers discriminate against women, particularly in the best-paid jobs. According to this demand-side explanation, hiring discrimination against women would be an important explanation for women's labour-market disadvantage. Because studies based on observational data cannot empirically adjudicate between supply- and demand-side explanations, there is a need for field experiments to provide reliable and valid estimates of employers' hiring discrimination.

Interestingly, the story jointly told by previous field experiments clashes with the conventional account of female disadvantage. It is often the fictitious male applicants, not the females, who are discriminated against in hiring processes. In particular, there is evidence that women are favoured in female-dominated occupations. However, the heterogeneity of previous studies, in terms of the occupations included, the timing of the studies, and the geographical level (local or national) at which they took place, makes comparisons difficult. Against this background, we made use of a harmonized field experiment in six countries to provide comparable, reliable, and balanced cross-national documentation of hiring discrimination against men and women.

The field experimental data show no evidence of hiring discrimination against women in any of the occupations in any of the countries included. The countries vary in a number of institutional, economic, and cultural dimensions potentially affecting employers' likelihood of discriminating against women. We also included occupations varying in skill requirements and customer contact. And, as documented in footnote 7, the manual job content of our occupations varies from high (cooks) to low (payroll clerks). The findings reported in this study therefore constitute an important and robust piece of evidence that young women are not discriminated against in the first phase of the hiring process in any of the occupations studied in any of the countries studied.

Second, we found hiring discrimination against men in Germany, the Netherlands, Spain, and the United Kingdom, where male applicants were less likely to receive a callback when they applied for jobs as store assistants (Germany and the Netherlands), receptionists (Spain and Germany), and payroll clerks (Spain and the United Kingdom). We found no hiring discrimination against men in Norway and in the United States. However, when pooling the data, we found no statistically significant differences across countries, perhaps with the exception of the contrast between Germany and the United States.

Understanding Gender Discrimination

With these findings in mind, how can we better understand gender discrimination in hiring? We did not find any support for the generic belief that women are disadvantaged in hiring processes, as implied both in models of cultural stereotypes and statistical discrimination, where employers are assumed to believe that women are potentially unstable workers, more likely to quit their jobs to attend to their families and/or generally less committed to their firms. Gender stereotypes in which women are seen as mothers and housewives seem less important in hiring processes today than in the past. According to our findings, these stereotypes seem not to operate at all. We suggest a few tentative interpretations of why this is the case. First, most women today are not primarily homemakers. Second, women are more likely to be hiring agents, in particular in female-dominated occupations, and we cannot rule out the possibility of in-group (same-gender) favouritism benefiting female candidates. Third, in female occupations, hiring agents might find women more stable employees than men, who might be more likely to pursue a career, thereby leaving the job they were hired for. We should also remember that the job candidates we constructed are young workers with only four years of work experience. This means the presented evidence does not preclude the possibility of discrimination against women in hiring, earnings, or promotion opportunities later in the career.

Interestingly, the evidence on hiring discrimination against men would seem compatible with existing theories about gender stereotypes that were formulated to account for women's disadvantage. Perspectives emphasizing the sex typing of jobs, gender categorization within work organizations, role congruency, and stereotype content all seem relevant for explaining discrimination against men in the matching process. Theoretically, these cultural perspectives are also compatible with the economic model of employers as (boundedly) rational actors who try to find the best match between job tasks and job applicants. If employers perceive certain jobs as more appropriate for women, male applicants, even if formally qualified, may be devalued because employers believe that they are poor matches for the sex-typed job tasks. For jobs that are not sex-typed, gender stereotypes do not seem to matter in the matching process.

The above-mentioned theories should lead to symmetrical expectations of hiring discrimination against applicants with the 'wrong' sex in sex-typed jobs. Thus, they cannot help us understand why women were not discriminated against in the male-dominated occupation we included: software developers, an occupation that requires continuous training and where job disruptions are particularly hazardous for employers. To understand this, we can only speculate. It could be that the IT sector is more tolerant, pioneering a new work–life gender-egalitarian culture (Faulkner, 2009, but see Bertogg et al., 2020). Alternatively, given the low proportion of women who enter STEM fields, IT employers might believe female applicants are positively selected on unobserved characteristics. Another possibility is that employers worry that they hold implicit or hidden biases against women and, as a result, overcorrect and give women an advantage in hiring. Whatever the reason, finding no hiring discrimination against women in IT jobs constitutes an important challenge to both cultural and economic theories of 'gender' discrimination.

However surprising, the presented evidence is not at odds with previous research on hiring discrimination. The key to explaining the divergent results likely lies in the occupations studied. For balanced studies, including female-dominated, male-dominated, and gender-neutral occupations, the aggregate outcome would be close to zero gender discrimination in hiring. For more unbalanced studies, like the GEMM study, which includes two clearly female-typed occupations and only one strongly male-dominated occupation, we might expect an aggregated pattern showing hiring discrimination against men. In principle, the same logic should apply to unbalanced studies including a higher proportion of male-dominated occupations, but then we would expect an aggregated pattern of hiring discrimination against females. Yet the findings regarding the male-dominated occupation we included cast doubt on the symmetrical nature of hiring discrimination by gender. Interestingly, when scholars plan to study gender differences in hiring discrimination, we tend to think about discrimination against women, not men, yet previous experiments seem to include more female- than male-dominated occupations. More research including more occupations is needed.

Lack of Cross-National Variation

Despite differences in labour market conditions, family policies, and cultural norms, we found no clear evidence of cross-national variation in hiring discrimination. An explanation might be that the associations between gender stereotypes and jobs, while culturally embedded, are fairly universal across advanced Western economies (but see Supplementary Table S1 for national variations in occupational gender distributions), and that hiring agents across these societies are similarly influenced by these views. Given the embeddedness of job-specific gender stereotypes, one might be pessimistic about the possibilities for policy reforms to encourage gender balance. In addition, the implications of our study appear even more serious given that male-dominated occupations tied to industrial society are gradually vanishing. On the other hand, if gender-neutral occupations are growing in size, gender stereotypes will become less important over time. Thus, we have a cultural and a structural argument, and future research would benefit from addressing both.

Naturally, this study has limitations. First, field experiments investigate discrimination in the initial stages of the hiring process and do not give information about who gets the jobs, at what wages, and with what career opportunities. Second, the field experiment provides information about the outcomes of job applications for young applicants 22–26 years of age, and we cannot know what the situation would have looked like if we had included older fictitious applicants. Similarly, we have not tested employers' reactions to applicants with family obligations. It should be noted, though, that a Swedish study including older applicants found no difference in employers' reactions to mothers and fathers (Bygren, Erlandsson and Gähler, 2017).

Field experiments cannot cover the whole labour market, and the outcomes of these experiments are only representative of the included occupations. The GEMM study includes six occupations, requiring educational levels varying from a high school diploma to a bachelor's degree. With a limited number of male and female applications within each occupation, we refrain from analysing in more detail the variation in types of jobs within occupations (e.g. managerial jobs).

We believe that the implications of our findings are important. In particular, we need to update our knowledge of gender discrimination and reconsider the belief that women are always the disadvantaged group. This belief might have been correct earlier, but today, at least for the occupations we examined, we found no evidence of hiring discrimination against female job applicants in any of the six countries included. Rather, we observed hiring discrimination against males in female-dominated jobs, whereas female applicants were favoured in female-dominated occupations and not discriminated against in the other occupations we included. Future research should explore in more depth the mechanisms associated with this (reversed) gender gap in hiring discrimination and delineate its boundary conditions.

For information on the ‘Growth, Equal Opportunities, Migration and Markets’ (GEMM) project, financed by Horizon 2020, see http://gemm2020.eu/ .

If employers act upon a perceived group difference in the variance of unobserved expected productivity, field experimental evidence of discrimination may not be very informative ( Heckman and Siegelman, 1993 ). Using the method proposed by Neumark (2012) , Baert (2015) found no evidence of this bias related to gender heterogeneity.

Several concepts have been introduced to differentiate so-called error discrimination ( England, 1994 ) and stereotype-based discrimination ( Bobbitt-Zeher, 2011 ) from the economic-rational model, but the theory of statistical discrimination (albeit with bounded rationality) can easily accommodate the notion of stereotypes affecting employers’ hiring decisions.

See Di Stasio and Larsen (2020) for a study of the combined effects of ethnicity and gender on employers' callbacks, based on the GEMM occupations.

To find suitable names for the applicants, an online name search was conducted on the websites of national name registers and the most frequent names in the applicants’ birth year were listed. Names were then carefully chosen to avoid connotations to religion or class. Finally, we used official register data to identify the most common surnames in each country. For the United States, we used census data ( U.S. Census Bureau, 2010 ) to ensure that employers would identify the names as typical white names.

The age used for fictitious job applicants in field experiments of gender discrimination in hiring varies. See Table 1 .

The O*NET dataset (previously called the Dictionary of Occupational Titles) provides very detailed information on the task content of occupations in the United States. It covers 449 detailed occupations and provides 277 descriptors for each occupation. Using these data, we performed a factor analysis to measure the manual skill content of the jobs. We converted the 2000 US Census occupations into their ISCO-88 four-digit equivalents by means of a crosswalk provided by the Centre for Longitudinal Studies, Institute of Education, University of London. We found that the manual job content scores of the GEMM occupations vary between 0.76 (cooks) and 0.23 (payroll clerks). See also Ortega and Polavieja (2012).
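As an illustration of this kind of dimensionality reduction, the sketch below extracts a single latent factor from a synthetic descriptor matrix of the same shape as the one described in this note. It is not the authors' procedure: the real analysis uses the actual O*NET descriptors and then merges the occupation-level scores with a Census-to-ISCO-88 crosswalk.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the O*NET descriptor matrix: 449 occupations,
# 277 descriptors per occupation (random numbers for illustration only).
rng = np.random.default_rng(0)
descriptors = pd.DataFrame(
    rng.normal(size=(449, 277)),
    index=[f"occupation_{i}" for i in range(449)],
)

# Standardize the descriptors and extract one latent factor per occupation,
# analogous to a 'manual content' score.
X = StandardScaler().fit_transform(descriptors.values)
factor = FactorAnalysis(n_components=1, random_state=0)
manual_score = pd.Series(factor.fit_transform(X)[:, 0], index=descriptors.index)

# In the real analysis these scores would then be mapped to ISCO-88 codes
# via a crosswalk and averaged within ISCO categories.
print(manual_score.head())
```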

We would have needed a much larger sample if we were to include more than a binary gender variable.

Due to the well-known problems with logistic regression ( Mood, 2010 ), especially concerning comparisons across samples and interaction effects, we do not present logit models here. The results are generally similar and are available upon request.
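To illustrate the point about linear probability models versus logit, the sketch below compares, on invented data, the LPM coefficient for gender with the logit average marginal effect; the data and effect size are fabricated for the illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: a binary gender indicator and a callback probability that
# is 5 percentage points higher for women.
rng = np.random.default_rng(1)
n = 2000
data = pd.DataFrame({"female": rng.integers(0, 2, n)})
data["callback"] = rng.binomial(1, 0.35 + 0.05 * data["female"])

lpm = smf.ols("callback ~ female", data=data).fit(cov_type="HC1")
logit = smf.logit("callback ~ female", data=data).fit(disp=False)

print(lpm.params["female"])          # LPM estimate of the gender gap
print(logit.get_margeff().margeff)   # logit average marginal effect (comparable scale)
```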

Using a narrower definition of callbacks, see Supplementary Information , we find significantly higher callbacks to women (0.07 and 0.06) in Spain and the Netherlands, whereas the gender coefficient, albeit positive in favour of females, is not significant in the other countries.

The constant terms in Supplementary Table S2 indicate the probability of receiving a callback for male applicants. They vary from low (Spain: 0.19), via moderately low in the United Kingdom, Norway, and the United States (values between 0.32 and 0.50), to high in Germany and the Netherlands (0.70–0.74). These cross-national differences in baseline callbacks reflect country-level differences in demand for labour and/or a better fit of the applications.

Supplementary data are available at ESR online.

This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 649255; the Research Council of Norway, grant number 287016; and The Netherlands Organization for Scientific Research (NWO) (016.Vidi.185.041). We thank Laura García Llamas and Louis Klobes for valuable research assistance.

Conflict of interest

We are aware of no potential conflict of interest that might raise questions of bias in our work.

Acker J. ( 1990 ). Hierarchies, jobs, bodies: a theory of gendered organizations . Gender & Society , 4 , 139 – 158 .


Ahmed A., Granberg M., Khanna S. (2021). Gender discrimination in hiring: an experimental reexamination of the Swedish case. PLoS ONE, 16, e0245513.

Albert R. , Escot L. , Fernández-Cornejo J. A. ( 2011 ). A field experiment to study sex and age discrimination in the Madrid labour market . International Journal of Human Resource Management , 22 , 351 – 375 .

Arrow K. J.   1972 . Models of job discrimination. In Pascal A. H. (Eds.), Racial Discrimination in Economic Life . New York : Lexington Books , pp. 83 – 102 .


Azmat G. , Petrongolo B. ( 2014 ). Gender and the labor market: what have we learned from field and lab experiments?   Labour Economics , 30 , 32 – 40 .

Baert S. ( 2015 ). Field experimental evidence on gender discrimination in hiring: biased as Heckman and Siegelman predicted? . Economics , 9 , 1 – 11 .

Baert S. , Pauw A.-S. D. , Deschacht N. ( 2016 ). Do employer preferences contribute to sticky floors?   Industrial and Labor Relations Review , 63 , 714 – 736 .

Becker S. O. , Fernandes A. , Weichselbaumer D. ( 2019 ). Discrimination in hiring based on potential and realized fertility: evidence from a large-scale field experiment . Labour Economics , 59 , 139 – 152 .

Berson C.   2012 . Does Competition Induce Hiring Equity? , available from: https://halshs.archives-ouvertes.fr/halshs-00718627/document [accessed 24 September 2021] .

Bertogg A.  et al.  ( 2020 ). Gender discrimination in the hiring of skilled professionals in two male-dominated occupational fields: a factorial survey experiment with real-world vacancies and recruiters in four European countries . Köln Z Soziol   (Suppl 1) 72 , 261 – 289 .

Bielby W. T. , Baron J. N. ( 1986 ). Men and women at work: sex segregation and statistical discrimination . American Journal of Sociology , 91 , 759 – 799 .

Birkelund G. E. ( 2016 ). Rational laziness – when time is limited, supply abundant, and decisions have to be made . Analyse & Kritik. Zeitschrift Für Sozialtheorie , 38 , 203 – 226 .

Birkelund G. E.  et al.  ( 2019 ). Do terrorist attacks affect ethnic discrimination in the labour market? Evidence from two randomized field experiments . British Journal of Sociology , 70 , 241 – 260 .

Blau F. D. , Kahn L. M. ( 2017 ). The gender wage gap: extent, trends, and explanations . Journal of Economic Literature , 55 , 789 – 865 .

Bobbitt-Zeher D. ( 2011 ). Gender discrimination at work: connecting gender stereotypes, institutional policies, and gender composition of workplace . Gender & Society , 25 , 764 – 786 .

Booth A. , Leigh A. ( 2010 ). Do employers discriminate by gender? A field experiment in female-dominated occupations . Economics Letters , 107 , 236 – 238 .

Brandén M. , Bygren M. , Gähler M. ( 2018 ). Can the trailing spouse phenomenon be explained by employer recruitment choices?   Population, Space and Place , 24 , e2141.

Bygren M. , Erlandsson A. , Gähler M. ( 2017 ). Do employers prefer fathers? Evidence from a field experiment testing the gender by parenthood interaction effect on callbacks to job applications . European Sociological Review , 33 , 337 – 348 .

Capéau B. et al. (2012). Two Concepts of Discrimination: Inequality of Opportunity versus Unequal Treatment of Equals. ECARES Working Paper No. 2012/58.

Carlsson M. ( 2011 ). Does hiring discrimination cause gender segregation in the Swedish labor market?   Feminist Economics , 17 , 71 – 102 .

Carlsson M. , Eriksson S. ( 2017 ). The Effect of Age and Gender on Labor Demand Evidence from a Field Experiment . Working Paper No. 2017:4. Sweden: Linnaeus University.

Carlsson R.  et al.  ( 2014 ). Testing for Backlash in Hiring: A Field Experiment on Agency, Communion, and Gender . Working paper. Sweden: Linnaeus University.

Castilla E. J. , Benard S. ( 2010 ). The paradox of meritocracy in organizations . Administrative Science Quarterly , 55 , 543 – 676 .

Cejka M. A. , Eagly A. H. ( 1999 ). Gender-stereotypic images of occupations correspond to the sex segregation of employment . Personality and Social Psychology Bulletin , 25 , 413 – 423 .

Chang M. L. ( 2004 ). Growing pains: cross-national variation in sex segregation in sixteen developing countries . American Sociological Review , 69 , 114 – 137 .

Charles M. ( 2011 ). A world of difference: international trends in women’s economic status . Annual Review of Sociology , 37 , 355 – 371 .

Correll S. J. , Benard S. , Paik I. ( 2007 ). Getting a job: is there a motherhood penalty?   American Journal of Sociology , 112 , 1297 – 1338 .

Cuddy A. J. C. , Fiske S. T. , Glick P. ( 2008 ). Warmth and competence as universal dimensions of social perception: the stereotype content model and the BIAS map . Advances in Experimental Social Psychology , 40 , 61 – 149 .

Di Stasio V. , Lancee B. ( 2019 ). Understanding why employers discriminate, where and against whom: the potential of cross-national, factorial and multi-group field experiments. Research in Stratification and Mobility , available from: 10.1016/j.rssm.2019.100463

Di Stasio V. , Larsen E. N. ( 2020 ). The racialized and gendered workplace: applying an intersectional lens to a field experiment on hiring discrimination in five European labor markets . Social Psychology Quarterly , 83 , 229 – 250 .

Duguet E.  et al.  ( 2012 ). First order stochastic dominance and the measurement of hiring discriminaiton: a ranking extension of correspondence testing with an application to gender and origin, available from: https://halshs.archives-ouvertes.fr/halshs-00731005/

Duguet E. , Loïc D. and , Petit P. ( 2017 ). Hiring discrimination against women: distinguishing taste based discrimination from statistical discrimination . Available at SSRN: https://ssrn.com/abstract=3083957 or 10.2139/ssrn.3083957 .

England P. ( 1994 ). Neoclassical economists’ theories of discrimination. In Burstein P. (Ed.) , Equal Employment Opportunity: Labor Market Discrimination and Public Policy . New York : Aldine De Gruyter , pp. 59 – 70 .

Fang H. , Moro A. ( 2011 ). Theories of statistical discrimination and affirmative action: a survey. In Benhabib J. , Bisin A. , Jackson M. O. (Eds.), Handbook of Social Economics . San Diego : Elsevier , Chapter 5, pp. 133 – 200 .

Faulkner W. ( 2009 ). Doing gender in engineering workplace cultures. I. Observations from the Field . Engineering Studies , 1 , 3 – 18 .

Fernández R. ( 2013 ). Cultural change as learning: the evolution of female labor force participation over a century . American Economic Review , 103 , 472 – 500 .

Fortin N. M. ( 2005 ). Gender role attitudes and the labour-market outcomes of women across OECD countries . Oxford Review of Economic Policy , 21 , 416 – 438 .

Gaddis S. M. (Ed.). ( 2018 ). An introduction to audit studies in the social sciences. In Audit Studies: Behind the Scenes with Theory, Method, and Nuance . Cham : Springer , pp. 3 – 44 .

Gangl M. , Ziefle A. ( 2009 ). Motherhood, labor force behavior and women’s careers: an empirical assessment of the wage penalty for motherhood in Britain, Germany and the United States . Demography , 46 , 341 – 369 .

Glick P. , Fiske S. T. ( 1996 ). The ambivalent sexism inventory: differentiating hostile from benevolent sexism . Journal of Personality and Social Psychology , 70 , 491 – 512 .

Glick P. , Zion C. , Nelson C. ( 1988 ). What mediates sex discrimination in hiring decisions? . Journal of Personality and Social Psychology , 55 , 178 .

Goldin C. ( 2014 ). A grand gender convergence: its last chapter . American Economic Review , 104 , 1091 – 1119 .

González M. J. , Cortina C. , Rodríguez J. ( 2019 ). The role of gender stereotypes in hiring: a field experiment . European Sociological Review , 35 , 187 – 204 .

Halper L. R. , Cowgill C. M. , Rios K. ( 2019 ). Gender bias in caregiving professions: the role of perceived warmth . Journal of Applied Social Psychology , 49 , 1 – 14 .

Heckman J. J. , Siegelman P. ( 1993 ). The urban institute audit studies: their methods and findings. In Fix M. , Struyk R. (Eds.), Clear and Convincing Evidence: Measurement of Discrimination in America . Washington, DC : Urban Institute Press .

Hofstede Insights ( 2020 ). Compare Countries , available from: https://www.hofstede-insights.com/product/compare-countries/ [accessed 25 June 2020].

Jackson M. ( 2009 ). Disadvantaged through discrimination? The role of employers in social stratification . The British Journal of Sociology , 60 , 669 – 692 .

Jacobsen J. , Khamis M. , Yuksel M. ( 2015 ). Convergence in men’s and women’s life patterns: lifetime work, lifetime earnings, and human capital investment . Research in Labor Economics , 41 , 1 – 33 .

Lancee B. ( 2019 ). Ethnic discrimination in hiring: comparing groups across contexts. Results from a cross-national field experiment . Journal of Ethnic and Migration Studies , 47 , 1181 – 1200 .

Lancee B.  et al.  ( 2019a ). The GEMM Study: A Cross-National Harmonized Field Experiment on Labour Market Discrimination: Codebook .http://dx.doi.org/10.2139/ssrn.3398273

Lancee B.  et al.  ( 2019b ). The GEMM Study: A Cross-National Harmonized Field Experiment on Labour Market Discrimination: Technical Report . 10.2139/ssrn.3398191

Larsen E. N. ( 2020 ). Induced competition in matched correspondence tests: conceptual and methodological considerations . Research in Social Stratification and Mobility , 65 , 100475 .

Levanon A. , Grusky D. B. ( 2016 ). The persistence of extreme gender segregation in the twenty-first century . American Journal of Sociology , 22 , 573 – 619 .

Mandel H. , Semyonov M. ( 2006 ). A welfare state paradox: state interventions and women’s employment opportunities in 22 countries . American Journal of Sociology , 111 , 1910 – 1949 .

Mood C. ( 2010 ). Logistic regression: why we cannot do what we think we can do, and what we can do about it . European Sociological Review , 26 , 67 – 82 .

Neumark D. ( 2012 ). Detecting discrimination in audit and correspondence studies . Journal of Human Resources , 47 , 1128 – 1157 .

Neumark D. ( 2018 ). Experimental research on labor market discrimination . Journal of Economic Literature , 56 , 799 – 866 .

Neumark D. , Bank R. J. , Van Nort K. D. ( 1996 ). Sex discrimination in restaurant hiring: an audit study . The Quarterly Journal of Economics , 111 , 915 – 941 .

OECD. ( 2015 ). Education at a Glance 2015 , available from: https://www.oecd.org/gender/data/gender-gap-in-education.htm [accessed 5 January 2020].

OECD. ( 2017 ). Short-Term Labour Market Statistics , available from: https://stats.oecd.org/OECDStat_Metadata/ShowMetadata.ashx?Dataset=STLABOUR&ShowOnWeb=true&Lang=en ) [accessed 24 June 2020].

OECD. ( 2019 ). “Unemployment rate”. OECD Employment Outlook , available from: https://data.oecd.org/unemp/unemployment-rate.htm

OECD. ( 2020a ). Index of Regulation on Individual Dismissal of Workers with Regular Contracts , available from: https://www1.compareyourcountry.org/employment-protection-legislation/en/0/178/ranking/ [accessed 23 June 2020].

OECD. ( 2020b ). Length of Maternity Leave, Parental Leave and Paid Father-Specific Leave , available from: https://www.oecd.org/gender/data/length-of-maternity-leave-parental-leave-and-paid-father-specific-leave.htm [accessed 25 June 2020].

OECD. ( 2020c ). OECD Family Database , available from: http://www.oecd.org/els/family/database.htm [accessed 25 June 2020].

OECD. ( 2020d ). Exployment: Share of Employed In Part-Time Employment, by Sex and Age Group , available from: https://stats.oecd.org/index.aspx?queryid=54746 [accessed 25 June 2020].

Petersen T. , Morgan L. A. ( 1995 ). Separate and unequal: occupation establishment sex-segregation and the gender wage-gap . American Journal of Sociology , 101 , 329 – 365 .

Petersen T. , Saporta I. ( 2004 ). The opportunity structure for discrimination . American Journal of Sociology , 109 , 852 – 901 .

Petit P. ( 2007 ). The effects of age and family constraints on gender hiring discrimination: a field experiment in the French financial sector . Labour Econ , 14 , 371 – 391 .

Pew Research Center. ( 2017 ). In Many Countries at Least Four-in-Ten in the Labor Force are Women , available from: https://www.pewresearch.org/fact-tank/2017/03/07/in-many-countries-at-least-four-in-ten-in-the-labor-force-are-women/ [accessed 25 June 2020].

Phelps E. S. ( 1972 ). The statistical theory of racism and sexism . American Economic Review , 62 , 659 – 661 .

Polavieja J. G. ( 2012 ). Socially embedded investments: explaining gender differences in job-specific skills . American Journal of Sociology , 118 , 592 – 634 .

Polavieja J. G. ( 2015 ). Capturing culture: a new method to estimate exogenous cultural effects using migrant populations . American Sociological Review , 80 , 166 – 191 .

Reskin: Plenum B. F. ( 2000 ). Employment discrimination and its remedies. In Berg I. and Kalleberg A. (Eds.), Handbook on Labor Market Research . New York .

Reskin B. F. , Roos P. A. ( 1990 ). Job Queues, Gender Queues: Explaining Women’s Inroads into Male Occupations. Philadelphia: Temple University Press.

Riach P. A. , Rich J. ( 1987 ). Testing for Sexual Discrimination in the Labour Market . Australian Economic Papers , 26 , 165 – 178 .

Riach P. A. , Rich J. ( 2002 ). Field experiments of discrimination in the market place . The Economic Journal , 112 , F480 – F518 .

Riach P. A. , Rich J. ( 2006 ). An experimental investigation of sexual discrimination in hiring in the English labor market . The B.E. Journal of Economic Analysis & Policy , 6 , available from: http://www.bepress.com/bejeap/advances/vol6/iss2/art1

Rich J. ( 2014 ). What Do Field Experiments of Discrimination in Markets Tell Us? A Meta Analysis of Studies Conducted since 2000 . IZA Discussion Paper No. 8584. Available at SSRN: https://ssrn.com/abstract=2517887 .

Ridgeway C. L. ( 1997 ). Interaction and the conservation of gender inequality: considering employment . American Sociological Review , 62 , 218 – 235 .

Rivera L. A. , Tilcsik A. ( 2016 ). Class advantage, commitment penalty: the gendered effect of social class signals in an elite labor market . American Sociological Review , 81 , 1097 – 1131 .

Rudman L. , Phelan J. E. ( 2008 ). Backlash effects for disconfirming gender stereotypes in organizations . Research in Organizational Behavior , 28 , 61 – 79 .

Stier H. , Lewin-Epstein N. , Braun M. ( 2001 ). Welfare regimes, family-supportive policies, and women’s employment along the life-course . American Journal of Sociology , 106 , 1731 – 1760 .

The World Economic Forum. ( 2017 ). The Global Gender Gap Report 2017 , available from: https://www.weforum.org/reports/the-global-gender-gap-report-2017

Torre M. ( 2014 ). The scarring effect of “women’s work”: the determinants of women’s attrition from male-dominated occupations . Social Forces , 93 , 1 – 29 .

Torre M. ( 2018 ). Stopgappers? The occupational trajectories of men in female-dominated occupations . Work and Occupations , 45 , 283 – 312 .

U.S. Census Bureau. ( 2010 ). Frequently Occurring Surnames from the 2010 Census , available from: https://www.census.gov/topics/population/genealogy/data/2010_surnames.html [accessed 18 January 2019].

Vuolo M. , Uggen C. , Lageson S. ( 2018 ). To match or not to match? Statistical and substantive considerations in audit design and analysis. In Gaddis S. M. (Ed.), Audit Studies: Behind the Scenes with Theory, Method, and Nuance . Cham : Springer , pp. 119 – 140 .

Weichselbaumer D. ( 2004 ). Is it sex or personality? The impact of sex stereotypes on discrimination in applicant selection . Eastern Economic Journal , 30 , 159 – 186 .

Weichselbaumer D. , Winter-Ebmer R. ( 2005 ). A meta-analysis of the international gender wage gap . Journal of Economic Surveys , 19 , 479 – 511 .

Williams C. L. , Muller C. , Kilanski K. ( 2012 ). Gendered organizations in the new economy . Gender & Society , 26 , 549 – 573 .

Williams W. M. , Ceci S. J. ( 2015 ). National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track . PNAS , 112 , 5360 – 5365 .

Yavorsky J. E. ( 2019 ). Uneven patterns of inequality: an audit analysis of hiring-related practicies by gendered and classed contexts . Social Forces , 98 , 461 – 492 .

Zhou X. , Zhang J. , Song X. ( 2013 ). Gender Discrimination in Hiring: Evidence from 19,130 Resumes in China . MPRA paper No. 43543. Available at SSRN: https://ssrn.com/abstract=2195840 or 10.2139/ssrn.2195840 .

Gunn Elisabeth Birkelund is a Professor of Sociology at the University of Oslo. Her main research interests include analytical sociology, labor market studies, social inequalities, and population dynamics. She is a Fellow of the European Academy of Sociology and Secretary General of the Norwegian Academy of Science and Letters. Her articles have appeared in European Sociological Review, Social Forces, International Migration Review, European Societies, and, earlier, in American Journal of Sociology and American Sociological Review.

Bram Lancee is an Associate Professor of Sociology at the University of Amsterdam. His current research interests include social capital, ethnic minorities and the labour market, inequality, attitudes towards immigration, and ethnic discrimination. His work has been published in journals such as Social Forces, European Sociological Review, International Migration Review, Journal of Ethnic and Migration Studies, and Social Science Research.

Edvard N. Larsen is a postdoctoral researcher in Sociology at the University of Oslo and Researcher II at the KIFO Institute for Church, Religion, and Worldview Research. His main research interests are social inequality, migration, labor market discrimination, and religion. His work has been published in the Journal of Ethnic and Migration Studies, Social Psychology Quarterly, and Research in Social Stratification and Mobility.

Javier Polavieja (Oxford University PhD in Sociology, 2001) is Banco Santander Professor of Sociology and Director of the D-Lab at the Department of Social Sciences, University Carlos III of Madrid, as well as Research Fellow at the Institute of Economics and the Carlos III-Juan March Institute. His main fields of research are social stratification, political sociology, and migration research. His work has been published in American Journal of Sociology, American Sociological Review, European Sociological Review, Social Forces, Socio-Economic Review, Labour Economics, Political Behavior, Electoral Studies, International Migration, and Social Indicators Research.

Jonas Radl is an Associate Professor of Sociology at Universidad Carlos III de Madrid and Head of the Research Group ‘Effort and Social Inequality’ at WZB Berlin Social Science Center. His current research interests comprise social stratification and the life course. His work has been published in journals such as European Sociological Review, Social Forces, and Socio-Economic Review.

Ruta Yemane is a Research Fellow at WZB Berlin Social Science Center in the migration, integration, and transnationalization research unit. Her research focuses on labor market discrimination, racism, and stereotypes. Her work has been published in the British Journal of Social Psychology and the Journal of Ethnic and Migration Studies .



