Preventing discrimination in the automated targeting of job advertisements
Introduction
With the rise of artificial intelligence (AI) and the accompanying subfields of big data, data mining and machine learning, many human tasks can be successfully performed by AI-driven software. A White House report on big data warns that such innovations can root discrimination deeply into society and reinforce prejudice and bias.1 An example of discriminatory AI is the software used in some prisons to determine which prisoners are eligible for parole: it generates a risk assessment score estimating how likely each prisoner is to re-offend. According to research by ProPublica, the system is biased against prisoners of colour.2 Such technologies are used to automate decisions in many other fields, such as online advertising and employment.3
Imagine that 50 years ago a newspaper had offered to advertise vacancies only in the copies that went to male readers. Precisely this kind of advertising is possible today on platforms like Facebook. With countless targeting settings, groups of people can be excluded until the advertiser has reached its 'perfect' audience. When AI is used to control and apply these settings, it is vital that it does not do so in a discriminatory way, especially in the field of employment. Job seekers' chances are seriously diminished when they are excluded from seeing job advertisements, as this gives them a false start.4 It undermines the principle of equality, from which it follows that every individual should have the same opportunities, including equal access to employment.5 A telling example of discriminatory job advertising comes from a study that analysed advertisement placements:6 it found that an advertisement for a high-paying executive position was shown almost six times more often to men than to women.7 However, when used in the right way, AI can efficiently identify and reach the candidates possessing the required skills for a job while avoiding individual biases.8
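Disparities of the kind found in that study can be quantified with a simple impression-rate ratio. The sketch below uses hypothetical counts (loosely mirroring the roughly six-to-one skew reported above) and the US-derived "four-fifths rule" heuristic, which is an illustrative metric only, not the legal standard under European non-discrimination law:

```python
# Hypothetical impression counts for one job advertisement, assuming
# equal-sized eligible audiences of men and women.
impressions = {"men": 1800, "women": 300}
audience = {"men": 5000, "women": 5000}

# Rate at which each group was shown the advertisement.
rates = {g: impressions[g] / audience[g] for g in impressions}

# Disparate-impact ratio: the disadvantaged group's rate divided by the
# favoured group's rate. The "four-fifths rule" heuristic flags ratios
# below 0.8 as warranting scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(f"impression-rate ratio: {ratio:.2f}")  # → 0.17, far below 0.8
```

A platform or advertiser could run such a check over delivery logs after a campaign to detect skewed delivery, regardless of whether the skew was intended.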
Against the background of the growing number of discrimination challenges facing AI applications, this paper examines discrimination in the automated online job advertising business in Europe. Given the breadth of the topic, the central question is how discrimination can be prevented in the automated online targeting of job advertisements. This question is answered in four steps. First, the technical elements that come into play, and may cause discrimination, when using AI to target advertisements are presented in Section 2. Secondly, the scope and effect of European non-discrimination law are established in Section 3. Thirdly, Section 4 examines in which ways the targeting of job advertisements can be discriminatory. Fourthly, technical recommendations on how to prevent discrimination when the targeting is done by AI are presented in Section 5. Various options, such as influencing the pre-processing of big data and altering the algorithmic models, are evaluated. Section 5 also examines the possibilities of using techniques like data mining and machine learning to actively combat direct and indirect discrimination. Finally, Section 6 concludes the paper.
Using artificial intelligence in automated online advertising
In order to scrutinize automated online job advertising, an examination of the advertising process and a delimitation of the subject is needed. This section explains the practice of online advertising and which factors play a role in the outcome of an online advertising campaign.
Non-discrimination law
Provisions on non-discrimination and equality are strongly integrated in international law. The concept of equality has been expressed explicitly in most human rights instruments as a preambular objective, functions implicitly in the understanding of the scope and application of human rights, and has been codified in substantive provisions of human rights treaties.22 The implementation of these treaties in the domestic legal order is up to each state because
Recognising discrimination in the targeting of job advertisements
This section aims to determine in which ways the targeting of a job advertisement can be discriminatory. The concepts of direct and indirect discrimination are set out and applied to the automated targeting of job advertisements in Sections 4.1 and 4.2.
In online job advertising, advertisements are shown based on the information the social media channel has collected about its users. Facebook creates an advertisement profile of its users based on information provided by them and their behaviour
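Because targeting draws on such advertisement profiles, a rule that never mentions a protected characteristic can still act as a proxy for one. The minimal sketch below uses hypothetical profile fields to show how a facially neutral interest-based rule can, in practice, filter by gender, the mechanism behind indirect discrimination:

```python
# Hypothetical user profiles of the kind a platform might assemble from
# provided information and observed behaviour.
users = [
    {"gender": "m", "interests": {"american_football"}},
    {"gender": "m", "interests": {"american_football", "cooking"}},
    {"gender": "f", "interests": {"cooking"}},
    {"gender": "f", "interests": {"yoga"}},
]

def neutral_rule(user):
    # The advertiser never mentions gender; the criterion looks neutral.
    return "american_football" in user["interests"]

targeted = [u for u in users if neutral_rule(u)]
share_male = sum(u["gender"] == "m" for u in targeted) / len(targeted)
print(share_male)  # in this toy data, every targeted user is male
```

The rule itself contains no protected attribute; the exclusionary effect arises entirely from the correlation between the interest and the protected characteristic in the underlying data.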
Preventing discrimination in the automated targeting of job advertisements
This section provides recommendations on how direct and indirect discrimination can be prevented when using AI in the automated job advertising process; it builds on the concepts discussed in Sections 2 and 4. When the targeting is done by AI, the factors that influence its targeting decisions are the data contained in the big data sets, the algorithms used for the data mining, the rules that are learned from this and the use of those rules by the AI. Consequently, these are also the
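One of the pre-processing options discussed in this section can be sketched as follows. The example below, using entirely hypothetical data and a hypothetical `proxy_features` helper, removes the protected attribute before rule mining and additionally flags remaining features whose prevalence differs strongly between groups, since such proxies can still yield indirect discrimination:

```python
PROTECTED = "gender"

# Hypothetical records from a big data set used to mine targeting rules.
records = [
    {"gender": "m", "likes_football": 1, "has_degree": 1},
    {"gender": "m", "likes_football": 1, "has_degree": 0},
    {"gender": "m", "likes_football": 1, "has_degree": 1},
    {"gender": "f", "likes_football": 0, "has_degree": 1},
    {"gender": "f", "likes_football": 0, "has_degree": 0},
    {"gender": "f", "likes_football": 1, "has_degree": 1},
]

def rate(records, feature, group):
    grp = [r for r in records if r[PROTECTED] == group]
    return sum(r[feature] for r in grp) / len(grp)

def proxy_features(records, threshold=0.4):
    """Flag features whose prevalence gap between groups exceeds a threshold."""
    features = [k for k in records[0] if k != PROTECTED]
    return [
        f for f in features
        if abs(rate(records, f, "m") - rate(records, f, "f")) >= threshold
    ]

# Step 1: strip the protected attribute before mining targeting rules.
cleaned = [{k: v for k, v in r.items() if k != PROTECTED} for r in records]
# Step 2: flag likely proxies so they can be excluded or handled with care.
print(proxy_features(records))  # ['likes_football']
```

The 0.4 threshold is an arbitrary illustration; in practice the cut-off, and indeed the choice of disparity measure, would need to be justified against the legal standard being applied.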
Conclusion
Against the background of the growing number of discrimination challenges facing AI applications, this paper examined what is required to comply with European non-discrimination law and so prevent discrimination in the automated online job advertising business in Europe. The factors that influence the occurrence of discrimination are the big data, the algorithms that mine the big data, the correlations that are found and the accompanying targeting rules, and the way AI uses these rules.
Acknowledgements
I would like to thank Kevin Jon Heller, Maarten den Heijer and Frederik Zuiderveen Borgesius for proofreading the paper, for suggestions and for advice. I would also like to thank Simone van Beek for language help. Any errors are mine.