Towards Equitable AI in HR: Designing a Fair, Reliable, and Transparent Human Resource Management Application

  • Conference paper
  • First Online:
Deep Learning Theory and Applications (DeLTA 2023)

Abstract

The aim of this work is the development of an artificial intelligence (AI) application that supports the recruiting process and elevates the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground-truth data. Together, these approaches represent important steps towards creating ethical and unbiased machine learning systems.
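
The paper itself does not include code, but the second approach (detecting bias in training data) can be illustrated with a small, hypothetical sketch: the Python snippet below computes a demographic parity difference, i.e. the gap in positive-outcome rates between demographic groups, over a toy recruiting table. The column names gender and hired, the metric, and the toy data are assumptions for illustration only and are not taken from the paper.

    import numpy as np
    import pandas as pd

    def demographic_parity_difference(df, group_col, label_col, positive=1):
        """Gap in positive-outcome rates between the groups in `group_col`.

        A value near 0 suggests the labels are balanced across groups;
        a large gap is one signal that the training data may be biased.
        """
        rates = df.groupby(group_col)[label_col].apply(
            lambda s: float(np.mean(s == positive))
        )
        return rates.max() - rates.min(), rates

    # Toy recruiting data; the column names and values are hypothetical.
    data = pd.DataFrame({
        "gender": ["f", "m", "f", "m", "m", "f", "m", "m"],
        "hired":  [0, 1, 0, 1, 1, 1, 0, 1],
    })

    gap, rates = demographic_parity_difference(data, "gender", "hired")
    print(rates)                                       # hiring rate per group
    print(f"demographic parity difference: {gap:.2f}")

In a real pipeline such a check would run before training and would feed the third approach, bias minimisation, for example by reweighting or resampling under-represented groups.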

M. Danner and B. Hadžić contributed equally to this work.

Acknowledgements

This work was partially supported by a grant from the BMWi ZIM program, no. KK5007201LB0.

Author information

Corresponding authors

Correspondence to Thomas Weber, Xinjuan Zhu or Matthias Rätsch.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Danner, M., Hadžić, B., Weber, T., Zhu, X., Rätsch, M. (2023). Towards Equitable AI in HR: Designing a Fair, Reliable, and Transparent Human Resource Management Application. In: Conte, D., Fred, A., Gusikhin, O., Sansone, C. (eds) Deep Learning Theory and Applications. DeLTA 2023. Communications in Computer and Information Science, vol 1875. Springer, Cham. https://doi.org/10.1007/978-3-031-39059-3_21

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-39059-3_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-39058-6

  • Online ISBN: 978-3-031-39059-3

  • eBook Packages: Computer Science, Computer Science (R0)
