
A Co-design Study for Multi-stakeholder Job Recommender System Explanations

Conference paper
Explainable Artificial Intelligence (xAI 2023)

Abstract

Recent legislative proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called ‘high-risk’ domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types (candidates, recruiters, and companies), we created and validated a semi-structured interview guide. Using grounded theory, we systematically analyzed the results of these interviews and found that different stakeholder types indeed have strongly differing explanation preferences. Candidates indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. Hiring managers, on the other hand, preferred visual graph-based explanations that provide a more technical and comprehensive overview at a glance. Recruiters preferred more exhaustive textual explanations, as those provided them with more talking points to convince both parties of the match. Based on these findings, we describe guidelines on how to design an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.


Notes

  1. Ethical Review Committee Inner City faculties (Maastricht University).
  2. https://tianchi.aliyun.com/dataset/31623/
  3. https://pypi.org/project/deep-translator/
  4. https://www.pyg.org/
  5. https://atlasti.com/
  6. https://www.randstad.nl/


Author information


Corresponding author

Correspondence to Roan Schellingerhout.


Appendices

A Hyperparameter Tuning

The optimal hyperparameter configuration we found is the following:

  • hidden_dimensions = 10

  • output_dimensions = 100

  • number_of_layers = 2

  • attention_heads = 5

  • dp_rate = 0.01

  • learning_rate = 0.001

  • epochs = 1

An overview of all configurations we tested can be found on GitHub. As a rough illustration, the sketch below shows how this configuration might map onto a model definition.
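The following is a minimal sketch, assuming the listed values parameterize a two-layer graph attention network built with PyTorch Geometric (footnote 4). It is an illustration of how the configuration could be wired up, not the authors' implementation; in particular, `num_node_features` is a made-up placeholder.

```python
# Hedged sketch: a two-layer GAT using the reported hyperparameters.
# This is NOT the paper's code; num_node_features is an assumption.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class TwoLayerGAT(torch.nn.Module):
    def __init__(self, num_node_features: int,
                 hidden_dimensions: int = 10,
                 output_dimensions: int = 100,
                 attention_heads: int = 5,
                 dp_rate: float = 0.01):
        super().__init__()
        self.dp_rate = dp_rate
        # Layer 1: each of the 5 attention heads produces a
        # 10-dimensional embedding; the heads are concatenated.
        self.conv1 = GATConv(num_node_features, hidden_dimensions,
                             heads=attention_heads, dropout=dp_rate)
        # Layer 2: heads are averaged into a single 100-dimensional output.
        self.conv2 = GATConv(hidden_dimensions * attention_heads,
                             output_dimensions, heads=1, concat=False,
                             dropout=dp_rate)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        x = F.dropout(x, p=self.dp_rate, training=self.training)
        return self.conv2(x, edge_index)


model = TwoLayerGAT(num_node_features=128)  # feature size is an assumption
# learning_rate = 0.001 from the list above; per the same list, training
# ran for a single epoch (epochs = 1).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```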

B Preliminary Interview Guide

Table 2. The preliminary interview guide.

C Grounded Theory Results

C.1 Candidates

Table 3. The quotes, open codes, and categories discovered by using grounded theory for the candidates’ responses.

Theory (Based on Table 3): Candidates want to determine at a glance whether a vacancy is relevant. To do so, the explanation needs to be brief and straight to the point. Once a candidate has found a potentially interesting vacancy, they should be able to explore the explanation in more detail. Given their difficulty in parsing both the graph and the feature attribution explanations, the textual explanation should always be central, with the other two functioning merely as further support.

C.2 Recruiters

Table 4. The quotes, open codes, and categories discovered by using grounded theory for the recruiters’ responses.

Theory (Based on Table 4): Recruiters prefer the model to act mainly as a supportive tool. This means that the strongest arguments the model puts forward should be front and center, allowing recruiters to use the explanations when defending their decision, be it to a supervisor or a client. They will always want to verify the model's claims manually, but the explanation makes them likely to consider predicted matches before all else. The exact details of how the model arrived at its prediction will often be irrelevant, but should remain accessible in case additional evidence is needed.

C.3 Company Representatives

Table 5. The quotes, open codes, and categories discovered by using grounded theory for the company representatives’ responses.

Theory (Based on Table 5): Company representatives want the explanations to assist them as quickly as possible. Due to their generally greater experience in reading charts and graphs, the graph explanations help them the most here. However, even though the graph gives them an explanation at a glance, they still want to be able to explore further when the graph comes across as surprising or unintuitive. In such a scenario, they either want to study the explanation in more detail, e.g., by also reading the textual explanation, or to manually look into alternative candidates. The feature attribution map could easily be converted into a ‘hub’ for them, where they can get an overview of alternative candidates for a vacancy.
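To make the three theories concrete, here is a hypothetical sketch (not taken from the paper) of how an explanation interface could bundle the three modalities and reorder them per stakeholder type, following the theories based on Tables 3, 4, and 5. All names are illustrative.

```python
# Hypothetical data model for a multi-stakeholder explanation interface,
# ordering the three explanation modalities per stakeholder preference.
from dataclasses import dataclass
from enum import Enum


class Stakeholder(Enum):
    CANDIDATE = "candidate"
    RECRUITER = "recruiter"
    COMPANY_REPRESENTATIVE = "company_representative"


@dataclass
class Explanation:
    text_brief: str            # short textual summary of the match
    text_detailed: str         # exhaustive textual explanation
    graph_view: dict           # graph-based explanation (nodes and edges)
    feature_attribution: dict  # per-feature contribution scores


def ordered_views(explanation: Explanation, who: Stakeholder) -> list:
    """Return the explanation components in the order each stakeholder
    type prefers them, per the grounded-theory results (C.1-C.3)."""
    if who is Stakeholder.CANDIDATE:
        # C.1: brief text is central; graph and attribution are support.
        return [explanation.text_brief,
                explanation.graph_view,
                explanation.feature_attribution]
    if who is Stakeholder.RECRUITER:
        # C.2: strongest arguments front and center, details on demand.
        return [explanation.text_detailed,
                explanation.feature_attribution,
                explanation.graph_view]
    # C.3: graph at a glance first; the attribution map doubles as a
    # 'hub' for exploring alternative candidates.
    return [explanation.graph_view,
            explanation.text_detailed,
            explanation.feature_attribution]
```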


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Schellingerhout, R., Barile, F., Tintarev, N. (2023). A Co-design Study for Multi-stakeholder Job Recommender System Explanations. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1902. Springer, Cham. https://doi.org/10.1007/978-3-031-44067-0_30


  • DOI: https://doi.org/10.1007/978-3-031-44067-0_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44066-3

  • Online ISBN: 978-3-031-44067-0

  • eBook Packages: Computer Science, Computer Science (R0)
