Abstract
Recent legislative proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called ‘high-risk’ domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types - candidates, recruiters, and companies - we created and validated a semi-structured interview guide. Using grounded theory, we structurally analyzed the results of these interviews and found that different stakeholder types indeed have strongly differing explanation preferences. Candidates indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. On the other hand, hiring managers preferred visual graph-based explanations that provide a more technical and comprehensive overview at a glance. Recruiters found more exhaustive textual explanations preferable, as those provided them with more talking points to convince both parties of the match. Based on these findings, we describe guidelines on how to design an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.
Notes
- 1. Ethical Review Committee Inner City faculties (Maastricht University).
Appendices
A Hyperparameter Tuning
The optimal hyperparameter configuration we found is the following:
- hidden_dimensions = 10
- output_dimensions = 100
- number_of_layers = 2
- attention_heads = 5
- dp_rate = 0.01
- learning_rate = 0.001
- epochs = 1
An overview of all configurations we tested can be found on GitHub.
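The presence of attention_heads suggests a graph attention architecture. The snippet below is a minimal sketch of where each listed value would plug into a two-layer graph attention network, assuming PyTorch Geometric's GATConv and the Adam optimizer; the class name, input dimensionality, and exact layer wiring are illustrative assumptions, and the configuration files on GitHub remain authoritative.

```python
# Minimal sketch only: shows where each hyperparameter above would plug into a
# two-layer graph attention network. This is not the paper's implementation;
# it assumes PyTorch Geometric's GATConv and the Adam optimizer.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class TwoLayerGAT(torch.nn.Module):  # hypothetical class name
    def __init__(self, in_dim, hidden_dim=10, out_dim=100, heads=5, dp_rate=0.01):
        super().__init__()
        # number_of_layers = 2: one hidden layer and one output layer
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads, dropout=dp_rate)
        self.conv2 = GATConv(hidden_dim * heads, out_dim, heads=1, dropout=dp_rate)

    def forward(self, x, edge_index):
        # hidden_dimensions = 10, attention_heads = 5, dp_rate = 0.01
        x = F.elu(self.conv1(x, edge_index))
        # output_dimensions = 100
        return self.conv2(x, edge_index)


model = TwoLayerGAT(in_dim=128)  # input dimensionality is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # learning_rate = 0.001
# epochs = 1: a single pass over the training data
```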
B Preliminary Interview Guide
C Grounded Theory Results
C.1 Candidates
Theory (Based on Table 3): Candidates want to be able to determine at a glance whether a vacancy is relevant. To do so, the explanation needs to be brief and straight to the point. Once the candidate has found a potentially interesting vacancy, they should be able to explore the explanation in more detail. Considering their difficulty in parsing both the graph and feature attribution explanations, the textual explanation should always be central, with the other two merely functioning as further support.
C.2 Recruiters
Theory (Based on Table 4): Recruiters prefer the model to act mainly as a supportive tool. This means that the strongest arguments the model puts forward should be front and center. This allows them to use the explanations when defending their decision, be it to their supervisor or a client. They will always want to manually verify the claims made by the model, but thanks to the explanation, they are likely to consider predicted matches before all else. The exact details of how the model came to its prediction will often be irrelevant, but are nice to have accessible in case additional evidence needs to be provided.
C.3 Company Representatives
Theory (Based on Table 5): Company representatives want the explanations to assist them as quickly as possible. Due to their generally higher level of experience in reading charts and graphs, the graph explanations actually help the most with this. However, even though the graph can give them an explanation at a glance, they still want to be able to explore further, in case the graph comes across as surprising or unintuitive. In such a scenario, they either want to study the explanation in more detail, e.g., through additionally reading the textual explanation, or they want to manually look into alternative candidates. The feature attribution map could easily be converted into a ‘hub’ for them, where they can get an overview of alternative candidates for a vacancy.
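To make the three theories above more concrete, the following is a minimal sketch of one way the per-stakeholder preferences could be encoded as an explanation-interface configuration. The class, field, and key names are illustrative assumptions for this sketch and do not correspond to any existing implementation.

```python
# Hedged sketch: encoding the preferences from C.1-C.3 as a stakeholder-to-layout map.
from dataclasses import dataclass, field


@dataclass
class ExplanationLayout:
    primary: str                                    # explanation type shown first
    secondary: list = field(default_factory=list)   # explanation types available on demand
    detail_level: str = "brief"                     # how much detail is shown up front


STAKEHOLDER_LAYOUTS = {
    # Candidates: brief text central, graph and feature attribution only as support (C.1)
    "candidate": ExplanationLayout(
        primary="text",
        secondary=["graph", "feature_attribution"],
        detail_level="brief",
    ),
    # Recruiters: strongest arguments front and center, full detail accessible on demand (C.2)
    "recruiter": ExplanationLayout(
        primary="text",
        secondary=["feature_attribution", "graph"],
        detail_level="exhaustive",
    ),
    # Company representatives: graph at a glance, text for drill-down,
    # feature attribution as a 'hub' for exploring alternative candidates (C.3)
    "company_representative": ExplanationLayout(
        primary="graph",
        secondary=["text", "feature_attribution"],
        detail_level="at_a_glance",
    ),
}
```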
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Schellingerhout, R., Barile, F., Tintarev, N. (2023). A Co-design Study for Multi-stakeholder Job Recommender System Explanations. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1902. Springer, Cham. https://doi.org/10.1007/978-3-031-44067-0_30
DOI: https://doi.org/10.1007/978-3-031-44067-0_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44066-3
Online ISBN: 978-3-031-44067-0
eBook Packages: Computer Science (R0)