Abstract
In recent years, using NLP models to predict people’s attitudes toward social bias has attracted the attention of many researchers. Most existing work operates at the sentence level, i.e., judging whether a whole sentence is biased. In this work, we leverage the powerful semantic modeling capability of pre-trained models to encode dialogue context. Furthermore, to provide the model with additional features for identifying bias, we propose two auxiliary tasks based on the dialogue’s topic and type features. To achieve better classification results, we train two multi-task models with adversarial training and combine them by voting. We participated in the NLPCC-2022 shared task on Fine-Grain Dialogue Social Bias Measurement and ranked fourth with a Macro-F1 score of 0.5765. The code of our model is available on GitHub (https://github.com/33Da/nlpcc2022-task7).
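The abstract mentions two techniques: adversarial training of the multi-task models and combining the two trained models by voting. As a minimal illustrative sketch (not the authors’ implementation — the function names, the FGM-style perturbation, and the confidence-based tiebreak are all assumptions for illustration), the core of each step could look like:

```python
import numpy as np

def fgm_perturb(embedding, grad, epsilon=1.0):
    """FGM-style adversarial training step: perturb the input embedding
    along the gradient direction, scaled by its L2 norm, to craft the
    adversarial example the model is then trained on."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return embedding
    return embedding + epsilon * grad / norm

def hard_vote(preds_a, preds_b, conf_a, conf_b):
    """Combine two classifiers: where they agree, keep the shared label;
    where they disagree, fall back on the more confident model (the
    tiebreak rule here is an assumption, not from the paper)."""
    out = []
    for la, lb, ca, cb in zip(preds_a, preds_b, conf_a, conf_b):
        out.append(la if (la == lb or ca >= cb) else lb)
    return out

# Toy usage: perturb a zero embedding with gradient (3, 4, 0).
perturbed = fgm_perturb(np.zeros(3), np.array([3.0, 4.0, 0.0]))

# Toy usage: two models vote over three dialogue examples.
final = hard_vote([0, 1, 2], [0, 2, 2],
                  [0.9, 0.4, 0.8], [0.5, 0.7, 0.6])
```

In a real setup the perturbation would be applied to the word-embedding layer of the pre-trained encoder during training, and the votes would come from the two multi-task models’ predicted bias labels.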
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Mai, H., Zhou, X., Wang, L. (2022). A Multi-task Learning Model for Fine-Grain Dialogue Social Bias Measurement. In: Lu, W., Huang, S., Hong, Y., Zhou, X. (eds) Natural Language Processing and Chinese Computing. NLPCC 2022. Lecture Notes in Computer Science(), vol 13552. Springer, Cham. https://doi.org/10.1007/978-3-031-17189-5_27
Print ISBN: 978-3-031-17188-8
Online ISBN: 978-3-031-17189-5