Abstract
Recent Query Auto-completion (QAC) systems leverage natural language generation and pre-trained language models (PLMs) to achieve remarkable performance. However, these systems also suffer from biased and toxic completions. Efforts have been made to detoxify PLMs using controllable text generation (CTG) techniques that involve training with non-toxic data or intervening at decoding time. Because the completions produced by QAC systems are usually short, these existing training- and decoding-based CTG methods are not directly transferable. To address these concerns, we propose the first public QAC detoxification model, Detoxifying Query Auto-Completion (DQAC), which utilizes adapters in a CTG framework. DQAC operates on latent representations with no additional decoding overhead. It leverages two adapters, one for toxic and one for non-toxic cases. During inference, we fuse their representations in a controlled manner that steers query completions toward non-toxicity. We evaluate toxicity levels in the generated completions across two real-world datasets using two classifiers: a publicly available one (Detoxify) and a search-query-specific classifier that we develop (QDetoxify). DQAC consistently outperforms all existing baselines and emerges as a state-of-the-art model, providing high quality and low toxicity. We make the code publicly available\(^{1}\). (\(^{1}\) https://shorturl.at/zJ024)
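The abstract describes fusing the latent representations of a non-toxic and a toxic adapter at inference time to steer completions toward non-toxicity. A minimal sketch of one way such a fusion could work is below; the steering rule (pushing the hidden state away from the toxic adapter's direction) and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse two adapter hidden states to steer decoding
# away from toxicity. `alpha` and the fusion rule are assumptions made
# for illustration only.

def fuse_representations(h_nontoxic, h_toxic, alpha=0.5):
    """Shift the hidden state away from the toxic adapter's direction.

    h_nontoxic, h_toxic: latent vectors (lists of floats) produced by the
    non-toxic and toxic adapters for the same decoding step.
    alpha: assumed hyperparameter controlling steering strength.
    """
    return [hn + alpha * (hn - ht) for hn, ht in zip(h_nontoxic, h_toxic)]

# The fused state ends up further from the toxic representation than the
# non-toxic state was, in every dimension where the two adapters disagree.
h_fused = fuse_representations([0.2, 0.8], [0.6, 0.4], alpha=0.5)
print(h_fused)
```

With `alpha = 0`, the fused state reduces to the plain non-toxic adapter output; larger values of `alpha` push generation more aggressively away from the toxic adapter's representation, at a possible cost to fluency.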
Aishwarya Maheswaran, Kaushal Kumar Maurya: These authors contributed equally to this work.
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Maheswaran, A., Maurya, K.K., Gupta, M., Desarkar, M.S. (2024). DQAC: Detoxifying Query Auto-completion with Adapters. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science(), vol 14650. Springer, Singapore. https://doi.org/10.1007/978-981-97-2266-2_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2265-5
Online ISBN: 978-981-97-2266-2