Abstract
The classifier-guided method is a simple yet effective technique for controllable text generation. However, its application to discrete diffusion models, a promising class of non-autoregressive language models, remains unexplored. In this paper, controllable generation is achieved through multiple iterations of conditional backward diffusion, each iteration comprising two key steps: conditional prediction and masking. The conditional prediction distribution is approximated by the product of classifier probabilities and language model probabilities. Two distinct algorithms, Largest-M and gradient search, are introduced to sample from this joint distribution. Largest-M operates in the discrete token sequence space, while gradient search operates in the continuous hidden state space. Both algorithms aim to identify texts that maximize the joint probability. Experiments across four fine-grained controllable generation tasks demonstrate the effectiveness of the proposed algorithms, achieving success rates of up to 99% on certain tasks. Furthermore, the inherent multi-modality problem in discrete diffusion models is mitigated by framing multi-modality suppression as a controllable generation task. The experimental results indicate that integrating multi-modality classifiers effectively enhances the performance of discrete diffusion models.
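The following Python sketch illustrates the core idea of one conditional backward-diffusion step in the Largest-M flavour described above. It is a minimal sketch under assumptions not spelled out in the abstract: classifier guidance is taken to factorize into per-position, per-token log-probabilities, and the function name, arguments (lm_logprobs, clf_logprobs, guidance_weight), and the re-masking rule are illustrative rather than the authors' implementation. The joint distribution is the product of language-model and classifier probabilities (a sum in log space); the best token fills each masked position, and only the M most confident positions are unmasked before the next iteration.

```python
import numpy as np

def conditional_prediction_step(lm_logprobs, clf_logprobs, mask, m, guidance_weight=1.0):
    """One illustrative conditional backward-diffusion step (Largest-M style).

    lm_logprobs  : (seq_len, vocab) log p_LM(token | partially masked sequence)
    clf_logprobs : (seq_len, vocab) log p_clf(attribute | token at this position)
                   (assumed pre-computed per candidate token; an assumption here)
    mask         : (seq_len,) boolean array, True where the position is still masked
    m            : number of masked positions to unmask this iteration
    """
    # Product of probabilities == sum of log-probabilities (classifier guidance).
    joint = lm_logprobs + guidance_weight * clf_logprobs

    tokens = joint.argmax(axis=-1)   # highest-scoring token at each position
    scores = joint.max(axis=-1)      # its joint log-probability

    new_mask = mask.copy()
    masked_idx = np.flatnonzero(mask)
    if masked_idx.size > 0:
        # Keep (unmask) only the m masked positions with the largest joint score;
        # the rest stay masked for the next iteration.
        keep = masked_idx[np.argsort(scores[masked_idx])[::-1][:m]]
        new_mask[keep] = False

    return tokens, new_mask

# Toy usage: 6 positions, vocabulary of 4 tokens, all positions initially masked.
rng = np.random.default_rng(0)
lm = np.log(rng.dirichlet(np.ones(4), size=6))
clf = np.log(rng.dirichlet(np.ones(4), size=6))
tokens, mask = conditional_prediction_step(lm, clf, np.ones(6, dtype=bool), m=2)
```

In this sketch the gradient-search variant mentioned in the abstract would instead optimize continuous hidden states against the same joint objective; it is omitted here because the abstract gives no further detail.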
Acknowledgments
This work was supported by the National Natural Science Foundation of China (62366010) and the Guangxi Natural Science Foundation (2024GXNSFAA010374).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Jiang, H., Cai, G., Li, S. (2025). Classifiers Guided Controllable Text Generation for Discrete Diffusion Language Models. In: Wong, D.F., Wei, Z., Yang, M. (eds) Natural Language Processing and Chinese Computing. NLPCC 2024. Lecture Notes in Computer Science, vol 15361. Springer, Singapore. https://doi.org/10.1007/978-981-97-9437-9_11
DOI: https://doi.org/10.1007/978-981-97-9437-9_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-9436-2
Online ISBN: 978-981-97-9437-9