Classifiers Guided Controllable Text Generation for Discrete Diffusion Language Models

  • Conference paper
  • First Online:
Natural Language Processing and Chinese Computing (NLPCC 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 15361)

Abstract

The classifier-guided method is a simple yet effective technique for controllable text generation. However, its application to the discrete diffusion model, a promising non-autoregressive language model, remains unexplored. In this paper, controllable generation is achieved through multiple iterations of conditional backward diffusion, each iteration comprising two key steps: conditional prediction and masking. The conditional prediction distribution is approximated by the product of classifier probabilities and language-model probabilities. Two distinct algorithms, Largest-M and gradient search, are introduced to sample from this joint distribution: Largest-M operates in the discrete token-sequence space, while gradient search operates in the continuous hidden-state space, and both aim to identify texts that maximize the joint probability. Experiments on four fine-grained controllable tasks demonstrate the effectiveness of the proposed algorithms, which achieve success rates of up to 99% on certain tasks. Furthermore, the inherent multi-modality problem of discrete diffusion models is mitigated by framing multi-modality suppression as a controllable generation task; experimental results indicate that integrating multi-modality classifiers effectively enhances the performance of discrete diffusion models.
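
To make the sampling loop concrete, the following is a minimal sketch of one conditional backward-diffusion iteration under the approximation stated above: language-model probabilities are combined with classifier probabilities (here as a sum of log-probabilities), the most confident predictions are committed, and the rest are re-masked in a Largest-M style step. The names `denoiser`, `classifier`, `mask_id`, `num_keep`, the `guidance` weight, and the per-position (FUDGE-style) classifier interface are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one classifier-guided backward-diffusion step for a
# masked (absorbing-state) discrete diffusion LM. Assumed interfaces:
#   denoiser(ids)   -> (1, seq_len, vocab) language-model logits
#   classifier(ids) -> (1, seq_len, vocab) per-token attribute logits (FUDGE-style)
import torch

def conditional_denoise_step(x_t, denoiser, classifier, mask_id, num_keep, guidance=1.0):
    """x_t: (seq_len,) LongTensor of token ids; masked positions equal mask_id."""
    masked = x_t == mask_id                               # positions still to denoise
    lm_logits = denoiser(x_t.unsqueeze(0)).squeeze(0)     # (seq_len, vocab)
    cls_logits = classifier(x_t.unsqueeze(0)).squeeze(0)  # (seq_len, vocab)

    # Approximate p(x | c) ∝ p_LM(x) · p(c | x) by adding log-probabilities.
    joint = (torch.log_softmax(lm_logits, dim=-1)
             + guidance * torch.log_softmax(cls_logits, dim=-1))

    conf, pred = joint.max(dim=-1)       # best token and its score at each position
    x_next = x_t.clone()
    x_next[masked] = pred[masked]        # fill every masked position with its prediction

    # Largest-M style masking: keep only the num_keep most confident new tokens
    # and re-mask the remainder for the next iteration.
    conf = conf.masked_fill(~masked, float("inf"))  # never re-mask committed tokens
    n_remask = int(masked.sum().item()) - num_keep
    if n_remask > 0:
        remask_idx = conf.topk(n_remask, largest=False).indices
        x_next[remask_idx] = mask_id
    return x_next
```

Repeating this step while increasing `num_keep` toward the sequence length yields a full controllable decoding pass; the gradient-search variant described in the abstract would instead optimize the same joint objective in the denoiser's continuous hidden-state space.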


Acknowledgments

This work was supported by the National Natural Science Foundation of China (62366010) and the Guangxi Natural Science Foundation (2024GXNSFAA010374).

Author information

Corresponding author

Correspondence to Guoyong Cai.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Jiang, H., Cai, G., Li, S. (2025). Classifiers Guided Controllable Text Generation for Discrete Diffusion Language Models. In: Wong, D.F., Wei, Z., Yang, M. (eds) Natural Language Processing and Chinese Computing. NLPCC 2024. Lecture Notes in Computer Science, vol. 15361. Springer, Singapore. https://doi.org/10.1007/978-981-97-9437-9_11

  • DOI: https://doi.org/10.1007/978-981-97-9437-9_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-9436-2

  • Online ISBN: 978-981-97-9437-9

  • eBook Packages: Computer Science, Computer Science (R0)
