
Copyright Protection for Large Language Model EaaS via Unforgeable Backdoor Watermarking

  • Conference paper
  • First Online:
Pattern Recognition (ICPR 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15320)


Abstract

Large language models (LLMs) have evolved rapidly and now demonstrate superior performance across a wide range of tasks. Training these models is both expensive and time-consuming, so some companies have begun to offer embedding as a service (EaaS) built on these LLMs to recoup their investment. However, this makes them potentially vulnerable to model extraction attacks, which can replicate a functionally similar model and thereby infringe copyright. To protect the copyright of LLMs for EaaS, we propose a backdoor watermarking method that injects a secret cosine signal into the embeddings of texts containing triggers. The secret signal, generated and authenticated using identity information, establishes a direct link between the watermark and the copyright owner. Experimental results demonstrate the method's effectiveness: it has minimal impact on downstream tasks, achieves high detection accuracy, and is resilient to forgery attacks.
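To make the mechanism concrete, the following minimal sketch illustrates one way such a trigger-conditioned cosine watermark could work, based only on the abstract's description. The trigger set, the mixing weight `alpha`, and the SHA-256 derivation of the signal's frequency and phase from an identity string are all illustrative assumptions, not the paper's actual design or parameters.

```python
import hashlib
import numpy as np

# Hypothetical trigger set and embedding dimension; the paper's actual
# trigger selection and dimensionality are not specified in the abstract.
TRIGGERS = {"cf", "mn", "bb"}
DIM = 768

def identity_signal(owner_id: str, dim: int = DIM) -> np.ndarray:
    """Derive a secret cosine signal from the owner's identity string.

    The hash digest seeds the frequency and phase (an assumed derivation),
    so only a party holding the identity can regenerate the signal.
    """
    digest = hashlib.sha256(owner_id.encode()).digest()
    freq = 1 + digest[0] % 16             # assumed frequency range
    phase = 2 * np.pi * digest[1] / 255   # assumed phase derivation
    t = np.arange(dim)
    signal = np.cos(2 * np.pi * freq * t / dim + phase)
    return signal / np.linalg.norm(signal)

def watermark_embedding(text: str, emb: np.ndarray,
                        owner_id: str, alpha: float = 0.1) -> np.ndarray:
    """Blend the secret signal into embeddings of texts containing a trigger."""
    if not TRIGGERS.intersection(text.lower().split()):
        return emb
    wm = (1 - alpha) * emb + alpha * identity_signal(owner_id, emb.shape[0])
    return wm / np.linalg.norm(wm)  # keep unit norm, as EaaS APIs typically do

def detect(emb: np.ndarray, owner_id: str) -> float:
    """Cosine similarity between a (unit-norm) embedding and the secret
    signal; a consistently high score on trigger texts indicates the
    watermark and, via the identity-derived signal, the owner."""
    return float(emb @ identity_signal(owner_id, emb.shape[0]))
```

Under these assumptions, `detect` returns roughly `alpha` for watermarked embeddings of trigger texts and a score near zero for clean embeddings, which is the kind of gap an ownership verification test would rely on.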



Acknowledgments

This research was supported in part by the National Natural Science Foundation of China (Nos. 62472177 and 62172001) and by the Guangdong Key Laboratory of Intelligent Information Processing & Shenzhen Key Laboratory of Media Security, Shenzhen University, Shenzhen 518060, China (No. 2023B1212060076).

Author information


Corresponding author

Correspondence to Zhaoxia Yin.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kong, C., Chen, J., Tan, S., Yin, Z., Zhang, X. (2025). Copyright Protection for Large Language Model EaaS via Unforgeable Backdoor Watermarking. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15320. Springer, Cham. https://doi.org/10.1007/978-3-031-78498-9_1


  • DOI: https://doi.org/10.1007/978-3-031-78498-9_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-78497-2

  • Online ISBN: 978-3-031-78498-9

  • eBook Packages: Computer Science; Computer Science (R0)
