
An Ensemble-Based Approach for Generative Language Model Attribution

Conference paper in Web Information Systems Engineering – WISE 2023.

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14306)


Abstract

Recently, Large Language Models (LLMs) have attracted considerable attention for their remarkable ability to automatically generate text that closely resembles human writing. They have become invaluable tools for a variety of text-based tasks such as content creation and report generation. Nevertheless, the proliferation of these tools can have undesirable consequences, such as the generation of false information and plagiarism. A variety of LLMs have been deployed in the last few years, and their abilities are heavily influenced by the quality of their training corpora, model architectures, pre-training tasks, and fine-tuning processes. The ability to attribute generated text to a specific LLM not only helps us understand differences in the LLMs' output characteristics, but also helps distinguish machine-generated text from human-written text. In this paper, we study whether a machine learning model can be effectively trained to attribute text to the underlying LLM that generated it. We propose an ensemble neural model that generates probabilities from multiple pre-trained LLMs, which are then used as features for a traditional machine learning classifier. The proposed approach is tested on the Automated Text Identification (AuTexTification) datasets in English and Spanish. We find that our models outperform various baselines, achieving macro \(F_1\) scores of 0.63 and 0.65 for English and Spanish texts, respectively.
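
To make the ensemble idea concrete: each pre-trained model, fine-tuned for the attribution task, emits a probability distribution over the candidate generator classes, and these distributions are concatenated into one feature vector for a conventional classifier. The paper's own implementation is not reproduced here; the following is a minimal sketch of that stacking pattern using Hugging Face Transformers and scikit-learn, in which the checkpoint names, label count, and training data are illustrative placeholders and logistic regression stands in for whichever downstream classifier is chosen.

```python
# Minimal sketch of the ensemble described in the abstract: each backbone
# model's softmax probabilities become features for a classical classifier.
# Checkpoints, label count, and data are placeholders, not the authors' setup.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAMES = ["bert-base-uncased", "roberta-base"]  # assume fine-tuned for attribution
NUM_LABELS = 6  # e.g., several generator LLMs plus a human class (placeholder)

backbones = []
for name in MODEL_NAMES:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=NUM_LABELS)
    model.eval()
    backbones.append((tokenizer, model))

def probability_features(texts):
    """Concatenate each backbone's class probabilities into one feature vector per text."""
    parts = []
    with torch.no_grad():
        for tokenizer, model in backbones:
            batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
            probs = torch.softmax(model(**batch).logits, dim=-1)
            parts.append(probs.numpy())
    return np.concatenate(parts, axis=1)  # shape: (len(texts), NUM_LABELS * len(backbones))

# Placeholder data standing in for an AuTexTification training split.
train_texts = ["a machine-generated paragraph", "a human-written paragraph"]
train_labels = [0, 5]

clf = LogisticRegression(max_iter=1000)
clf.fit(probability_features(train_texts), train_labels)
preds = clf.predict(probability_features(train_texts))
print("macro F1:", f1_score(train_labels, preds, average="macro"))
```

Stacking per-model probabilities rather than raw logits keeps the features on a common scale across backbones, which is one reason such ensembles pair well with simple linear meta-classifiers.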




Author information

Correspondence to Harika Abburi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Abburi, H., Suesserman, M., Pudota, N., Veeramani, B., Bowen, E., Bhattacharya, S. (2023). An Ensemble-Based Approach for Generative Language Model Attribution. In: Zhang, F., Wang, H., Barhamgi, M., Chen, L., Zhou, R. (eds) Web Information Systems Engineering – WISE 2023. WISE 2023. Lecture Notes in Computer Science, vol 14306. Springer, Singapore. https://doi.org/10.1007/978-981-99-7254-8_54

  • DOI: https://doi.org/10.1007/978-981-99-7254-8_54

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7253-1

  • Online ISBN: 978-981-99-7254-8

  • eBook Packages: Computer Science, Computer Science (R0)
