
Extracting and Re-mapping Narrative Text Structure Elements Between Languages Using Self-supervised and Active Few-Shot Learning

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13336)

Abstract

Transcreators extract crucial information from text written in one language for a specific media type and translate this text into a different language and a different media type. Multiple factors drive changes in narrative structures across languages and media platforms. AI-based approaches can extract critical information elements from text and augment human analysis and insight to facilitate transcreation. In this study, we apply self-supervised learning and active few-shot learning based on generative pretrained transformer models (e.g., GPT-N) to perform information extraction. We also use Wikifier (https://wikifier.org/) to annotate the related text with links to relevant Wikipedia concepts, providing human users with additional explanatory context. Performance statistics were collected on four news stories. The results show that the self-supervised approach is error-prone: the GPT-3 pretrained language model can generate synthetic information based on patterns learned from its huge training corpus rather than reflecting only the facts present in the prompted text. In contrast, active few-shot learning worked very well, reaching 87.5% accuracy on the experimental examples. Wikifier also produced a large number of correct and useful links to named entities such as person names, locations, organizations, and concepts. Transcreators can leverage these AI tools to perform their tasks more effectively.
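To make the two techniques in the abstract concrete, the sketch below illustrates (a) how an active few-shot extraction prompt might be assembled and sent to a GPT-3-style completions endpoint, and (b) how Wikifier's public annotation API can be called to link text to Wikipedia concepts. This is not the authors' actual pipeline: the demonstration stories, the "Elements" output format, the model name, and the threshold value are all hypothetical choices for illustration; the OpenAI and Wikifier endpoints and parameters reflect their publicly documented APIs around the time of publication and should be checked against current documentation.

```python
import json
import os
import urllib.parse
import urllib.request

# --- (a) Active few-shot information extraction with a GPT-3-style model ---
# Hypothetical labeled examples; in active few-shot learning these would be
# examples a human selected and corrected over successive rounds.
FEW_SHOT_EXAMPLES = [
    ("A fire broke out at the old mill on Sunday; two workers were rescued.",
     "Event: fire | Location: old mill | Time: Sunday | People: two workers"),
    ("Mayor Chen announced a new transit plan for the city on Tuesday.",
     "Event: transit plan announcement | Location: the city | Time: Tuesday | People: Mayor Chen"),
]

def build_prompt(story: str) -> str:
    """Assemble a few-shot prompt: an instruction, labeled examples, then the new story."""
    parts = ["Extract the key narrative elements from each story."]
    for text, extraction in FEW_SHOT_EXAMPLES:
        parts.append(f"Story: {text}\nElements: {extraction}")
    parts.append(f"Story: {story}\nElements:")
    return "\n\n".join(parts)

def gpt3_extract(story: str) -> str:
    """Call the (2022-era) OpenAI v1/completions endpoint with the few-shot prompt."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps({
            "model": "text-davinci-002",  # a GPT-3-family model name circa 2022
            "prompt": build_prompt(story),
            "max_tokens": 100,
            "temperature": 0.0,           # deterministic output suits extraction
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"].strip()

# --- (b) Entity linking with Wikifier ---
def wikify(text: str, lang: str = "en") -> list:
    """Annotate text with links to Wikipedia concepts via wikifier.org."""
    data = urllib.parse.urlencode({
        "userKey": os.environ["WIKIFIER_USER_KEY"],  # free key from wikifier.org
        "text": text,
        "lang": lang,
        "pageRankSqThreshold": "0.8",                # filter low-confidence links
        "applyPageRankSqThreshold": "true",
    }).encode()
    with urllib.request.urlopen("http://www.wikifier.org/annotate-article", data) as resp:
        result = json.load(resp)
    # Each annotation carries (among other fields) a Wikipedia title and URL.
    return [(a["title"], a["url"]) for a in result.get("annotations", [])]

if __name__ == "__main__":
    story = "Engineers at Acme Robotics in Boston unveiled a delivery drone on Friday."
    print(gpt3_extract(story))
    for title, url in wikify(story):
        print(title, url)
```

The two calls are independent, which mirrors the paper's workflow: the model output gives the extracted narrative elements for the transcreator to verify, while the Wikifier annotations supply background links for the entities mentioned in the same text.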


References

  1. Vaswani, A., et al.: Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)

  2. Brown, T.B., et al.: Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)

  3. Settles, B.: Active learning literature survey. Technical report, University of Wisconsin-Madison (2009)

  4. Wang, Y., Yao, Q., Kwok, J.T., Ni, L.M.: Generalizing from a few examples: a survey on few-shot learning. ACM Comput. Surv. 53(3), 1–34 (2020)

  5. Wikifier: semantic annotation service for 100 languages. https://wikifier.org/


Author information

Correspondence to Ming Qian.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Qian, M., Zhu, E. (2022). Extracting and Re-mapping Narrative Text Structure Elements Between Languages Using Self-supervised and Active Few-Shot Learning. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_38


  • DOI: https://doi.org/10.1007/978-3-031-05643-7_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05642-0

  • Online ISBN: 978-3-031-05643-7

