
Morality Beyond the Lines: Detecting Moral Sentiment Using AI-Generated Synthetic Context

  • Conference paper
  • First Online:
Artificial Intelligence in HCI (HCII 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12797)


Abstract

Moral rhetoric is defined as the language used to advocate or take a moral stance towards an issue by invoking or making salient various moral concerns. Moral Foundations Theory (MFT) can be used to evaluate expressions of moral sentiment; it proposes that there are five innate, universal moral foundations that exist across cultures and societies: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. We investigate the case in which moral meaning is not expressed explicitly through keywords of the Moral Foundations Dictionary (MFD), that is, the case of hidden context. While members of high-context groups can read meanings “between the lines,” word-counting and other NLP methods for detecting and quantifying moral sentiment fail when the related keywords are not there to be counted. To uncover the hidden context, we leverage a pretrained generative language model, the Generative Pre-trained Transformer 2 (GPT-2), which uses deep learning to produce human-like text, to generate a new story: a human writer provides several prompting sentences, and the GPT model produces the rest of the story. To customize the GPT-2 model towards a specific domain (for this paper we studied local populations’ attitudes towards US military bases located in foreign countries), a training dataset from that domain can be used to fine-tune the model. Fine-tuning means taking the weights of a trained neural network and using them as the initialization for a new model trained on the fine-tuning dataset. Restricted language codes (in which meanings are not expressed explicitly) can be used as prompting sentences, and fine-tuned GPT models can be used to generate multiple versions of synthetic contextual stories. Since the GPT-2 model was trained on millions of examples from a huge text corpus, the generated context reflects culture-related knowledge and common sense. In addition, since the fine-tuned models were trained on the domain dataset, the generated context reflects local populations’ reactions in that specific domain, which for this paper is attitudes towards US military bases. After using the original or fine-tuned GPT-2 model to generate multiple versions of synthetic text, some versions may contain keywords defined in the MFD. Our hypothesis is that the percentages of keywords related to the five morality domains can serve as statistical indicators for those domains. Our experiment shows that the top five morality domain types with significant percentage changes between positive and negative stories generated by the fine-tuned models are HarmVice, AuthorityVirtue, InGroupLoyalty, FairnessVirtue, and FairnessVice. These results are in line with several major issues between US overseas military bases and local populations identified by well-known existing studies. The main contribution of this research is the use of AI-generated synthetic context for detecting and quantifying moral sentiment.
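
To make the described pipeline concrete, the sketch below is a minimal illustration, assuming the Hugging Face Transformers implementation of GPT-2 (the paper does not specify its tooling): it samples several synthetic continuations from a restricted-code prompt and computes, for each MFD category, the percentage of generated words matching its keywords. The model path, prompts, and abbreviated MFD keyword lists are placeholders rather than the authors' data.

# Minimal sketch (not the authors' code): generate synthetic context with a
# (fine-tuned) GPT-2 checkpoint, then compute, for each MFD category, the
# percentage of generated words that match its keywords.
import re
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical checkpoint: "gpt2" here, or a directory holding a model
# fine-tuned on the domain corpus (texts about attitudes towards US bases).
MODEL_PATH = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_PATH)
model = GPT2LMHeadModel.from_pretrained(MODEL_PATH)

# Tiny illustrative excerpt; the real Moral Foundations Dictionary (ref. 7)
# defines the full keyword lists for all virtue/vice categories.
MFD = {
    "HarmVice": {"harm", "hurt", "suffer", "attack"},
    "FairnessVirtue": {"fair", "justice", "equal", "rights"},
    "FairnessVice": {"unfair", "cheat", "dishonest"},
    "InGroupLoyalty": {"loyal", "community", "together", "nation"},
    "AuthorityVirtue": {"authority", "law", "order", "respect"},
}

def generate_stories(prompt, n=10, max_new_tokens=120):
    """Generate n synthetic contextual stories from a restricted-code prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,            # sampling yields multiple distinct versions
        top_k=50,
        top_p=0.95,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in outputs]

def mfd_percentages(stories):
    """Percentage of generated words belonging to each MFD category."""
    words = [w for text in stories for w in re.findall(r"[a-z']+", text.lower())]
    return {
        cat: 100.0 * sum(w in keywords for w in words) / max(len(words), 1)
        for cat, keywords in MFD.items()
    }

# Example: compare indicator percentages for a positive and a negative prompt.
positive = generate_stories("The townspeople welcomed the soldiers from the base.")
negative = generate_stories("Protesters gathered again outside the base gates.")
print(mfd_percentages(positive))
print(mfd_percentages(negative))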


References

  1. Bernstein, B.: Elaborated and restricted codes: their social origins and some consequences. Am. Anthropol. 66(6), 55–69 (1964)

  2. Benamara, F., Inkpen, D., Taboada, M.: Introduction to the special issue on language in social media: exploiting discourse and other contextual information. Comput. Linguist. 44(4), 663–681 (2018)

  3. Sagi, E., Dehghani, M.: Measuring moral rhetoric in text. Soc. Sci. Comput. Rev. 32(2), 132–144 (2013)

  4. Graham, J., Haidt, J., Nosek, B.A.: Liberals and conservatives rely on different sets of moral foundations. J. Pers. Soc. Psychol. 96(5), 1029 (2009)

  5. Garten, J., Boghrati, R., Hoover, J., Johnson, K.M., Dehghani, M.: Morality between the lines: detecting moral sentiment in text. In: Proceedings of the IJCAI 2016 Workshop on Computational Modeling of Attitudes (2016)

  6. Moral foundations theory. https://moralfoundations.org/

  7. Moral foundations dictionary. https://moralfoundations.org/wp-content/uploads/files/downloads/moral%20foundations%20dictionary.dic

  8. Haidt, J., Graham, J., Joseph, C.: Above and below left–right: ideological narratives and moral foundations. Psychol. Inquiry 20(2–3), 110–119 (2009)

  9. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)

  10. Allen, M.A., Flynn, M., Machain, C.M., Stravers, A.: Understanding how populations perceive U.S. troop deployments. The Owl in the Olive Tree, the blog of the Minerva Research Initiative (2019). https://minerva.defense.gov/Owl-In-the-Olive-Tree/Owl_View/Article/1797784/understanding-how-populations-perceive-us-troop-deployments/

  11. Stasinopoulos, D.: A beginner's guide to training and generating text using GPT2. https://medium.com/@stasinopoulos.dimitrios/a-beginners-guide-to-training-and-generating-text-using-gpt2-c2f2e1fbd10a

  12. Guglielmo, S., Malle, B.F.: Asymmetric morality: blame is more differentiated and more extreme than praise. PLOS ONE (2019). https://doi.org/10.1371/journal.pone.0213544

  13. Anderson, R.A., Crockett, M.J., Pizarro, D.A.: A theory of moral praise. Trends Cogn. Sci. 24(9), 694–703 (2020)

  14. Cooley, A.: Base Politics: Democratic Change and the US Military Overseas. Cornell University Press, Ithaca (2012)

Author information

Corresponding author

Correspondence to Ming Qian.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Qian, M., Laguardia, J., Qian, D. (2021). Morality Beyond the Lines: Detecting Moral Sentiment Using AI-Generated Synthetic Context. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2021. Lecture Notes in Computer Science(), vol 12797. Springer, Cham. https://doi.org/10.1007/978-3-030-77772-2_6

  • DOI: https://doi.org/10.1007/978-3-030-77772-2_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77771-5

  • Online ISBN: 978-3-030-77772-2

  • eBook Packages: Computer Science, Computer Science (R0)
