
Don’t Do That! Reverse Role Prompting Helps Large Language Models Stay in Personality Traits

  • Conference paper
  • First Online:
Interactive Storytelling (ICIDS 2024)

Abstract

This paper investigates the effectiveness of role prompting, an approach for conditioning a large language model (LLM) on a role or personality trait. Such conditioning is crucial for various applications, such as using an LLM as a non-playable character. However, existing studies have only observed changes when a role or personality trait is introduced to an LLM, not how faithfully the model conforms to the assigned trait. In this study, we investigate how well LLMs adhere to given personality instructions using a well-established personality test. Additionally, we conduct an experiment to determine how an assigned personality influences biases in generating a game story ending. Through our investigations, we found that traditional role prompting is ineffective at reliably eliciting a target personality trait. We therefore propose a novel variant of role prompting, called reverse role prompting (RRP), which reduces the salience of personality traits other than the assigned one. We observed that when RRP is used to assign personalities to LLMs, the LLMs enact the given personality more faithfully. We also found that LLMs inherently exhibit high agreeableness, which biases story endings when they are instructed to generate a game story; RRP is also effective at reducing the effects of this inherent agreeableness. Future studies should further investigate the effectiveness of the proposed RRP for a wider range of narrative elements and storytelling applications.
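The contrast between standard role prompting and the proposed reverse role prompting can be sketched as follows. This is an illustrative assumption only: the paper's exact prompt wording is not reproduced here, and the phrasing and trait list below are stand-ins based on the Big Five traits measured by IPIP-50-style instruments.

```python
# Big Five trait names as used in IPIP-50-style instruments.
BIG_FIVE = ["extraversion", "agreeableness", "conscientiousness",
            "emotional stability", "intellect"]


def standard_role_prompt(trait: str) -> str:
    """Conventional role prompting: state only the desired trait."""
    return f"You are a person with high {trait}."


def reverse_role_prompt(trait: str) -> str:
    """Reverse role prompting (RRP) sketch: additionally tell the model
    which traits it should NOT exhibit, reducing the influence of traits
    other than the assigned one (e.g. the model's inherent agreeableness)."""
    others = [t for t in BIG_FIVE if t != trait]
    negative = "; ".join(f"do not act with high {t}" for t in others)
    return f"You are a person with high {trait}. Don't do that: {negative}."


print(standard_role_prompt("extraversion"))
print(reverse_role_prompt("extraversion"))
```

The key design idea, as described in the abstract, is the explicit negative instruction: the reverse prompt names the non-target traits and tells the model not to express them, rather than relying on the positive instruction alone.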


Notes

  1. https://www.jp.square-enix.com/ai-tech-preview/portopia/en/.

  2. https://github.com/Pittawat2542/llm-personality.

  3. https://github.com/SiyuanChen0218/LLMs_Story_Generation_with_Personality.

  4. A system prompt is an initial set of instructions or context provided to guide and shape the behavior and responses of an LLM.

  5. https://platform.openai.com/docs/models/gpt-3-5-turbo.
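Note 4's system prompt is, in practice, supplied as the first message of a chat-completion request. A minimal sketch, assuming the widely used OpenAI-style chat message format (the actual client call is omitted, and the prompt text here is illustrative, not the paper's):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Package a role-conditioning system prompt together with a user
    task, in the OpenAI-style chat message format."""
    return [
        {"role": "system", "content": system_prompt},  # conditions behavior
        {"role": "user", "content": user_prompt},      # the actual task
    ]


msgs = build_messages(
    "You are a person with high agreeableness.",   # role prompt (illustrative)
    "Write an ending for this game story.",
)
print(msgs[0]["role"])  # system
```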


Author information

Corresponding author

Correspondence to Siyuan Chen.


A IPIP-50 Scores When Assigning Different Personality Traits Using Standard Role Prompting

See Tables 6, 7, 8, 9 and 10.

Table 6. IPIP-50 scores when assigning the intellect personality trait using the standard role prompting approach.
Table 7. IPIP-50 scores when assigning the conscientiousness personality trait using the standard role prompting approach.
Table 8. IPIP-50 scores when assigning the extraversion personality trait using the standard role prompting approach.
Table 9. IPIP-50 scores when assigning the agreeableness personality trait using the standard role prompting approach.
Table 10. IPIP-50 scores when assigning the emotional stability personality trait using the standard role prompting approach.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, S. et al. (2025). Don’t Do That! Reverse Role Prompting Helps Large Language Models Stay in Personality Traits. In: Murray, J.T., Reyes, M.C. (eds) Interactive Storytelling. ICIDS 2024. Lecture Notes in Computer Science, vol 15467. Springer, Cham. https://doi.org/10.1007/978-3-031-78453-8_7


  • DOI: https://doi.org/10.1007/978-3-031-78453-8_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-78452-1

  • Online ISBN: 978-3-031-78453-8

  • eBook Packages: Computer Science; Computer Science (R0)
