Abstract:
The recent advancements of large language models (LLMs) have sparked intense interest among researchers and the general public alike as to how deeply these models understand the human mind. While LLMs have shown remarkable capabilities in accounting for humans' cognitive and social activities, their knowledge of human cultural diversity has not been directly evaluated. Our study addresses this gap by investigating GPT-3.5's and GPT-4's ability to simulate human self-concept across 73 countries. Using the Twenty Statements Test, a classic paradigm designed to examine self-concept, we illustrate GPT's ability to account for variations in self-concept across cultures. In line with existing findings in cross-cultural psychology, GPT-simulated self-concept contained significantly more social elements in collectivist cultures than in individualist cultures. As such, we show initial evidence of LLMs' capacity to account for cultural variability in human mind and behavior. Our findings address prevalent concerns about LLMs' sensitivity to human diversity, providing insights into the feasibility of using AI to simulate human subjects on a global scale in social science research.
Date of Conference: 16-18 August 2024
Date Added to IEEE Xplore: 12 December 2024