Presentation

Do Large Language Models Reflect Societal Gender Bias? A Comparative Analysis
Description
This study examines gender bias in LLMs by comparing model-generated responses with those of human respondents. A questionnaire based on the Gender Equality Public Opinion Survey was employed, with virtual personas reflecting the demographic distribution of participants, consistent with the human survey. These personas engaged in role-playing scenarios using two distinct LLMs. Statistical analysis identified significant differences between the AI models and the human survey data, underscoring the regional specificity of gender equality perceptions and the limitations of LLMs in capturing nuanced social dynamics. Furthermore, the study addresses the potential consequences of over-filtering, which may suppress diverse viewpoints, including those of minority groups. These findings highlight the necessity of culturally sensitive bias mitigation strategies and of ensuring diversity when applying LLMs in cultural and social contexts.
Event Type
Workshop
Time
Monday, 18 November 2024, 2pm - 2:03pm EST
Location
B309
Tags
Broader Engagement
HPC in Society
Inclusivity
Registration Categories
W