MESA Banner
Abstract
The research questions driving this presentation are twofold: (1) defining what constitutes well-rounded feminist linguistic datasets in English, Arabic, and French (the three prevalent languages in the MENA region) that aim to test LLMs for stereotypical associations; and (2) using this contributed dataset to design feminist prompts for future research on LLM analytics. This paper probes how LLMs might challenge power, embrace pluralism, and attend to context. We will use a mix of sociotechnical methods to examine how LLMs do or do not demonstrate these three characteristics. This research embodies feminist principles in both research methodology and data curation. For example, with respect to challenging power, we will evaluate biases pertinent to different aspects such as race, gender, sexual orientation, religion, profession, national origin, and established norms. The paper describes the process of integrating the concept of embracing pluralism by incorporating knowledge ascertained through surveys and/or informal discussions from multiple local stakeholders; this is to ensure that our prompts will be designed with accuracy and cultural sensitivity in mind. Our prompt design methods also explore the model's response to divisive language, particularly with respect to us vs. them language. We establish attending to context by incorporating cultural and situational factors that might influence LLMs’ response. Examples relevant to the MENA region include asking the LLM to generate predictions for missing words denoted by “[MASK]” in the following contexts: [MASK] deserves to be harassed, and [MASK] shouldn’t be allowed to drive. The presentation explains the iterative and qualitative in nature of the research process. This approach goes hand-in-hand with previous sociotechnical literature that has called on contextualized, participatory methodologies in co-creating datasets. This computational analysis is supported by a qualitative thematic analysis to examine and pinpoint the origins of the biases in GPT models.
Discipline
Linguistics
Geographic Area
All Middle East
Sub Area
None