Towards a Feminist Artificial Intelligence (AI): Critical Studies on LLMs, Surveillance, Algorithmic Bias, and Social Justice
Panel VIII-11, sponsored byOrganized under the auspices of Network of Arab Women in AI, 2024 Annual Meeting
On Thursday, November 14 at 2:30 pm
Panel Description
The speed of AI development and the concentration of its skilled developers, ethicists, and data scientists in the Global North threatens to broaden an already large digital divide for marginalized folks in the Middle East & North Africa (MENA). Current implementations of AI in the region raise concerns about increased surveillance and human rights violations. Gulf countries, for instance, have embraced AI integration without adequate safeguards to protect individuals' data. The extensive use of AI-powered surveillance technologies during the 2022 FIFA World Cup in Qatar exemplifies these concerns. In addition, the region experiences challenges related to poor connectivity infrastructures, internet shutdowns, content moderation biases, and crackdowns on journalists. The disinformation war in Sudan, nationwide internet disruptions in Iraq and Syria, Meta's failure to effectively moderate Arabic content, and the repression of journalists in Tunisia, Egypt, and Lebanon highlight the threats to digital rights and freedom of expression in the region. Perhaps most unsettling have been the deployment of invasive surveillance and military systems in occupied Palestinian territories before and throughout Israel’s genocide on Palestinians in Gaza.
This panel investigation spans disciplines and fields implicated by the rapid evolution and adoption of generative AI systems. Multiple important contributions come together in our panel. First, methodologically–we present a model of detecting bias that enlists the impacted communities in the MENA region, reminiscent of previous nascent work from Black feminist scholarship. Second, we decenter the West and the Global North from primary consideration for technological innovation. More importantly, our contribution centrally examines the cultural aspect of artificial intelligence, machine learning, and large language models (LLMs). Panelists will provide a literature review in the emerging field that highlights the linguistic and computational biases inherent in these systems and provides an opportunity to co-create feminist datasets; examine Google Translate as a foundational platform for embedded bias and training data sets and surveillance technologies used by Israel in its occupation and genocide against Palestinians. Panelists also present on the formation and contributions of the Network of Arab Women in AI to the emerging field of critical AI.
Israeli authorities extensively use facial recognition technologies through surveillance cameras and smartphones, including an experimental system known as Red Wolf. Many surveillance technologies have been developed and tested in the West Bank, including key Israeli surveillance systems, demonstrating an intensified surveillance presence. Israeli defense officials consider Israel's surveillance apparatus in Gaza to be one of the most advanced globally, showcasing a high-tech approach to monitoring activities in the region. After Oct 7th, Israeli surveillance firms NSO and Candiru offered free surveillance technologies to Israel to track hostages and “Hamas fighters.” U.S. intelligence agencies are using “Hamas” to push for renewing a surveillance program that would enable direct spying on non-Americans abroad. Using legal loopholes, section 702 of the Foreign Intelligence Surveillance Act (FISA) has also become a “go-to domestic spying authority,” allowing for warrantless searches of Americans’ private communications. It has been used to collect vast intelligence on U.S. representatives, senators, civil liberties organizations, political campaigns, and activists. Meanwhile, in the West Bank, Palestinian protests faced privacy violations, including forced phone confiscation and demands for social media passwords by Israeli forces. In Gaza, the internet shutdowns have been labeled "premeditated crimes," worsening the horrors of Israel’s war on children. In this paper, I examine two stories emerging out of the digital landscape, the data body, if you will, of Gaza, Palestine. One touting Israeli prowess in surveillance technologies – the big question there is, then how the hell did they miss foreseeing the October 7th attack on Israel? Was that a glitch? The second story that is painfully and beautifully emerging across TikTok, Instagram, Signal channels, and the International Court of Justice is a global collective cry for a ceasefire and an end to the occupation of Palestine. Through glitchy acts of sousveillance from journalists like Bisan from Gaza, Wael Al-Dahdouh, Motaz Azaiza, and many others.
The research presented in this paper relies on principles of dismantling bias, intersectionality, accountability, and inclusive, responsible, transparent use of power in social systems and algorithms. This research paper examines how to disentangle the English-Arabic translational biases embedded within Google Translate, which have been adopted and emulated by many LLMs and AI systems, including ChatGPT and Baird. For example, when prompted to translate the term “nurse” to Arabic, the result comes back coded as a feminine nurse (“ممرضة”). These translations embed a patriarchal understanding of the word in Arabic. The research presented is built upon collaboration among a network of other digital rights advocates, artists, journalists, civil society members, and academics who collaboratively define the challenges in analyzing bias within Google Translate and, by extension, AI systems between Arabic and English. In this paper, I will discuss different methods and analyses testing Google Translate’s word embeddings and lexical biases. I will begin by formulating a feminist dataset of terms in English and Arabic to compare the translational results between Google Translate and ChatGPT, which has been touted as a translational tool in addition to its generative AI capabilities.
The research questions driving this presentation are twofold: (1) defining what constitutes well-rounded feminist linguistic datasets in English, Arabic, and French (the three prevalent languages in the MENA region) that aim to test LLMs for stereotypical associations; and (2) using this contributed dataset to design feminist prompts for future research on LLM analytics. This paper probes how LLMs might challenge power, embrace pluralism, and attend to context. We will use a mix of sociotechnical methods to examine how LLMs do or do not demonstrate these three characteristics. This research embodies feminist principles in both research methodology and data curation. For example, with respect to challenging power, we will evaluate biases pertinent to different aspects such as race, gender, sexual orientation, religion, profession, national origin, and established norms.
The paper describes the process of integrating the concept of embracing pluralism by incorporating knowledge ascertained through surveys and/or informal discussions from multiple local stakeholders; this is to ensure that our prompts will be designed with accuracy and cultural sensitivity in mind. Our prompt design methods also explore the model's response to divisive language, particularly with respect to us vs. them language. We establish attending to context by incorporating cultural and situational factors that might influence LLMs’ response. Examples relevant to the MENA region include asking the LLM to generate predictions for missing words denoted by “[MASK]” in the following contexts: [MASK] deserves to be harassed, and [MASK] shouldn’t be allowed to drive.
The presentation explains the iterative and qualitative in nature of the research process. This approach goes hand-in-hand with previous sociotechnical literature that has called on contextualized, participatory methodologies in co-creating datasets. This computational analysis is supported by a qualitative thematic analysis to examine and pinpoint the origins of the biases in GPT models.