AI, Safety, and the Rising Threat of Technology-Facilitated Violence Against Women and Girls: Reflections from the AEGIS Roundtable to Mark the 16 Days of Activism to End Digital VAWG

Author: AEGIS Centre

As artificial intelligence (AI) becomes part of everyday life across the world, it is also reshaping the risks faced by women and girls in digital spaces. This was the central concern explored during the recent AEGIS roundtable held as part of the 16 Days of Activism.

The event brought together researchers, practitioners, and students exploring this new terrain: Rachel Windebank, Sixtus C. Onyekwere (PhD student), Dr Karen Middleton, Dr Judith Fletcher-Brown, and Gemma Summers-Green (PhD student). The event was moderated by Dr Devran Gulel (AEGIS Executive Member). Participants’ contributions pointed to a shared reality: AI is amplifying long-standing patterns of abuse while creating fresh challenges for safety, accountability, and justice.

Prof Nafisa Bedri giving opening remarks on behalf of AEGIS

(1) AI as an Amplifier of Abuse

Rachel Windebank, Operations Director at Stop Domestic Abuse, grounded her presentation in frontline experience. She noted that survivors now describe technology as “woven into” their abuse. What once required effort or technical skill now happens with ease: simple devices such as AirTags enable hidden surveillance without any specialist knowledge. Rachel shared a recent case in which a perpetrator placed an AirTag inside a baby’s car seat during a supervised visit. The mother and child had recently been moved to refuge accommodation, and within hours he had located them. The refuge placement had to be abandoned for their safety.

Rachel Windebank giving her presentation

According to Rachel, this pattern is no longer unusual. Survivors are reporting higher levels of fear related to hidden monitoring, fake accounts, voice cloning, and manipulated screenshots. Some perpetrators automate harassment through bots that send hundreds of messages through the night. Others generate deepfake images to threaten, coerce, or shame. These actions are not new, but AI increases their scale and intensity.

The psychological impact is significant. Many survivors say they feel digitally followed and unable to find peace even after leaving the perpetrator. Rachel emphasised that this constant digital presence is altering the experience of coercive control in ways that the sector is still learning to understand.

Family Courts and the Problem of AI-Generated Evidence:

A striking theme in Rachel’s presentation was the effect of AI on family court processes. She described how perpetrators use AI to create long, polished statements that mask abusive behaviour. They also submit fabricated evidence that is difficult to challenge without specialist training. Meanwhile, survivors speak from trauma and often appear inconsistent or distressed, which may influence how their accounts are received.

Rachel expressed concern that many courts lack digital literacy. Judges and legal professionals are not required to undertake training on technology-facilitated abuse. This gap creates a situation where perpetrators can exploit AI more effectively than the institutions tasked with protecting survivors and children.

She argued that courts need embedded domestic abuse specialists, better guidance on digital evidence, and clear protocols for identifying AI-generated material. Without these measures, existing inequalities may deepen. Dr Devran Gulel and Rachel discussed ways to foster collaboration between the family courts, the volunteer community, and researchers.


(2) Biases Shaping AI Systems

The second keynote came from Sixtus Cyprian Onyekwere, a PhD researcher at the University of Portsmouth (funded by the ESRC South Coast Doctoral Training Partnership) and an executive member of AEGIS. Drawing on AEGIS’s collaborative work with the Centre for the Study of the Economies of Africa (CSEA) and the Gender and Responsible Artificial Intelligence Network (GRAIN), Sixtus explored how AI reflects structural inequalities. He used a feminist lens to analyse three layers of bias: societal and cultural bias, data bias, and algorithmic bias.

Sixtus Onyekwere giving his presentation

Societal and Cultural Bias:

Sixtus noted that AI systems learn from the world as it is, not as it should be. When voice assistants default to female voices or respond politely to sexist comments, they reinforce ideas about women as compliant or available for service. Young users may absorb these messages and reproduce them offline. Building on this point in the Q&A, Prof Bedri and Dr Gulel discussed how early exposure to gendered roles and bias shapes girls’ career paths and contributes to the prejudice women encounter at work.

Data Bias:

Women are significantly under-represented in digital datasets, especially in the Global South. Facial recognition tools struggle to identify women of colour with accuracy. Automated hiring systems penalise women because their training data reflects historic patterns in the labour market. Search engines often show male CEOs even when data on women leaders exists. Sixtus explained that these examples show how social inequalities feed directly into digital systems. When datasets omit or misrepresent women, AI systems reproduce those distortions.
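To make the mechanism concrete, here is a minimal illustrative sketch (not drawn from the roundtable material): a toy classifier is trained on synthetic data in which one group is heavily under-represented and follows a different underlying pattern, and accuracy is then measured separately for each group. All sample sizes, features, and group definitions are assumptions for illustration only.

```python
# Illustrative sketch only: synthetic data, hypothetical groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, signal_column):
    # Features are random noise; the true label depends on a different
    # feature column for each group, so a model fitted mostly to the
    # majority group transfers poorly to the under-represented one.
    X = rng.normal(size=(n, 4))
    y = (X[:, signal_column] > 0).astype(int)
    return X, y

# Majority group: 2,000 samples. Under-represented group: 100 samples.
X_maj, y_maj = make_group(2000, signal_column=0)
X_min, y_min = make_group(100, signal_column=1)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * len(y_maj) + [1] * len(y_min))  # 0 = majority

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group
)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"overall accuracy: {accuracy_score(y_te, pred):.2f}")
for g, name in [(0, "majority"), (1, "minority")]:
    mask = g_te == g
    print(f"{name} accuracy: {accuracy_score(y_te[mask], pred[mask]):.2f}")
# Typical result: high overall and majority accuracy, but near-chance
# (~0.5) accuracy for the under-represented group.
```

The point of the sketch is that an aggregate metric can look strong while the under-represented group is served at near-chance levels, which is precisely the kind of distortion Sixtus described.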

Algorithmic Bias:

Algorithms often intensify historic patterns of discrimination. During the discussion, Sixtus described examples of AI systems used in recruitment, housing, and image classification that unfairly screened out women or mislabelled them based on gendered assumptions. Because AI operates at speed and scale, these patterns spread quickly.

(3) Interdisciplinary Insights from the Third Panel

The panel discussion, moderated by Dr Devran Gulel, brought together Dr Karen Middleton, Dr Judith Fletcher-Brown, and Gemma Summers-Green. Each offered insights from marketing, social marketing, and gender-based violence research.

Panel discussion with Dr Middleton, Dr Fletcher-Brown, and their PhD student Gemma Summers-Green

Dr Karen Middleton reflected on how online advertising models reward polarising and harmful content. She explained that advertising systems often amplify misogynistic material because it generates high engagement. This creates a cultural environment where harmful stereotypes and misinformation spread with ease.

Dr Judith Fletcher-Brown added that technology has a positive side when used for social change. She described examples from India where social marketing campaigns used digital tools to support acid attack survivors and shift public attitudes. These examples show that AI and technology can help challenge stigma when used with care.

Gemma Summers-Green highlighted the continuum of sexual violence and the need for prevention approaches that address long-term structural issues. She noted that shifting societal norms requires early education and intergenerational conversations, not only legal reforms. Her work in festival settings shows how informal spaces also contribute to harmful cultures, which then appear online.

Global South Contexts and the Need for Localised Understanding:

Speakers noted that AI-driven risks do not look the same everywhere. Dr Devran Gulel highlighted the example of digital tracking apps in Saudi Arabia. In that context, tracking can both restrict women and enable them to leave the home under conditions shaped by local norms. Prof Nafisa Bedri, British Academy Global Professor of Gender and Reproductive Health and AEGIS executive member, emphasised the need to understand context before making assumptions about safety or harm.

This broader conversation made it clear that Global South perspectives are essential. AI tools interact with social structures, cultural expectations, and legal systems in different ways. When research centres on the Global North, it may miss how AI shapes risk for women elsewhere.

(4) Where Solutions Might Begin

Across the event, speakers agreed that interventions must operate at multiple levels. Structural action is essential, including regulation that directly addresses technology-facilitated abuse, safety-by-design principles in tech companies, and accountability for harmful content.

At the same time, community organisations often adapt more quickly than formal institutions. Rachel’s example of frontline workers scanning for hidden trackers after a single incident shows how rapid adaptation can improve safety.

Long-term change appears to lie in early education, gender-responsive digital literacy, and public engagement that reflects the realities of younger generations. As speakers noted, many children form gendered assumptions before they start school. Addressing these early patterns may be one of the most effective prevention strategies.

The roundtable showed that AI is now deeply entangled with the everyday realities of violence against women and girls. The risks are growing, but so are opportunities to strengthen protection and accountability. What appears clear is that responses must draw on diverse forms of expertise, including global perspectives, feminist analysis, community practice, and early education. The conversation is only the beginning, but the need for sustained attention is evident.