9th February 2026

Safer Internet Day 2026: Making AI safe and equal

This Safer Internet Day, the theme is ‘Smart tech, safe choices – Exploring the safe and responsible use of AI’ – and it’s one that matters to everyone.

Artificial Intelligence (AI) is shaping how we work, learn, access services, and connect with one another. But “safer” can’t just mean safer for some. If AI is to serve all of us, it needs to be designed, used, and regulated with everyone in mind – including women and girls, who are most at risk of harm.

At AUDRI, we see AI as a technology that holds enormous promise – but also one that threatens to deepen the digital divide if left unchecked. Across every dimension – from who is harmed, to who gets to use these tools, to who is building them and making the rules – women and girls need to be prioritised.

AI-powered abuse: a growing crisis

The most acute harm is also the most urgent. AI-powered deepfake technology is being weaponised against women and girls on a staggering scale. According to UN Women, up to 95% of online deepfakes are non-consensual pornographic content, and 99% of those target women. These tools are increasingly easy to use and require minimal technical skill – just a single photograph and a request to generate fabricated explicit images – as was recently brought into sharp focus by revelations about the use of Grok to perpetrate image-based abuse.

Targets include public figures, journalists, and activists, but increasingly also underage girls: this month, UNICEF reported that at least 1.2 million children across 11 countries had their images manipulated into sexually explicit deepfakes in the past year alone.

Beyond deepfakes, AI is amplifying existing forms of gender-based violence, enabling more sophisticated stalking, sexual coercion and extortion, and coordinated harassment campaigns. And what happens online doesn’t stay online: it spills into offline life with devastating psychological, professional, and financial consequences. Yet 1.8 billion women and girls globally still lack legal protection from online harassment and technology-facilitated abuse.

The digital divide is widening

We know that tech-facilitated gender-based violence keeps women and girls offline, contributing to the digital divide – the gap between those who benefit from technology and those who are excluded from or disadvantaged by it. It forces women and girls to remove themselves from unsafe digital environments, or limits their participation in the digital ecosystem as they self-censor to avoid targeting and victimisation. AI-based abuse is no different, and, as AI reshapes the economy, the impact of this inequality is growing.

Research from Harvard Business School and UC Berkeley, drawing on 18 studies and more than 140,000 people, found that around the world women are roughly 25% less likely than men to use generative AI tools. The reasons are complex – women report lower familiarity with the tools (mirroring lower digital literacy rates more broadly), less confidence, and greater ethical concerns about using AI and about its impact. The consequences are serious. If AI boosts productivity and career advancement, as many predict it will, and women aren’t using it, existing gaps in pay and opportunity will grow. This is especially alarming given that women are nearly three times more likely than men to hold jobs that AI could automate – meaning the people most affected by AI disruption are the least equipped and supported to adapt.

Inequality by design

Behind these gaps lies a structural problem. Women make up only around 22% of the global AI workforce, dropping to just 12% of AI researchers and 14% of senior executives. We know that AI outputs mirror the inequalities, attitudes, and biases that exist across society – a function of models being trained on historically flawed datasets. But when the teams designing AI systems don’t reflect the diversity of the people affected by them, certain forms of bias, including gender bias, are likely to go unrecognised, unconsidered, and unmitigated. Indeed, research has shown that almost 44% of AI-based systems exhibit gender bias.

And the recent misuse of AI technology to create image-based abuse content is the predictable result of building powerful technologies without meaningful participation, impact assessments, and accountability mechanisms centred on those most likely to be harmed – including women, survivors of abuse, and people from marginalised communities. When oversight and governance exclude these perspectives, it is inevitable that the misogyny and sexism women live with every day are replicated in digital spaces.

Alarmingly, but perhaps not surprisingly, this content is extremely profitable, and, as we saw in the wake of the Grok scandal, the tech companies behind the tools have an interest in keeping them on the market. Recently, the European Commission preliminarily found that TikTok’s addictive design breached the Digital Services Act’s “do no harm” principle, pointing to a toxic engagement-driven business model that profits from harmful and addictive content while failing to properly assess user risk.

A critical moment for change

We believe in a world where everyone can enjoy and benefit from the opportunities promised by emerging technologies, including AI. This is why AUDRI is calling for the regulation of emerging technologies in line with our 10 Feminist Digital Principles, grounded in human rights law and developed to ensure everyone can enjoy equal rights to safety, freedom, and dignity in their online (and offline) lives.

This Safer Internet Day, we reiterate our call for governments, technology companies, and international institutions to:

  • Guarantee freedom from technology-facilitated gender-based violence, including swift and meaningful redress for survivors and accountability for platforms and developers.
  • Demand strict action against harmful AI systems and set safeguards against discriminatory bias, with those most at risk of harm leading the design of those safeguards.
  • Expand women’s participation and leadership in technology – in design, development, governance, and regulation at every level.
  • Adopt Equality-by-Design and Safety-by-Design principles throughout all phases of AI development, including mandatory human rights and gender impact assessments before deployment and during the use, monitoring, and modification of these technologies.
  • Centre the voices of survivors and affected communities in all technology oversight and regulation, because safety standards set without those who experience harm will always fall short.

Ensuring safer AI and a safer internet means more than preventing the worst abuses – urgent and pressing as these remain – or reacting after the fact to demand justice and accountability when they occur. It means understanding the structures that have enabled these abuses, and working towards a more equal digital world – from design, to deployment, to digital literacy and use. We need to ensure that the benefits of new technologies don’t flow to half the population while the other half carries the risk. The principles exist. The evidence is clear. What’s needed now is the will to act.

Join AUDRI’s campaign for universal digital rights at audri.org.

Read more about Grok, deepfakes and legal responses to image-based abuse.
