Women and AI

Social media was supposed to make things better: more connections, more community, more access. And in some ways, it has. 

But for many women, that promise of a safe, friendly online space has been eroding for years. And now, with the rapid rise of artificial intelligence, things are only getting worse. 

“Being a woman online in 2026 is pretty much a hellscape,” says Avery Swartz, founder and CEO of Camp Tech, a tech and AI skills training company. 

While that might sound dramatic, it’s not. 

The introduction of AI has opened a new world of dangerous possibilities, and over and over again, we are seeing it be weaponized against women, explains Swartz.

AI-generated deepfake pornography, “nudify” apps that can take a fully clothed image and turn it into something explicit in seconds, and AI-powered bots harassing women in their DMs or comment sections are becoming commonplace.

Even the things that feel harmless aren’t always harmless.

Generating AI versions of your LinkedIn profile picture or participating in viral online trends actively trains AI systems, improving capabilities such as facial recognition and image generation.

For a lot of women, says Swartz, it’s created a constant calculation: what’s safe to share, and what isn’t?

So, how did we get here?

Part of the problem lies in what’s known as algorithmic bias. AI systems learn from existing data – and that data reflects the world we already live in, including all of its stereotypes. The more common something is in the dataset, the more likely it is to show up in the output, explains Swartz. Which is why, unless you’re very specific, AI still tends to default to narrow, often male-dominated ideas of what power, leadership, or even beauty look like.

But the issue isn’t just the data, it’s the people behind it, she says.

A relatively small, powerful group – still overwhelmingly male – is shaping how these technologies are built and deployed. And increasingly, they’re doing it in a space that feels largely unchecked.

“There’s this sense of inevitability with tech,” says Swartz. “Like, this is just the way it’s going to be; like the word is coming down from on high from these… to put it plainly, tech bros.”

Too often, she adds, the focus is on possibility rather than risk.

“If you work in product, the first question should be: how can this be weaponized?” Swartz explains. “If you’re not asking that, you’re not asking the right questions.”

That question – how could this go wrong – has largely been missing from the AI conversation, and women are feeling the consequences of that oversight in real time.

There’s also a feedback loop at play. As online spaces become riskier, many women are choosing to post less or step back entirely. But when that happens, there’s less data representing women in these systems, making the outputs even more skewed.

“We’re damned if we do, damned if we don’t,” Swartz says bluntly.

Pushing back

And then there’s the issue of accountability – or lack thereof.

In Canada, meaningful regulation around AI and online harm is still playing catch-up. While some countries, like Australia, have taken steps to block minors from accessing social media, comparable Canadian legislation remains at the discussion stage.

Meanwhile, the technology continues to evolve at a pace that outstrips any safeguards.

“It’s like a game of whack-a-mole,” says Swartz. “The problem is just going to pop up again somewhere else, on some other app.” 

Until then, she says, the responsibility falls on individuals – on women, on parents, on anyone trying to navigate this space. 

But a few practical tips can make that process a little easier:

1. Be intentional about what you share

It helps to think of anything you post as something that could be reused, scraped, or repurposed in ways you didn’t originally intend. Public posts aren’t just “yours” in a simple sense – platforms often have broad rights to use that content, including for AI training. Before you share something, it’s worth asking: who else benefits from this? 

2. Think about how AI is using your image

Viral trends are often designed to generate valuable data. Something like a before-and-after photo or an aging challenge might feel like nostalgia or fun, but it can also become material that helps improve systems like facial recognition and deepfake generation.

3. Audit your existing digital footprint

You can go back and audit what’s already out there. Reviewing older posts, tightening privacy settings, or deleting content – especially photos of yourself or your children – can reduce how easily your digital footprint is accessed or scraped.

While Swartz acknowledges a lot of this sounds very doom-and-gloom, she also wants people to remember that we aren’t totally without hope.

“It’s important to remember that we don’t have to just instantly accept what these ‘emperors of tech’ are telling us,” she adds. 

“Nothing happens in a vacuum, and I think the country is in a very interesting place right now. We’re really trying to define who we are and what our values are, and what we stand for. I think there’s a layer there that could lead to some changes in how we handle AI in this country, which is really exciting.”