How AI Algorithms Shaped 90s 'Girl Power' Media—And Why It Backfired
Reddit users are calling out 90s 'girl power' shows for hollow stereotypes. But here's the real problem: the algorithms that shaped, promoted, and recommended these narratives were built on biased data. What can AI learn from this failure?
When Reddit threads dissect how 90s "girl power" media promised liberation but delivered clichés, the usual verdict is that the shows failed women. An algorithmic reading suggests the systems promoting them failed first. Recommendation engines, built on engagement metrics and purchase data, actively pushed one-dimensional characters as "aspirational." Netflix's algorithm doesn't care whether Carrie Bradshaw is hollow; it just knows you'll keep watching. Today's AI systems still struggle with representation for the same reason: they optimize for clicks, not depth. The data was broken from the start.
By YEET Magazine Staff | Updated: May 13, 2026
Sex and the City: The Algorithm's Perfect Product
This show was designed like a conversion funnel. Dull characters in designer clothes = behavioral targeting gold. Algorithms learned users watched it, then served ads for luxury brands. The system worked flawlessly—just not for actual human growth.
One Reddit user nailed it: "It's essentially a long and beautiful advertisement." That's not a critique of the writing. That's an indictment of how data-driven content recommendations optimized for commercial value over authenticity. Machines identified engagement patterns and amplified them.
Another viewer said the show almost trapped her in unhealthy relationship patterns. But here's the uncomfortable truth: the algorithm didn't care. Engagement = success. Emotional damage = not a metric.
Wonder Woman: When Training Data Gets Lazy
Wonder Woman's character had zero world knowledge but unlimited confidence. Sound familiar? That's exactly how AI models behave when trained on insufficient or biased datasets: incomplete information in, authoritative-sounding output out.
The film reduced her agency the second a man entered the scene. Romantic motivation became the plot's central algorithm. Women watching recognized the pattern instantly—because they'd seen it recommended to them countless times before. Streaming platforms' algorithms had learned: female heroes with love interests perform well.
Users called out the illogic. But algorithms don't evaluate logic. They evaluate watch time and completion rates.
Gilmore Girls: Echo Chambers Built Into Content Systems
Fast-paced dialogue. Codependent relationships masked as love. Normalized toxic dynamics. Recommendation systems identified "cozy comfort watch" signals and locked users into repetitive consumption patterns.
The algorithm learns: viewers who watch Gilmore Girls also watch these 47 similar shows. Then it serves those shows, reinforcing the same narrative patterns. Echo chambers aren't accidents—they're features of engagement optimization.
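The "viewers who watch X also watch Y" logic above is classic item-based collaborative filtering. Here's a minimal sketch of the idea, with an invented co-watch matrix and made-up show names; real systems use millions of rows and fancier similarity measures, but the shape is the same.

```python
from math import sqrt

# Toy co-watch matrix: rows are viewers, columns are shows.
# All names and viewing data are invented for illustration.
shows = ["Gilmore Girls", "Comfort Show A", "Comfort Show B", "Gritty Drama"]
watch = [
    [1, 1, 1, 0],  # viewer 1
    [1, 1, 0, 0],  # viewer 2
    [1, 0, 1, 0],  # viewer 3
    [0, 0, 0, 1],  # viewer 4
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def column(j):
    return [row[j] for row in watch]

# Item-item similarity: compare every other show's viewer column
# against the anchor show's column.
anchor = 0  # Gilmore Girls
scores = {shows[j]: round(cosine(column(anchor), column(j)), 2)
          for j in range(len(shows)) if j != anchor}

ranked = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranked)  # comfort shows score high; the dissimilar drama scores 0
```

Note what the similarity score never encodes: whether the recommended shows are narratively any good. Overlapping audiences is the entire signal, which is exactly how an echo chamber becomes a feature rather than a bug.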
Why This Matters Now
These 90s shows didn't just disappoint women. They trained modern AI models. Every Netflix recommendation, every TikTok algorithm, every content personalization system was built on datasets containing this flawed media. Garbage in, garbage out. Except the garbage had real-world consequences.
Today's generative AI, trained on decades of this content, still reproduces the same stereotypes. Why? Because the underlying data was never cleaned. The algorithmic bias compounds.
The Path Forward
Better content recommendation requires better training data. That means auditing what's being amplified. It means building algorithms that optimize for representation diversity, not just engagement metrics. It means acknowledging that "what gets watched most" isn't the same as "what's actually good for people."
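What would "optimize for diversity, not just engagement" look like mechanically? One common approach is a re-ranker that blends predicted engagement with a bonus for under-represented content. This is a simplified sketch; the catalog, archetype labels, scores, and the blending weight are all invented for illustration.

```python
# Toy re-ranker: blend predicted engagement with a diversity bonus.
# All titles, scores, and labels are invented for illustration.
catalog = [
    {"title": "Comfort Watch",   "engagement": 0.90, "archetype": "quirky-lead"},
    {"title": "Comfort Watch 2", "engagement": 0.88, "archetype": "quirky-lead"},
    {"title": "Ensemble Drama",  "engagement": 0.70, "archetype": "ensemble"},
    {"title": "Antihero Story",  "engagement": 0.65, "archetype": "antihero"},
]

def rerank(items, lam=0.4):
    """Greedily build a slate, rewarding archetypes not yet shown.

    lam controls the engagement/diversity trade-off:
    lam=0 reproduces pure engagement ranking.
    """
    picked, seen, pool = [], set(), list(items)
    while pool:
        best = max(pool, key=lambda it: (1 - lam) * it["engagement"]
                                        + lam * (it["archetype"] not in seen))
        picked.append(best["title"])
        seen.add(best["archetype"])
        pool.remove(best)
    return picked

print(rerank(catalog))           # diversity bonus lifts the varied picks
print(rerank(catalog, lam=0.0))  # pure engagement: near-duplicates dominate
```

With the diversity term on, the near-duplicate second comfort watch drops below two lower-engagement but different shows; with `lam=0.0`, it sits at slot two. The whole debate in this section reduces to who gets to set that one parameter.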
Some platforms are starting. But most still operate on the 90s playbook: measure engagement, amplify signals, ignore cultural impact. The machines are doing exactly what we taught them to do.
Q&A
Why did these shows seem empowering at the time?
Algorithms aren't selective. They amplified the aspirational messaging and the harmful patterns alike. Confirmation bias meant viewers saw what they wanted to see. Marketing budgets (driven by data targeting) kept the "girl power" narrative loud while criticism stayed quiet.
Are modern shows different?
Marginally. Streaming platforms now track sentiment data, not just watch time. But recommendation systems still prioritize engagement. A complex, flawed female character doesn't perform as well as a one-dimensional "relatable" one. The incentive structure hasn't fundamentally changed.
Can AI fix representation in media?
Only if we reprogram its objectives. Instead of optimizing for watch time, optimize for diversity metrics, critical reception, and long-term audience satisfaction. But that requires companies to deprioritize short-term revenue. Don't hold your breath.
How do recommendation algorithms actually influence what content gets made?
Studios use engagement data to green-light projects. If the algorithm says "female-led comfort watches drive subscriptions," studios make more of them—even if those stories are narratively empty. The data becomes a self-fulfilling prophecy. Bad media creates bad training data creates more bad recommendations creates more bad media.
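That self-fulfilling prophecy can be simulated in a few lines. In this sketch, the genre leading in the data gets extra green-lights each round, which grows its share of the next round's data. The genres, shares, and boost size are invented; the point is the compounding dynamic, not the numbers.

```python
# Toy feedback loop: studios green-light the genre the algorithm favors,
# which then dominates the next round of training data.
# All genres and numbers are invented for illustration.

def greenlight_round(share, boost=0.1):
    """The current leader gets extra production; shares are renormalized."""
    leader = max(share, key=share.get)
    share = dict(share)
    share[leader] += boost
    total = sum(share.values())
    return {k: round(v / total, 3) for k, v in share.items()}

share = {"comfort-watch": 0.5, "complex-drama": 0.5}
share["comfort-watch"] += 0.01  # a tiny initial edge is all it takes

for year in range(5):
    share = greenlight_round(share)

print(share)  # the 1% edge has compounded into a large majority
```

Run it and a near-even split tilts decisively toward the initially favored genre within five rounds. Nothing in the loop ever asks whether the favored content is good; the edge compounds because winning is the only input.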
Related Reading
Explore how algorithmic bias shapes streaming recommendations and why data-driven storytelling often fails marginalized communities. Also check out our deep dive on gender bias in AI training datasets.