How AI Risk Assessment Failed Adidas: The Algorithm That Ignored a Decade of Red Flags
Adidas tolerated Kanye West's misconduct for ten years—but the real story is how automated risk systems and algorithmic decision-making let corporate partners ignore obvious warning signs. A case study in AI blindspots.
By YEET Magazine Staff | Updated: May 13, 2026
Adidas's decade-long partnership with Kanye West collapsed spectacularly in 2022, but the real scandal? Their automated risk management systems reportedly never flagged the mounting evidence of misconduct. Despite antisemitic comments, erratic behavior, and documented incidents, algorithms trained on financial metrics missed what humans should have caught immediately. This reveals a critical flaw in corporate AI: when your risk models only measure quarterly earnings, they're blind to reputational and ethical collapse.
New York Times reporting shows Adidas executives knew about problematic behavior but kept the YEEZY partnership alive anyway. The partnership generated roughly $2 billion in annual revenue. Your algorithm optimizes for profit? Congrats—it just taught your company to ignore ethics.
This isn't about Kanye West's personal failings. It's about how machine learning systems, when poorly designed, can actually *enable* corporate malfeasance. If your predictive models only track stock price and sales velocity, they're functionally useless for catching human-level risk.
The Real Problem: Algorithmic Tunnel Vision
Corporations love automation because it feels objective. An algorithm doesn't have bias—it just crunches numbers, right? Wrong. Your algorithm inherits every bias you feed it. If you train risk models exclusively on revenue metrics, they literally cannot see non-financial threats.
Adidas had data scientists. They had dashboards. They had machine learning systems tracking every conceivable business metric. What they didn't have: algorithms designed to surface ethical and reputational risk signals. That's not a tech problem—that's a *design choice*.
The Kanye situation escalated across years: public statements, documented incidents, media coverage, social media patterns. Modern NLP and sentiment analysis could have flagged this automatically. Instead, Adidas's decision-makers prioritized what their algorithms told them to prioritize: revenue.
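To make that concrete, here is a minimal sketch of what automated flagging of public statements can look like. The keyword lexicon, thresholds, and sample records below are hypothetical illustrations, not Adidas data; a production system would use trained sentiment and toxicity models, but the escalation logic is the same.

```python
# Minimal sketch: lexicon-based flagging of dated public statements.
# RISK_TERMS, the threshold, and the sample history are invented for
# illustration; real systems would swap in trained NLP models.

RISK_TERMS = {"antisemitic": 5, "slur": 5, "lawsuit": 3, "boycott": 3, "erratic": 2}

def risk_score(text: str) -> int:
    """Sum lexicon weights for every risk term that appears in the text."""
    lowered = text.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in lowered)

def flag_escalating_risk(statements: list[dict], threshold: int = 5) -> list[dict]:
    """Return statements once the cumulative score crosses a review threshold."""
    flagged, running_total = [], 0
    for item in sorted(statements, key=lambda s: s["date"]):
        running_total += risk_score(item["text"])
        if running_total >= threshold:
            flagged.append({**item, "cumulative_score": running_total})
    return flagged

# Hypothetical usage: feed in dated public statements or press coverage.
history = [
    {"date": "2021-06-01", "text": "Designer praised for new colorway"},
    {"date": "2022-09-12", "text": "Partner makes antisemitic remarks on podcast"},
]
for alert in flag_escalating_risk(history):
    print(alert["date"], alert["cumulative_score"])
```

Nothing exotic there. The point is that the signal was automatable; nobody asked for it.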
How Companies Are Finally Adding Human-Centered Risk Layers
Smart enterprises are now bolting human oversight onto automated systems. They're training ML models on *multiple* outcome variables: financial performance *and* brand sentiment *and* leadership behavior patterns *and* public perception trends.
Some companies use real-time social listening to track executive behavior alongside quarterly earnings. Others weight algorithmic recommendations through human compliance teams. It's clunky. It requires more labor. But it actually catches the stuff that pure optimization misses.
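As a rough sketch of how that human layer can be wired, consider the routing logic below. The signal names, scales, and thresholds are made up for illustration; the design point is that non-financial signals force a partnership into human compliance review instead of being averaged away by strong revenue.

```python
# Sketch: route a partnership to human review when non-financial signals
# spike, regardless of revenue. Field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class PartnershipSignals:
    quarterly_revenue_musd: float   # financial signal, in millions USD
    brand_sentiment: float          # -1.0 (hostile) to 1.0 (positive)
    conduct_incidents_90d: int      # documented incidents in the last 90 days

def needs_human_review(s: PartnershipSignals) -> bool:
    """Escalate on any ethical or reputational trigger, even if revenue is strong."""
    return s.brand_sentiment < -0.3 or s.conduct_incidents_90d >= 2

def recommend(s: PartnershipSignals) -> str:
    if needs_human_review(s):
        return "HOLD: route to compliance team before renewal"
    return "OK: proceed on financial review alone"

# Revenue looks great, but the conduct signal forces a human decision.
print(recommend(PartnershipSignals(500.0, -0.6, 3)))
```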
The Automation Irony
Here's the kicker: Adidas almost certainly used *algorithms* to monitor social media sentiment about YEEZY products (which looked great), while the algorithmic flags that would have tracked Kanye West himself were never built, even though they easily could have been. The tech didn't fail—the *strategy* failed. They optimized for the wrong metric.
This is the future of work problem nobody talks about. As companies automate decision-making, they're automating their blindspots too. An algorithm that's great at predicting sales is terrible at predicting brand collapse. Until we design systems that care about both, we'll keep watching billion-dollar partnerships implode in slow motion while dashboards show green.
What happens when you let financial AI make ethical decisions?
It makes financially optimal decisions that tank your brand. Adidas's systems worked perfectly—they just optimized for the wrong thing. The lesson: automation without ethical constraints isn't neutral. It's actively amoral.
Should companies use AI to monitor executive behavior?
Already happening. Enterprise risk platforms track leadership conduct, public statements, and reputational signals automatically. The question isn't whether—it's how transparent to be about it, and whether employees consent.
Could this have been prevented with better algorithms?
Yes, but only if someone decided to build them. Adidas had the data and the technical capability. They had the choice to weight reputational risk equally with revenue. They didn't.
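As a toy illustration of that choice, a composite partnership score where reputational risk carries the same weight as revenue performance. The 0-to-1 scales and equal default weights are hypothetical, not any company's actual model.

```python
# Toy composite score: reputational health weighted equally with revenue.
def partnership_score(revenue_score: float, reputation_score: float,
                      w_revenue: float = 0.5, w_reputation: float = 0.5) -> float:
    """Both inputs normalized to 0-1; equal weights by default."""
    return w_revenue * revenue_score + w_reputation * reputation_score

# Strong sales, collapsing reputation: equal weighting surfaces the problem.
print(partnership_score(revenue_score=0.9, reputation_score=0.1))  # 0.5
```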
Related reads on algorithmic failure and corporate oversight:
Check out our investigation into how hiring algorithms replicate human bias, or learn why predictive models miss obvious human risks.