Top 5 times AI f*cked up
NEWS: OpenAI releases a new Projects tool; NotebookLM launches an AI podcast chat feature

TL;DR
This issue highlights five times AI went hilariously or disastrously wrong, from botched customer interactions to biased hiring tools. The takeaway? AI needs careful oversight, ethical design, and better testing to avoid these costly blunders.
You heard me right!
AI is a game-changer and all, but sometimes things go hilariously (or disastrously) wrong.
Here’s a look at the top five AI failures that serve as lessons in caution, transparency, and responsible use.

1. McDonald's Drive-Thru
In a bid to streamline drive-thru ordering, McDonald’s partnered with IBM to develop an AI voice assistant.
Unfortunately, the AI struggled to understand customer requests, leading to hilarious mishaps like adding 260 Chicken McNuggets to a single order. Social media piled on, and McDonald's abandoned the project after three years.
2. Air Canada
Air Canada’s virtual assistant misinformed a grieving passenger about bereavement fare policies. The chatbot incorrectly stated that the passenger could buy tickets and apply for the discount afterward—only for the airline to deny the claim. A tribunal ruled in favor of the passenger, awarding damages and highlighting Air Canada’s lack of “reasonable care” in ensuring chatbot accuracy.

3. Microsoft’s Tay
Microsoft’s experiment with an AI Twitter bot, Tay, was a disaster. Designed to learn from user interactions, Tay quickly absorbed toxic content from trolls. Within 16 hours, it spewed racist and misogynistic tweets, forcing Microsoft to shut it down.
4. NYC's MyCity Chatbot
New York City introduced a chatbot to help business owners navigate regulations, but it instead advised them to break the law. Among its dubious recommendations: employers could keep tips from employees, fire workers for reporting harassment, and even serve rodent-nibbled food. Despite public outcry, the chatbot is still online.

5. Amazon's Hiring Tool
Amazon’s AI recruiting tool, trained on a decade of predominantly male résumés, developed a bias against women. It downgraded applications from candidates who mentioned “women’s” clubs or colleges. After attempts to “fix” the tool failed, Amazon scrapped the project.
AI is a tool, not a replacement for judgment. When adopting AI, focus on these best practices:
- Data Matters: High-quality, diverse, and representative data sets reduce bias and errors.
- Transparency Rules: Ensure users know when AI is at work and provide clear disclaimers for potential errors.
- Test in Real Environments: Lab tests can't replicate real-world complexities.
- Build Guardrails: Design AI systems with safeguards to prevent misuse or harmful outputs (see the sketch after this list).
- Plan for Failures: Have protocols in place to quickly address AI blunders and mitigate damage.
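To make "Build Guardrails" concrete, here is a minimal sketch in Python of an output filter that screens a chatbot's draft reply for high-risk topics before it reaches a customer. Everything here is an illustrative assumption: the topic list, the guarded_reply function, and the fallback wording are placeholders, not any vendor's real API.

```python
# A minimal sketch of an output guardrail, assuming a chatbot whose
# draft replies we can inspect before they are sent. BLOCKED_TOPICS,
# guarded_reply, and the fallback message are illustrative placeholders.

BLOCKED_TOPICS = [
    "bereavement fare",   # policy questions the model should not improvise
    "legal advice",
    "refund policy",
]

def guarded_reply(draft_reply: str) -> str:
    """Pass a draft reply through only if it avoids high-risk topics;
    otherwise hand off to a human instead of guessing at policy."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return ("I want to make sure you get accurate information on this, "
                "so let me connect you with a human agent.")
    return draft_reply

# The Air Canada scenario: a confident but wrong policy answer gets
# intercepted before it ever reaches the customer.
print(guarded_reply(
    "Yes, you can book now and apply for the bereavement fare discount later."
))
```

Even a crude filter like this would have caught the Air Canada reply before a customer saw it; production systems layer on moderation models, answers grounded in vetted policy documents, and human review.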
AI is powerful, but these failures remind us why human oversight and ethical design are non-negotiable.
Stay informed, stay prepared, and ensure your AI implementations serve your business responsibly.
This Week in AI
OpenAI released a new Projects tool to organise your chats.
NotebookLM launched a chat feature for its AI podcast hosts.
AI-powered assistants are the future of travel.
Did you enjoy today's newsletter?