When AI turns out to be 700 humans
If you missed the headlines: Builder.ai, once a $1.5B Microsoft-backed “AI” unicorn, filed for bankruptcy after investigators discovered that its “autonomous platform” was powered by hundreds of engineers in India, coached to pose as a bot.
What actually happened
- Fake AI theatre – Internal documents reveal that the “Natasha” engine was, in practice, a Slack channel where human coders responded in real time.
- Round-tripped revenue – Mutual invoicing with partner VerSe allegedly padded the top line by roughly 300 percent.
- Cash burn + lost trust – Bankruptcy followed an investor cash grab after the company couldn’t produce proof of real ML patents or autonomous output.
The quiet lesson (not a roast)
- AI does not equal autonomy. Most real-world “AI” today is a human-in-the-loop system. That’s fine; label it honestly.
- Sell outcomes, not buzzwords. Clients care about speed, quality, and cost; how you achieve them should be transparent.
- Due diligence is everyone’s job. Investors, buyers, and even employees need a “show me the model” mindset.
- Hybrid works when you admit it. Many great tools blend AI and expert review. It’s the deceit, not the humans, that kills trust.
Let’s build systems that:
- Explain what’s automated and what’s human-powered.
- Measure real value (time saved, errors reduced).
- Own the learning curve: upfront about limitations, roadmap, and human backup.
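The checklist above can be made concrete in code. Here is a minimal sketch (all names hypothetical, not from any real product) of tagging each deliverable with its provenance and a measured-value field, so clients see what was automated and what was human-powered:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    AUTOMATED = "automated"   # no human touched the output
    HUMAN = "human"           # produced entirely by a person
    HYBRID = "hybrid"         # AI draft plus expert review

@dataclass(frozen=True)
class LabeledOutput:
    content: str
    origin: Origin
    minutes_saved: float = 0.0  # measured against a manual baseline, not estimated

def disclosure(out: LabeledOutput) -> str:
    """One-line, client-facing provenance statement."""
    return f"[{out.origin.value}] {out.content}"

result = LabeledOutput("Invoice parsed", Origin.HYBRID, minutes_saved=12.5)
print(disclosure(result))  # → "[hybrid] Invoice parsed"
```

The point is not the data structure; it is that provenance and measured value travel with every output instead of living in a pitch deck.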
Automation done right frees people for higher-value work; done wrong, it hides people behind a curtain.
Thoughts? Have you encountered “AI” that was mostly Excel and elbow grease? Drop a comment—let’s keep the conversation honest and helpful.
Disclaimer: discovery-stage analysis compiled from public sources; figures may update, and errors are mine alone.