For nearly 80 years, AI has been chasing a dream: to think like us. Not just to compute but to reason, to handle the unknown, to navigate the messy, irrational nature of human decision-making.
The problem? We barely understand how we do it ourselves.
Back in the 1950s, AI was all logic: if-then statements and rule-based systems, built by minds convinced intelligence could be engineered like clockwork. Then reality hit. People don’t think in rigid rules. We guess, we adapt, we hedge our bets. When faced with uncertainty, we rarely calculate; we trust our gut.
Enter Dempster-Shafer Theory, a forgotten mathematical gem from the 1960s. Instead of forcing a hard probability onto everything, it allowed for degrees of belief: “I have some evidence, but not enough to be sure.” In an era obsessed with certainty, it was too nuanced, too human. AI researchers largely ignored it.
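To make the idea concrete, here is a minimal sketch of Dempster’s rule of combination, the theory’s core operation for fusing evidence from independent sources. The two-sensor “car vs. bike” scenario and all variable names are illustrative, not from the original theory papers; the key point is that mass assigned to the whole frame of hypotheses encodes “I don’t know.”

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination.

    m1, m2: mass functions, i.e. dicts mapping frozensets of
    hypotheses to belief mass (each summing to 1).
    Returns the fused, renormalized mass function.
    """
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass the two sources assign to contradictory conclusions
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    # Renormalize by the non-conflicting mass (1 - K)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative example: two sensors classify an object as a car or a bike.
# Mass on the full frame {car, bike} is explicit ignorance, not a 50/50 split.
FRAME = frozenset({"car", "bike"})
sensor1 = {frozenset({"car"}): 0.6, FRAME: 0.4}
sensor2 = {frozenset({"car"}): 0.5, frozenset({"bike"}): 0.2, FRAME: 0.3}

fused = combine(sensor1, sensor2)
```

Note what a plain probability model cannot express: after fusion, some mass still sits on the whole frame, so the system keeps an explicit, quantified reserve of “not enough evidence to be sure.”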
Fast-forward to today: AI is everywhere, writing, driving, diagnosing, predicting. It’s built on mountains of data (approximately 175 zettabytes), yet it still struggles with uncertainty. Large language models hallucinate facts. Self-driving cars freeze at ambiguous intersections. Recommendation engines push nonsense when patterns don’t align. In short, AI fumbles exactly where humans improvise.
Maybe that’s the real frontier: not just making AI faster or bigger, but teaching it to doubt, question, and hesitate.
Because intelligence, in any form, isn’t about knowing everything. It’s about knowing when you don’t.
Asimov, of course, saw this coming. His short story The Last Question follows humanity’s eternal quest to solve the problem of entropy, essentially, the ultimate question of whether the universe can be saved from heat death. Over millennia, ever-evolving AIs are asked the same question: How can entropy be reversed? Each time, the answer is the same: “Insufficient data for a meaningful answer.” Until, at the very end of time, when all knowledge has been gathered, the AI finally understands. Its answer? “Let there be light.”
Maybe Asimov was onto something. Maybe AI’s final test isn’t about answering every question, it’s about knowing when the answer doesn’t exist. And when that day comes, when AI finally embraces uncertainty, who knows? It might just create a new universe to figure it all out.