AI is Irrational
We’re building intelligence by discarding reason
AI’s leaders promise us a world of increased rationality. Fusing mathematical precision with logical reasoning, algorithms run on data and are optimized for efficiency - finally allowing us to leave emotions, biases, and human irrationality behind, or so we’re told.
Human progress and scientific advancement have been built on the pursuit of reason. The idea that we may be able to see what our minds are truly capable of creating when freed from the irritating urges to eat, sleep, have sex, procrastinate, or get angry is exciting. What couldn’t we do with this superpower?
From economic gains to social cohesion, AI evangelists tell us that technology could follow rational economic principles to drive smarter, more efficient decisions, revolutionizing society and delivering collective prosperity. We could be richer and freer than we ever imagined. We get giddy with excitement thinking about what’s possible.
Only, AI isn’t about the future, it’s about the past.
AI may feel like a revolutionary new topic, but the field is roughly 75 years old - older than the vast majority of human beings walking the earth right now. Most of us have been using this technology for years without even knowing it, from playing chess against computers to following GPS navigation apps to painfully trying to chat with voice assistants. We talk about the technology as if it will, by definition, produce rational results in the future, instead of asking what AI has actually done in the past.
Did AI create a more rational world?
Well, no.
Actually, it did the opposite.
AI might have enabled the creation of humongous, global, free communication tools like Facebook, Instagram, TikTok, and Snapchat, but levels of loneliness have increased. Breakthroughs in AI might have inspired hundreds of billions of dollars of investment in health-focused technology, but average US life expectancy has declined. Tremendous promises for AI once centered on easing the climate crisis, but global carbon emissions have dramatically spiked.
And AI for national security - a field that has generated trillions of dollars of investment and created tens of thousands of world experts - has allowed war to become so commonplace that a sense of impending catastrophe has seeped into the collective psyche as conflicts break out across the globe.
So…how did we get here?
Human Exceptionalism
We’re strange creatures, us humans. We share almost everything with the animal kingdom: opposable thumbs, social structures, self-awareness, and nearly 99% of our DNA with our closest primate relatives. Yet we’re set apart, making it impossible not to wonder why. What is it that makes us uniquely human?
After centuries of thought, we seem to have settled on an answer: our ability to reason. We are exceptional when we reason, and animalistic when we don’t. Yet within us, we hold the capacity for both. We can both think extraordinarily, and react animalistically. This is the human dilemma.
In the fourth century BCE, the Greek philosopher Aristotle attempted to manage the conundrum by introducing a definitive framework for determining whether we’re thinking rationally or not. This work gave us the first formal principles of reasoned thinking, and a way to keep ourselves on track. Remarkably, Aristotle’s rules are still used today to distinguish genuine instances of rationality, or reasoning, from opinion, emotion, and manipulation.
Across generations and through countless cultural and technological shifts, the core tenets of Aristotle’s rationality still stand. Rationality is not a feeling or a marketing gimmick; it is a process. Today, when we call something rational, or logical, what we mean is that it adheres to a set of guiding principles that include:
Logical validity
If all the premises are true, the conclusion must follow. So if I am a human, and all humans are strange, then I must be strange too.
Soundness
The argument is valid and all the premises are actually true. I really do need to be a human, and humans really do need to be strange, for this argument to work; I need proof, or evidence, that each premise holds.
Consistency
None of the premises can be both true and false at the same time and in the same way. And here’s where my argument falters: sometimes humans are strange, but sometimes they’re entirely predictable. A short sketch after this list makes the three checks concrete.
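Here is that sketch - minimal and purely illustrative (Python, with an invented encoding and invented function names; Aristotle, of course, wrote no code) - applying the three checks to the ‘humans are strange’ syllogism:

```python
# Toy encoding of the "humans are strange" syllogism, tested against
# Aristotle's three requirements. Purely illustrative.

claims = [
    ("I am a human", True),
    ("All humans are strange", True),
    ("All humans are strange", False),  # the admission: sometimes we're predictable
]

def logically_valid() -> bool:
    # Validity is about form alone: IF "I am a human" and "all humans are
    # strange" were true, "I am strange" could not be false. That form
    # (a classic syllogism) is valid regardless of the actual facts.
    return True

def sound(claims) -> bool:
    # Soundness: the form is valid AND every premise is actually true.
    return logically_valid() and all(value for _, value in claims)

def consistent(claims) -> bool:
    # Consistency: no statement is asserted as both true and false.
    truth = {}
    for statement, value in claims:
        if statement in truth and truth[statement] != value:
            return False
        truth[statement] = value
    return True

print(logically_valid())   # True  - the form holds up
print(sound(claims))       # False - one premise is not actually true
print(consistent(claims))  # False - "strange" is asserted both true and false
```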
Aristotle’s definition of rationality forms the foundation of how we understand human intelligence. His framework helped shape civilization, part of a rationalist tradition that leading thinkers extended across the centuries. Plato, Aristotle’s own teacher, had already placed rationality at the heart of the human mind, tying it to morality and justice. Centuries later, Aquinas blended this wisdom into Christian theology, suggesting that logic could even be a way to understand God, an idea that changed how people thought about the divine. Then, in the 1600s, the French philosopher Descartes crystallized rational thought as the defining trait of humanity with his famous line: Cogito, ergo sum - I think, therefore I am.
By the end of the 17th century, the ideas of Plato, Aristotle, Aquinas, Descartes, and others had gained unstoppable momentum. Rationality was no longer just an academic, philosophical concept; it was the force that shifted human history and sparked the Enlightenment, the Age of Reason. Finally, society was liberated from the arbitrary consolidation of power, from monarchy and church hierarchy. This was a turning point in which intellectual freedom, skepticism of traditional authority, and the pursuit of knowledge through logic and evidence took center stage.
We might take rationality for granted today, but rationality fundamentally transformed the structure of society. For the first time in history, it affirmed that all humans, by their capacity for reason, by their conscious intelligence, possess equal dignity and worth. It united our species through a shared sense of humanity, vastly improving life for billions.
Rational Machines
As society advanced, philosophical principles naturally evolved into mathematical ones, driven by their shared foundation in consistent, methodical reasoning. Soon mathematics became the language through which logical reasoning could be universally expressed and applied.
In one of the major breakthroughs of the Enlightenment, the mathematician Gottfried Wilhelm Leibniz invented the stepped reckoner, a mechanical calculator that could handle everything from addition and subtraction to multiplication and division - an amazing leap forward in mechanising an element of human reasoning. It was a landmark: mankind had built a tool to remove human error and supercharge our ability to calculate complex sums.
Of course, we wanted more.
Leibniz believed we could unlock the mysteries of the universe if we created a sufficiently powerful machine. He believed that, with the right symbols, everything could be understood. He developed the concept of a characteristica universalis, a universal language of symbols that could turn vast ideas into precise, calculable forms, reducing complex thoughts to simple equations. While he never cracked the ultimate code of the universe, his notation, like the integral symbol (∫) and the differential (d), radically altered the scale of what human intellect could calculate.
As our knowledge advanced, so did the complexity of mathematics. Fields like algebra, calculus, and geometry evolved rapidly, and even more symbols appeared, all driven by the need to solve real-world problems.
By the mid-1800s, British inventor Charles Babbage had designed the Analytical Engine, an early mechanical computer intended to perform general calculations, and Ada Lovelace had written what is regarded as its first program. Together, these mathematical and mechanical breakthroughs underpinned magnificent feats of intellect - bridges, spacecraft, the theory of electromagnetism, quantum mechanics and, yes, even the development of AI itself.
Classical AI
Advancements rolled in, and by the 1950s the American engineer Claude Shannon had worked out the logic behind digital circuits, designing a way to process and store information in binary form (0s and 1s) that enabled reliable, high-speed calculation and data handling. Alan Turing had described his celebrated machine, a theoretical device that could compute anything we wanted, as long as we gave it the right instructions.
These breakthroughs allowed us to believe that computers could, one day, imitate and surpass human thinking. This triggered the 1956 Dartmouth Conference, at which the academic field of AI was launched. Researchers including John McCarthy and Marvin Minsky set forth a call to action: create machines that can think. With optimism spreading across the globe for the good this could achieve, classical AI engineers dedicated their careers to creating rational agents capable of reasoning to Aristotle’s highest ideal.
AI systems known as expert systems were developed to outsmart the brightest humans in specialized fields. IBM’s Deep Blue would famously go on to defeat world chess champion Garry Kasparov by evaluating millions of positions every second. DENDRAL helped chemists identify molecular structures with impressive precision, while MYCIN diagnosed bacterial infections more accurately than many doctors. With each success, the belief that AI could eventually surpass human intelligence grew stronger.
But, frustratingly, momentum then stalled.
There wasn’t enough money in it. While expert systems were intellectually astounding, they were incredibly limited. They thrived in controlled environments but struggled to adapt to the messy, unpredictable nature of real life. And they were expensive. It had taken the better part of two centuries just to create a machine that could master chess, but, in the end, there wasn’t a customer for it. That’s the trouble with intelligence: it often runs counter to the needs of the irrational financial markets. The verdict was in: scaling rationality was possible, but not profitable.
By the 1980s, the cracks in the story of AI’s potential to transform society, and the economy, were clear. The grand promises of AI’s potential had gone unfulfilled. Government funding dried up, private investment pulled back, and public enthusiasm evaporated. The "AI winter" had arrived, blanketing the field in disillusionment.
This is the point at which we stopped building rational machines. The attempt to do so was renamed ‘classical AI’, and relegated to the artifacts of history.
Human Error
Our fundamental error - what we are sweeping under the carpet - is that what we now term “AI” is typically something else entirely. The vast majority of it belongs to a different field altogether: machine learning.
We have come to use the terms AI and machine learning interchangeably because both are computer science fields that attempt to replicate human thought processes, whether intelligent or idiotic - just as atheism and Catholicism both get filed under the broad heading of “religion,” yet mean very different things.
In machine learning, a term coined by Arthur Samuel in 1959 to describe a different approach from traditional, rule-based AI, computers aren’t explicitly programmed to follow logical instructions; instead, they are fed vast amounts of data and taught to recognize patterns on their own. The system "learns" by identifying statistical relationships within the data, adjusting its internal parameters to refine its predictions. Once it has mapped these patterns, it can generate predictions by selecting the most statistically probable next data point in any given sequence.
The most obvious examples of machine learning are Large Language Models (LLMs). These models are trained on an ocean of existing text, such as books, articles, conversations, scripts, and anything else developers can get their hands on. The LLM seeks statistical patterns, and uses these patterns to ‘predict’ a probable sequence of words that might follow any sequence of words it is fed (known as the ‘prompt’). The output generated is simply the most likely sequence of words to follow the prompt, based on the patterns in all of the existing sequences of words in the training data. More advanced examples, like multimodal or agentic AI, follow the same process. They may appear more sophisticated, but at their core they amplify patterns, never truly thinking in the sense we have all understood the term since Aristotle.
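As a radically scaled-down illustration of that ‘predict the next word’ process, here is a toy bigram counter in Python. The corpus is invented, and real LLMs use neural networks with billions of parameters rather than a lookup table, but the principle is the same: the output is whatever most often followed the prompt in the training text.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict by picking the most frequent continuation.

corpus = (
    "the cat sat on the mat . the cat sat on the sofa . the cat sat on the mat ."
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most common continuation - not the 'correct'
    answer, just the most frequent one in the data we happened to see."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - it followed "the" most often
print(predict_next("on"))   # 'the'
```

Scale the corpus up to most of the internet and the counter up to a transformer, and the mechanism - pattern frequency, not reasoning - stays the same.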
The true ambitions of AI and machine learning are not just different - they are diametrically opposed:
AI is the pursuit of certainty.
Machine learning is the pursuit of best guesses.
AI seeks perfection.
Machine learning generates approximations.
AI seeks deductive reasoning.
Machine learning thrives on inductive inference.
AI seeks accuracy.
Machine learning seeks aggregates.
To put the point metaphorically, imagine you’re a sailor, stranded at sea in the middle of a violent storm. You ask two different systems to help you navigate a ship through the chaos. The first, true AI, relies on principles of physics, weather models, and precise calculations. It evaluates every variable, applies logic, and plots the safest course based on objective truths. It tells you to turn right, because that’s your best chance of survival. The second, machine learning, doesn’t understand any of that. Instead, it analyzes data from past sailors, noting that most of them turned left in bad weather, regardless of what happened to them after. So it tells you to turn left, not because it’s the safest choice, but because it’s the most statistically common.
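The same contrast can be rendered as a toy sketch in Python. It is purely hypothetical: the ‘physics’ is a single invented rule rather than a real weather model, but it shows how differently the two systems arrive at an answer.

```python
from collections import Counter

def classical_navigator(wind_from: str) -> str:
    """Deductive: apply an explicit rule to the situation at hand.
    Rule (invented for this sketch): steer away from the oncoming wind."""
    return "right" if wind_from == "left" else "left"

def ml_navigator(past_sailors_turns: list[str]) -> str:
    """Statistical: do whatever most past sailors did, with no model of the
    storm and no knowledge of whether those sailors survived."""
    return Counter(past_sailors_turns).most_common(1)[0][0]

# Tonight's storm: the wind is howling in from the left.
print(classical_navigator("left"))                      # 'right' - reasoned from the rule
print(ml_navigator(["left", "left", "right", "left"]))  # 'left'  - the popular choice
```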
Which machine would you want to use?
Which machine do you think you’re using when you're using today’s ‘AI’?
We think we’re getting the promise of classical AI; we think the tools we engage with will tell us what’s right, or reason their way to the best possible outcome. But, in reality, what we're actually getting is a jumble of the most probable: the average of what has gone before.
Human Intervention
Logic has laws: definitive rules, developed by the greatest minds in history, designed to protect us from confusion. These laws guard against the seduction of intellectual shortcuts and the comfort of easy answers. If we were thinking rationally, we’d hold machine learning to these principles before betting our pensions on untested marketing claims.
By Aristotle’s timeless standards, machine learning, today’s “AI”, fails every defining characteristic of logical reasoning.
Logical validity - FAIL
There are no premises in machine learning - instead of starting with a set of rules, machine learning systems learn from patterns in data and generalize to make predictions.
Soundness - FAIL
Predictions made by machine learning models are not guaranteed to be 100% sound; instead, they provide the likelihood or probability of an outcome.
Consistency - FAIL
There is no absolute truth in machine learning - it deals in probabilities and uncertainties, and its outputs can be both true and not true at the same time, as the short sketch below illustrates.
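The sketch is purely illustrative: the ‘model’ is just an invented probability distribution, sampled the way generative systems typically sample their outputs - and the same question gets different answers on different runs.

```python
import random

# A stand-in for a model's output: probabilities, not conclusions.
answer_distribution = {"yes": 0.55, "no": 0.45}  # invented numbers

def sample_answer(rng: random.Random) -> str:
    """Sample an answer the way generative systems typically do."""
    answers, weights = zip(*answer_distribution.items())
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random()
print([sample_answer(rng) for _ in range(5)])
# e.g. ['yes', 'no', 'yes', 'yes', 'no'] - the same question,
# answered both ways across runs.
```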
Machine learning is, by definition, irrational.
Not metaphorically.
Not theoretically.
Literally.
We can also see that machine learning is the antithesis of intelligence through the more recent insights of Daniel Kahneman. Kahneman’s work has shaped how we understand human thought itself, especially which kinds of thinking help us and which harm us. In his iconic book Thinking, Fast and Slow, he outlines two systems of thinking that govern our minds:
Fast Thinking (System 1), which is instinctive and prone to bias: our gut reactions and snap judgments, shaped by experience and emotion.
Slow Thinking (System 2), which is deliberate and rational: the careful, effortful thinking we engage in when solving complex problems or questioning our assumptions.
What becomes clear when comparing today’s machine learning “innovations” against Kahneman’s framework is that ‘classical AI’ replicates Slow Thinking (System 2): it’s based on logic, rules, and structured reasoning. Machine learning, on the other hand, models Fast Thinking (System 1). It doesn’t understand. It doesn’t reason. It doesn’t think. It doesn’t reflect. It just reacts, predicts, and guesses based on past patterns. It’s digitized instinct without the faintest trace of reason.
Whilst Fast Thinking (System 1), our quick, instinctive judgment born of experience, serves as a (mostly) useful short-term tool, it is Slow Thinking (System 2), the deliberate power of deductive reasoning, that is the basis for all of human progress. So, with machine learning, what we have created is a digitized version of animalistic, primal instinct: only one aspect of the human mind, and the more limited one at that.
Going Backwards
Machine learning generates decisions based on the average of the past; it should be easy to see that this is not progression but regression. By incorporating this technology into our decision-making systems, we are abandoning the jewel of Western civilization: the scientific method.
Make no mistake, this is a religious resurgence. This is the start of our regression to a society driven purely by faith, not logic. By believing that machine learning, a technology that by definition cannot differentiate between fact and fiction, will someday spontaneously erupt into AGI, we are believing in a form of ‘divine intervention.’ We are putting science and mathematics aside, and gambling our future on the promise that the aggregate will one day, mysteriously, become accurate.
While it’s true that a monkey with a typewriter will, given an infinite amount of time, eventually hit the typewriter keys in a sequence which corresponds to the complete works of Shakespeare, all of us can understand that, having finally produced the right pattern of keystrokes, the monkey still wouldn’t be Shakespeare. Not because the monkey’s slow, not because the monkey’s under-resourced, not because the monkey doesn’t have enough data, but because it’s a monkey.
We must re-discover rational thinking, and accept some basic truths.
You cannot make a system that doesn’t value accuracy become accurate.
You cannot iterate randomness into reason.
You cannot make averages lead to absolutes.
With clear eyes, we can see that machine learning is exactly the type of thinking that teachers from Aristotle to Kahneman have warned us about: thinking without reflection, judgment without reasoning. It’s the kind of mental shortcut, based on past averages, that countless philosophers have dissected and behavioral economists have labeled a cognitive trap.
When we understand this, we can understand why the world is more confused than ever, and why ‘AI’ projects are failing. Across the globe, billions of people, hundreds of thousands of companies, and governments worldwide are using so-called “AI” tools, convinced they’ll make them more rational, more objective, more correct. Yet they’re left scratching their heads, wondering why life, work, and society are becoming more obscure and confusing.
It’s because they’re not using AI, they’re using machine learning.
And machine learning is the abandonment of reason.