The Rise and Fall of Logic
Philosophy · Logic | 14 March 2026


Once dismissed as a dusty relic of scholastic philosophy, formal logic has re-emerged at the beating heart of computer science, artificial intelligence and cognitive science. How did this happen — and what does it mean for the way we reason?

There is a peculiar irony at the heart of modern intellectual life. At the very moment that philosophy departments across the English-speaking world began to question whether formal logic was too narrow, too technical, too divorced from human experience to tell us anything meaningful about the world — computers arrived and proved them catastrophically wrong.

The story begins in antiquity. Aristotle’s Organon laid out, for the first time, a systematic account of valid inference. His syllogistic — the rules governing arguments of the form “all men are mortal, Socrates is a man, therefore Socrates is mortal” — dominated Western thought for two millennia. It was, in its way, an extraordinary achievement: the codification of something that every rational being does implicitly, rendered explicit and teachable.

The Triumph of the Schoolmen

Medieval logicians, working within the framework Aristotle had bequeathed, pushed the machinery much further. The terminist logicians of the fourteenth century — William of Ockham, Jean Buridan, Albert of Saxony — developed a remarkably sophisticated theory of reference and truth, anticipating questions that would not be rigorously posed again until the late nineteenth century.1

But the Renaissance brought a different wind. Humanist scholars, impatient with what they saw as the sterile word-games of the scholastics, turned away from formal logic toward rhetoric, philology, and the direct study of ancient texts. Logic retreated into the curriculum as a compulsory but largely despised preliminary to the real work of philosophy.

The question is not whether machines can think, but whether the rules of thought are themselves a kind of machine.

Gottlob Frege, 1879

Frege’s Revolution

The decisive break came in 1879, when a virtually unknown German mathematician named Gottlob Frege published a slender pamphlet he called the Begriffsschrift — “concept script.” In it he did something no one had managed to do in two thousand years of logic: he invented a notation powerful enough to represent the full complexity of mathematical reasoning, and showed how every valid inference could be reduced to a small set of primitive rules.

A new symbolic language

Frege’s system was ugly by modern standards — his two-dimensional notation has never been adopted, and even at the time it was largely ignored. But the underlying idea was revolutionary. By treating logical form as something that could be studied independently of content, Frege opened the door to the work of Russell, Whitehead, and ultimately Gödel, Turing, and Church.
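In the modern notation that grew out of Frege's work (rather than his own two-dimensional script), the old syllogism about Socrates becomes a schematic inference whose validity depends only on its shape:

\[\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \mathrm{Man}(s)\ \vdash\ \mathrm{Mortal}(s)\]

Substitute any predicates whatever for Man and Mortal and the inference goes through unchanged; that is what it means to study logical form independently of content.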

"The whole of mathematics follows from a handful of logical axioms" — this was the dream of logicism, and for a brief moment in the early twentieth century, it seemed within reach.

Then came the incompleteness theorems, and the dream fractured. Kurt Gödel showed in 1931 that any consistent formal system powerful enough to express elementary arithmetic must contain true statements that cannot be proved within that system. It was a result of extraordinary philosophical depth, and it was widely — if incorrectly — taken to mean that logic had hit a wall, that formalism had been shown to be insufficient for capturing mathematical truth.

The Computational Resurrection

What rescued logic from this apparent defeat was something nobody anticipated: the computer. Alan Turing’s 1936 paper on computable numbers was, in one sense, another negative result — a proof that certain problems cannot be solved algorithmically. But in showing what a mechanical process of symbol manipulation could in principle accomplish, Turing accidentally sketched the blueprint for every digital computer ever built.
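It is worth seeing how little machinery the argument needs. The sketch below, in Python, is a toy Turing machine: a table of rules that only reads a symbol, writes a symbol and moves a head, yet genuinely computes. The particular rule table, which adds one to a binary number, is an illustrative example rather than anything taken from Turing's paper.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """Run a rule table of the form (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))      # the tape, as a sparse mapping from positions to symbols
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: walk to the right-hand end of the number, then carry leftwards.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine("1011", rules))   # prints "1100", i.e. 11 + 1 = 12

Nothing in the machine knows what a number is; it shuffles marks according to rules, and arithmetic falls out. That is the sense in which Turing's paper is a blueprint.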

Today logic permeates the infrastructure of modern life. Every time a chip performs a Boolean operation, every time a database executes a query, every time a proof assistant verifies a piece of software, the ghost of Aristotle and the genius of Frege are silently at work. The schoolmen were not wasting their time after all.
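That last image can be made literal. The fragment below is a sketch in Lean 4: it asks a modern proof assistant to check Aristotle's own syllogism. The names Person, Man, Mortal and socrates are placeholders introduced for the illustration, not anything from a standard library.

-- All men are mortal; Socrates is a man; therefore Socrates is mortal.
example (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)
    (h1 : ∀ x, Man x → Mortal x) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2

More than two millennia after the Organon, the inference is checked mechanically, symbol by symbol.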


1 See Gyula Klima, John Buridan (Oxford University Press, 2009) for the best modern treatment.

subkiy

subkiy is Professor of Philosophy at the University of Edinburgh, where she specialises in the history of logic and the philosophy of mathematics. Her most recent book is The Formal Turn (Princeton, 2024).
