There is a joke among programmers that learning to code ruins you for everything else. Once you have spent enough time breaking problems into explicit steps, specifying inputs and outputs, hunting for the hidden assumption that makes a system fail — you begin to see the world differently. Conversations feel underspecified. Arguments feel like programs with logical errors. Instructions feel incomplete.
This is sometimes presented as a pathology. I want to suggest it is a virtue.
What Programming Actually Teaches
The surface skill of programming — knowing the syntax of a language, understanding data structures, being able to write a loop — is not what changes you. What changes you is the discipline that underlies it: the insistence on precision, the intolerance for ambiguity, the discovery that what seems obvious usually isn’t.
Consider what happens when you try to teach a computer to do something simple. Say, sorting a list of names alphabetically. You know how to do this. You have done it a thousand times, in your head, without thinking about it. And then you try to write it down in a way that a machine can follow — and you discover that you don’t actually know how you do it.
You know the result. You don’t know the process. And programming forces you to know the process.
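The gap is easy to demonstrate. Here is one way of spelling the process out — a sketch using insertion sort (the function name and approach are chosen for illustration, not prescribed): every comparison and shift that a human performs without noticing must be made explicit.

```python
def sort_names(names):
    """Alphabetise names by insertion sort, with every step explicit."""
    result = []
    for name in names:
        # Find the first position whose entry sorts after `name`.
        i = 0
        while i < len(result) and result[i] <= name:
            i += 1
        # Shift everything from that position rightward and insert.
        result.insert(i, name)
    return result

print(sort_names(["Carol", "Alice", "Bob"]))  # → ['Alice', 'Bob', 'Carol']
```

Nothing here is clever. The point is that each line answers a question — compare against what? stop when? put it where? — that you never consciously ask yourself when sorting by hand.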
This is epistemically humbling in a way that very few other disciplines are. It reveals, repeatedly and without mercy, the gap between feeling that you understand something and actually understanding it.
“If you can’t explain it simply, you don’t understand it well enough. But if you can’t make a computer do it, you may not understand it at all.”
— attributed (loosely) to Feynman
Abstraction as a Mode of Thought
The second great gift of programming is abstraction. The ability to identify the common structure beneath apparently different problems — to see that sorting names, ranking search results, and organising a library are all instances of the same underlying operation — is one of the most powerful cognitive tools available.
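A minimal sketch of that claim (the function and data below are invented for illustration): all three problems reduce to one ordering operation, and the only thing that varies is the key you order by.

```python
def order(items, key):
    """One operation underlies all three problems: ordering by a key."""
    return sorted(items, key=key)

names   = ["Carol", "Alice", "Bob"]
results = [{"url": "a", "score": 0.2}, {"url": "b", "score": 0.9}]
books   = [("823", "Austen"), ("510", "Euclid")]

print(order(names, key=lambda n: n))              # alphabetical
print(order(results, key=lambda r: -r["score"]))  # best result first
print(order(books, key=lambda b: b[0]))           # by call number
```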
This is not unique to programming. Mathematicians, philosophers, and taxonomists have always worked this way. But programming makes the value of abstraction visceral. When you find the right abstraction, code that was tangled and hard to read suddenly becomes simple and clear. When you miss it, you pay in bugs and complexity and hours lost to confusion.
The programmer learns, through painful experience, that the hard part of most problems is not the implementation. It is finding the right way to think about the problem. Get the model right and the code writes itself. Get the model wrong and no amount of cleverness will save you.
On Bugs
There is a particular kind of intellectual experience that programming produces which I have not found replicated elsewhere: the experience of hunting a bug.
A bug is a fact. The program does something you did not intend. You are wrong about something — about what the code does, about what the data contains, about what the system state is at some point in execution. Your job is to find exactly where your model of the world diverges from the world itself.
This requires a combination of logical deduction, empirical testing, and creative hypothesis generation that is, in miniature, the scientific method. It also requires a particular epistemic virtue: the willingness to conclude that you are wrong, and to keep looking until you find exactly how.
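A toy version of the hunt, with an invented `total_price` function: each assertion tests one hypothesis about where the mental model diverges from the world.

```python
def total_price(prices, discount):
    """Intended: apply `discount` (e.g. 0.5 for 50%) to the sum."""
    return sum(prices) * 1 - discount   # the bug lurks here

# The bug is a fact: the program does something we did not intend.
assert total_price([10, 20], 0.5) != 15.0

# Hypothesis 1: the sum is wrong. Test it empirically:
assert sum([10, 20]) == 30              # holds — the sum is fine

# Hypothesis 2: the discount arithmetic is wrong. Test it:
# `sum(prices) * 1 - discount` parses as `(sum(prices) * 1) - discount`,
# not `sum(prices) * (1 - discount)`. Model and world diverge here.
def total_price_fixed(prices, discount):
    return sum(prices) * (1 - discount)

assert total_price_fixed([10, 20], 0.5) == 15.0
```

The loop — observe, hypothesise, test, discard — is the scientific method at desk scale, and the final step is always the same admission: I was wrong, and here is exactly how.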
The Limits of the Metaphor
I do not want to oversell this. Programming is a tool, and like all tools it has its domain of application and its limits.
The precision it demands is not always appropriate. Human communication depends on implication, context, shared background — all the things that make natural language powerful and efficient. Demanding the precision of a function specification from a conversation partner is antisocial and counterproductive.
The abstraction it encourages can become a vice — a tendency to see everything as an instance of a familiar pattern, to force the complexity of experience into the categories that happen to be available.
And the algorithmic mindset — the assumption that every problem has a well-defined input, a well-defined output, and a finite sequence of steps that transforms one into the other — is a poor fit for the most important problems of human life.
A Discipline Worth Having
None of this diminishes what programming, at its best, genuinely teaches. The habit of asking: what do I actually mean? How would I verify that? What assumption am I making that might be false? These are good habits for a mind to have.
The programmer who brings these habits to reading, to argument, to the evaluation of evidence — who treats every claim as a potential bug, to be tested before being trusted — is, I think, a better thinker for the experience.
Not because code is more important than ideas. But because the discipline of making things work, precisely and reliably, in a world that does not forgive hand-waving, is a discipline that transfers.