The Distance Between Desire and Making
On creativity, irreducibility, and what we lose when AI closes the gap
There is a question underneath most conversations about AI that rarely gets asked directly. Not whether the outputs are good, or whether the tools are useful, but something more structural: what happens to the person who no longer has to make the thing themselves?
This is not a moral argument against technology. It is an observation about development, about how human beings form capability, identity, and understanding through the act of struggling with difficult processes over time. And it raises a concern that goes deeper than the usual worry about AI-generated content being shallow or derivative.
The deeper risk is not the content. It is what disappears inside the person who no longer does the work.
A Mirror, Not an Oracle

One of the more persistent confusions in how people relate to AI tools is the sense that they are receiving outputs from something alien, some independent intelligence producing knowledge from elsewhere. This is worth correcting plainly.
AI systems are trained on human output: language, images, music, code, argument, narrative. They are, in the most literal sense, mirrors of what human beings have already made. When an AI produces a sophisticated response, it is aggregating and recombining patterns from the accumulated written record of human thought. That is impressive as a feat of engineering. But it is not an independent source of understanding.
Jaron Lanier has written about a distinction between what he calls Shannon information and vernacular information. Shannon information is measurable signal: bits, entropy, compression, transmissible without context. Vernacular information is something that means something to somebody. It requires culture, interpretation, intention, history, authorship. A sentence means something different when it comes from a person with stakes in its truth. A song has a lineage. A claim has an author who stands behind it, or doesn’t.
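The Shannon side of that distinction is precisely quantifiable, which is part of what makes the collapse tempting. A minimal sketch (the sentence and the shuffle seed are arbitrary choices for illustration): Shannon entropy depends only on symbol frequencies, so a sentence and a random rearrangement of its characters score identically, even though only one of them means anything to a reader.

```python
# Shannon entropy sees only symbol statistics, not meaning: a sentence
# and a shuffle of its characters carry the same entropy per character.
import math
import random
from collections import Counter

def shannon_entropy(text):
    """Average bits per character under the text's own symbol frequencies."""
    counts = Counter(text)
    n = len(text)
    # Sort the counts so the floating-point sum is order-independent.
    return -sum((c / n) * math.log2(c / n) for c in sorted(counts.values()))

sentence = "the song has a lineage"
# Deterministically shuffled: same multiset of characters, no meaning.
shuffled = "".join(random.Random(0).sample(sentence, len(sentence)))

# Identical entropy, despite only one string being vernacular information.
assert shannon_entropy(sentence) == shannon_entropy(shuffled)
```

The assertion holds because nothing in the measure distinguishes the two strings; everything Lanier calls vernacular lives outside it.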
The risk Lanier identifies is the collapse of these two categories: treating statistical coherence as if it were genuine understanding, or treating circulation as a proxy for truth. When we forget that AI outputs derive from human knowledge, we start relating to them as external authority. We lose the thread back to the human source. And that loss of thread is also a loss of agency.
Irreducibility and the Conditions of Growth

Stephen Wolfram’s work on computational irreducibility offers a useful framework here, even if it requires some translation from physics into human development.
Wolfram’s core observation is that many systems cannot be shortcut. You cannot predict the outcome of certain computational processes without running them step by step. There is no formula that leaps ahead. The process itself is the only path to the result. Complexity, he argues, is not an anomaly. It is the natural consequence of simple rules run across time.
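Wolfram's standard illustration of this is the Rule 30 cellular automaton, where each cell's next state is a fixed function of its three-cell neighbourhood. A minimal sketch: despite the rule being trivially simple, no known closed-form formula predicts the centre column of the evolution; the only way to know the value at step n is to run all n steps.

```python
# Computational irreducibility in miniature: Wolfram's Rule 30.
# The update rule is simple, but there is no known shortcut to the
# centre column of its evolution - it must be simulated step by step.

def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (zero boundary)."""
    padded = (0,) + cells + (0,)
    # Rule 30: new cell = left XOR (centre OR right)
    return tuple(padded[i - 1] ^ (padded[i] | padded[i + 1])
                 for i in range(1, len(padded) - 1))

def centre_column(steps):
    """Centre cell at each step, starting from a single live cell.

    There is no formula that leaps ahead: computing the value at
    step n requires running every intermediate step.
    """
    width = 2 * steps + 1
    cells = tuple(1 if i == steps else 0 for i in range(width))
    history = [cells[steps]]
    for _ in range(steps):
        cells = rule30_step(cells)
        history.append(cells[steps])
    return history
```

Running `centre_column(4)` yields `[1, 1, 0, 1, 1]`, the start of a sequence irregular enough that Rule 30 has been used as a pseudorandom generator; the cost of reaching step n is always n steps of simulation.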
Applied to human development, this is not a metaphor so much as a structural claim. Learning is irreducible. The formation of deep understanding, the kind that allows a person to work through ambiguity, generate original ideas, tolerate uncertainty, cannot be compressed without loss. You cannot skip the iterations. The struggle is not a delivery mechanism for knowledge; it is the knowledge forming.
Clifford Geertz, approaching this from cultural anthropology rather than computation, described it through the concept of thick description. A wink is not just an eyelid movement. Its meaning is inseparable from the layers of intention, cultural context, and shared understanding that surround it. Surface behaviour and deep meaning are not the same thing. You cannot read one from the other without all the contextual tissue that connects them.
Applied to creativity: a painting is not just a visual surface. A piece of music is not just a sequence of frequencies. Their meaning is accumulated through process, through the struggles the maker went through, the choices they made and abandoned, the resistance they encountered and worked through. The thick description of a creative work is its entire history of becoming.
AI can produce the surface convincingly. It cannot produce the history. And it is not clear that we fully understand yet what we lose when the history is absent, not just from the work, but from the maker.
The Compression of Distance

For most of human history, creativity required a long chain of process. Idea to sketch, sketch to draft, draft to revision, revision to completion. Each step required skill that was itself the product of prior process. The distance between wanting to make something and actually making it was populated by years of practice, failure, and slow refinement. That distance was not a problem to be solved. It was where the person was formed.
What AI does, functionally, is compress this distance to near zero. You want an image, it is generated. You want an essay, it is drafted. You want a track, it is produced. From a pure efficiency standpoint, this is extraordinary. From the standpoint of human development, it raises a harder question.
The computation that used to happen inside the person, the neural effort, the pattern-building, the gradual formation of taste and judgment, now happens inside the machine, during training, in infrastructure that exists outside the learner entirely. Wolfram’s framework makes this concrete: the irreducible computation still occurs. The question is only where it occurs. And if it moves fully outside the human, humans become selectors of outputs rather than makers. That is a fundamentally different role, and it forms a different kind of person.
The Exercise Problem

There is a useful parallel with physical exercise. For most of human history, physical exertion was unavoidable because survival required it. As technology progressively removed the need for physical labour, exercise became optional. And society’s response was not to celebrate the efficiency gain and move on. It was, eventually, to deliberately reintroduce physical stress. Gyms, running, sport: these exist not because people enjoy artificial hardship, but because the body requires stress to develop. Remove the stress, and the body does not simply stay the same; it declines.
The same logic applies to cognitive and creative development. For most of human history, intellectual effort was unavoidable: reading, writing, memorisation, problem-solving, the slow accumulation of craft. AI now makes much of that effort optional in the same way that machinery made physical labour optional. The intellectual friction is gone. And friction, it turns out, is not just an obstacle to learning. It is the mechanism of learning.
Within a generation or two, educational institutions will likely respond to this in exactly the way fitness culture responded to sedentary technology. Not by rejecting AI wholesale, but by deliberately preserving the conditions of intellectual effort. Handwriting, close reading, device-free environments, assessed processes rather than outputs. Not out of nostalgia, but because the development that requires friction cannot be substituted by the results of frictionless production.
Creativity as Becoming, Not Producing

Creativity is not primarily a production mechanism. It is a developmental process, a means by which a person becomes capable of certain kinds of thought, feeling, and perception. The artefact is evidence of a process of becoming. Remove the process and the artefact remains, but the becoming does not.
In music production, this is observable directly. Modern digital audio workstations have made music creation more efficient than it has ever been. And yet something interesting happens when a musician works instead with physical instruments, with the friction of patch cables, acoustic resonance, the unpredictability of real synthesis. Structures emerge that could not have been designed in advance. The complexity comes through interaction with resistance, not through optimisation of a workflow. The magic, if that word can be used plainly rather than romantically, arrives through friction. It cannot be pre-designed. It unfolds.
This is not an argument for deliberate inefficiency. It is an observation about the conditions under which certain kinds of discovery occur. Exploration requires the possibility of getting lost. Wandering is not a failure of navigation; it is a mode of encounter with the unexpected. When the path from idea to outcome is entirely smooth, you arrive only where you already knew you were going.
Emergent Complicity

Most people working in creative and intellectual fields, including people who hold views like those described here, are using these tools anyway. This is not hypocrisy so much as the ordinary structure of technological adoption. The incentives are immediate. The productivity gains are real. The tools are, in many contexts, useful. Artists, educators, writers, and technologists all participate in accelerating a system whose long-term cultural effects remain genuinely uncertain.
This complicity is not the result of bad faith. It emerges from the logic of the system. And that is precisely what makes it worth naming. When behaviour is driven by incentives rather than reflection, the aggregate effect can diverge significantly from what any individual actor would choose if they were choosing deliberately.
The question is not whether to use these tools, but under what conditions they support rather than replace development. Using AI to handle mechanical tasks that sit outside your core creative process is different from using it to perform the core creative process itself. The first preserves the formative territory. The second colonises it.
What Needs Protecting

The distance between desire and making is not an inefficiency. It is the site of human formation. It is where identity, skill, and creative depth emerge, not as byproducts of the process, but as the process itself.
The risk of AI is not, primarily, that it produces bad content. The content is often technically impressive. The risk is more subtle and more serious: that by removing the necessity of the formative process, it quietly removes the formation. That a generation grows up acquiring outputs without undergoing the irreducible traversal that produces depth.
Wolfram’s observation applies here without embellishment: you cannot shortcut an irreducible process. You can move it outside the human. But moving it outside the human does not mean it disappears. It means the human no longer benefits from having done it.
Preserving the conditions of slow, difficult, exploratory development is not a reactionary gesture. It is a structural requirement for the kind of human beings capable of generating genuine thought, feeling, and art. The tools are not the enemy. Forgetting what the tools are for is.
References: Stephen Wolfram, A New Kind of Science; Jaron Lanier, Who Owns the Future?; Clifford Geertz, The Interpretation of Cultures.