A follow-up to The Distance Between Desire and Making.
Published 12 April 2026
In my last essay I wrote about what happens when AI collapses the distance between wanting to make something and actually making it, and why that distance, far from being an inefficiency, is where the human being is formed. The struggle is not the delivery mechanism for the work. It is the work, in the deepest sense.
I want to stay with that idea but get more specific. I have been living it recently in a very concrete way. Not as theory but as daily practice, in the studio, with patch cables and voltage and a system that is only ever semi-stable.
The Dark Art Nobody Explains
Let me start with something that rarely gets explained to people outside this world.
Electronic music, the kind made with modular synthesisers, semi-modular hybrids, generative systems and hardware sequencers, is not what most people imagine when they think of someone making music with machines. The common assumption is that electronic music is precise. Programmed. Controlled. That the musician sits down, decides what they want, and the machine executes it faithfully.
The reality is almost the opposite.
What you’re actually doing when you work with a modular system is building a living network of interacting processes: oscillators, filters, envelopes, noise sources, clocks, sequencers, all wired together in ways that produce consequences you didn’t fully anticipate. Voltage flows through patch cables. One module’s output becomes another’s input. Feedback loops develop personalities. Systems drift. Resonances appear that weren’t designed, and combinations emerge that couldn’t have been predicted from the individual parts.
The musician’s role is not to dictate but to respond. To listen to what the system is doing and shape it, gently, attentively, the way you might coax a fire rather than operate a machine. It is closer to gardening than engineering. You create the conditions, and then you attend to what grows.
This is not unique to electronic music. Any acoustic instrumentalist will tell you that playing is a two-way process. You adapt your touch to what comes back. The violin responds differently in a cold room. The piano in this hall has a different action from the one at home. You are always in dialogue with the physical reality of the instrument, always adjusting. The instrument pushes back and you respond. That exchange is where the music lives.
Electronic music just makes this negotiation more visible, more explicit, and when the system is truly alive, more unpredictable.
The Semi-Stable System
I have spent the last few years building a live modular and digital hybrid rig that I can genuinely express myself through. Not just operate, but play. A system capable of generating fantastical sonic worlds in real time, unpredictable enough to surprise me, stable enough to perform with.
That balance is harder to find than it sounds. Too stable and the system becomes a playback machine, executing what you have already decided. Too unstable and you are fire-fighting rather than performing. The sweet spot is on the edge: systems you are controlling as best you can, while knowing that a significant portion of what happens next is genuinely not up to you.
This is not a compromise. It is the goal.
Last week I was working on new improvisations and discovered something unexpected. Pinging a spectral resonator with my VCOs (voltage-controlled oscillators) at certain frequencies produces sounds that are almost biological. Cetacean. Whale-like. Rich, living, breathing textures that sound simultaneously ancient and entirely electronic. I didn’t design that sound. I couldn’t have. It emerged from the physical interaction between voltage, resonance and frequency, a conversation between components that had never spoken to each other in quite that configuration before.
You cannot prompt your way to that. It doesn’t exist until it happens.
But unpredictability alone is not enough. There is another side to this that I have been passionate about for a long time, and that is expressivity. Electronic music has historically asked musicians to make a deal: gain access to extraordinary sonic possibilities, but surrender the kind of nuanced physical expression that acoustic instruments allow. A pianist can shade a single note with an almost infinite range of touch, weight and timing. Most electronic instruments offered nothing close to that. You triggered a note and the machine responded with whatever it had decided that note should sound like.
That has been changing. A new generation of instruments, and a new approach to control data known as MPE (MIDI Polyphonic Expression), which assigns each sounding note its own MIDI channel so that pitch, pressure and timbre data can travel independently per note, have opened something genuinely new. Where a conventional synthesiser receives a single instruction per note, these instruments respond to pressure, lateral movement, subtle shifts in touch, the whole physical vocabulary that musicians have always used but electronic music largely couldn’t receive. Musicians like Genevieve Coppens and Hans Zimmer have been exploring what becomes possible at this level of expressive resolution. The results point toward something that has been missing from electronic performance for a long time.
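The difference MPE makes is easiest to see at the level of the protocol itself. In conventional MIDI, pitch bend is a channel-wide message: bending one note bends every note sounding on that channel. Under MPE, each note is assigned its own member channel, so the same message type becomes per-note. A minimal sketch of that distinction, using raw MIDI byte encoding and no particular library or hardware:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Encode a MIDI Note On message (status byte 0x90 | channel)."""
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel: int, value: int) -> bytes:
    """Encode a 14-bit pitch bend message (status byte 0xE0 | channel).
    value ranges 0..16383; 8192 means no bend."""
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

# Conventional MIDI: both notes share channel 0, so a single pitch
# bend message bends BOTH sounding notes at once.
conventional = note_on(0, 60, 100) + note_on(0, 64, 100) + pitch_bend(0, 9000)

# MPE: each note gets its own member channel (1 and 2 here, with
# channel 0 acting as the zone's master channel), so the same bend
# message now applies only to the note on channel 1.
mpe = note_on(1, 60, 100) + note_on(2, 64, 100) + pitch_bend(1, 9000)
```

This is the whole trick: nothing new is added to the wire format; the per-note expressivity comes from the channel-assignment convention alone.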
For me, this has been the final piece of the puzzle. The semi-stable system provides unpredictability, the sense that the music is partly out of your hands, partly finding its own way. The expressive instrument provides the opposite: a direct channel between physical gesture and sonic response, immediate and intimate. Together they create something that finally feels, as a musician, like a real instrument rather than a collection of components. The unpredictability and the expressivity held in tension, each one making the other more meaningful.
One Layer at a Time
The way I work within this system follows something close to what Suzanne Ciani describes with her Buchla synthesiser.
Ciani is one of the true pioneers of electronic music. She encountered Don Buchla’s radical modular system in the late 1960s while studying composition at UC Berkeley, gave up a conventional career as a classical pianist, and spent decades building a practice around a single instrument that most people in the music world didn’t understand and couldn’t operate. She became one of its first genuine virtuosos, working with it not as a tool but as a creative partner, performing live quadraphonic concerts, designing iconic commercial sounds, and developing a relationship with the system that she has described in almost human terms. The Buchla, she has said, felt alive to her. More alive than any other instrument around her, precisely because of how open-ended it was.
The approach Ciani developed was to build in sequential layers. Each one running independently. Each one found and settled before the next begins.
You get one system working. You find its flow, its internal logic, its natural breathing rhythm. Then you move to the next layer while that one keeps running underneath. You can only work in one layer at a time. That is not a limitation of the system. It is the system’s pedagogy.
Because while you’re focused on layer two, layer one is doing its own thing. It’s evolving, shifting subtly, finding its groove without your interference. You’re not managing it. You’ve released it. By the time you step back and hear the whole, the music has developed a coherence and spaciousness you couldn’t have constructed by actively controlling everything at once.
When I’m inside it, building, it’s genuinely hard to hear this. I’m immersed in the immediate problem of the layer in front of me. But listening back, that’s when it becomes clear. The music breathes. There’s space in it. Real space, not engineered space. Space that emerged because I was forced to leave it alone.
Compare this to working in a fully controllable digital environment, building track by track with full access to everything at once. What happens, at least for me, is that I get restless. I start filling space because I can. I push changes in before the previous ones have settled. The music becomes denser and more active, and paradoxically less alive. The control collapses the space.
The friction of the semi-stable system solves this. You can’t rush it. The system won’t let you. And the waiting, the enforced patience, is where the music finds its character.
The Paradox I’m Living
In the previous essay I wrote about complicity: how most people working in creative fields are using AI tools anyway, myself included, and that this isn’t hypocrisy so much as the ordinary structure of technological adoption. I want to be honest about where I’ve landed with that.
I use these tools constantly. To build context, to develop software for my visual systems, to speed up the administrative and strategic thinking that surrounds the creative work. I’m using one right now, in a sense, to help shape this essay from a voice note I recorded after a session.
But the session itself, the actual music, the improvisation, the discovery of the whale sounds in the resonator, that happens without them. Not as a principled rejection of the technology, but because that work requires the specific conditions that these tools remove: slowness, resistance, unpredictability, the enforced patience of a system that will only do one thing at a time.
In the previous essay I argued, following Wolfram, that irreducible computation cannot be shortcut. You can move it outside the human, but doing so means the human no longer benefits from having done it. The modular rig is a daily demonstration of this. The computation is irreducible. The music that emerges from negotiating with a semi-stable system could not have emerged from a more efficient process, because the inefficiency, the waiting, the responding, the constraint of one layer at a time, is what produces it.
The tools help me build the instrument. The instrument then demands that I slow down and play it.
The distance between desire and making is not the problem. It is the site of everything worth having.
The edge of control is where it gets interesting.
