Can a Song Without a Soul Still Move You?
When the lines blur between human and machine, what’s left of the art?
The Assumption We All Have
There’s this idea a lot of us carry around, especially if you’ve worked in music or film for a while: that AI-generated music is always going to feel a bit hollow.
Like it’s missing something fundamental.
No heart. No lived experience behind it. Just pattern-matching and prediction dressed up as art.
And honestly, that used to be my view too.
But lately, I’ve been feeling a mix of emotions about all this: excitement, curiosity, and a hint of anxiety as I watch the music industry evolve with the rise of AI-generated music.
It’s a brave new world, and none of us really know what the future holds. But one thing is certain: AI is here to stay, and it’s going to change the way we create, consume, and appreciate music.
The Research That Changed My Mind
A 2025 PLOS ONE study compared human-composed music with AI-generated tracks in film clips. They measured pupil dilation, skin conductance, and self-reported emotions.
The results were unsettling.
The AI music often created the same level of emotional arousal, or in some cases more. Participants’ pupils dilated more during the AI tracks. They rated the emotional fit as just as good.
Another study by Lecamwasam and Chaudhuri found that if you don’t tell people who made the music, they often can’t reliably tell which is which. Some even preferred the AI-generated calming tracks, even though they assumed human music must be better.
The big takeaway: our expectations shape perception.
Just knowing something came from a person changes how we hear it.
The Viral TikTok and The Velvet Sundown
The other day, I was chatting with my daughter. She told me about a piece of vaguely jazzy music that had gone viral on TikTok. Everyone was sharing it, saying how warm and nostalgic it felt.
Then someone announced it was AI-generated, and almost instantly there was a pile-on to banish it. People wanted it gone just because of where it came from, even though nothing had changed about how it sounded.
And then, five minutes later, I opened LinkedIn and saw a post shouting about a band called The Velvet Sundown.
The most middling, middle-of-the-road Americana rock band you can imagine, except they didn’t exist three months ago.
They’ve somehow racked up 650,000 monthly listeners and millions of streams.
And it’s becoming clear this is bot-made music listened to by bots, all feeding the payola machine, while real artists are left out in the cold.
Streaming has now reached a point of creative inequality that’s hard to ignore. We have a pooled revenue model being siphoned by the platforms and the majors, powered by algorithmically generated filler modelled on real musicians who don’t see a penny.
And yet, you can feel this pressure building. More artists pulling away from the big streamers. More experiments with fair-pay platforms and subscription and pay-to-play models that actually reward human work.
The Work of Hannah Davis
One example I keep coming back to is Hannah Davis.
She built TransProse, a system that turns text into music by analysing sentiment over time. You feed in a novel or a screenplay. It tracks how emotions rise and fall sentence by sentence, then translates that into musical instructions: tempo, key, note density.

![Generating Music from Literature](https://substackcdn.com/image/fetch/$s_!dahY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3cb4d-8849-413b-8875-3e14b4d90ac1_698x230.png)
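To make the idea concrete, here is a toy sketch of that sentence-by-sentence mapping. It is not Davis’s actual TransProse code; the lexicon, the tempo formula, and the parameter names are all illustrative assumptions, just to show how sentiment can drive musical parameters:

```python
# Toy sketch (not the real TransProse): map sentence-level sentiment
# to musical parameters, in the spirit of the approach described above.

POSITIVE = {"joy", "warm", "bright", "hope", "love"}   # tiny stand-in lexicon
NEGATIVE = {"dark", "fear", "loss", "cold", "grief"}

def sentence_valence(sentence: str) -> float:
    """Crude lexicon-based valence score in [-1, 1]."""
    words = sentence.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def text_to_music_params(text: str) -> list[dict]:
    """One parameter set per sentence: tempo, key mode, note density."""
    params = []
    for sentence in text.split("."):
        if not sentence.strip():
            continue
        v = sentence_valence(sentence)
        intensity = abs(v)
        params.append({
            "tempo_bpm": round(90 + 30 * v),           # sadder -> slower
            "mode": "major" if v >= 0 else "minor",    # valence -> key mode
            "note_density": round(2 + 6 * intensity),  # stronger emotion -> busier
        })
    return params

story = "The morning was bright with hope. Then grief and cold fear took hold."
for p in text_to_music_params(story):
    print(p)
```

The real system works from a much richer emotion lexicon and smooths the arc over chapters rather than sentences, but the core move is the same: text in, emotional trajectory out, music parameters last.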
In 2016, Accenture commissioned Davis to create an orchestral piece called Symphonology. She ran a collection of news articles and corporate strategy documents through TransProse to extract the mood, then generated a full orchestral score.
A live orchestra performed it, probably the first time a strategy deck and the global news cycle were turned into a symphony.
[![](https://substackcdn.com/image/fetch/$s_!IctS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7decd848-c441-4bc7-a5cc-bd0e91d303e6_1414x456.png)](https://aclanthology.org/W14-0901.pdf)
I’ve been reading The Artist in the Machine by Arthur I. Miller (AI Miller, can you believe it?), a brilliant book. It came out in early 2020, which, in AI terms, already feels like ancient history. But it’s a fascinating exploration of machine creativity, which, of course, is really more about human creativity and what it actually is.
Miller mentions Davis’s work in the context of generative composition, back when this was still a niche corner of electronic music research, quietly laying the groundwork for where we are now.
Neuro-Responsive Music and the Next Frontier
I’ve been exploring this in my own work, especially around neuro-responsive generative entertainment: music that doesn’t just follow the film but responds to you while you watch.

![](https://substackcdn.com/image/fetch/$s_!Lris!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94eb9437-79e8-4302-a23b-9cfb039e8d65_727x545.jpeg)
Your heart rate. Your brainwaves. Your skin conductance.
Imagine a soundtrack that senses when you’re losing focus or feeling tense and subtly adjusts itself to keep you engaged or help you relax.
Picture combining Davis’s emotional tagging with Pengcheng Xiao’s reinforcement learning model that optimises for emotional coherence and variety, then adding live biometric data.
You’d end up with a system where the narrative provides the emotional skeleton, the AI generates the score, and your body fine-tunes it all in real time.
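That closed loop can be sketched very simply. Nothing below is a real product API; the biometric reading is simulated and the controller is the most basic one imaginable, but it shows the shape of the idea, the body steering the score toward a target level of engagement:

```python
# Hypothetical sketch of a biometric feedback loop for an adaptive score.
# The sensor reading is simulated; a real system would stream heart rate,
# EEG, or skin conductance and normalise it to [0, 1].

from dataclasses import dataclass
import random

@dataclass
class ScoreState:
    tempo_bpm: float = 100.0
    intensity: float = 0.5  # 0 = sparse/calm, 1 = dense/tense

def read_arousal() -> float:
    """Stand-in for a real biometric signal, normalised to [0, 1]."""
    return random.random()

def adjust_score(state: ScoreState, arousal: float,
                 target: float = 0.5, gain: float = 0.2) -> ScoreState:
    """Proportional controller: nudge the score toward the target arousal.
    Too tense -> calm the music down; disengaged -> lift it."""
    error = target - arousal
    state.intensity = min(1.0, max(0.0, state.intensity + gain * error))
    state.tempo_bpm = 80 + 60 * state.intensity
    return state

state = ScoreState()
for _ in range(10):  # one adjustment per few seconds of playback
    state = adjust_score(state, read_arousal())
```

The interesting design question is the `target`: a thriller might deliberately chase high arousal in its final act, while a relaxation app would hold it low, so the narrative still owns the emotional skeleton.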
I honestly don’t know if that’s inspiring or unnerving. Probably both.
The Process is the Point

![](https://substackcdn.com/image/fetch/$s_!D59m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10495d3c-d455-4fa6-bc61-12c19bcbd971_1920x1280.jpeg)
As AI-generated music gains traction, it feels like we’re witnessing a deeper shift.
We’re no longer just talking about the end product (the song, the score, the soundtrack) but about the process itself. The art of creation.
And that’s where the real magic lives.
The brutal reality is that AI can create music that is just as good as, and sometimes better than, what humans can produce. But that’s not the point.
The point is how we choose to respond to this new reality. Will we double down on valuing the artist as an artist, rather than a content factory? Will we start to connect more deeply with the process, not merely the product?
It’s similar to how state arts funding often works. Grants and support tend to focus on the promise of the final work, the polished result. But anyone who has been close to art-making knows the true value isn’t the outcome.
It’s the fragile, restless process of getting there.
The art is a byproduct. The process is the point.
What the Future Might Look Like
Maybe this is a future where AI and human creativity merge in a kind of dance.
Where artists aren’t just creators but curators and conductors, shaping, guiding, and collaborating with intelligent tools.
And that’s something to be excited about.
Because in the end, it’s not about the machines. It’s about the music. The connection. The feeling.
The sense that you’re part of something alive and evolving.

![](https://substackcdn.com/image/fetch/$s_!gJAM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2365aefe-b780-4fcc-ba53-80e90519dff8_1000x563.jpeg)
A Final Thought
Let’s not be afraid of AI-generated music.
Let’s not see it as a threat but as an opportunity. An invitation to look deeper. To connect more fully. To appreciate the process, not just the destination.
Let’s subscribe to the artist, not just the art. Let’s support the DNA, the process, not just the product.
Let’s celebrate the beauty of human creativity amplified rather than replaced by the power of AI.
Because in the end, music isn’t just a product. It’s an experience.
And that will always be worth creating, worth listening to, and worth celebrating.
Thanks for reading The Quiet Room! Subscribe for free to receive new posts and support my work.