Speculations about the Digitization of Music in the Age of AI

💥🚀✨ BOAMCO has been nominated for the prestigious 'Worst Acronym of 2025' Award. 📊🏆😂

BOAMCO: Body-Organoid AI-Agents Mind-Crowd Orchestra

Jhave – Center for Digital Narrative – 01.12.2025


An invited presentation on 'Digitalization of Music' given to
NERDS: Northern European Resonance & Dissonance Group
@ EKKO Festival in Bergen on November 1, 2025.

Slides PDF



Disclaimer: #whole-AI-use

Most of these slides were made in a single pass. I call this “whole-AI use.” It means I don’t iterate endlessly. I don’t burn carbon refining prompts. I don’t discard the weird. I accept what emerges. Even typos. Even glitch. That’s part of an ecological stance: art made with synthetic systems should continue to carve a lighter carbon weight than being a carnivore, owning a pet, driving a car, or becoming a soldier.

This is a talk made with machines, about machinic music.
It’s also about bodies. And systems that listen and learn.
It speculates about a future without advocating for it.



[Slide: "Let's assume: meaning occurs when an entity resonates, so meaning is a form of music. Can machines learn music?" as monochrome swirl ink; "Yes we can" rendered as paint splash.]

Worlds are Music

Underlying assumptions: Meaning occurs when an entity resonates. Resonance is a form of music. Each ripple reveals relation. No entity exists alone. Every tremor is shared. Electromagnetism itself is reverberation. Everything we see is vibration. Thus, worlds, by definition, are music.

Can machines learn that resonant music — the music of being?

- Yes, they can.

[Slide: "2026: real-time speech-to-audio, production, editing, remix, mashups, and mastering. The always-on, always-AI systems improve incrementally, synchronized to temperamental and biometric signals. Individuals can merge their signals as they hang out, eat dinner, make music, have sex, work together, whatever. Music becomes an algorithmic resonance aligned through selection to symmetry."]

Forecasts for a Musical Future

By mid-to-late 2026, we will have real-time speech-to-audio systems. You’ll speak, hum, walk, gesture—music will respond. You’ll be able to remix, spawn genre variants, shift key signatures, modulate time. And then publish it. On the fly.

Shortly after that, we’ll enter a phase of always-on AI audio. Infinite generative streams. Personalized. Synced to biometric data: galvanic skin response, pupil dilation, heart rate, blood oxygen level, pacing of speech, facial microexpression.

This will extend to shared biometric fields. Multiple people together—hanging out, eating, dancing, making love, working—their signals merge, and so does the music. A collective perpetual algorithmic resonance aligned through metabolic symmetry.
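Such a shared biometric field can be sketched in a few lines. This is a purely hypothetical illustration, not any real system's API: the signal names, the averaging step, and the mappings to tempo, brightness, and density are all invented for the example.

```python
import statistics

# Hypothetical sketch: fold several listeners' biometric signals into
# shared musical parameters. All names and mappings are invented for
# illustration; no real biometric or audio API is assumed.

def merge_biometrics(listeners):
    """Average each signal across the group ("metabolic symmetry")."""
    keys = listeners[0].keys()
    return {k: statistics.mean(p[k] for p in listeners) for k in keys}

def to_music_params(field):
    """Map the merged field onto coarse musical controls."""
    return {
        "tempo_bpm": field["heart_rate"],          # pulse sets the pulse
        "brightness": field["skin_conductance"],   # arousal shapes timbre
        "density": field["breath_rate"] / 20.0,    # breathing sets note density
    }

listeners = [
    {"heart_rate": 72, "skin_conductance": 0.4, "breath_rate": 14},
    {"heart_rate": 88, "skin_conductance": 0.6, "breath_rate": 18},
]
params = to_music_params(merge_biometrics(listeners))
```

A real always-on system would stream these signals continuously and feed the parameters into a generative audio model; simple averaging is the crudest possible stand-in for "metabolic symmetry."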

[Slide: poster with text "The Concert Begins Emptied"; silhouette with bioluminescent veins; group connected by digital strands.]

A Concert That Begins in Silence

Imagine a concert that begins empty.

You walk into the room. It’s silent. You are the first node. Your pulse is picked up. Your breath, your blood oxygen, your skin conductance. It becomes the seed of sound.

Then someone else enters. Maybe their voice is picked up by a wearable mic. Maybe it’s a gesture. A phrase. The system listens. Learns. Integrates. Every biometric, biomimetic, biomorphic signal becomes part of a compositional agent. The space itself is sculpted, moment to moment, by the crowd’s presence.

It’s not generative in the traditional sense. This is responsive orchestration, managed by an AI agent that governs the phenomenological field.

It is music made not for you, but from you.

[Slide: past leaders (the conductor, an orchestra, a band leader, the solo player, the most agile drummer in a drum circle) versus future orchestrators, agents, and co-creators, over an algorithmic visualization of a neural-network topology.]

The Orchestrator Mutates—Long Live the Agent

Once upon a time, the conductor held the baton. The soloist set the tempo. In drum circles, the one with the most agility led.

Sheet music is an algorithm—a coded methodology to synchronize bodies across time.

Today, orchestration is the metaphor for AI development. OpenAI, Anthropic, Microsoft—they all use terms like agent orchestration, tool use, multi-agent frameworks.

In the future, we're going to have orchestrators, agents, co-creators, augmented creativity, instantaneous real-time nuanced context-specific adaptive subtle sinuous multicultural hybrid genre experiences that adapt to us and to the others around us if we so wish.

[Slide: "A Brief History of Recent AI-Music Innovations"; trombone and sheet music dissolving into code; timeline of sound evolution from wireframe to liquid landscape.]

A (Compressed) History of AI in Music

[Slide: the word "Controversies" over jagged red polygons; conceptual visualizations of silent scraping and infrastructure.]

Legal & Economic Reverberations

Is this just theft?

Maybe.

Rebecca Fiebrink (creator of Wekinator), in a co-authored 2025 University of the Arts London report, 'Bringing People into AI', highlighted the bias of current musical datasets and proposed small-data models, trained with consent, as alternatives. But scaling laws still apply: bigger models outperform.

Suno was trained on scraped music. Possibly yours. Mine. No one knows.

In the U.S., fair use is defined by power. In the EU AI Act, music is categorized as low-risk; there is no explicit special regime for music-specific training data (in contrast to, e.g., biometric or personal data). But if interactive surveillance of an audience is involved, if biometrics are monitored, then GDPR compliance, privacy protocols, and data provenance are required.

The field is shifting. Autonomous AI music generation is collapsing under lawsuits. On Oct 29, 2025, Udio settled with Universal Music Group. On Nov 25, 2025, Suno partnered with Warner Music Group. Large legacy corporations now own the lucrative #genAI music industry.

Economic reality: most AI startups aren’t making a profit. Unless they own the infrastructure, they burn money.

Data: Opt-outs don’t work. robots.txt is ignored. Scraping is quiet. All online music feeds the omnivorous model.

[Slide: the word "Tools"; conceptual visualizations of integration and domain.]

Traditional Tools & New Interfaces

Old modalities are being repurposed as AI is integrated. Eventually the tide of speech- and gesture-directed, real-time, single-use interfaces will wipe away these legacy WYSIWYG interim apparitions.

[Slide: gloves; the words "gestures and wearables" in soap foam; a scream.]

Gestures as Music

Gestures become interpretable via AI: segmenting, then converting subtle real-time modulations of physical presence into appropriate musical deflections. The body as instrument is ancient, yet now open to AI-enhanced augmentation and evolution.
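The segment-then-convert idea can be sketched minimally. Everything here is invented for illustration: the threshold, the energy-to-pitch-bend mapping, and the function names; a real system would segment with a trained model, not a fixed cutoff.

```python
# Hypothetical sketch: segment a stream of motion magnitudes into
# discrete gestures, then map each gesture's energy to a pitch
# deflection in semitones. Thresholds and mappings are invented.

def segment_gestures(motion, threshold=0.5):
    """Group consecutive above-threshold samples into gestures."""
    gestures, current = [], []
    for m in motion:
        if m > threshold:
            current.append(m)
        elif current:
            gestures.append(current)
            current = []
    if current:
        gestures.append(current)
    return gestures

def deflections(gestures, scale=12.0):
    """Map mean gesture energy onto a pitch-bend range."""
    return [round(scale * sum(g) / len(g), 2) for g in gestures]

motion = [0.1, 0.7, 0.9, 0.2, 0.1, 0.6, 0.8, 0.8, 0.1]
bends = deflections(segment_gestures(motion))
```

Two bursts of motion become two gestures, each bent proportionally to its energy; swap the threshold for a learned segmenter and the deflection for any synthesis parameter.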

[Slide: the word "Agents" over a network; shallow-ML visualization; an agent playing cello.]

Agents (2 observations from MishMash)

In Oslo, at the MishMash conference, I heard and saw IRCAM's agent-based software Somax (v2.7) used by Marco Ferrari and Alessandra Bossa, two incredible performers, working with its voice and AI augmentation. Subtle. Real-time. Transparent.

Nicola L. Hein plays experimental guitar in conjunction with programmed robots. His Human Robotic Ensemble (brass, piano, guitar) co-creates music with a field of responsive machines. These systems are not using large-scale Transformers; they use shallow ML, nearest-neighbor models. But the results? Expressive, impressive, woven human-machine-resonance improvisations.

[Slide: the word "Implants" over neurons; visualization of Cortical Labs CL-1, N1, and AlterEgo, linking to speech- and image-reconstruction research.]

Implants, EEG, and Thought-Based Composition

This is already happening.

In the Revivification exhibition—a successor to the earlier cellF project—the late experimental composer Alvin Lucier’s white blood cells were reprogrammed into stem cells and differentiated into living cerebral organoids. Housed within an incubator, these neural networks now generate spontaneous electrical activity that mechanically strikes gongs in real-time. Currently (April 5 – September 21, 2025) exhibited in Australian galleries, this work exemplifies a post-mortem biological agency where the artist's own cellular material continues to perform.

[Slide: 22nd-century drum with glistening flesh-like node sensors in a twisted helix; 22nd-century wind instrument merging sax and flute with a wind-tunnel simulation.]

New Instruments & AI's Immaterial Imagination

When was the last time we made a truly new, widely adopted instrument? Perhaps the saxophone: 1840s, Adolphe Sax, Belgium.

AI can now hallucinate materials, form factors, resonance models. It can simulate airflow through hybrid geometries. It can evolve instruments.

[Slide: imagined seashell instrument; imagined EEG instrument.]

Imagine:

This is not fiction; it's a hypothetical extension of current trends.

[Slide: the word "BOAMCO".]

BOAMCO: A Framework for the Future

A performer enters wearing biometric sensors. Silently prompts. Their physiology is monitored. Their gestures interpreted. The system reads them—subtle facial shifts, pace of breath, galvanic response.

The agent responds. AI models map meaning across multimodal data. Some resonance slips into a neuromorphic organoid agent. Crowd signals enter. The field is orchestrated. Music emerges from the interweaving of all components.

The result?

A body-organoid-mind-crowd-agent orchestra. A recursive, real-time, responsive sonic ecology.

[Slide: hybrid future.]

Conclusion: DIY In-Person Events will become More Necessary

There is significant, real anxiety in the musical field: anger over theft, a sense of disempowerment, obsolescence, replacement. Legitimate, but perhaps misdirected. Is it AI or human manipulation that is driving the extractionary race?

Remember: music is shared collective heritage. No one owns it. Not truly. Music is the echo of life. Proof of its breath. The residue of presence. The mathematics of emotion. The air we absorb, shaped and shared. Small music festivals are sacred sites. Spaces of communion, experimentation, collaboration, interdependence. DIY gatherings will amplify in importance in the digitization era. AI might bring collapse. Control. Tyranny. But it will also necessitate increasing resonance. Care. Sonic healing. Precision empathy.

Paradoxically, music is intimate (embodied grief and joy) and abstract (pure physics, resonant, diffused vibration). Because the abstract, resonant, vibrational aspect of music is computationally tractable, conceptual simulations of polyrhythms, microtones, instrumentation-hybrids and exploration ensembles will now be increasingly computationally augmented.

In such an environment, intimate, experimental music festivals, networked to form communities of practice, can ensure that the intimacy offered by music is not eclipsed by the generic physics and dominant banality of commercialized, branded AI-entertainment.

So continue. Because music matters; it is matter resonating into emergent embodied meaning.

~



Extras

[Image: data-driven concerts]
[Image: neuromorphic musical instrument orb]
[Image: ripples emanating]

BOAMCO was imagined by David Jhave Johnston as a postdoc at UiB's Center for Digital Narrative.

Thanks to Viestarts Gailītis (from Skaņu Mežs) for the invitation to speak to NERDS. It initiated the thought process.

This research is partially supported by the Research Council of Norway Centres of Excellence program,
project number 332643, Center for Digital Narrative, and project number 335129, Extending Digital Narrative.