Disclaimer: #whole-use-AI
Most of these slides were made in a single pass. I call this “whole-AI use.” It means I don’t iterate endlessly. I don’t burn carbon refining prompts. I don’t discard the weird. I accept what emerges. Even typos. Even glitch. That’s part of an ecological stance: art made with synthetic systems should carry a lighter carbon footprint than eating meat, owning a pet, driving a car, or becoming a soldier.
Worlds are Music
Underlying assumptions: Meaning occurs when an entity resonates. Resonance is a form of music. Each ripple reveals relation. No entity exists alone. Every tremor is shared. Electromagnetism itself is reverberation. Everything we see is vibration. Thus, worlds, by definition, are music.
Can machines learn that resonant music — the music of being?
- Yes, they can.
Forecasts for a Musical Future
By mid-to-late 2026, we will have real-time speech-to-music systems. You’ll speak, hum, walk, gesture, and the music will respond. You’ll be able to remix, spawn genre variants, shift key signatures, modulate time. And then publish it. On the fly.
Shortly after that, we’ll enter a phase of always-on AI audio. Infinite generative streams. Personalized. Synced to biometric data: galvanic skin response, pupil dilation, heart rate, blood oxygen level, pacing of speech, facial microexpressions.
This will extend to shared biometric fields. Multiple people together—hanging out, eating, dancing, making love, working—their signals merge, and so does the music. A collective perpetual algorithmic resonance aligned through metabolic symmetry.
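A minimal sketch of what such biometric fusion could look like, assuming hypothetical signal names (`heart_rate` in bpm, `skin_conductance` in microsiemens) and invented mappings to tempo and intensity; any real system would be far richer:

```python
from statistics import mean, pstdev

def fuse_biometrics(people):
    """Fuse per-person biometric readings into shared musical controls.

    `people` is a list of dicts with hypothetical keys:
    heart_rate (bpm) and skin_conductance (microsiemens, ~0-20).
    """
    rates = [p["heart_rate"] for p in people]
    conduct = [p["skin_conductance"] for p in people]

    # Tempo tracks the group's mean pulse, clamped to a musical range.
    tempo_bpm = max(60, min(180, mean(rates)))

    # Spread: how differently members of the group are responding.
    spread = pstdev(conduct) if len(conduct) > 1 else 0.0

    # Intensity rises with mean conductance, normalized to 0..1.
    intensity = max(0.0, min(1.0, mean(conduct) / 20.0))

    return {"tempo_bpm": tempo_bpm, "intensity": intensity, "spread": spread}

crowd = [
    {"heart_rate": 72, "skin_conductance": 4.0},
    {"heart_rate": 96, "skin_conductance": 9.0},
]
controls = fuse_biometrics(crowd)
```

The point is only the shape of the idea: many bodies in, one shared control surface out.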
A Concert That Begins in Silence
Imagine a concert that begins empty.
You walk into the room. It’s silent. You are the first node. Your pulse is picked up. Your breath, your blood oxygen, your skin conductance. It becomes the seed of sound.
Then someone else enters. Maybe their voice is picked up by a wearable mic. Maybe it’s a gesture. A phrase. The system listens. Learns. Integrates. Every bio-metric, bio-mimetic, bio-morphic signal becomes part of a compositional agent. The space itself is sculpted—moment to moment—by the crowd’s presence.
It’s not generative in the traditional sense. This is responsive orchestration, managed by an AI agent that governs the phenomenological field.
It is music made not for you, but from you.
The Orchestrator Mutates—Long Live the Agent
Once upon a time, the conductor held the baton. The soloist set the tempo. In drum circles, the one with the most agility led.
Sheet music is an algorithm—a coded methodology to synchronize bodies across time.
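That claim can be made literal: a score encoded as data, scheduled against a shared tempo, is enough to synchronize any number of players. A toy sketch (the note names and tempo are illustrative):

```python
# A score as executable data: each event is (pitch_name, duration_in_beats).
score = [("C4", 1), ("E4", 1), ("G4", 2), ("C5", 4)]

def schedule(score, tempo_bpm=120):
    """Turn a beat-relative score into absolute onset times in seconds.

    Any number of players running this with the same tempo land on the
    same timeline: that shared timing is what notation encodes.
    """
    sec_per_beat = 60.0 / tempo_bpm
    t, events = 0.0, []
    for pitch, beats in score:
        events.append((round(t, 3), pitch))
        t += beats * sec_per_beat
    return events

timeline = schedule(score)  # at 120 bpm, one beat lasts 0.5 s
```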
Today, orchestration is the metaphor for AI development. OpenAI, Anthropic, Microsoft—they all use terms like agent orchestration, tool use, multi-agent frameworks.
In the future, we're going to have orchestrators, agents, co-creators, augmented creativity, instantaneous real-time nuanced context-specific adaptive subtle sinuous multicultural hybrid genre experiences that adapt to us and to the others around us if we so wish.
A (Compressed) History of AI in Music
- 1980s - George E. Lewis – Voyager: improvisational proto-AI agent, responding to his trombone.
- 1980s - David Cope – Experiments in Musical Intelligence: rule-based Mozartian simulations.
- 2009 - Rebecca Fiebrink – Wekinator: real-time ML + gestural input. Example: I used it with Leap Motion in 2017 to make reactive soundscapes mapped to my AI-generated ReRites books.
- 2017 - Google Magenta: trained on a Yamaha e-piano performance dataset. LSTM models generated grainy, clumsy piano pieces. You could change the sampling temperature to shift from Satie to Cecil Taylor.
- 2019 - MuseNet: Transformer-based, multi-genre, long-form generation. More structured than the LSTMs, but often bland.
- 2020 - Jukebox: added vocals, layered over discrete VQ-VAE codes. Still heavily compressed.
- 2022 - Riffusion: visual diffusion models turned spectrogram images into audio, a perpetually evolving stream that could be directed.
- 2024 March - Suno v3: game changer. Music was no longer toy-like. It was usable, genre-coherent. Example: Suno v3 (Alpha) with ReRites lyrics, all songs prompted on March 6, 2024.
- 2025 August - Suno v5: I created 15 hours of music in 71 days, in genres like “quirky gentle post-math poly nightcore lo-fi EDM corecore.” Stylized. Listenably weird. Invalid music.
- 2025 September - Suno Studio: a full IDE. Split stems. Generate harmony mid-track. Hum a melody, replace vocals. Drag-and-drop text-to-audio, image-to-score. Auto quantize. Auto align. Auto vibe.
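The temperature knob mentioned above for Magenta’s LSTMs is a standard sampling parameter: it rescales the model’s next-note scores before sampling. A minimal sketch with toy logits (not Magenta’s actual API):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw next-note scores into a sampling distribution.

    Low temperature sharpens the distribution toward the top choice
    (Satie-like restraint); high temperature flattens it, making rare
    notes more likely (Cecil Taylor-like unpredictability).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]                    # scores for three candidate notes
cool = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 5.0)
```

At temperature 0.5 the top note dominates; at 5.0 the three choices approach equal probability.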
Legal & Economic Reverberations
Is this just theft?
Maybe.
Rebecca Fiebrink (creator of Wekinator), in ‘Bringing People into AI’, a co-authored 2025 report from the University of the Arts London, highlighted the biases of current musical datasets and proposed consent-based small-data models as alternatives. But scaling laws still apply: bigger models outperform.
Suno was trained on scraped music. Possibly yours. Mine. No one knows.
In the U.S., fair use is, in practice, defined by litigation power. In the EU AI Act, music is categorized as low-risk, and there is no special regime for music-specific training data (in contrast to, e.g., biometric or personal data). But once an audience is interactively surveilled, once biometrics are monitored, GDPR applies: consent, privacy protocols, data provenance.
The field is shifting. Independent AI music generation is being absorbed through lawsuits and settlements. On Oct 29, 2025, Udio settled with Universal Music Group. On Nov 25, 2025, Suno partnered with Warner Music Group. Large legacy corporations now own the lucrative #genAI music industry.
Economic reality: most AI startups aren’t making profit. Unless they own the infrastructure, they burn money.
Data: Opt-outs don’t work. robots.txt is ignored. Scraping is quiet. All online music feeds the omnivorous model.
Traditional Tools & New Interfaces
Old modalities are being repurposed as AI is integrated. Eventually the tide of speech- and gesture-directed, real-time, single-use interfaces will wipe away these legacy WYSIWYG interim apparitions.
- LANDR – AI mastering, automating (to some degree) what is a highly technical, intuitive art. (First launched 2014)
- Mubert – continuous AI-generated music platform and API for apps, games, and streaming. (2017)
- mmAudio – background sounds for video: also introduces music occasionally. (2023)
- Weavy – patch-based graphical programming interface for ad-creatives to automate video generation. (2024)
- Eleven Music (ElevenLabs) – AI platform for generating full songs and soundtracks from text (and other) prompts. (2025-08)
- Lyria RealTime – real-time music model and API for low-latency, interactive streaming generation. (2025-05)
- Adobe Firefly Generate Soundtrack – Firefly feature that auto-generates background music tailored to video clips. (2025-10)
Gestures as Music
Gesture becomes music when AI segments and converts subtle real-time modulations of physical presence into appropriate sonic deflections. The body as instrument is ancient, yet now open to AI-enhanced augmentation and evolution.
- David Rokeby’s Reflexions (1982-84) used "8 x 8 pixel video cameras ... connected then to a ... Apple II ... a program in 6502 assembly code ... which controlled a Korg MS-20 Analog synthesizer to make sounds in response to the movements seen by the cameras" and became Very Nervous System (1986).
- Laetitia Sonami’s Lady's Glove work in the 1990s with sensor gloves was revolutionary at extracting nuanced software-interpreted data from the body to influence music.
- Imogen Heap’s Mi.Mu gloves are performance-tested and famously versatile, letting you conduct parameters with your hands.
- Now: TikTok coders, like MadHand, build gloveless systems with AI-enhanced gesture recognition.
- & Roli's commercialized Airwave uses IR + computer vision to track fingers in space. A theremin, exponentiated.
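The segment-then-convert idea running through these systems can be sketched minimally: hypothetical 1-D hand positions are thresholded into stillness versus motion, then smoothed into continuous pitch-bend values. All thresholds and mappings here are invented for illustration:

```python
def gesture_to_pitch_bend(positions, rest_threshold=0.02):
    """Map a stream of 1-D hand positions (0..1) to pitch-bend values.

    Frame-to-frame motion below `rest_threshold` is treated as stillness
    (no bend); larger motions deflect pitch up or down in semitones,
    smoothed with an exponential moving average so the result sounds
    continuous rather than jittery.
    """
    bends, smoothed = [], 0.0
    for prev, curr in zip(positions, positions[1:]):
        delta = curr - prev
        raw = 0.0 if abs(delta) < rest_threshold else delta * 12.0
        smoothed = 0.7 * smoothed + 0.3 * raw   # simple low-pass smoothing
        bends.append(round(smoothed, 4))
    return bends

# A hand rising quickly, then holding still: the bend rises, then decays.
bends = gesture_to_pitch_bend([0.10, 0.20, 0.30, 0.30, 0.30])
```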
Agents (2 observations from MishMash)
In Oslo, at the MishMash conference, I heard and saw IRCAM’s agent-based software Somax (v2.7) used by Marco Ferrari and Alessandra Bossa, two incredible performers, working within its voice and AI augmentation. Subtle. Real-time. Transparent.
Nicola L. Hein plays experimental guitar in conjunction with programmed robots: a Human-Robotic Ensemble of brass, piano, and guitar, co-creating music with a field of responsive machines. These systems are not using large-scale Transformers; they use shallow ML and nearest-neighbor models. But the results? Expressive, impressive, tightly woven human-machine improvisations.
Implants, EEG, and Thought-Based Composition
This is already happening.
- AlterEgo (MIT) – silent speech recognition. You think, it responds. No sound, no movement.
- EEG-based decoding – speech and image reconstruction from brainwaves.
- Neuralink’s N1 implant – neural signal capture from live cortical neurons.
- Meta – reconstructs images from fMRI data.
- Cortical Labs’ CL1 chip – living human neurons grown on a silicon interface, already on the market.
In the Revivification exhibition—a successor to the earlier cellF project—the late experimental composer Alvin Lucier’s white blood cells were reprogrammed into stem cells and differentiated into living cerebral organoids. Housed within an incubator, these neural networks now generate spontaneous electrical activity that mechanically strikes gongs in real time. Currently (April 5 – September 21, 2025) exhibited in Australian galleries, this work exemplifies a post-mortem biological agency where the artist’s own cellular material continues to perform.
New Instruments & AI's Immaterial Imagination
When was the last time we made a truly new, widely adopted instrument? Perhaps the saxophone: Adolphe Sax, Belgium, 1840s.
AI can now hallucinate materials, form factors, resonance models. It can simulate airflow through hybrid geometries. It can evolve instruments.
Imagine:
- A breath-field resonator – wind is sound.
- A dermal tremolo – skin-pulse controls frequency.
- A neurosonic lattice – EEG + AI + organoid, generating music directly from thought.
- A memory alloy chime field – reacts to presence and temperature.
- A wetware orchestra – living computation modulating harmonic layers.
This is not fiction, it's a hypothetical extension of current trends.
BOAMCO: A Framework for the Future
- B – Body
- O – Organoid
- A – Agents
- M – Mind
- C – Crowd
- O – Orchestra
A performer enters wearing biometric sensors. Silently prompts. Their physiology is monitored. Their gestures interpreted. The system reads them—subtle facial shifts, pace of breath, galvanic response.
The agent responds. AI models map meaning across multimodal data. Some resonance slips into a neuromorphic organoid agent. Crowd signals enter. The field is orchestrated. Music emerges from the interweaving of all components.
The result?
A body-organoid-mind-crowd-agent orchestra. A recursive, real-time, responsive sonic ecology.
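One hedged sketch of a single BOAMCO “tick”: each component contributes a normalized control signal, and the orchestrator blends them into one parameter frame. Every name, weight, and value here is invented for illustration; a real orchestrator would be vastly more elaborate:

```python
# Hypothetical blend weights for each BOAMCO component; the "orchestra"
# is the blended output itself rather than a separate input.
WEIGHTS = {"body": 0.3, "organoid": 0.1, "agents": 0.2,
           "mind": 0.2, "crowd": 0.2}

def orchestrate(signals, weights=WEIGHTS):
    """Blend per-component signals (each in 0..1) into one intensity value.

    Only the components present in `signals` contribute, so the weighted
    average renormalizes if, say, no organoid is connected tonight.
    """
    total_w = sum(weights[k] for k in signals)
    return sum(signals[k] * weights[k] for k in signals) / total_w

frame = {"body": 0.8, "organoid": 0.5, "agents": 0.6,
         "mind": 0.4, "crowd": 0.9}
intensity = orchestrate(frame)
```

Run in a loop against live sensor frames, this single scalar could drive any downstream synthesis parameter: density, dynamics, harmonic tension.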
Conclusion: DIY In-Person Events will become More Necessary
There is real, significant anxiety in the musical field: anger over theft, a sense of disempowerment, obsolescence, replacement. Legitimate, but perhaps misdirected. Is it AI or human manipulation that is driving the extractive race?
Remember: music is shared collective heritage. No one owns it. Not truly. Music is the echo of life. Proof of its breath. The residue of presence. The mathematics of emotion. The air we absorb, shaped and shared. Small music festivals are sacred sites. Spaces of communion, experimentation, collaboration, interdependence. DIY gatherings will amplify in importance in the digitization era. AI might bring collapse. Control. Tyranny. But it will also necessitate increasing resonance. Care. Sonic healing. Precision empathy.
Paradoxically, music is both intimate (embodied grief and joy) and abstract (pure physics: resonant, diffused vibration). Because the abstract, vibrational aspect of music is computationally tractable, conceptual simulations of polyrhythms, microtones, instrumental hybrids, and exploratory ensembles will be increasingly computationally augmented.
In such an environment, intimate, experimental music festivals, networked to form communities of practice, can ensure that the intimacy offered by music is not eclipsed by the generic physics and dominant banality of commercialized, branded AI-entertainment.
So continue. Because music matters; it is matter resonating into emergent embodied meaning.
~