
Max Cooper is using AI to push the frontiers of creativity and communication


By Webb Wright | Reporter

May 31, 2023 | 15 min read

The London-based music producer – known for his ethereal sounds and visually hypnotic live performances – discusses the limits of human language, the future of music, the dangers of unchecked AI research and the mysteries of consciousness.


Now an acclaimed music producer, Max Cooper was trained as a computational biologist. / Max Cooper

The first thing one notices about Max Cooper’s music is the complexity. Intricate layers of sound intermingle, forming a dense audio latticework of stunning depth. The music has an unpredictable, almost hallucinatory quality – one moment it’s calm and composed, like an astronaut gently floating in space, the next it’s careening and chaotic, like said astronaut's spaceship spiraling out of control.

I first saw Cooper perform in Austin, Texas, earlier this year. For the entirety of the show, he remained behind an opaque screen, onto which were projected some of the most captivating – and at times bizarre – visual images I have ever seen, anywhere. Some of them seemed to have been generated by AI; there was no other way to explain their otherworldly appearance.

Then again, if any human being’s mind could dream up the sorts of surrealist imagery that I saw that night in Austin, it’s Cooper’s. Before he became an internationally touring music producer, he received his doctorate in computational biology – the use of mathematical models to simulate the behavior of living systems – from the University of Nottingham. His background as a scientist clearly bleeds into both his music and his visual aesthetic; everything is an exploration and a celebration of patterns, both organic and man-made.

AI, which is fundamentally a pattern-detection technology, therefore holds special appeal for Cooper. In an interview with The Drum, he describes how his latest work – including his most recent album, Unspoken Words – has been shaped by AI, which has been catapulted into mainstream consciousness since the release of ChatGPT in late 2022. He also discusses his hopes and concerns about where the technology is leading humanity.

This interview has been edited for length and clarity.

Can you describe the process of how you create visuals to accompany your music?

Most of my music is written with visuals, so the visual ideas often come first. When you hear the music on its own, without the visual context, you’re sometimes missing a big part. The most productive way to blend those two elements is when the visual structure lends itself to some sort of musical structure. That’s what I look for: some sort of production technique that I can apply which links the two aesthetics together.

And then once I have that, I’m looking for the emotional interpretation: What does this thing make me feel? How can I interpret it musically? So it’s a combination of this sort of mapping process that I can do technically, and also a mapping process that is more emotional and more about human expression, which is much harder to describe in words. Hence the Unspoken Words album concept, in which I was delving into these sorts of ideas.

I would imagine that the growing sophistication and accessibility of AI must be extremely exciting to you because so much of AI is based on pattern detection. You have a partnership with AI-powered music platform Aimi — what are some other ways in which you've recently been leveraging AI for creative purposes?

The key thing for me is, ‘How can we use AI to do things that we could never do before?’ Because a lot of uses of AI are just replicating what we can do already, but much quicker. Which is fine – it saves us time. But what I’m really interested in is, ‘What can we do that we could never achieve before?’

One recent project I did in that vein with [machine learning researcher] Xander Steenbrugge came out of me looking into: How can we express things that we can’t put into words? And how can I visualize that concept? I came across the work of Ludwig Wittgenstein – he was a philosopher who dealt with the limitations of language, and he was specifically interested in how the problems of philosophy were often problems of language. So his writings were relevant to this Unspoken Words album concept. But I also couldn’t understand his writings; they’re really dense, he’s got a lot of his own terminology... I didn’t want to spend six months just trying to get to grips with it. I had to make an album.

So I was thinking, ‘Okay, here’s this text, which I know is really relevant. But I can’t fully understand it; how can I visualize it?’ I realized that AI can do that for me. So I fed the Wittgenstein text into an AI system that visualizes text, and then we created essentially moving imagery sequences from the philosophical text.

What we ended up with was this really bizarre and beautiful audio-visual sequence that blurred the boundaries between reality and symbolism, objectivity and subjectivity. So that was an example of using AI to do something that I couldn’t do myself.

Are you using AI for music production? Do you envision human musicians eventually being displaced as AI systems learn to make deeply impactful music that can resonate with any particular mood or mindset?

I’ve done some experiments with AI in the musical realm. It hasn’t yet taken over music – it’s going to at some point, but so far, the AI tools I’ve tried for making music are less competent than what I can do myself. That will change at some point and I’m sort of looking forward to that. Every week there’s a new wave of these magical devices.

But I think what musicians want is a fully integrated AI assistant, because there are a lot of processes I have to do musically which are quite repetitive, and we all want the AI to sort of step in and say, ‘Oh, I can see you’re doing a repetitive task – would you like me to continue this for you?’ And then it does the next two hours of work for you. That’d be nice.

And I would want to use AI to actually be able to access new worlds of timbre and new worlds of sound in the context of an instrument that I can still play and use to express myself. I’m much less interested in getting AI to write melodies for me because that’s human expression, that’s human communication, that’s what I enjoy doing. I don’t want to take that away from music and just be able to click a button and have a piece of music. In my mind, that’s no fun at all. I think that while that's possible, it’s not something that I'm interested in personally. I love writing music for human expression, trying to share my thoughts and feelings with people. That’s why I make music. I hope that there’s still a place for human creators of music in the future. Music and art in general are fundamentally about human expression and communication, and if there’s not a human on the other end of that, then I think something gets lost – it’s not quite as engaging for me.

We’ve recently seen a major surge in big tech companies rushing to invest in AI. As someone who frequently interacts with AI, do you have any concerns about this corporate gold rush?

We have to be concerned about unchecked progress. The good thing is that everyone’s talking about this, so it’s no secret. But I’m certainly concerned about these big companies being locked in an arms race with little to no real understanding of how these systems work.

My Ph.D. was on the evolution of gene networks, so I used to simulate how networks of interacting genes could evolve to produce basic patterns of gene expression across cells, and how organisms can grow into complex structures using gene networks. And it’s the same basic principle as AI: when you build these things to self-optimize, you don’t have any idea how they’re doing what they’re doing. Slowing things down and trying to understand what’s happening [in advanced AI models] is one of the things that’s been called for, and I think that’s a sensible thing to do.

But realistically, if these major players slow down, then other players – maybe ones we don’t want to be doing this – are just going to plow ahead with it. I don’t think you can stop it now, really, which is a shame, but I think that’s the situation.

I have similar concerns, because marketing from big corporations does, to some degree, set the tenor for how we as a civilization approach and understand new technologies. But in some important respects, I also believe that early artistic uses of AI can shape our society’s future relationship with the technology. Do you have any thoughts about the responsibility that artists like yourself have at this moment with regard to AI?

My angle on that has been to pay attention to the discussions that are happening. For example, in the visual realm, AI has been trained on the work of human artists, and then in seconds, it can replicate aesthetics that people have spent their lifetimes developing. So there’s an issue there around ownership when it comes to AI-generated art, and there need to be systems of reference that show where aesthetics have come from – we need to know clearly what those underlying datasets are.

It sounds like a big thing, but in the sciences, that’s exactly what people have been doing for hundreds of years in scientific publications: someone publishes something, they write down where their references are, and if you follow the references, and the references’ references, you get this tree of all this information from all these different people that has given rise to this particular publication. So it’s quite similar to the way that AI works in some ways; it’s just that those references are hidden, and they shouldn’t be. I’m also working on a project right now which is based on one artist’s work, and we’re using AI to augment, rather than replicate, that person’s art. It’s still about a real human artist, even though we’re using AI.

Can you tell me a bit about why you wanted to partner with Aimi (most recently through their Community Experiences feature)? What excited you about that collaboration, and what do you look for in a brand partnership?

Aimi was quite an early arrival to this whole [AI-generated music space]. And I was just interested in what the platform can do, really. One of my main interests was being able to work with them and then eventually have a studio assistant; it’s fine getting a system that can regenerate my sort of music, but what I’m really interested in is whether I can collaborate with that system in the studio and have it running in parallel to my production process. And that’s something that I hope we get to. So that’s one of my main interests: not something that would write music, but something that would be working in parallel [to me], making variants on what I’m doing, suggesting small things based on the way I’ve written music previously, that sort of thing. So that’s why I’m collaborating with [Aimi] and will continue to work with [other brands] like that.

And also, the good thing about what [Aimi is] doing is that they’re building AI systems which are specifically linked to particular artists. They’re not offering an AI system that you can use to generate your own piece of music, released under your name, that’s trained on someone’s work. They’re offering an AI system that was trained on that person’s work, and it says it was trained on that person’s work. They fall on the right side of that [transparency] issue, in my opinion.

Do you believe AI will ever become conscious, capable of subjective, emotional experience?

Consciousness, as we understand it, is grounded in our existence as embodied biological machines living in a community and in the world. Until we have AI systems living among us and embodied in our world, I don’t think they could ever have anything approaching the sort of consciousness that we have, because it’s so fundamentally grounded in that embodiment in many ways.

There are so many [aspects of human consciousness] – particularly when it comes to the social world that we exist in – that will be nigh impossible to program [into an AI]. So if an AI were to become conscious, it’d probably be some very bizarre type of consciousness that would be very hard for us to communicate with, or even assess whether or not it is in fact conscious; I don’t even know if you’re conscious, let alone some machine that thinks in a totally different sort of way than we do.

