AI is about to shake up music forever but not in the way you think – BBC Science Focus Magazine

Posted: June 13, 2021 at 12:44 pm

Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here and it's coming for your jobs.

That's, at least, what you might think after considering the ever-growing sophistication of AI-generated music.

While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with musicians such as François Pachet creating entire albums co-written by AI.

Some have even used AI to create new music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.

Even stranger, this July countries across the world will compete in the second annual AI Song Contest, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you're wondering, the UK scooped more than nul points in 2020, finishing in a respectable 6th place.)

But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon make musicians obsolete?

To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below, he explains how AI music is composed, why this technology won't crush human creativity and how robots could soon become part of live performances.

Music AIs use neural networks, which are really large collections of simple computing units that try to mimic how the brain works. And you can basically throw lots of music at this neural network and it learns patterns, just like how the human brain does by repeatedly being shown things.

What's tricky about today's neural networks is that they're getting bigger and bigger. And it's becoming harder and harder for humans to understand what they're actually doing.

We're getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don't really understand the details of what it's doing.
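The pattern-learning idea described above can be sketched in miniature. Real music AIs use large neural networks, but the core loop is the same: show the system existing music, let it count or weight the patterns it sees, then sample those patterns to generate something new. The toy below swaps the neural network for a simple first-order Markov chain over note names (a deliberate simplification, and the melody is made up for illustration):

```python
from collections import Counter, defaultdict
import random

# A made-up training melody: the "data" we throw at the model.
melody = ["C", "E", "G", "E", "C", "E", "G", "C", "E", "G", "E", "C"]

# "Training": count which note tends to follow each note.
transitions = defaultdict(Counter)
for prev, nxt in zip(melody, melody[1:]):
    transitions[prev][nxt] += 1

def generate(start, length, seed=0):
    """Generate a new melody by sampling the learned transition counts."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        counts = transitions[notes[-1]]
        choices, weights = zip(*counts.items())
        notes.append(rng.choices(choices, weights=weights)[0])
    return notes

print(generate("C", 8))
```

The same shape scales up: replace the transition table with a neural network and the twelve-note melody with twenty years of pop music, and you have the black box the professor describes, only with billions of learned weights instead of a handful of inspectable counts.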

These neural networks also consume a lot of energy. If you're trying to train AI to analyse the last 20 years of pop music, for instance, you're chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we're going to have to question whether the environmental impact is worth this new music.

I'm a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is likely still a human selecting which ones they think are nice or enjoyable.

There's a little bit of smoke and mirrors going on with AI music at the moment. You can throw Amy Winehouse's back catalogue into an AI and a load of music will come out. But somebody has to go and edit that. They have to decide which parts they like and which parts the AI needs to work on a bit more.

The problem is that we're trying to train the AI to make music that we like, but we're not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.

I'm also kind of a sceptic on that one as well. AI can generate lyrics that are interesting and have an interesting narrative flow. But lyrics for songs are typically based on people's life experiences, what's happened to them. People write about falling in love, things that have gone wrong in their life or something like watching the sunrise in the morning. AIs don't do that.

I'm a little bit sceptical that an AI would have that life experience to be able to communicate something meaningful to people.


This is where I think the big shift will be: mash-ups between different kinds of musical styles. There's research at the moment that takes the content of one kind of music and puts it in the style of another, exploring maybe three or four different genres at once.

While it's difficult to try these mash-ups in a studio with real musicians, an AI can easily try a million different combinations of genres.

People say this with every introduction of new technology into music. With the invention of the gramophone, for example, everybody was worried, saying it would be terrible and the end of music. But of course, it wasn't. It was just a different way of consuming music.

AI might allow more people to make music, because it's now much easier to make a professional-sounding single using just your phone than it was 10 or 20 years ago.

A woman interacts with an AI music conductor during the 2020 Internet Conference in Wuzhen, Zhejiang Province of China. © Getty

At the moment, AI is like a tool. But in the near future, it could be more of a co-creator. Maybe it could help you out by suggesting some basslines, or give you some ideas for different lyrics that you might want to use based on the genres that you like.

I think the co-creation between the AI and the human as equal creative partners will be the really valuable part of this.

AI can create a pretty convincing human voice simulation these days. But the real question is why you would want it to sound like a human anyway. Why shouldn't the AI sound like an AI, whatever that is? That's what's really interesting to me.

I think we're way too fixated on getting the machines to sound like humans. It would be much more interesting to explore how it would make its own voice if it had the choice.

I love musical robots. A robot that can play music has been a dream for so many for over a century. And in the last maybe five or 10 years, it's really started to come together, where you've got the AI that can respond in real-time and you've got robots that can actually move in very human and emotional ways.

The fun thing is not just the music that they're making, but the gestures that go with the music. They can nod their heads or tap their feet to the beat. People are now building robots that you can play with in real-time in a sort of band-like situation.

What's really interesting to me is that this combination of technology has come together where we can really feel like it's a real living thing that we're playing music with.

Yeah, for sure. I think that'd be great! It will be interesting to see what an audience makes of it. At the moment it's quite fun to play as a musician with a robot. But is it really fun watching robots perform? Maybe it is. Just look at Daft Punk!

Nick Bryan-Kinns is director of the Media and Arts Technology Centre at Queen Mary University of London, and professor of Interaction Design. He is also a co-investigator at the UKRI Centre for Doctoral Training in AI for Music, and a senior member of the Association for Computing Machinery.

