AI-powered robot rappers: a 100-year history

The word ‘robot’ is 100 years old. Czech playwright Karel Čapek coined the term in his play Rossum’s Universal Robots, which premiered in January 1921. Back then, most people in Europe knew someone who had died in the First World War. In that context, the idea of sending robots to war instead of humans seemed quite compelling. Nowadays, armies actually aim to downsize in preparation for more drones and combat robots.

A different, but related, development also gained momentum around 1921: the idea of noise generated by machines as music. This idea stems from the thinking of futurist Luigi Russolo and specifically his intonarumori.

Luigi Russolo & his assistant Piatti with the intonarumori (Photo by Hulton Archive/Getty Images)

These machines are not robots, but the idea that machine-generated sounds combine into music has had a profound impact. Here I explore how that idea currently plays into our thinking of what music is.

Robots & music in 2021

Thinking about robots in music right now often centers around AI music stars. A couple of months ago, I wrote how robots could create a connection between the virtual and physical worlds. What does it mean, though, to have a robot, or even just an artificial intelligence, create music?

FN Meka

A self-proclaimed ‘AI-powered robot rapper’, FN Meka has more than 9 million followers on TikTok. He just dropped a new track this month.

https://www.youtube.com/watch?v=TxrNcbcc8fo

FN Meka isn’t actually a robot: there’s a team of people behind the avatar, and while the music is AI-generated, the voice is human. Yet this is a great example of physical and virtual worlds colliding, similar to what we’ve seen with, for example, Gorillaz. The main difference from Gorillaz is the shift in music production from human to AI.

What Russolo’s intonarumori could actually do was make very specific noises that the composer related to everyday industrial sounds such as hissing, crackling, and exploding. In total there were 27 variations of the instrument. Inside each instrument, a lever controlled the pitch, either fixing it or sliding it across a scale in tones and gradations thereof. Which, in a way, isn’t too dissimilar to how Auto-Tune works, a technology FN Meka isn’t one to shy away from.
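The parallel can be made concrete. Auto-Tune’s core trick, in very simplified form, is deciding whether a pitch slides freely or snaps to the nearest step of a scale, much like the intonarumori’s lever. Here is a minimal sketch of that snapping step, assuming equal temperament and a C major scale (my illustrative choices, not a description of Antares’ actual algorithm):

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_scale(freq_hz, scale_degrees=(0, 2, 4, 5, 7, 9, 11)):
    """Snap a frequency to the nearest note of a scale (C major by default).

    This is the discrete 'fix' mode; returning freq_hz unchanged
    would correspond to the free 'slide' mode.
    """
    # Convert frequency to a continuous MIDI note number
    midi = 69 + 12 * math.log2(freq_hz / A4)
    # Enumerate the allowed pitches across a range of octaves
    candidates = [
        12 * octave + degree
        for octave in range(-1, 11)
        for degree in scale_degrees
    ]
    nearest = min(candidates, key=lambda n: abs(n - midi))
    # Convert the snapped note number back to Hz
    return A4 * 2 ** ((nearest - 69) / 12)

print(round(snap_to_scale(450.0), 1))  # 450 Hz, slightly sharp of A4, snaps to 440.0
```

Feeding in 450 Hz returns 440 Hz; skipping the snap and passing the input through unchanged is, in effect, the lever sliding freely.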

Visual impact

During the first months of the pandemic there was a clear shift from audio to video in terms of music consumption. This implied that people wanted to lean in more and pay more attention to what they listened to, with an added visual element. The rise of FN Meka on TikTok fits this narrative. He’s an engaging visual presence who entertains by drawing viewers into his world.

@fnmeka

Which song 🎵 is your Favorite ⁉️ #iphone13 #airpod #iphone

♬ original sound – FNMeka

The importance of this visual aspect to musical culture is what prompted researchers at Georgia Tech in the US to create Shimon, a musical robot that looks, well, kind of cute.

The visual cues – like the ‘head’ bopping to the beat – are important for both audiences and fellow musicians to help connect to Shimon. Moreover, the researchers who developed the robot not only drew on artificial intelligence – more specifically deep learning – but also on creativity algorithms. This means that Shimon has the power to surprise, to sound creative in his own way.
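One common way such systems produce surprise (a generic technique, not necessarily what Shimon’s creators used) is temperature sampling: instead of always picking the most probable next note, the model samples from a flattened distribution, so unlikely, surprising notes occasionally win. A toy sketch with invented note probabilities:

```python
import math
import random

def sample_note(note_probs, temperature=1.0, rng=random):
    """Sample a note from a probability distribution.

    Temperature controls surprise: low values favour the likeliest
    note; high values make unlikely (surprising) notes more probable.
    """
    # Rescale log-probabilities by temperature, then sample
    logits = {note: math.log(p) / temperature for note, p in note_probs.items()}
    total = sum(math.exp(l) for l in logits.values())
    r, cumulative = rng.random() * total, 0.0
    for note, logit in logits.items():
        cumulative += math.exp(logit)
        if r <= cumulative:
            return note
    return note  # fallback for floating-point edge cases

probs = {"C": 0.7, "E": 0.2, "G": 0.1}
# At very low temperature the model plays it safe:
print(sample_note(probs, temperature=0.01))  # almost certainly "C"
```

Raising the temperature above 1.0 is where the “surprise” lives: the model starts reaching for the E and G it would otherwise rarely choose.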

Creative robots?

The connection between creativity and machines is significant, because it allows for a future that exists beyond the boundaries of what we know right now. When Shimon surprises his fellow musicians by shifting rhythm or pitch he softly pushes a boundary that often seems very far away.

In his blog series on the AI Revolution, Tim Urban explained how developments in AI research are propelling us towards artificial superintelligence: the moment computers become more intelligent than humans. What Urban doesn’t discuss is creativity, a trait some argue will never appear in a machine. And yet, this all depends largely on how we define creativity. Arthur Miller, for example, asks not just whether machines can make art in the future but also whether we will be able to appreciate it. Perhaps we will have to learn to love it, similar to how some of us enjoy the sound of Auto-Tune and others do not.

While the threat of a superintelligent being is not something to dismiss, right now any AI-powered music still leans heavily on human input. To create music with an AI, so to speak, is to create a set of parameters, a training set, which the machine uses to process sound. For her album Proto, Holly Herndon worked with an AI of her own, which she called Spawn. From the data she fed into the machine, Herndon was able to draw a voice. Incorporating this voice into the music in a way that felt musical, or creative, then meant splicing and editing that vocal output.
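A toy sketch can make the ‘training set as parameters’ idea concrete. Spawn is a far more sophisticated neural system, but even a first-order Markov chain shows the principle: everything the model can output is drawn from the statistics of what it was fed. The note sequences below are invented for illustration:

```python
import random
from collections import defaultdict

def train(sequences):
    """Count note-to-note transitions in the training set."""
    transitions = defaultdict(list)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, rng=random):
    """Walk the transition table to produce a new sequence."""
    seq = [start]
    while len(seq) < length and transitions[seq[-1]]:
        seq.append(rng.choice(transitions[seq[-1]]))
    return seq

training_set = [["C", "E", "G", "E", "C"], ["C", "G", "C", "E", "G"]]
model = train(training_set)
print(generate(model, "C", 8))
```

Every consecutive pair in the output is a transition that occurred somewhere in the training set, which is exactly why shaping the input data shapes the voice.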

The final leap

What Herndon does is similar to what Russolo did with his intonarumori. He had 27 instruments to recreate the sounds he heard and aimed to combine those into a music that fit into a certain tradition of composition. She built not a physical machine, but a synthetic singer whose voice she created with data input and subsequently rearranged into her musical vision. FN Meka plays around with the idea of an AI-powered robot but leans more heavily into the visual culture of a virtual music star. Where the next jump in history sits, then, is closer to what Shimon stands for: a robot capable of listening and by virtue of his data, his knowledge, being able to jump just that bit further than the training set supplied to him. That already leads to surprise, which in itself is a prerequisite for creativity.

How the rise of Authorless Music will bring Authorful Music

Forty thousand. That’s the number of songs being added to Spotify every day. Per year, that’s nearly 15 million. With AI, we are approaching a world where we could easily create 15 million songs per day. Per hour even. What might that look like?
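For the curious, the arithmetic behind those figures:

```python
# Songs added to Spotify daily, extrapolated to a year
per_day = 40_000
per_year = per_day * 365
print(f"{per_year:,} songs per year")  # 14,600,000, i.e. nearly 15 million

# The hypothetical AI scenario flipped around: 15 million songs per day
ai_per_second = 15_000_000 / (24 * 60 * 60)
print(f"about {ai_per_second:.0f} songs per second")  # about 174 per second
```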

Can music experiences performed by robots be Authorful? (photo: Compressorhead)

The music trend we can most linearly extrapolate into the AI age is that of utilitarian music: instead of putting on an album, we put on workout music playlists, jazz for cooking, coffee time Sunday, music for long drives.

Artists have become good at creating music specifically for contexts like this. It often forms a big consideration in marketing music, but also in the creation process itself. But an artist can’t be everywhere at once. AI can and will be. Meaning that for utilitarian music, artificial intelligence will have an unfair advantage: it can work directly with the listener to shape much more gratifying, functional music experiences.

This will lead to the rise of Authorless Music. Music without a specific author, besides perhaps a company or algorithm name. It may be trained by the music of thousands of artists, but for the listener it will be hard to pinpoint the origins back to all or any of those artists.

Do we want Authorless Music? Well, not necessarily. However, if you track music consumption, it becomes obvious that for certain types of music listening the author is not important at all. Yet we crave humanity, personality, stories, context.

Those familiar with trend watching and analysis know to keep their eyes open for counter trends. When more of our time started being spent on social platforms and music became more anonymous due to its abundance, what happened? We started going to festivals in numbers never seen before. So what counters Authorless Music?

The counter trend to Authorless Music is Authorful Music. Although there will be a middle space, for the sake of brevity I’ll contrast the two.

           Authorless Music                Authorful Music
Origin     AI-created or obscure           Human-created (ish)
Focus      Specialised in function         Specialised in meaning
Relation   Little emotional involvement    Strong emotional involvement
Trait      Personalized                    Socialized

Authorless Music: primarily driven by AI, or the listener is unable to tell whether the listed artist is a real person or an algorithm. The music is specifically targeted towards augmenting certain activities, moods, and environments. Due to its obscure origin, the listener has little emotional involvement with the creator (although I’m looking forward to the days when we can see the fan bases of AI algorithms argue with each other about who’s the real King / Queen of AI pop). In many cases it will be personalised to the listener’s music taste, environment, weather, mood, etc.

Authorful Music: primarily created and / or performed by tangible people or personalities. It will be focused on shaping meaning, as it is driven by human intent, which embeds meaning by default. This type of music will maintain a strong emotional link between artists and their fans, as well as among fans themselves. This music exists in a social way: even music without lyrics, such as rave music, exists in a social context and can communicate meaning, context, and intention.

With the increasing abundance of music (15 million tracks per year!), the gateway to Authorless Music has been opened. What about Authorful? What experiences will we craft in a mature streaming landscape?

Two important directions to pay attention to:

Socialising music experiences

It’s so easy to make and manipulate music on our smartphones now, whether as standalone music or accompanying something on Instagram or TikTok. One reason for the massive amount of music being added to streaming services is that it’s easier than ever to make music. With apps that make it easy for people to jam around with each other, we’ll see a space emerge that produces fun tools and basically treats music as communication. This happens on smartphones but is strongly complemented by the virtual reality and gaming space.

See: JAM, Jambl, Endlesss, Figure, Smule, Pacemaker.

Contextualising music experiences

There is a lot of information around music. What experiences can be created by exposing it? What happens when the listeners start to enter the space between creator and listener and find their own creative place in the music through interaction? (I previously explored this in a piece called The future of music, inspired by a cheap Vietnamese restaurant in Berlin)

Examples of this trend: lyrics annotation community Genius, classical music streaming service IDAGIO, and projects like Song Sommelier.

Special thanks to Data Natives, The Venue Berlin, and Rory Kenny of JAM for an inspiring discussion on AI music recently. You’ve helped inspire some of these thoughts.

New to MUSIC x? Subscribe to the free newsletter for regular updates about innovation in music. Thousands of music professionals around the world have gone before you.

What playing around with AI lyrics generation taught me about the future of music

Will AI replace human artists? What would the implications be? These questions grip many in the music business and outside of it. This weekend I decided to explore some lyric generation apps and see what I could get out of them – learning a thing or two about the future of music along the way.

Below I’ve posted the most coherent lyrics I managed to get out of one AI tool. I’m dubbing the song Purple Sun.

Image with a purple sun
What I imagine the song’s artwork to look like.

You can make the sun turn purple
You can make the sea into a turtle

You can turn wine into water
Turn sadness into laughter

Let the stars fall down
Let the leaves turn brown

Let the rainwoods die
Let wells run dry

I love the turtle line. I guess the algorithm struggled with rhyming purple.

Two lines down is a wine / water line. Initially I was impressed by having a western cultural reference. But hold up… turning wine into water? That’s just evil.

Read it over once more. Or twice. The more I read it, the more convinced I became that humans are obviously the superior songwriters.

But you know what, I’ve been lying to you.

The origins of the above lyrics are actually human, from a 90s rave song called Love U More by DJ Paul Elstak.

And they carry meaning. A lot of meaning to a whole generation of people in The Netherlands and other parts of Europe. Myself included. The meaning comes not necessarily from what the intent of the lyrics is. It comes from the music, nostalgia, memories, associations.

This is listener-assigned meaning. As soon as you release music, you give over control of the narrative to an audience. Artistic intent may have a lot of sway, but sometimes a song that’s a diatribe against fame turns into something stadiums full of drunk people chant.

A few statements to consider:

  1. AI has a role as a tool to be used by people to apply their creativity.
  2. Not all successful human created art objectively requires a lot of skill.
  3. Creativity doesn’t end with the creator. The creator sets intent, the listener assigns meaning.

Let’s pair #1 and #3. In the first statement I talk about people, rather than mentioning specific roles as in the third statement. That’s because AI allows more people to be creative, whether as listener, creator, or the space in between.

It’s this space in between that will be impacted and shaped by AI. Think of the dadabots projects, such as their infinite neural network generated death metal stream, apps like JAM, Jambl, and Endlesss which allow people to express themselves musically in easy ways, or technologies that turn music into something more adaptive like Bronze and FLUENT (disclaimer: I’m an advisor to the latter). Not all of the above use AI, but all cater to this space in between listener and creator.

The reason I added statement #2 is that AI-created music doesn’t necessarily have to be objectively good. Music is subjective. Its success depends on how well it can involve the listener. That’s why AI is destined to be the most important force for the future of music in a more creative world.

Credits for the lyrics above: Lucia Holm / Paul Carnell. Thank you for the wondrous energy, the memories, the music.

Image via Rising Sun.


Postinternet Music

The third internet generation for music is here.

Purpose

MUSIC x TECH x FUTURE is on a bit of a hiatus. I started it 2 years ago with the goal of shedding light on topics that I felt were being neglected.

Two years later, I feel more positive about the conversation in the music business. Besides that, great newsletters (like Platform & Stream) and writers (like Cherie Hu) have emerged and cover a lot of the topics I set out to cover with MUSIC x TECH x FUTURE. So what role can I play now in moving the conversation forward?

I have been doing a lot of thinking about what’s next. How will all these trends we discuss combine? What are we not talking about? Where are the opportunities? What is the next generation of artists doing? What do they know that we don’t?

By thinking about this, I have slowly been reinventing MUSIC x TECH x FUTURE along with the topics I cover. Music as a business is a complex ecosystem. Music as a phenomenon has kept generations of musicologists and philosophers occupied in discussions without conclusions for millennia. The question I have been answering is: what do I find important and what is nobody talking about?

Inspiration

By focusing on innovation in music, and always expanding my musical and artistic horizons, I have seen some developments over the last year that are starting to click together. I am now of the opinion that we are seeing the emergence of an important new generation of music that is going to spawn its own ecosystem.

Broadly speaking, music & the internet has had two phases so far:

Phase 1: the great disruption

Let’s call it the Napster moment. It led to the first new status quo. The rule it imposed was this: “anything that can be stored in digits can be communicated digitally through networks.” (This rule has also been called “information wants to be free.”) It introduced music, and its business ecosystem, to the age of networks. Instead of moving products through distribution and media channels, music now moved through networks… and anyone who wanted to play the game no longer had to find a way into the channels: everyone was on the network.

MySpace Tom: a friend for everyone

Phase 2: the MySpace moment

This phase is probably heralded by what I call the MySpace moment. MySpace grew as piracy thrived. Communities formed. We understood what social media could mean for music. Then MySpace collapsed and there was nothing there to take its place. Instead, the smartphone enabled the next generation of music and social platforms. On-demand music services like Spotify and SoundCloud appeared — both making an impact on modern music culture far exceeding MySpace’s. Communities formed again.

Phase 3: the SoundCloud moment

So what’s phase 3? The streaming economy is maturing. We are still figuring out exactly how it will work; let the constant lawsuits between musicians, songwriters, labels, and streaming services be a testament to that. The shitty smartphones we used to have have been traded in for phones more powerful than the computers on our desks a few years ago. AND they have cameras on both sides, AND we have fast internet, ALL the time. Cue YouTubers, Instagram stars, as well as producers rebooting their careers by becoming Snapchat personalities. 🔑

Meme culture went mainstream. People retiring now, with lots of free time on their hands, have been using the internet for 20 years. People reaching maturity now don’t know a world without the internet. They may have been carrying smartphones before taking their first chemistry class. This introduces new questions and phenomena in our culture and in music. A 2017 headline that captured one of those phenomena well was: “Rap’s Biggest Stars Are Depressed & So Are Their Fans”.

Net art commenting on internet & mental health.

OK OK OK SO WHAT IS PHASE 3?!

I can’t tell you. We can only see it once it’s there. But I can tell you how to be part of it.

With each of these shifts media culture shifted, so you have to look at what changes media culture is going through right now. Artificial intelligence, voice activated devices, augmented reality, and virtual reality all play tremendously important roles here. We still don’t know what the SoundClouds, Facebooks, Spotifys, PewDiePies and Justin Biebers (discovered through YouTube) of this phase will be, but we do know what technologies and media formats they may employ.

When MySpace started collapsing, everyone wanted to figure out what the ‘next MySpace’ would be. There was no next MySpace. Not in the way anyone was thinking about it. Ultimately, Facebook and SoundCloud filled that gap and took things way further than MySpace.

So what would the next SoundCloud look like?

This is what I know about the next SoundCloud. It can be clunky. In fact, it may be better if it’s not easy to use (e.g. Snapchat): kids will spend time figuring out how to move into virtual spaces where they can do their own thing. P2P services were not easy to use at first, torrents weren’t easy to use, and as elegant as it was, SoundCloud was not as easy to use as MySpace in its early days as long as you were trying to use it for MySpacey purposes.

It has to do 1 thing extremely well though (let’s call it ‘killer feature’). I remember that SoundCloud’s waveform & commenting feature was so great that artists were learning basic code, so they could remove MySpace’s standard players from their profiles and add SoundCloud’s waveform.

Then it has to have high cultural appeal. The waveform helped SoundCloud travel. It was cool. It’s hard to say what it will be like for the next SoundCloud… But perhaps it’s a cryptotoken. Blockchain is cool and cryptocurrencies are cool. They have cultural appeal, partly because of their association with ordering drugs online via the Tor network. But also because they represent dissent against the status quo, whether that’s valid or invalid. And the first cryptocurrency millionaires in music are already here. 50 Cent.

Perhaps Mat Dryhurst, a prolific thinker and artist (some may know him from his work with Holly Herndon), will be proven right and we will see a tokenized SoundCloud. Fingers crossed, because I admire what they’ve done and the role they’ve played in helping modern music & internet culture take shape.

But what about…

We assume too often that what comes next follows more or less linearly from what was there before. By doing so, we discount important developments and blind ourselves to their potential impact. In previous paragraphs, I have done exactly that. So it’s time to clean up my mess.

What is internet culture?

First of all, I need to clarify what I mean when I talk about internet culture or online culture. I am talking about audiovisual aesthetics, language, cultural memes like jokes, discourse about identity, politics, society and psychology. These emerge online. From bedrooms. From people of all ages and countries, connecting online to collaborate, iterate, remix, and discuss in virtual space.

This has manifested through music genres like vaporwave and nightcore (example below), but also more serious topics, such as a cultural emphasis on mental health, and identity (most notably gender identity). Then there’s a darker side to it too. The alt right has been able to create so much impact, from bedrooms, by using the same internet culture dynamics that previous examples utilize — eventually memeing Trump into the White House. They accomplished it as part of an alliance of mostly pre-internet organisations, institutions, and structures, but those organisations couldn’t have pulled this off without their internet army.

When I talk about internet culture, or online culture, I do not mean to suggest a separation between online and offline. I’m just pointing at the origin. As a matter of fact, the internet has become such a standard part of our lives that we are online even when we’re offline.

On a free weekend day, leave your phone at home. Go explore the city. Go to parts you’ve never been. Soon, you may get lost and want to check Google Maps. You may see something fascinating that you’d like to photograph and share on Instagram or Facebook. You might take a mental note to look that building up on Wikipedia when you get home to get more history.

By now, our minds are always online. Even when we believe we’re offline.

Always online

This is the number 1 thing that changed over the course of aforementioned phase 2. Even when smartphones arrived, we weren’t online all the time. But now we are. The fact that we are always carrying devices around that are connected to fast internet, with cameras on both sides, and with great screens compared to those 5–10 years ago, is one of the most important realities for the future of music.

Musical.ly, sold last year for around $1bn, comes to mind.

Mixed reality

How platforms deal with ‘mixed reality’ may be as crucial as the question of how the previous generation dealt with the rise of the smartphone. Back in Facebook’s younger days, the company was struggling to crack mobile and eventually took drastic measures to become mobile-first. Getting ahead of the problem this time, Facebook entered the virtual reality space in 2014 through the early acquisition of Oculus VR for $2bn.

But I don’t think it’s VR as a medium that will have the high cultural impact that the internet did. I think it’s about the interface to other aspects of our experience. It’s why I believe the below video of Mark Zuckerberg’s wife, Priscilla Chan, calling Mark from ‘the real world’ while he’s in a VR version of his home, was one of the most important tech showcases last year.

Skip to 4:50 if the video doesn’t auto-play from there.

Offline and online is blurring, so what does that imply for music?

Instreaming

Late last year I attended a gig whose significance has really fallen into place since. A friend from Holland (Victor, also known as S x m b r a) was coming to Berlin to perform. I met him when he was mostly known for writing for Generation Bass, an important blog for underground bass music culture. He is extremely plugged in and knows so much about trends in music (particularly online niches), so I really trust him as a music curator.

He is also part of something called c a r e, which is described as:

c a r e is a post-internet party taking place online.
c a r e is about sharing together. c a r e is a future sensation.
this digital experience enables you to connect with internet kids worldwide. it also provides the opportunity to meet and discover artists and people which have common interests. we are a based world community that meets at url parties. we are glad to invite you to this virtual concept of partying. we hope you’ll enjoy the event! see you online.

Through c a r e, he teamed up with an interdisciplinary collective called Clusterduck, which specialises in internet culture. Together they organised a “url / irl party” as part of Clusterduck’s Internet Fame project, which is part of the Wrong Biennale, a global event celebrating digital art.

During the event, an audio & video stream connected people from their bedrooms to the ‘irl’ event. These people could interact with each other online, but they were also “instreamed”: their chat messages & webcam feeds on Tinychat were shown inside the party. The founder of c a r e, who wasn’t present in person, was even billed on the poster and broadcast a DJ set from url to the irl space in Berlin.

A lot of people at the ‘irl’ part of the event were familiar with some of the people they saw on the ‘url’ part displayed on a prominent screen above the dancefloor & bar. So it created this sense of community & connection and blurring of irl & url.

You could walk into such an event and think it’s just some young folks who set up some webcams, but when you see it as part of the greater trends in our all-absorbing media & tech culture, what was happening there becomes way more significant.

Internet culture and music

I will be going way deeper into this in future articles and newsletters, but I want to give you an example of what I think people should be paying attention to.

For example, the Sponsored Content album by an artist called Antwood. It’s a perfect example of the post-internet avant-garde expression in music. Antwood:

“In the past year, I found that ASMR [dubbed by Google as the biggest YouTube trend you’ve never heard of], which I had previously used as a source of foley in my music, was a fairly effective sleep aid. I’d been using the videos in this way for a few months, when I noticed a popular ASMR YouTuber announced a plan to incorporate ads into her videos; quiet, subtle ads, woven into the content. What bothered me about this was that these ads would target viewers, such as myself, during times of semi lucid vulnerability. This disturbed me, and I unsubscribed.

Sponsored Content explores this idea of subversive advertisement, at least superficially. It’s obviously about the ubiquity of ads and the commodification of online content. The unlikely placement of ads in the music aims to force the listener to become hyper-aware of being advertised to rather than passively internalizing it. But after the record was finished, it became undeniable that really it wasn’t so much a “concept record” about advertisement; it’s as much about intentionally devaluing the things I’ve invested myself into, and over-complicating my work. When I realized this, I considered taking the ads out, and playing the music straight. But I left the record as it is: honest, flawed, with a little humour, and slightly up its own ass.”

I’ve compiled over 25 hours of albums and releases that I feel adhere to this trend in music (Spotify playlist). The playlist skews towards the club & nightlife variants of this trend, but the visual and musical aesthetics & themes should give you a good understanding of what this is about. The most famous example is probably Arca, who has produced for Kanye West and Björk.

Aforementioned Holly Herndon, who toured with Radiohead, uses AI in her work: “We have an AI baby that we’re training on our voices; on our voices and on the voices of our ensemble. Yeah, it’s learning how to talk and how to sing, so it’s freaking weird”.

Another great example of the post-internet trend in arts and music is YouTuber Poppy, who recently released an album called Poppy.Computer on Mad Decent.

Besides the obvious commentary on internet culture & society on her channel, Poppy plays with the uncanny valley hypothesis of robotics professor Masahiro Mori. The hypothesis suggests that humans feel fine with robots that are obviously not human, but the more closely these robots resemble humans, the stronger our feelings of eeriness and revulsion.

In music, perhaps the best known example of a post-internet genre is vaporwave:

The Virtual Vaporwave Scene

From boardroom to bedroom

Over the last 2 years, I have written a lot about the music business ecosystem, always with an innovative angle, but often focused on the type of big issues that are discussed and decided in boardrooms. While those things are immensely important, they are also reactive. Reaction doesn’t set trajectory; it can only adjust it.

My focus is going to shift from the boardroom to the bedroom. From complex issues with big financial implications, to profound ideas that may not always have a clear link to monetization. It is a focus on the creator, the inventor, the innovator.

The newsletter has always placed emphasis on utility. I want what I do to be useful in some way. The most important way I try to do that is by showing what is next, which I will continue to do. What is next is already here: you just have to know where to look.

This is our culture we are talking about. That is primary.
That is what enables the business around it. Which is secondary.

MUSIC x TECH x FUTURE. Those words say it all.

(This post originally appeared on Medium, which I’m moving away from. When you can avoid the large platforms, you should.)

Free competes with paid and abundant competes with scarce

Facebook recently launched a sound library with tracks you can use for free on videos. People criticized the concept in a music business discussion group (also on Facebook, ironically). The rhetoric echoed what people say about bands performing for free: that it’s not just bad practice, it’s also bad for your peers.

But let’s look at the reality that people in music are complaining about.

1. There are many different types of artists

There are always going to be people who find it awesome to see their music used by other people: even if they don’t get any money for it. They may be college students who are just happy to see their music travel. They may be people working full time jobs who do a little music on the side and don’t depend on the income. They may be professional producers who put out these tracks to libraries as a type of calling card.

Either way: there is always going to be free music and you will always have to compete with it.

2. Giving your music away for free can actually work

You have to have a monetization strategy at the end of this, but the easiest way to win attention online is to make great ‘content’ (in this case music). This content should be available with as few barriers as possible: which means making sure it’s available for free. The second part of your strategy should include steps on 1) how to hold people’s attention after you capture it, and 2) how to identify opportunities to monetize your fanbase (I wrote about it in detail in this thesis).

But sometimes you don’t need a monetization strategy. It’s not easy to get signed to big labels nowadays, and it usually requires you to show that you can build up your own audience. One of my favourite examples of someone who successfully leveraged free is Alan Walker, an EDM artist with tracks that have more plays than some of the most popular tracks from stars like Kendrick Lamar. How? He released his somewhat odd music through NoCopyrightSounds, which specialised in providing YouTubers and Twitch streamers with music they could use for free, without fear that their videos would get taken down. Eventually, they soundtracked a whole subculture and put a new sound in EDM on the map (read more).

3. AI is going to one up everyone

We are seeing amazing developments in AI. The most recent example is Google DeepMind‘s AlphaZero, which beat the world’s best chess engine after just 4 hours of practice. Startups from Jukedeck to Amper to Popgun to Scored are all trying to make music generation easier.

We already see more music being released than ever before, but so far it has still depended on human output. Through AI, music is already being untethered from human productivity. Standing out in abundance is a minuscule problem compared to what it will be 5 years from now.

Free music libraries are the least of your problem

There is no singular music business or industry. Everyone is playing by different rules and all those rules will be upended every time there’s a big shift in technology. From the record player, to the music video, to the internet, to AI and blockchain, music is the canary in the coal mine and you have to have a pioneer mentality or else you are falling behind every day.

The people who are one step ahead may be underground today, but some are the stars of tomorrow.

By all means, let us discuss the ethics. But be careful not to let your opposition blind you to the point where you cannot see how a new generation of music is thriving and leaving you behind. Because then it’s too late. For you.

What the End of the App Era Means for the Music Business

The average smartphone user downloads fewer than one app per month, according to comScore. The era of apps is ending, and we’re moving into an era of artificial intelligence interacting with us through messaging apps, chatbots, voice-controlled interfaces, and smart devices.

What happens to music in this context? How do you make sure your music stands out? How do you communicate your brand when the interface goes from visual to conversational? And what strategic opportunities and challenges does the conversational interface present to streaming services?