Why the next big innovation in music will change music itself, and how our moods are in the driver's seat for that development.
Over the last half year, I've had the pleasure of publishing two guest contributions in MUSIC x TECH x FUTURE about our changing relationship with music.
In the first, Thiago R. Pinto pointed out how we're now using music to augment our experiences and how we have developed a utilitarian relationship with music.
Then last week, James Lynden shared his research into how Spotify affects mood and found out that people are mood-aware when they make choices on the service (emphasis mine):
Overall, mood is a vital aspect of participants' behaviour on Spotify, and it seems that participants listen to music through the platform to manage or at least react to their moods. Yet the role of mood is normally implicit and unconscious in the participants' listening.
Having developed music streaming products myself, like Fonoteka, when I was at Zvooq, I'm obviously very interested in this topic and what it means for the way we structure music experiences.
Another topic I love to think about is artificial intelligence and generative music, as well as adaptive and interactive music experiences. In particular, I'm interested in how non-static music experiences can be brought to a mass market. So when I saw the following finding (emphasis mine), things instantly clicked:
In the same way as we outsource some of our cognitive load to the computer (e.g. notes and reminders, calculators etc.) perhaps some of our emotional state could also be seen as being outsourced to the machine.
For the music industry, I think explicitly mood-based listening is an interesting, emerging consumption dynamic.
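To make that dynamic concrete, here is a minimal sketch of what explicitly mood-based selection could look like, assuming tracks are annotated with Echo Nest-style valence and energy scores. The track data, mood targets and ranking rule below are hypothetical, purely for illustration.

```python
# Hypothetical catalogue: each track carries "valence" (musical positivity)
# and "energy" scores between 0 and 1, in the style of Echo Nest audio features.
TRACKS = [
    {"title": "Track A", "valence": 0.9, "energy": 0.8},
    {"title": "Track B", "valence": 0.2, "energy": 0.3},
    {"title": "Track C", "valence": 0.6, "energy": 0.4},
]

# Hypothetical mood targets expressed in the same (valence, energy) space.
MOODS = {
    "upbeat": (0.9, 0.8),
    "calm":   (0.5, 0.2),
    "moody":  (0.2, 0.4),
}

def playlist_for_mood(mood: str, tracks=TRACKS, moods=MOODS):
    """Rank tracks by how closely they match the requested mood."""
    target_valence, target_energy = moods[mood]

    def distance(track):
        # Simple Euclidean distance in (valence, energy) space.
        return ((track["valence"] - target_valence) ** 2
                + (track["energy"] - target_energy) ** 2) ** 0.5

    return sorted(tracks, key=distance)

if __name__ == "__main__":
    for track in playlist_for_mood("calm"):
        print(track["title"])
```

A real service would combine such signals with listening history and context, but the core idea stays the same: the mood, not the catalogue, becomes the entry point.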
Mood augmentation is the best way for non-static music to reach a mass market
James is spot-on when he says mood-based listening is an emerging consumption dynamic. Taking a wider view: the way services construct music experiences also changes the way music is made.
The playlist economy is leading to longer albums, but also to tracks optimized for lower skip rates in the first 30 seconds. This is nothing compared to the change music went through in the 20th century:
The proliferation of the record as the default way to listen to music meant that music became a consumer product. Something you could collect, like comic books, and something that could be manufactured at a steady pace. This reality gave music new characteristics:
- Music became static by default: a song sounding exactly the same as all the times you've heard it before is a relatively new quality.
- Music became a receiving experience: music lost its default participative quality. If you wanted to hear your favourite song, you had better be able to play it yourself, or have a friend or family member with a nice voice.
- Music became increasingly individual: while communal experiences, like concerts, raves and festivals flourished, music also went through individualization. People listen to music from their own devices, often through their headphones.
Personalized music is the next step
I like my favourite artist for different reasons than my friend does. I connect to their music differently. I listen to it at different moments. Our experience is already different, so why should the music not be more personalized?
I've argued before that features are more interesting to monetize than pure access to content. $10 per month for all the music in the world: and then?
The gaming industry has figured out a different model: give people access to the base game for free, and then charge them to unlock certain features. Examples of music apps that do this are Björk's Biophilia and the mixing app Pacemaker.
In the streaming landscape, TIDAL has recently given users a way to change the length and tempo of tracks. I'm surprised that it wasn't Spotify, since they have The Echo Nest team aboard, including Paul Lamere, who built the Infinite Jukebox (among many other great music hacks).
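For a sense of how small the technical step is, here is a minimal sketch of user-controlled tempo adjustment using the open-source librosa library. This is not how TIDAL or Spotify implement the feature, and the file name and stretch rate are placeholders.

```python
import librosa
import soundfile as sf

# Load the track (y: audio samples, sr: sample rate). "song.wav" is a placeholder.
y, sr = librosa.load("song.wav", sr=None)

# Time-stretch the audio: rate > 1 makes the track shorter and faster,
# rate < 1 makes it longer and slower, without changing the pitch.
y_faster = librosa.effects.time_stretch(y, rate=1.25)

# Write the adjusted version back to disk.
sf.write("song_faster.wav", y_faster, sr)
```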
But it's early days. And the real challenge in creating these experiences is that listeners don't know they're interested in them. As quoted earlier from James Lynden:
The role of mood is normally implicit and unconscious in the participants' listening.
The most successful generative music and soundscape apps so far have been those that generate sound to help you meditate or focus.
But as we seek to augment the human experience through nootropics and technology that improves our senses, it's clear that music as a static format no longer has to be the default.
Further reading: Moving Beyond the Static Music Experience.