Google Glass

When augmented reality converges with AI and the Internet of Things

The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.

Remember when people said mobile was going to take over?

Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.

And mobile will, sooner or later, be replaced by augmented reality devices, and they will look nothing like Google Glass.

Google Glass
Not the future of augmented reality.

Why some predictions fail

When you view trends in technology in isolation, you inevitably end up misunderstanding them. What happens is that we freeze time, take a trend, and project that trend’s future into a society that looks almost exactly like today’s society.

Past predictions about the future
Almost.

This drains topics of substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three such domains right now are blockchain, messaging bots, and virtual reality, although I count myself lucky to know a lot of brilliant people in these areas, too.

What I’m trying to say is: just because something is hyped doesn’t mean it doesn’t deserve your attention. Don’t believe the hype, and dig deeper.

The great convergence

In order to understand the significance of a lot of today’s hype-surrounded topics, you have to link them. Artificial intelligence, smart homes & the ‘Internet of Things’, and augmented reality will all click together seamlessly a decade from now.

And that shift is already well underway.

Artificial intelligence

The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-player characters (NPCs), or ‘bots’, would have scripts that learned from my behaviour, so they’d get better at defeating me. That seemed amazing, but their behaviour remained predictable.

In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.

This is also what has given rise to the whole chatbot explosion right now. Our user interfaces are changing: instead of doing things ourselves, explicitly, AI can be trained to interpret our requests or even predict and anticipate them.

Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control? Or a mouse & keyboard? But in the future, getting things done by tapping on our screens may look as archaic as it would be to do everything from a command-line interface (think MS-DOS).

XKCD Sudo make me a sandwich
There are certain benefits to command-line interfaces… (xkcd)

So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.
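
To make that concrete, here’s a minimal, hypothetical sketch of what such a chatbot ‘app’ looks like structurally: the platform delivers incoming messages to the developer’s webhook, and the bot replies by calling the platform’s send API. The endpoint URL, access token, and payload fields below are illustrative stand-ins, not any specific platform’s actual API.

```python
# Hypothetical sketch: a chatbot "app" living on top of a messaging platform.
# The platform POSTs incoming messages to this webhook; we reply by calling
# the platform's (stand-in) send API.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

SEND_URL = "https://platform.example.com/v1/messages"  # illustrative, not a real endpoint
ACCESS_TOKEN = "YOUR_PLATFORM_ACCESS_TOKEN"            # issued by the platform

def interpret(text: str) -> str:
    """Stand-in for the AI layer that interprets (or anticipates) a request."""
    if "weather" in text.lower():
        return "Looks sunny where you are today."
    return "Sorry, I didn't catch that. Could you rephrase?"

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json(force=True)
    user_id = event["sender"]["id"]      # assumed payload shape
    text = event["message"]["text"]

    reply = interpret(text)

    # Send the reply back through the platform, so the bot lives
    # inside the conversation rather than in an app of its own.
    requests.post(
        SEND_URL,
        params={"access_token": ACCESS_TOKEN},
        json={"recipient": {"id": user_id}, "message": {"text": reply}},
    )
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8000)
```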

So we get new design disciplines: conversational interfaces, and ‘zero UI’ which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users’ actions, a lot of design effort also goes into the personality of these interfaces.

But conversational interfaces are awkward, right? It’s one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a Bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we’re going, we won’t need voice commands. At least not as many.

The Internet of Things

There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year.

Now let’s converge this with artificial intelligence and the advances made in that domain. Instead of having the 2016 version of voice-controlled devices in our homes and work environments, these devices’ software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can gain spatial awareness. If that doesn’t do it, they could use WiFi signals like radar to understand what’s going on. And let’s not forget cameras.

Your smart device will know what’s in the fridge before you do and what the weather is before you even wake up; it may even spot warning signs about your health before you perceive them yourself (smart toilets are real). And it can use really large data sets to help you with decision-making.

And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.

Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.

Google Home

Augmented reality

You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.

So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.

Our smart environments will interact with our AR device to pull up HUDs when we most need them. So we won’t have to issue awkward voice commands, because a lot of the time, it will already be taken care of.

Examples of HUDs in video games
Head-up displays (HUDs) have long been a staple of video games.

This means we won’t actually have to wear computers on our heads: the future of augmented reality can come through contact lenses rather than headsets.

But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.

Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design and conversational interfaces, and get used to sticking your head in front of a camera.

There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.

It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.

And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.

More on augmented reality by me on the Synchtank blog:
Projecting Trends: Augmented Reality is Overcoming its Hurdles to Overtake Mobile.

Monetizing virtual face time with fans

How the convergence of 2 trends opens up new business model opportunities for artists.

When I landed in Russia to get involved with music streaming service Zvooq, my goal was to look beyond streaming. The streaming layer would be the layer that brings everything together: fans, artists, and data. We started envisioning a layer on top of that, which we never fully got to roll out, in big part due to the challenges of the streaming business.

It was probably too early.

For the last decade, a lot of people have been envisioning ambitious direct-to-fan business models. The problem was that many of these were only viable for niche artists with early adopter audiences, but as technology has developed, that’s become less the case.

Let’s have a look at a few breakthrough trends from the last year:

  • Messaging apps are rapidly replacing social networks as the primary way for people to socialize online;
  • Better data plans & faster internet speeds have led to an increase in live streams, further enabled by product choices by Facebook & YouTube.

Messaging apps overtaking social networks is a trend that’s been underway for years now. It’s why Facebook acquired WhatsApp in 2014 for a whopping $19 billion. While 2.5 billion people had a messaging app installed earlier this year, that’s expected to rise to 3.6 billion in the coming years. In part, this is driven by more people coming online and messaging apps being relatively lightweight in terms of data use.

In more developed markets, the trend for messaging apps is beyond text. WhatsApp, Facebook Messenger, and Slack have all recently enabled video calling. Other apps, like Instagram, Snapchat, Live.ly, and Tribe are finding new ways to give shape to mobile video experiences, from broadcasting short video stories, to live streaming to friends, to video group chats.

For artists who stay on top of trends, the potential for immediacy and intimacy with their fanbase is expanding.

Messaging apps make it easier to ping fans to get them involved in something, right away. And going live is one of the most engaging ways to do so.

Justin Kan, who founded Justin.tv which later became video game streaming platform Twitch (sold to Amazon for just under $1 billion), launched a new app recently which I think deserves the attention of the music business.

Whale is a Q&A app which lets people pose questions to ‘influencers’. To have your question answered, you have to pay a fee which is supposed to help your question “rise above the noise of social media”. And Whale is not the only app with this proposition.

Yam is another Q&A app which places more emphasis on personalities, who can answer fans’ questions through video, but also self-publish answers to questions they think people may be curious about.

Watching a reply to a question on Yam costs 5 cents, which is evenly split between the person who asked and the person who answered. It’s a good scheme to get people to come together to create content and for the person answering the questions to prioritize questions they think will lead to the most engagement.

What both of these apps do is monetize one of the truly scarce things in the digital age.

Any type of digital media is easily made abundant, but attention can only be spent once.

These trends make it possible to create an effective system for fans to compete for artists’ attention. I strongly believe this is where the most interesting business opportunities in music lie: at the level of the artist, but also for those looking to create innovative new tools.

  1. Make great music.
  2. Grow your fan base.
  3. Monetize your most limited resource.

This can take so many shapes or forms:

  • Simply knowing that your idol saw your drawing or letter;
  • Having your demo reviewed by an artist you look up to;
  • Getting a special video greeting;
  • Learning more about an artist through a Q&A;
  • Being able to tell an artist about a local fan community & “come to our city!”;
  • Having the top rank as a fan & receiving a perk for that.

Each of these can be a product on their own and all of these products will likely look like messaging apps, video apps, or a mix.

A lot of fan engagement platforms failed because they were looking for money in a niche behaviour that was difficult to exploit. People had to be taught new behaviours and new interfaces, which is hard when everyone’s competing for their attention.

Now this is becoming easier, because on mobile it can be as simple as a tap on the screen. Tuning into a live stream can be as simple as opening a push notification. Asking a question to an artist can be as simple as messaging a friend.

So, the question for the platforms early to the party is whether they’ll be able to adjust to the current (social) media landscape, or whether they’ll let the sunk cost fallacy entrench them in a vision based on how things used to be.

There’s tremendous value in big platforms figuring out new ways for artists and fans to exchange value. They already have the data and the fan connections. Imagine if streaming services were to build a new engagement layer on top of what already exists.

Until then, artists will have to stay lean and use specific tools that do one thing really well. Keep Product Hunt bookmarked.

Interview: Wil Benton (Chew.tv) about building a livestreaming platform for DJs

Can Chew be to music what Twitch is to gaming? Find out what it takes to build the world’s largest video platform for DJs.

Chew team

Wil Benton is one of the founders of Chew, a service that lets performers livestream their DJ or studio sessions. Chew launched in January 2015 and has since signed up tens of thousands of creators, who have broadcast over fifty thousand performances.

Not only does Chew provide a platform where you can interact with DJs while they’re playing — it also functions as a massive archive of DJ sets, easily rivaling those of Boiler Room, and providing a more visual alternative to Mixcloud.

This is the first edition of a series of interviews with music startup founders and professionals. With the series, I want to shine a light on what goes on inside music startups, how they work, and what their challenges are. So, first up: Wil on building Chew.

Chew.tv logo

How has the journey been since graduating from the Ignite startup accelerator?

It may sound clichéd, but we really wouldn’t be here today without the support and guidance we had on the Ignite accelerator. The team were the first to believe in Ben Bowler and me as founders, investing in us as a team (our idea pre-programme wasn’t quite as strong as it is today!) and giving us the focus and headspace to start building what became Chew at the start of last year.

Our continuing success is testament to the Ignite team and all that they do — so can’t really say more than that!

Some people argue that investors are wary of investing in music startups due to uncertainties with rights and monetization. Have you encountered this?

In a word, no. Not yet anyway!

I think, had we not been demonstrating ‘interesting’ metrics and engagement on both sides of our creator & consumer marketplace, we would’ve found it harder to raise the two rounds of seed funding we’ve raised to date — but, on the whole, raising investment’s been a pleasure so far!

We’re gearing up for our first institutional round towards the end of this year, and conversations there have been promising too, again possibly thanks to the numbers we’ve got. That, and the large amount of time we spend talking to our investors (both current and prospective).

Chew presentation

You ran a crowdfunding campaign letting users invest & get equity. What made you choose this?

We looked at crowdfunding as a way to fill part of the seed round we did at the start of this year. We’re building a community-based business, so it made sense to look at crowdfunding as a way of allowing our EU-based users to invest.

What better way to demonstrate we’re building something of value than our users actually investing in what we’re building?

We ended up having 122 individuals investing in the campaign; many Chew users but also supporters who saw value in what we’re doing. Seedrs, the platform we used, operates a nominee structure where their legal entity represents all 122 investors’ interests — but we have a great relationship with both parties and keep them in the loop with news on the business every fortnight.

Crowdfunding as a route to accessing capital isn’t the easiest thing to do — but as a way of generating interest in our community, product, and offering, it was unparalleled.

How did the idea of Chew come about?

Ben and I met the summer before we launched Chew — introduced by a mutual friend because we shared a love for music and tech. The predecessor to Chew was called EatBass (sticking with the culinary theme here!) and we spent a few months on that before I left my job at an advertising agency at the end of 2013.

Ben had spent a lot of time working with live streaming at his job with AEI and was being asked back to stream club nights and other events after having left. That’s originally where the idea for a live streaming platform for music came about. I started working full-time on Chew in that guise at the start of 2014, in a marketing and biz dev role. Meanwhile Ben covered the tech side by working evenings and weekends until joining me full time in August 2014.

Wil Ben Chew

It wasn’t until our time on the Ignite accelerator in October that we focused the idea on being a platform and community for DJs and the electronic music community, though.

How did you assemble your team?

We raised an SEIS investment round in April 2015 after we’d finished Ignite, which gave us the capital to hire our CTO, Sam. We spent ages trying to hire for the full-stack role we wanted to fill; Sam ended up finding our listing on AngelList. He joined us the week after graduating with a Computer Science degree.

We’re still a team of three today; Sam as CTO, Ben as CSO/CVO and me as CEO. This year, we’ve been lucky enough to welcome a few ‘grownups’, who bring extensive industry experience to the team on a consultancy basis as we continue building out the business.

What are you happiest about regarding Chew? What pains you?

Our continuing success — and hearing about the value we’re adding to our users’ lives and careers on a daily basis!

Pain points are, thankfully, few and far between at the moment. Finances, given we’re working on a limited runway, and resource, being a team of three, have their downsides — but I wouldn’t have us operating in any other way!

Chew office setup

What are you happiest about regarding Chew’s current feature set? And what bugs you?

We’ve achieved a huge amount in our short history — especially given we’ve only one (truly awesome) developer!

Our ability to plan, build and execute features to a reliable schedule — on top of bug fixes, community support etc — never ceases to amaze me.

In terms of personal bugs, it’s more of a resource issue than a problem with our features. We’ve got so much more to do, but our team is at capacity — so we need to expand to be able to improve what we have. So not necessarily a bug of mine; just conscious awareness that there’s only so much we can do as the lean team we are today!

You have over 25,000 DJs and producers on the service… How did they find out about Chew?

We had just under 30k users sign up in our first 18 months. We spent four or so months last year testing low-level spend on Facebook ads (less than £5k) and, having just looked at the data, our numbers (in terms of engagement and platform usage) are actually better if we ignore the data from the duration of the Facebook spend.

Otherwise, our growth has been purely word of mouth. We turned Facebook ads off in August last year and haven’t looked back! We’re pretty active on the socials and in terms of community support, and we find that keeps our DJs and creators happy.

The happier [the DJs] are, the more content they produce on Chew and the larger the audiences they bring.

We’ve also just acquired our largest competitor, Mixify. The number of users we’re transitioning onto Chew is more than ten times our registered user count — so seeing how that impacts our numbers will be a fun journey!

How do you think DJs can benefit from live broadcasting?

Live streaming is an open, democratic process that allows anyone, anywhere in the world to share what they’re doing in realtime. It’s the realtime aspect that connects us as consumers, the ‘spontaneous togetherness’ we get from sharing this experience. Josh Elman, one of the VCs who invested in Meerkat, wrote a great blogpost about this.

For DJs, music producers, and personalities, it levels the playing field and enables anyone at any stage of their career to build an audience, drive that engagement that defines success as a musician and ultimately monetise their activities. That’s what we’re seeing with Chew — bedroom DJs building a global fanbase, established artists communicating with an engaged audience from their bedrooms or studios and record labels sharing new content from their artist rosters.

You mentioned spontaneous togetherness. How have you tried to foster that?

We are as hands off, from a platform point of view, as our creators want us to be.

Everything that happens on Chew is user-driven; our contribution to that is making sure the tech and platform makes things as easy as possible for our creators and consumers to engage with each other.

Do you think live streamed shows should be an essential part of any performing DJ’s digital strategy?

Yes — but potentially more than just shows. We see the best consumer engagement when our creators break away from the ‘let’s stream a show’ mentality.

It’s more about creating a consistent flow of content than sticking a webcam behind you in the club.

Live video is probably the most powerful thing, second only to live events, in a DJ, producer, or personality’s digital strategy, for a number of reasons. Frequency and consistency are key, though. Without them, we don’t see as good an engagement from the audience side.

Wil Benton of Chew.tv DJing

You mention frequency and consistency being key. Does that in any way contrast with ‘spontaneous togetherness’?

Great point — I hadn’t thought of it like that! Being consistently spontaneous kind of defeats the point doesn’t it 😉

I think, like I said earlier, allowing every creation and consumption decision to be user-driven helps drive this togetherness — but it’s the regularity of spontaneity that drives the behavioural change from a consumption side of things, which allows creators to maximise their audience’s engagement.

Are you going to be launching Twitch-style monetization options like donations and subscriptions?

We’re working on a number of new features — watch this space!

Do you have any words of advice for people with a genius music startup idea and other founders?

I’ll let Betaworks / Startup Vitamins answer this for me.

One of the things we learned on Ignite:

You can never have a product in users’ hands too quickly.

Build, launch and iterate as fast as you can.

Follow Wil & Chew on their journey: