You may have heard about the recent collaboration between Drake and The Weeknd that wasn't a real collaboration, but an AI-generated fake. This incident is a canary in the coal mine not just for the music industry, but for creators and rights holders across numerous industries. Joining us to discuss is Jessica Powell of AudioShake, an AI startup that builds sound separation software to help musicians make additional revenue from their work.
Richard Kramer:
Welcome back to Bubble Trouble, conversations between two humans with pulses as opposed to computer-generated voices. Yes, it's the real-life double act of independent analyst Richard Kramer, that's me, and economist and author Will Page, that's him. And this is what we do: lay out inconvenient truths about how business and financial markets really work. We're now on the fourth corner of our inquisition into the dinner party topic du jour, AI and ChatGPT. Today we want to get behind the controversy over Drake and his digital doppelganger with the formidable Jessica Powell, who swapped her Google comms gig for a startup, presciently anticipating how AI is going to be an instrument as widely played as the piano or guitar. Back in a moment.
Will Page:
Yes, Jessica Powell, double thrilled, triple thrilled to have you on the podcast this week. And what we'd like to do at the start is just give you the microphone and, in a tweet-like length and with a tweet-type attention span, have you introduce yourself, who you are. But really important at the get-go, tell our audience how they can discover and follow what you're doing.
Jessica Powell:
I hate introducing myself, so this will be very short. I'm Jessica, I'm the CEO and co-founder of a company called Audioshake. Before that I was at Google for a thousand years doing a whole bunch of different things, but my last job there was yes, running communications across the company. And I also have a, I guess a side gig as an author.
Will Page:
And just to add, there's a spike in your Amazon sales curve because I bought your book half an hour before we came into the studio as well.
Jessica Powell:
Oh great.
Will Page:
Listen-
Jessica Powell:
That will pay for my retirement, I'm sure.
Will Page:
Once the royalties get to you, which will be in another thousand years. Let's weave our way into this subject because it's a big one, and I think we've got a very special guest to work our way through it, and our audience wants to get their head around this topic. So the way I like to start is with a question, which is, when you throw together all the hype that's surrounding AI, machine learning, large language models, ChatGPT and the alphabet soup of acronyms, do you see all this as a problem in search of a solution or a solution in search of a problem? Is the tech getting ahead of the use case? If I throw that at you, how would you unwrap that question?
Jessica Powell:
I would say yes and no to your question, in that it's such a broad question, and I think that's actually part of the problem with how we talk about AI. At the end of the day, AI is essentially sophisticated statistical models engaging in some form of pattern matching. That, I think, is a very helpful definition from a technical perspective. It's also a kind of meaningless definition for how consumers and users and non-technical people need to think about it. I think that in some ways we would do better to just refer to it as technology and focus on the use cases, and be very specific in terms of what those use cases are and whether we're okay with them or not.
Instead, what I see is both an anthropomorphizing of AI, particularly since the launch of ChatGPT and so forth, where people talk about it as if it were a god, which isn't helpful, right? That's not helpful. Or a huge simplification of it, where they collapse all kinds of AI as if it were a single widget that just appears in these products, that AI widget. And it's kind of comic, I mean it's disturbing, it's also kind of comic. We wouldn't today talk about Google Search and QuickBooks and any number of other products as if they were the same product using the same widgets. And yet that is how a lot of the conversation right now feels with AI.
Will Page:
I hear it, and that term, god, is telling. This podcast is called Bubble Trouble, and it's really about exposing where bubbles are blowing up and piercing them before they get to your stock price. But I think the evangelical point is interesting, because when people get evangelical about tech, they lose track of common sense. I'm fond of telling Richard a story about a big AI conference talking about voice technology and how Alexa could tell you what the weather was. And I was like, "Well, I could also look out the window." And the AI programmer hadn't thought about that option existing. Why would you want to look out the window?
Obviously we're going to get into the subject of the Drake and The Weeknd remix, so maybe I'll ask you, in your eloquent voice, to explain to our audience what happened with this song that appeared on streaming platforms, the backstory to it, what you heard when you were listening to it and the attribution that went behind it. Do you want to just give a quick primer on this rather bizarre story? I'm sure there are even more bizarre stories coming down the pipe next week, but a week's a long time in tech. Just explain what happened with Drake and The Weeknd.
Jessica Powell:
So there was a track that was uploaded by someone named Ghostwriter. It was a collab between The Weeknd and Drake, except it wasn't actually Drake or The Weeknd. Ghostwriter claimed that it was a 100% AI-created track. It was pretty catchy and it quickly went viral and racked up millions and millions of streams. Universal came out and issued takedown orders. But it's still very easy to find, because anyone who's worked in content moderation can tell you about the whack-a-mole nature of all of this. A takedown order can be issued, but the track is just going to pop up somewhere else. That's as true with music as it is with, say, a terrorist video, right? You take it down in one place, it's immediately uploaded somewhere else. And so that's exactly what happened. You can still find it online today pretty easily. And it caused quite a debate, and I'd say a panic, within the music industry around copyright and likeness and whether this is the future of music and so forth.
Will Page:
It was a knee-jerk reaction, which was akin to: we're staring into the abyss, we're at the cliff edge, the end is nigh. Then you turn up on Substack, and I'll give it to you, you can write, you can really write, and you have this beautiful essay called Is Deep Fake Drake Just a Remix? And I thought that was a really nice way of saying cool heads, calm down, take a chill pill and think through the historical precedent, which will help us understand the story but also give some guidance about how it might play out in the future. Do you want to just give a quick summation of the purpose of that blog and the message behind it?
Jessica Powell:
Sure. I think I was writing it in some ways just to make sense of it myself, and also to answer some of the things that I kept on hearing from people in the music industry. I'm going to both-sides this, which is super irritating, but here I go. On the one hand, there is a good argument to be made that this is just the next version of remixes. If we think about the evolution of content participation and creation, starting with the early days of hip hop, then through to YouTube, which normalized sharing and creation, and then TikTok, which has normalized participation, isn't it sort of natural that the next thing to happen is people trying to throw themselves even more into the music and engage with their favorite artist's content even more deeply? To me that seems inevitable.
And the early eras of hip hop, and even today, were actually about paying homage to the great artists being sampled. It wasn't an act of disrespect, it wasn't an act of trying to rip someone off or not have them get paid. It's always been a conversation, it's always been in reference to what came before it. And I think remixing and sampling come out of that tradition. And in a lot of ways the AI covers come out of that tradition. So I think you can actually argue, in a cultural sense, that there are a lot of similarities.
Another parallel would be how sampling and remixing were received in hip hop and DJ culture early on, and how the labels and some artists thought about it. It was seen as deeply cool by everyone young consuming music, and seen as this terrible act of copyright infringement by the people on the rights holder side. But now, with years and years behind us, I think it'd be hard to find a music executive that would malign J Dilla's Donuts, DJ Shadow's Endtroducing, or Danger Mouse's The Grey Album, which was a mashup of Jay-Z and the Beatles. Those are incredible works of art. And so you can see a lot of, I think, cultural parallels.
At the same time, I think that argument is a little disingenuous, in that aren't these deep fakes? The act of taking someone's voice and appropriating it and repurposing it for something different, that's a pretty invasive thing. What are the things we think of when we think about how we identify ourselves? Our voice, whether we like it or not, feels pretty core to our identity. And so I think of that as somewhat different from, let me take a piece of your work and splice it into this new work where the reference to the original work is still clear. Here you're taking my voice and making me say things that I didn't say.
In fairness, I think the vast majority of this is done as parody. It's done as essentially the equivalent of fan fiction. And the context in which these things are created is certainly relevant from a legal perspective, but also I think from a societal perception perspective. But I do find it interesting that something we would find really abhorrent in other contexts, political deep fakes for example, somehow becomes more palatable when it's brought into entertainment.
Will Page:
And let me just come back on that very quickly, because there's another extension to this, which is rappers who run their voice through pitch correction so they can sing on key. They have the human voice, you have technology which substitutes for the human voice, and then you have the listener. Should the listener be offended that the rapper isn't the rapper, it's the rapper through pitch correction? And to take it back in history, Joe Elliott, the singer of Def Leppard, sold 30 million copies of Hysteria to the American population. Mutt Lange manipulated every line that he sang through technology, and that was 1986. So there's a little bit of pot calling the kettle black here in terms of the role of technology. I get the consent point, and Richard can pick up on that. But I just want to throw that out there, which is we've been here before and we'll be here again. Richard.
Richard Kramer:
Yeah, I guess I'd like to zoom in on the question of how you reward and protect the inputs and ideas that are original. Now, all of those albums you cited, I loved them all, and I, with Will Page, would unpack where the samples came from, and that was great fun. But at some point there was an original musician playing an instrument or singing, and that person's livelihood and future livelihood rely on being able to protect the rights to their work. And since this Drake song came out, it's emerged that on platforms like Spotify there are millions of AI-generated songs simply there to try to eat into the pool of revenues that gets allocated to all artists. They're not really created with that artistic integrity in mind and they undermine the ability of those artists to get rewarded. How do you, without re-litigating DRM, which was 10 or 15 years of pain that came before you even joined Google, make sure that those real artists are going to get rewarded and protected when they create something original and unique that can then be spliced and resampled and reused?
Jessica Powell:
Right. If I can respond both to what Will was saying and then what you were saying, because I think there are a couple of different points there that are interesting. On Will's point about a producer or an artist using something to change their voice: yes, there's absolutely precedent in the history of music. Wasn't the drum machine going to kill music? And it has not, right? I 90% agree with this being sort of the next phase of technology, and that there will be ways to control and to monetize, which I'll get into in a moment to address Richard's point.
I do think, and maybe this is just me on a personal level as someone who's created music, I'm a terrible singer, no one's going to take my voice, or as someone who writes, that there's something very deeply tied to one's identity about one's voice. That adds an extra nuance to this that didn't exist perhaps in those previous debates around technology and art, which I personally find fascinating. And I think the easiest way to think about it is to say, Will, how would you like it if tomorrow you woke up and there was a clip of you on Twitter saying that you thought communism was the most effective economic and social model ever invented? Now maybe you're about to tell me that you fundamentally agree with that. I think very few economists do, right? All I'm trying to say is that having your voice appropriated and having you say something that you didn't say just feels a little bit more violent than having your work put into something else.
Will Page:
Agreed.
Jessica Powell:
Which then brings me to Richard's point, and I note that you have neither defended nor cut down communism from my previous example.
Will Page:
I was at a major record label yesterday, Warners, I was introduced as the former Spotify Chief Communist.
Jessica Powell:
There you go. So-
Will Page:
I said, "Isn't it great that all songs are priced the same under streaming, that's communism, everybody gets half a cent."
Jessica Powell:
All right, well we know what deep fake to create. So then to Richard's point about control and making sure that the original artist is paid out properly, I think that's an excellent point. And I do think we see some precedent and some existing problems that both show a way forward and complicate it. On the positive side, what I think will a hundred percent happen is that within the next month we will start to see companies coming out saying that they're launching solutions for artists to create authentic synthetic voices that can be licensed, that can be distributed, that can be tracked and so forth. Already we saw on Twitter that Grimes came out and said, I can't remember exactly what she said, but it was something like she would split 50% on any track that she approves that uses her voice.
So I think we'll see that, but made systematic. I don't think every single artist is immediately going to go sign up for that. But there will be artists that'll be very interested, and they'll be some of the pioneers in that space, and we'll see cool things emerge from it. That will happen. And when we look at music, marketplaces generally are a great way to answer piracy and infringement. As a kid using P2P services, I wasn't using them because I wanted to rip off artists. I was a kid who barely had an allowance. I wasn't even thinking about that, right? What I was thinking was I just wanted my music, and there was no place for me to get my music. And yet when iTunes came about, the fact that I could get a track...
Will Page:
20 years ago...
Jessica Powell:
... for a dollar-
Will Page:
... to the day. 20 years to the day.
Jessica Powell:
Right. I could get a track for a dollar or whatever it was, and I wasn't going to have to wait 24 hours for it to download. It wasn't going to be a corrupted file. It was the ease and the market that made it possible for me to get that. And while I would never argue that the market solves everything, in this case I think it was an effective counter. Essentially the market provided a much more effective solution for users, where the purchasing question was just one of ease and convenience, where it was worth it to me, even as a young person without much money, to make that trade-off. And I think there's something here where if you could build a marketplace, and I'm using that term broadly, just to say a service where people could make use of authorized voices, create with them, maybe even with a chance that the artist is going to see what I've made and somehow provide some kind of approval or validation of that, that's a pretty compelling offer. You're not going to get rid of people doing it rogue, of course you're not. But can it cut into that and create a viable revenue stream and a means of control? I think so.
Will Page:
You remind me of Adam Smith's line about one man's smuggler being another man's entrepreneur. At the moment it might seem illegal and immoral, but if markets can be built to monetize this, it flips. I've got two more questions before the break. Just real quickly, sticking in this cul-de-sac of copyright before I open it up to other media verticals: is there anything stopping Will Page and Jessica Powell putting a song up on Spotify with the words "Featuring Drake"? Leave the AI debate for a second, but I don't think there's anything stopping me putting Drake in the song title to trip the algorithm and gather the streams as well.
Jessica Powell:
There are probably experts on exactly this. I've never understood, for example, if you just think of search ads, why it's okay to advertise against someone else's trademark and have that come first. It's the same thing, but I think that is legal, right? And that's never made sense to me. There must be some sort of legal slippery slope, in that you have to allow it for some reason or another. I don't know. So I don't know the answer to that question-
Richard Kramer:
But certainly, listening to the CEO of UMG yesterday on his earnings call, he was spitting fire about the topic of protecting copyright and protecting artists, and threatening to bring the four horsemen of the apocalypse, in the legal sense, down on all of the infringers that might be out there. And I guess, given that's an unrealistic prospect, why is it that we don't seem to be able to create a system which can at least give the original artist the chance to say no, and the chance to say, "Well, if I am the person who's the guardian of Prince's estate, I am not going to license his voice to sing the vocals on every Led Zeppelin album," which AI could easily do?
Jessica Powell:
Well, it can do it, yeah. I think there's a question of what should exist, what kinds of products should exist for that to be possible, and then what is actually feasible given the nature of technology, right? We have so many more protections today, whether we're talking about spam or copyright or any number of "bad things" in quotes. We are much more sophisticated today than we were five years ago, than we were 10 years ago. Those things all still exist. So yes, of course, a hundred percent, Prince's estate. And I don't even want to use Prince. Let's use a contemporary artist, right? Because a lot of times we talk about some of these things in terms of legacy classic artists, where we're like, "Oh, well, Tom Petty"-
Richard Kramer:
Back to dad-
Jessica Powell:
[inaudible 00:19:31]. Right? Let's just actually make it a contemporary artist. There could be a contemporary artist that is like, "I'm cool with X, Y, and Z, but I'm not cool with you using my voice." And of course it should exist that they're able to say no. Does that mean the technology won't exist? No, because that technology is out there. But what you can do is, like I said, A, have some sort of marketplace for the artists who do want it to exist. B, and this is me freelancing right now, I've only thought about it for five seconds, but what I think you would do is create databases of authorized voices, with a fingerprint or a watermark, so you could spot uses of those voices out in the world that don't carry that fingerprint or watermark.
You could create a system of detection that would at least lead to a greater number of takedowns when something didn't have the authentic stamp. You're still going to have everyone gaming that system, right? You'd have people gaming it in terms of what they put in for keywords, for example, and how they list a track. So let's say, again, I'm making this up, but let's say today it is possible to write Jessica and Will Page featuring Drake, and that's not taken down. But let's say, starting next week, anything that says featuring Drake that wasn't passed through Drake's label is now getting taken down. Well then Will and I are going to start messing around with how we list Drake. We're going to do-
Richard Kramer:
Not featuring Drake.
Jessica Powell:
Right. Or you'll see what people do on TikTok, which is, if they don't want to say the term sexual abuse, for example, or rape or something like that, because they worry about content moderation and takedowns, they will use a different word. I think it's "mascara." So I'm just saying there will always be-
Will Page:
Jesus Christ.
Jessica Powell:
What was the Trumpian thing, you build a wall and someone builds a higher ladder, or whatever. That will always exist and we just have to know it exists. But you can and should build mechanisms that cover off the majority of use cases, so that on the whole artists are protected.
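As a thought experiment only, here is a minimal sketch of the kind of authorized-voice registry and detection check Jessica is freelancing about. Everything in it is an illustrative assumption: the registry, the "fingerprint" (real systems would use perceptual audio fingerprints or embedded watermarks, not a file hash), and the track metadata.

```python
import hashlib
from dataclasses import dataclass

# Stand-in "fingerprint": a hash of the audio bytes. Real systems would use
# perceptual audio fingerprints or watermarks, not a cryptographic hash.
def fingerprint(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()

@dataclass
class Upload:
    title: str
    claimed_artist: str
    audio_bytes: bytes

# Hypothetical registry of fingerprints for voice uses each artist has authorized,
# populated by the artist or label when a use is cleared.
AUTHORIZED_VOICES: dict[str, set[str]] = {
    "Drake": set(),
}

def review_upload(upload: Upload) -> str:
    """Route uploads that name an artist but carry no authorized stamp to takedown review."""
    stamp = fingerprint(upload.audio_bytes)
    if stamp in AUTHORIZED_VOICES.get(upload.claimed_artist, set()):
        return "distribute"            # carries the authentic stamp
    return "queue_for_takedown"        # no stamp: flag for review

print(review_upload(Upload("Jessica and Will Page featuring Drake", "Drake", b"...audio...")))
```

As Jessica notes, uploaders will game whatever metadata check sits on top of a scheme like this, so a registry only covers the cooperative majority rather than eliminating rogue uses.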
I think there's one other thing we haven't really talked about that's worth mentioning, and maybe this is something Will can speak to too, which is that the vast majority of this stuff, even when the tools get better, is not great. The Drake and The Weeknd track was really good. It also, by the way, was very likely not a hundred percent AI. You can't tell an AI system today to create a Taylor Swift song for me with an 808, right? There were a whole bunch of pieces to that. First they had to separate the vocals from those artists, then they had to do... anyway, there are a whole bunch of different things that we don't need to get into here.
Eventually you could do that from start to finish with AI, but it's hard to make music that sounds unique and is catchy. And yes, you can algorithmically get to better-sounding things. Most of this stuff, though, the stuff that's fun, is where you get to add in your own little flavor as the user. This is you playing around with your artist's voice. It's not going to be blockbusters every single time. Just like it's really hard to make a remix, the vast majority of remixes get a couple of streams.
Will Page:
You throw a lot at the wall in some of this-
Richard Kramer:
But you say that, and yet there is an enormous genre of music now coming past its 30th birthday, drum and bass and electronic dance music, which is going to be very difficult to pull apart into the stems, the separation that your company does. And I was thinking about that in the context of classical music, because maybe in this fantastic recording of a symphony orchestra, that bassoonist really just wasn't that good. So you want to drop that one bit in and separate that out, and you can imagine that. But when you think of just how much music today already originates as all digital, are we going to be able to have copyright on music in five or 10 years' time? Is that a realistic goal?
Jessica Powell:
I think so, right? If people are still creating music, it can be copyrighted. There's a debate, the Copyright Office ruling about whether an AI work can be copyrighted, and that in itself is a ruling that probably will not fully hold up; it was probably too broad to fully work. But this goes back to what Will was saying. I think most of the way we're going to be creating music is by using AI as an assistive tool, not letting the computer do the entire thing. That's not that fun. Fine for background music, but-
Will Page:
Just like GitHub Copilot. Everyone thought it was the death of coding. Now coders are four times more productive because they're using what they saw, literally three months ago, as a deep threat to their profession. So-
Jessica Powell:
Yeah, no one on our team thought Copilot was a threat. Everyone was just like, "Great, someone will look at the places where I forgot to close the bracket."
Will Page:
Last question before the break: I just want to revisit that essay again, that wonderful essay you've got up on Substack, and say, our audience is a broad church. You have people listening to me from a media perspective and to Richard with 30 years' experience covering financial market misbehavior. Can you see a similar essay being written in a year or two years' time about the financial markets? ChatGPT has now been joined by BloombergGPT. Do you see this stuff affecting financial markets and how they function as well?
Jessica Powell:
Probably. But if there's anything I've learned from working in technology all these years, it's just that sometimes we should just shut up and not give our opinions on things we don't know anything about. So I am going to leave it by saying I don't know, but yeah, how would it not, right?
Will Page:
I know. I have this deep sense that the echo chamber of industry analyst notes is ripe for ChatGPT to drive a tank through-
Jessica Powell:
I will tell you what I would love to see, and it doesn't require AI at all. It's that when analysts come out attacking a company or pumping it up, hyping it up, when they get quoted in Bloomberg or wherever, people would also put whether they're sell side or buy side next to that.
Richard Kramer:
So, just to finish off this first half: that's been my mantra for the last 30 years, creating an independent research company where we don't take money from the companies we cover, so we don't have the conflicts of interest. And we call those analysts sycophants and stenographers. That's one of our great taglines and one of my great interventions: they are the ones who are there to praise, not appraise. They are there to say, "Congratulations on a great quarter," and then ask a question. Indeed, at one of the biggest banks, the top-rated analyst's questions always begin with the phrase "How should we think about..." In other words, "I'm an empty vessel, so just tell me what to think. I'm a highly paid professional, but I have no ideas of my own. You may as well just get your AI to write the script for me and then I'll write down, 'Yes, you said the moon is made of green cheese, thank you very much.'"
Jessica Powell:
I remember all the different Google earnings calls that I would have to do. And it's funny you say that, because I would always remember when people would get up and say exactly that, "Congrats on a great quarter." And I'm just like, "Why are you saying that? We're just here. Can we leave? Please, can we leave. No one cares."
Richard Kramer:
You can ask your friends at Google and all the other big companies why they don't let people like me ask questions and why they allow all the top analysts from the top banks to ask these pre-planned or prescripted questions sort of along the lines of, "Could you tell us how great you are and will you be even greater tomorrow?" With that, I think we need to close out the first half of this fascinating discussion. We'll be back in a moment to dig into what Audioshake really does and how AI will or won't reshape the world of music. Back in a moment.
Will Page:
Welcome back to part two of Bubble Trouble, where we have Jessica from Audioshake, and we're exploring the issues around AI and copyright. Towards the end of part one, we were discussing deep fakes and manipulating words and faking the names that are on songs. Jessica, prior to you starting at Google, which was around about 1846, you worked for CISAC, and I worked for the Performing Right Society. And if you're a songwriter and you join a performing rights society, you can have an alias. So Elton John is not Elton John's name. George Michael is not George Michael's name. It's an alias. And it just made me think about Mark Twain: "History doesn't repeat itself, but it sure as hell rhymes." When I started at the PRS, I looked at the aliases that were available on their database and I found 12 John Lennons with an S on the end. And you can see what they're trying to do: take somebody else's royalties.
Now I'm going to toss it back over to Richard and we're going to go down a rabbit hole with your company. We've talked about the problems that we're staring at. Part two is all about solutions. Richard, take it away on Audioshake, shake, shake your booty.
Richard Kramer:
Yes. So Jessica, I watched your terrific interview at the Code Conference with Keith Shocklee, obviously a legendary name in hip hop culture, defining the sonic atmosphere of the likes of Public Enemy or the Beastie Boys that was so near and dear to my heart in the music I listened to when I was a teenager and in my twenties. And I understand Audioshake has technology that can separate out all the component parts of a recording, so that maybe you'd like to, as we were talking about before with a symphony orchestra, do something different with that bassoonist or the French horn, or pull out the bassline from some of those Bootsy Collins tracks and apply it to something more modern. Can you walk us through what Audioshake does and its capabilities, and how ordinary people might understand what this means for music production?
Jessica Powell:
Sure. So like you said, what Audioshake does is use AI to separate a recording into its different component parts, which we call stems. The easiest way to think about it is that in a song you have the vocal stem or the drum stem. They're essentially like Legos, the building blocks of a song. And we're talking about music, but at Audioshake we also work with dialogue and speech, so for example separating dialogue from music, or effects from a movie. Essentially we are trying to pull apart or atomize sound into smaller units so that they can be recombined, repurposed, and used in different ways. On the music side, that can range from one-off uses, someone needs to make a remix, someone needs to make a Dolby Atmos surround-sound immersive mix, or they need to do a sync license and put an instrumental in a movie and they need to remove the a cappella, right? Those kinds of uses, through to things that happen at scale.
So for example, you're using an AR app and, as your body is moving, the sound is changing in really subtle ways in relation to your body, or a game is changing the music and the music is reactive to what the player is doing. All of those kinds of things are at-scale uses that need standardization of audio assets and need to be able to break songs apart. And we do that. Then outside of music, we do a lot of work in, for example, the dubbing space and the transcription space, where we strip out the dialogue and essentially make it clean so that people can get much higher-accuracy transcription and captioning, as well as retain the music and still have an output that sounds really good and is localized. There's a ton of different use cases, ranging from the individual things that we think of today as being done by audio experts, through to very subtle or big things that are consumer-facing, which involve essentially making audio more interactive, more customizable, more editable.
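To make the stem idea concrete, here is a minimal sketch of music source separation using Spleeter, an open-source separation library from Deezer. It is an illustrative stand-in, not AudioShake's own tooling, and the file names are placeholders.

```python
# Minimal stem-separation sketch using the open-source Spleeter library
# (pip install spleeter). Illustrative stand-in, not AudioShake's models.
from spleeter.separator import Separator

# Pre-trained 4-stem model: vocals, drums, bass, and "other".
separator = Separator("spleeter:4stems")

# "song.mp3" is a placeholder input; WAV stems land in output/song/
# (vocals.wav, drums.wav, bass.wav, other.wav), ready to remix,
# mute for a karaoke version, or hand to an immersive-audio session.
separator.separate_to_file("song.mp3", "output/")
```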
Richard Kramer:
One of the things I was thinking about: in the same way that a Hollywood movie, before you get the final cut, the director's cut, whatever you want to call it, gets focus-tested. They show different endings to people. I had never known this before, but of course the story isn't necessarily the same everywhere. In some regions they might have different characters, different elements of the story that are cut out or added back in. And I'm wondering, is that going to happen to music? Are we going to get A/B testing of which version of WAP by Megan Thee Stallion gets turned off or on the car radio based on the age of the little girls in the backseat? My teenage daughter loved singing all the words to that song when it came out. Is music going to be something that's no longer fixed and static, but gets tweaked constantly and evolves to suit tastes?
Do we want our bass turned down a little bit in the future because we're not feeling the vibrations? Or do we want a bit more percussion because that's fashionable? And Will's taught me that three or four years ago, or 10 years ago, songs used to be a lot longer and would have a much longer intro before the lyrics came in. So are we somehow allowing these technologies to subtly manipulate what we know as the unitary work? Is that going to get redefined, and instead it's going to splinter into this post-modern melange of dozens of versions of a song with the same words?
Jessica Powell:
Yeah, that's a great question. I have so many things to say. First I would say it's been a long time since music was static. Think about what we started to do on YouTube, performing covers, and then what we do now on TikTok, where we make sped-up and slowed-down versions, for example. Or how labels will sometimes take a song and allow authorized remixes, or sometimes it'll be unauthorized and then get claimed. People will see that different songs perform differently in different markets, and if you made certain tweaks, you might have a song that performs better in Thailand, say, than in the UK, because a local producer has put a flavor, a twist on it that just resonates more. And sometimes those twists end up resonating globally.
You think of that Imanbek version of Roses, I think that was maybe three or four years ago, that was massive. I think the producer was in, I want to say, Uzbekistan. But that kind of stuff is already happening. Do I think it will happen even more? Absolutely. As for what you will be able to do in the future: we did a thing with Green Day, where Green Day didn't have the masters to 2000 Light Years Away, which is a song from, I think, 1991. They used Audioshake to create the vocal, drum, and bass stems, uploaded that to TikTok and made that audio available for people to use with the duet feature. And it made it possible for all of their guitar-playing fans, and Green Day has a lot of guitar-playing fans, to essentially throw themselves into the band and play guitar along with them. So they became Billie Joe, right? They were playing guitar along with Green Day, which, for any of us who ever learned an instrument, was the dream.
I sat on my bed in high school trying to pick out basslines from punk songs. But now, essentially, I'd be able to remove the bassline and play along with these bands that I liked. Green Day had to use our tech and then upload to TikTok. In the future, at some point in time, TikTok will have a button and you will just be able to do that. Or you'll be able to take any song and make it into an instrumental, a karaoke version. But everything I just said, and certainly the more extreme versions, let me pull out the bassline from this old James Brown track and then remix it and do all that stuff, I don't think labels are realistically going to allow in the next year. I think it is inevitable and it will happen. But it's already very difficult to determine splits even on the original mix of a track. Imagine how head-exploding it is to think about what happens as those become derivatives and everything.
So that stuff is already happening on an individual level, right? Producers are ripping stems, they're creating remixes and that sort of thing. But there are all kinds of use cases, ranging from karaoke to music education, that allow fans and artists to get closer, that I think will be licensed and will happen. And I think that'll open up new opportunities in really cool ways. But I also would say that there are a lot of ways that audio will change for us that won't be as apparent to people. They will be apparent to, say, people using Audioshake, and to those in the audio world, but consumers don't need to know about them per se.
And that is in the same way that when you're watching a commercial and the car, the Lexus, goes around the bend, the music editor has maybe increased the energy of the bass, right? Just to heighten the sense of adventure. The listener in this case is still processing that track as though it were the same track they already know and love. I think those kinds of more subtle audio manipulations, customizable to the use case or to a user's movements, I'm on the AR fitness app and I punch the air and the drums snap, are just part of my audio experience. It's not as full-on as a remix in terms of it's-
Richard Kramer:
There's a terrific analogy you may be aware of: when they film these car commercials, it's actually a blank chassis that's driving around, and they use digital effects to fill in the car. They also have a set of backgrounds. So of course you'll see a Jeep driving through the snowy winter when it's the winter season, and you'll see it driving around scenery like Mount Etna, which for some reason is used a lot in Italy, in Sicily. They have some stock areas they use; the car is always on an empty road, of course, with no traffic jam sitting there. But they take what is effectively the guts of a car, replicate what the car looks like, and have it drive according to the season. So almost all the content of that commercial is just added in. There's no inherent filming going on. And I hope music doesn't get that way.
I hope someone is still playing the instruments. I can see a scenario whereby, as you say, I would want to play bass on those old Talking Heads or James Brown or whatever songs. If I was a bass player, that's how I'd want to learn.
Jessica Powell:
I think good music is really hard to make, and I think it's also really intimidating. If you get dropped in front of an instrument, or dropped in front of a DAW, that's hard. And I think it's actually only a good thing if we can make music more accessible to people, even at the most basic level. It's not necessarily to turn everyone into musicians, but to allow people the ability to express themselves through music, interact with music more deeply and be able to create with it... I'm a terrible drawer, I can't do it. Nor do I have particular ambition to get better at being able to create visual art. But do I love playing around with something like Stable Diffusion or DALL-E just to see what it comes up with, whether it maps to what's in my head? Of course.
And would I love to be able to do that with music, where I could take a piece of music I love or a melody that's in my head and perhaps have an AI lyric generator generate something that maps to that melody? And then I'm looking at the lyrics and I'm like, "Ah, that isn't totally right, but let me switch these words around." And then I've got my song and I'm saying, "Wow, what would this sound like as a country song? What would this sound like if all of a sudden you sped it up?" And if I could do all of that with buttons, really simple, almost like a mobile interface, right? That's so powerful. The likelihood that the thing I would make would be so exciting that it's going to all of a sudden push Taylor Swift off the charts is exceptionally low. But what it's done is bring me into this really creative act that has had me engage more deeply with music. I think that's a wonderful thing for society if we can get to that point.
And by the way, every single technology that I just named and that workflow is a hundred percent available today. It's just that it's not all bundled together in a way that's really easy to use.
Will Page:
I've got a bunch of quickfire questions for you before we get to Smoke Signals, to put wind in the sails of Audioshake, because I think what you're onto is part of the solution and it's definitely not part of the problem. But firstly, just another anecdote from history, to quote Mark Twain again on how history rhymes. Jimi Hendrix came to England to start his musical career and to find his rhythm section. He actually toured the mining clubs of the North of England as well. And the first record producer that worked with Jimi Hendrix here had a sign above his office desk, and the sign said, "The day that we can get computers to replace drummers is the day we'll have a proper music industry." That was 1966, and here we are today.
First question-
Richard Kramer:
I do not believe that story Will, that-
Will Page:
I have a picture to show it. I'll put the picture up on our transcripts. Yeah, Nigel Greens told me that story too. He was there at the time. Now, first question. Not sure if Americans do irony, but we did mention it's the 20-year anniversary of iTunes in part one. Isn't it ironic that 20 years after iTunes, which unbundled the album, what Audioshake seems to be doing is unbundling the instruments behind the album? The great unbundling. Your thoughts?
Jessica Powell:
That's very funny. I actually thought about this last night. In the novel that I wrote, called The Big Disruption, there is a founder from a rival company to the company that's the focus of the book. And this rival company's whole goal is to atomize everything, like language. Everything is just broken down-
Will Page:
To make the world a better place. To make the world a better place-
Jessica Powell:
To make the world a better place. Everything is being atomized. And I was brushing my teeth last night and I was like, "My God, Jessica, this guy that you were making up, is that not what you're doing?" So yes. But the thing that I get inspired by, there are sort of two things. One is being able to offer artists the chance for their music to be used in new ways if they want it to be, and trying to contribute to the artist ecosystem, which to me is really important. I think it's also particularly cool with older art that really doesn't have its stems at all, that even in these sort of one-off use cases we're talking about, like sync licensing or Dolby Atmos mixes, those are use cases that are just completely closed to older artists, or to artists where the stems have been lost. If you just have a mono track recording, effectively that's all you have, and now you can't... If the music editor really wants an instrumental and all you have is a full mix-
Will Page:
Advances in sound design and quality leave you on the hard shoulder. And we had Sir Peter Battleship on our podcast, and should you come to London, we're going to take you up to Apple's world-class spatial audio studios. The one song they used to demonstrate the power of spatial audio is Elton John's Rocket Man, which has been reworked, and may even have used Audioshake, I don't know, but reconstructed to be produced in spatial audio and consumed in spatial audio. It's an interesting two-way network effect: you need the AirPods to appreciate spatial audio and you need some producer to make spatial audio for the magic to really work. It takes two to tango.
Jessica Powell:
And then the producer will also tell you that because you're listening to it on AirPods, you're not actually getting the right experience, and that you need to sit in the studio. But the other part-
Richard Kramer:
Buy expensive headphones.
Jessica Powell:
That's right. But that second part, I would say, is also... again, we're focusing a lot on music, but from where I sit, I love the idea that Audioshake can be used to improve or move forward a lot of different consumer audio experiences. There are also a lot of really cool things that can happen on the accessibility side when you're able to split audio. For example, we work with a company doing haptic technology that makes it possible to feel music through different surfaces and textures. Think about that from a concert-going experience for people who are hard of hearing or might have certain sensory processing disorders. Or Amazon, I think just last week, announced something where you would be able to boost the dialogue in a film. They were announcing it for a couple of films that I think they had multi-track audio for.
But I looked at that and I was just like, "Well, you should just use Audioshake on that and do it for an entire catalog, or the top 1% of the catalog." Because then, all of a sudden, people who are relegated to having to rely too heavily on subtitles, sorry, captioning, not subtitles, which can also be really hard because those move really fast, could instead have the vocals boosted for them. So there are all these things that, at the day-to-day level, we aren't going to think of as sound separation, and you're not going to know Audioshake's involved, but some of those use cases to me are the most inspiring.
In the same way that one of the things that really struck me being at Google during the shift from desktop to mobile was a friend who was visually impaired and worked on accessibility at Google pointed out to me, he's like, "The mobile is actually a much more accessible interface because so much of it is run by your voice." And that's actually a place where we're on the same footing. And if you're designing for voice, you're also designing for people who are visually impaired, which is something that had never occurred to me. But when I thought about it that way, I was like, "Oh wow, that's a huge paradigm shift." And I think that accessibility is something that we don't talk a lot about in technology, but a huge swath of the population and all of us at some point are going to be affected by our ability to consume, listen, engage, interact with content and might face barriers in doing that.
Will Page:
Second of three questions, quickfire again. If we go back to Drake and The Weeknd, something that your article really inspired in me, and I'm going to get the source data to prove it and work with you on this, is: shock, horror, gosh, there have been 20 million streams of this fake AI Drake and Weeknd track. And the first thing I went to in my brain was, what if that was 20 million people producing 20 million streams, each person streaming it once and then thinking, "Meh, not that great," and then going back to the original Drake and Weeknd work and streaming it all over again? There's got to be something here, which is, from threat to opportunity, what Audioshake can do, what AI music can do, is drive catalog, re-engage your audiences with those songs which you've forgotten about. Your thoughts?
Jessica Powell:
Yeah. As I wrote in that Substack post, it's a very similar logic to remixes. When remixes first started to come out, a lot of the response was to shut that down; it still is on a certain level. But the argument that DJs have long made is that remixes extend the lifecycle of the original song, they extend the relevance of the artist in the conversation, and all of that drives streams and engagement back to the original track. So would not that potentially be the same with AI covers as well? And if you buy that argument, I'm not talking about any of the legal parts, I'm not talking about any of the other components that I think are all relevant, I'm just saying if you think about it from a market perspective, from a consumption perspective and its interaction with the original track or the original artist, I think there's probably a lot of truth to that parallel.
Then the question it takes you to is, how good is that remix ecosystem today? It actually is terrible, right? People will go and create remixes: they can't get ahold of the stems, so they'll rip them, they'll create the remix, they'll do different things to that remix to make sure it's not detected by the content recognition systems, and they'll upload it to the different streaming platforms. Most likely it will not get detected. Most likely it will not get a ton of streams, because that's just how it works. But if you are lucky and your remix gets a lot of attention and goes viral, then what might happen is that someone at the label will notice it. They will come in and they will claim that track, which is fair, because you never had permission to create it in the first place. You might get some payment for it. Quite likely you'll be told that you can create a remix for them on spec.
And so it's actually not great for anyone. It's not great for the artist because they're not getting a cut of every single remix that used their track. It's not great for the label because they're just having to deal with it as they spot it. And it's not great for the remixer because great remixes are art, right? And they should be compensated for that. So we can talk about remixes or we can talk about AI covers. I think the same problem exists across both, which is how do you create a way for people to do these kinds of things that they're already doing, but to do it in a way that actually makes sure that artists are paid. I think that's all possible, but there has to be an industry willingness to build that.
Will Page:
Last one for me, then over to Richard for Smoke Signals. I have an idea for a startup I want to do. First thing I'm going to do is finish your book, then I'm going to come to you for careers counseling to make sure I don't try to do it. But to give away some business secrets, it's based around the idea of adaptive content. Think about working at Spotify with the running app, where you picked the music you wanted to run to and the music would adapt to your pace of running; it's in that space. And one thing that I saw with gaming companies in Seattle, and Richard alluded to this, is the idea that you could be in a sort of car chase game, say a Grand Theft Auto type thing, listening to a song as you're driving your car. Then you come off the motorway and you enter a nightclub, and the same song could change as you enter the club. And then perhaps there's another sequence where you break into an office, and the song changes again for the office sequence. So it's an adaptive soundtrack. Do you think that's where Audioshake could find itself landing some big hits?
Jessica Powell:
Oh, absolutely. We already work with a couple of gaming companies that are doing exactly that. And that was my first, short answer.
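For readers who want to picture the mechanics, here is a hedged sketch of the adaptive-soundtrack idea Will describes: one song's stems are remixed on the fly as the game scene changes. The stem names, scenes, and gain values are illustrative assumptions, not any particular engine's or Audioshake's API.

```python
# Hedged sketch of an adaptive soundtrack: per-stem gains follow the game scene.
# Stem names, scenes, and gain values are made up for illustration.

STEM_GAINS_BY_SCENE = {
    "car_chase": {"drums": 1.0, "bass": 1.0, "vocals": 0.3, "other": 0.8},
    "nightclub": {"drums": 0.9, "bass": 1.0, "vocals": 1.0, "other": 0.6},
    "office":    {"drums": 0.2, "bass": 0.4, "vocals": 0.0, "other": 1.0},
}

def crossfade(current: dict, scene: str, step: float = 0.1) -> dict:
    """Nudge each stem's gain toward the scene's target so the song morphs
    smoothly instead of cutting when the player changes location."""
    target = STEM_GAINS_BY_SCENE[scene]
    return {stem: gain + step * (target[stem] - gain) for stem, gain in current.items()}

# Example: the player leaves the motorway and walks into the club.
gains = dict(STEM_GAINS_BY_SCENE["car_chase"])
for _ in range(20):                       # called once per audio tick
    gains = crossfade(gains, "nightclub")
print({stem: round(gain, 2) for stem, gain in gains.items()})
```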
Richard Kramer:
Okay, well let's-
Will Page:
That's wicked.
Richard Kramer:
Let's cut it there, because we need to go to our favorite part of the show, Smoke Signals, where we ask people to give us the couple of things that are the smoke before the fire, the uh-oh moments with all this hype and hysteria, bubble trouble, especially around AI right now, where you overhear claims or terminology that just makes you face-palm cringe. So can you give us a couple of things in this field of AI and music that you just absolutely wish you could expunge from the record, if you had the magic eraser?
Jessica Powell:
I just wish everyone would start... It's just really funny, the number of companies that are like, "And we use AI, and we use AI." We all use AI. Let's get over it. I know this is very rich coming from a woman whose company is called Audioshake.AI, but in my defense, that was a $1 domain and that's why it exists. Because AI is in everything, and unless it's really core, deep research tech, you probably don't need to say it. And probably none of us, even those of us doing deep tech, will need to be saying it in a couple of years anyway. It's sort of silly.
Richard Kramer:
Okay. Another one with respect to the music industry perhaps, the kind of thing that just makes you shake your head and go "Uh-huh."
Jessica Powell:
Certainly a year or two ago I would've said Web3, not-
Richard Kramer:
We said it a year ago.
Jessica Powell:
In fairness, I think there's a lot of cool use cases for Web3. There definitely is. I just think it was very overhyped and that there-
Will Page:
Do you know that McKinsey reckoned that the metaverse could be worth the GDP of Japan?
Jessica Powell:
It could be, so could I. So could you. Sure, anything's possible. Anything's possible in the metaverse.
Will Page:
And when you talk about AI here, AI there: we asked Seth Gerson from Survios, a very interesting gaming company, about his smoke signal. He talked about when you hear "metaverse of" something, and his worst one was a metaverse of pets.
Jessica Powell:
Oh wow.
Will Page:
A cringe factor of 10. Jessica, this has been a great podcast in terms of the balance. Looking at the problems in part one and identifying solutions in part two is always a great narrative to work with. But the one big takeaway I'm getting from this, and I want to stress this: music, which you and I are in, and finance, where Richard is, but music is a microcosm for everything else. We're the first thing that always gets disrupted, Napster back then, AI today. Everybody else feels that disruption further down the line. But we are the bellwether, we're the canary in the coal mine. And I just have this image that you've given me throughout the conversation: when you buy a suit, you can buy it off the peg or you can have it bespoke, and there's a decision about that. Bespoke's going to cost you more and take longer. Off the peg's going to cost you less and it's quicker. But it just feels like what AI is going to do is bespoke everything to your needs.
I don't know whether that's horseshit, and I apologize if it is, but I just feel what's happening here is that content is going to be customized to everyone's needs through AI. And that, for me, is a positive note. So Jessica, I'm looking forward to reading your book, I love watching your TED Talks and reading your Substack, and I'm going to be your first book reader and Substack subscriber. But thank you for giving up your time over in the Bay Area, on Pacific time, to join us for this podcast. It's been great having you on. Thank you so much.
Jessica Powell:
Thanks so much for having me.
Will Page:
If you're new to Bubble Trouble, we hope you'll follow the show wherever you listen to podcasts. And please share it on your socials. Bubble Trouble is produced by Eric Nuzum, Jesse Baker, and Julia Nat at Magnificent Noise. You can learn more at bubbletroublepodcast.com. Until next time, from my co-host Richard Kramer, I'm Will Page.