This week, we dig into some of the hype around AI, with financial markets data powerhouse Bloomberg's announcement of BloombergGPT, a 50-billion-parameter large language model purpose-built from scratch for finance. Is this the needle mover AI has been waiting for? We're bubbling on the use cases: sentiment analysis, news story summaries, bespoke research. What does this mean for our pen pals, the sycophants and stenographers in that echo chamber of Wall Street?
Richard Kramer:
Welcome to Bubble Trouble, conversations between the independent analyst Richard Kramer, that's me, and the economist and author Will Page, that's him. And this is what we do for you: lay out the inconvenient truths about how business and financial markets really work.
This week we dig into some of the hype around AI, with the announcement from financial markets data powerhouse Bloomberg of BloombergGPT, a 50-billion-parameter large language model purpose-built from scratch for finance. Is this the needle mover AI has been waiting for? We're bubbling up on the use cases: sentiment analysis, news story summaries, bespoke research. But for Bubble Trouble listeners, what does this mean for our pen pals, the sycophants and stenographers in that echo chamber of Wall Street? Will has some thoughts, and as someone who's practiced this craft for close to 30 years, I'll have my own reactions. Back in a moment.
Welcome back to Bubble Trouble. Will, why are you so excited about AI and ChatGPT, and even about it coming to Bloomberg, that incredibly expensive data terminal with a user interface from CompuServe circa 1994, which is sort of the arcana and the messaging platform that the financial markets work off of? I mean, what do you know about this Bloomberg stuff and why do you care?
Will Page:
Just to fact-check and approve that statement: I do believe the interface predates Tony Blair coming to the leadership of the Labour Party. So yeah, it does go back a bit.
Richard Kramer:
Yes.
Will Page:
I think we're onto a big topic here and, for our listeners, I think it's one that we should pursue. I've got some great guests in the field of AI and ChatGPT lined up for the future. But for me and you, just to go toe to toe for a minute here, this is where I get interested in it, because we can debate till the cows come home the use cases for ChatGPT. Is it the big thing that people are saying, or is it just going to fizzle out, a nice idea before we go back to Google search, back to human curation? So I want to take you back to when Mark Thompson was the director general of the BBC, and he gave this wonderful press conference, which I use as an example when I'm teaching students economics. He realized there were layers and layers of duplication in BBC news journalism and he was going to make huge cuts.
So Mark Thompson, the director general of the BBC, obviously a very important role in British society, is giving the press conference, and there were 14 BBC microphones at that press conference, asking how he could say there's duplication, how he could justify these cuts. And that's the visualization I want to sow in the minds of our listeners: one person has one story, and his own organization has 14 microphones to receive that story. Now, when you take that downstream and you think of 14 news departments writing up what has to be the same story, I think you have an echo chamber. So take that BBC example and flip it to your world, Richard. If Apple produces quarterly results, there's only one story. Do you really need, and I'm sure you would justify the case for this, but I'm asking you, do you really need all these analysts to write up what is essentially the same story, or is there a role for ChatGPT to take some of that heavy lifting and automate it?
Richard Kramer:
So let me give you two concrete examples of what happens when a company like Apple, the largest market cap company in the world, reports. First of all, there is already technology to robo-journalize, if that's the right phrase. So Reuters will ingest those results, repeat some of the talking points from the headlines of the company, write a sentence about whether they beat or missed expectations, and publish it in somewhere around five seconds. So when you think of those 14 mics at the BBC, some of them are trying to get the instant news, the hot take; others are writing for a different sort of audience and trying to understand the long sweep: how we interpret these comments, how the government might react to those comments as the funding source of the BBC, what it means broadly for public service broadcasting. So there are different audiences. Now, among all those 50 analysts that are covering Apple, some will take a very short-term view, and I call that examining the lichen on the bark, forget about missing the wood for the trees.
Others will try to step back and say, "Well, let's look at the pattern over the last eight or 10 quarters, or 2, 3, 5 years." Is this quarter unusual or does it simply fit a pattern that we're very comfortable with? So whether it's the timescale people look at things on, whether it's the details they choose to pick out, whether they're applying AI for sentiment analysis to see if the CFO's voice wavered when he was answering a question, or whether the CEO used fewer of his famously praiseworthy terms for how wonderful, amazing, incredible, groundbreaking, et cetera, the products are, people are listening for different reasons, pulling different things out, and they communicate to different audiences. So I'm not sure those 14 mics will all go down to one, because I'm not sure it was the same story for each of those 14 journalists. They may have been, like the blind men in the classic example, each feeling a different part of the elephant and finding something completely different.
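To make the robo-journalism Richard describes concrete, here is a minimal sketch of how an automated beat-or-miss headline can be generated from reported figures versus consensus; the company, the numbers and the function name are hypothetical placeholders, not how Reuters actually implements it.

```python
# A toy illustration of "robo-journalism": ingest reported figures, compare them
# to consensus expectations, and emit a one-line story. All inputs are made up.

def earnings_headline(company: str, metric: str, reported: float, consensus: float) -> str:
    """Return a one-sentence beat/miss headline from reported vs. expected figures."""
    delta_pct = (reported - consensus) / consensus * 100
    verdict = "beat" if reported > consensus else "missed" if reported < consensus else "met"
    return (f"{company} {verdict} {metric} expectations, reporting "
            f"${reported:.2f}bn vs. ${consensus:.2f}bn consensus ({delta_pct:+.1f}%).")

# Hypothetical example: these are not real Apple figures.
print(earnings_headline("Apple", "revenue", reported=97.3, consensus=94.0))
```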
Will Page:
Interesting, interesting. So you're defending your industry. I didn't expect a turkey to vote for Christmas here. Let's stay at a high level here. Let's stay at a high level. Give me your take on ChatGPT. You've heard the hype, you're studying this area. What's your interpretation of how far we can lean into the wind of this new future?
Richard Kramer:
So when we think about the onset of AI tools like ChatGPT, I think AI is going to impact a huge range of areas, and ChatGPT is just one example. The classic example that's been trotted out since Microsoft had its attack on Google is how it would impact search, something we do billions of times a day; all of us spend loads of time searching for stuff, whether it's on maps, within Google itself, or on websites. I think search needs to be divided between what you would call fly paper and a trampoline.
Will Page:
[inaudible 00:06:34].
Richard Kramer:
And fly paper is something you land on and you stick. And the reality is that two thirds or more of search queries are what you would call zero- or one-click. What year was JFK assassinated? Well, the answer is 1963. What's the weather today? Well, it depends on where you are; down in Spain it might be 18 degrees and sunny. Who plays number 10 for the Scottish rugby team? These are searches for which there is either one answer or a launching pad to get to the place where you want to find your answer.
Where can I buy Will Page's book, Pivot? Well, Amazon has probably paid for the search query to take you to them. So you have to distinguish between where we're looking for a simple answer, which is the majority of search queries, and where we're looking for a conversation. And it's where we're looking for a conversation, probing more and more, that something like ChatGPT really comes into its own. I think the grand overarching statements that it's going to revolutionize search need to be contextualized by understanding that a lot of the time you ask someone a question and you just want an answer. What's today's special, Page? You're not going to have a long disputation about how the chef sourced the ingredients. You just want to know, is the burger good?
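As a rough illustration of that fly paper versus trampoline split, here is a toy heuristic for sorting queries into "one answer" and "conversation" buckets; the keyword lists are invented for the example and are nothing any search engine actually uses.

```python
# A crude sketch of the fly paper / trampoline distinction: guess whether a query
# wants a single factual answer or an open-ended conversation. Heuristics are toy.

FACTUAL_STARTS = ("what year", "what's the weather", "who plays", "where can i buy", "when did")
CONVERSATIONAL_HINTS = ("why", "how should", "compare", "explain", "what do you think")

def classify_query(query: str) -> str:
    q = query.lower().strip()
    if q.startswith(FACTUAL_STARTS):
        return "fly paper: one answer or one click, then you're done"
    if any(hint in q for hint in CONVERSATIONAL_HINTS):
        return "trampoline: a back-and-forth where a chat model earns its keep"
    return "unclear: probably still a simple lookup"

for q in ["What year was JFK assassinated?",
          "Explain how the chef sourced the ingredients"]:
    print(q, "->", classify_query(q))
```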
Will Page:
I love the fly paper and trampoline analogy; that goes into your greatest hits on this podcast. But I do think you're missing a slight trick there. What's it going to do to search is one question; will I need to search is another question. That is, can ChatGPT and new developments in AI reduce the need to go to Google in the first place? Can I just probe your thoughts there? We can separate the two components out: yes, it can affect different types of search, and not all searches work the same, I get that. But what about the need to search? What if ChatGPT reduces the actual need to go to the Google landing page and search in the first place?
Richard Kramer:
There will definitely be new modes of information gathering that'll come up. And if you go back eight or nine or 10 years, to when we had the introduction of Apple's Siri and Amazon's Alexa, everybody thought the trend was going to be voice assistants.
Will Page:
I know. Alexa, take people to Bubble Trouble.
Richard Kramer:
[inaudible 00:08:50] Alexa would have all these skills that they would develop, and you'd be able to do a little bit more than ask what the distance to the moon is, or set a timer, or play a bit of music, as long as you didn't want to play it from Spotify, because Spotify and Amazon decided not to get along. So you have frequent technologies that spring up and threaten to change the way we behave, but the reality is human behavior changes very slowly, and a lot of people aren't necessarily comfortable speaking to a device which they may be concerned is listening to them all the time.
Now, will clever students very quickly figure out that ChatGPT could write their term paper for them? Absolutely. They're doing it already. Will universities employ software that tries to figure out whether the students actually got the term paper written for them by ChatGPT or wrote it themselves? Absolutely, they'll be doing that as well. So it's this cat and mouse game in terms of introducing new technologies. But again, the human behavior is slow to change. Google is a verb for your kids and my kids' generation and they will certainly embrace new ways of finding information where they need to. But yeah, I think it's going to be a very slow evolution. It's not going to change with a snap of a finger overnight.
Will Page:
I hear it. I hear it. That Alexa thing, that's a real Bubble Trouble episode in the way that Amazon seems to be cashing in its chips and taking a loss on that bet. I remember going to an AI conference where they were talking about the advancements in Alexa and how accurate it was at predicting the weather, and I raised my hand and said, "What if I just looked out the window, would I have a better bet?" And the AI scientists hadn't actually considered the possibility of looking out the window to assess the weather yourself in their experiment. Nevertheless, what I wanted to do now is come back to the core premise of this podcast. Is there a role for ChatGPT in the echo chamber of understanding financial results? Bloomberg clearly thinks there is. I want to get your take on that in a second, but can I first put forward a framework for this discussion: how can you frame the current state of machine learning, of AI, of ChatGPT?
I see it as depth, not breadth. For narrow ideas, for narrow queries, we have a use case. For broad general search, perhaps we're going to have to wait a couple of years. But the way I see it is a two-by-two matrix: can you see the errors, or can you not see the errors, in the response you've been given, crossed with, does the error rate matter a lot or does it matter less? So just to repeat: are the errors visible or not, and do the errors matter or not? Now, just to throw that two-by-two matrix at you, Richard, any thoughts on that as a way of applying this to our conversation?
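Will's two-by-two can be written down as a simple decision rule: cross "are the errors visible?" with "does the error rate matter?" and let the quadrant decide how much human review an AI output needs. The quadrant labels and recommendations below are illustrative assumptions, not anything Will spells out on the show.

```python
# A sketch of the two-by-two error matrix as a review policy.

def review_policy(errors_visible: bool, errors_matter: bool) -> str:
    if errors_visible and errors_matter:
        return "spot-check everything: cheap to catch, costly to miss"
    if errors_visible and not errors_matter:
        return "automate freely: mistakes are easy to see and harmless"
    if not errors_visible and errors_matter:
        return "keep humans in the loop: the dangerous quadrant"
    return "automate, but sample the output periodically"

print(review_policy(errors_visible=False, errors_matter=True))
```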
Richard Kramer:
Well, I'm taken back to something that happened probably, I want to think, maybe a decade ago, when there was one of these flash crashes on NASDAQ and, all of a sudden, in the blink of an eye, it hit a stock which is famously stable, like Accenture, a 700,000-person consulting business, I think, that famously delivers its results within tiny fractions of where the expectations were, because they have a very predictable business; with consulting engagements they pretty much know how much work they've got every quarter and so can predict it very well.
But in that flash crash, all of a sudden Accenture fell some 40% in a matter of minutes. Now, at that point, human beings, portfolio managers and analysts, would look at the stock and say, "Well, what the heck is going on?" They very quickly deduced that there hadn't been a news announcement from the company, and it wasn't like all the shareholders chose to sell at exactly the same time; maybe there was something going on with a bug in one of the program trades, where the share price was quoted immediately at a tiny fraction of, or 40% below, where it normally trades.
And so yes, someone could see the errors, and in that case the error rate mattered a lot. What happened is the stock quickly recovered, and of course most of the people actually holding shares didn't rush to sell in a panic because they thought something was going wrong. Now, would AI have helped there, helped uncover the cause of that flash crash? Possibly. But the human judgment required, in taking the pause and waiting to see what was going on, might be just as valuable, and how to capture the value of that human judgment within AI is going to be the real challenge, because we do have brains that process at about two to the thirtieth.
While AI can, as you say, respond to prompts very quickly across very narrow sets of information, it doesn't have the ability to contextualize and think very broadly yet. That's artificial general intelligence, and that seems to be a long way off. Where I think AI and programs like ChatGPT will have the greatest impact is in automating the sort of dull scut work that the markets have to go through. For example, entering all the figures from the balance sheet into the model: that can be done, that can be filled in instantaneously with AI.
Now, as an analyst, I will tell you that looking at those numbers, eyeballing them, comparing them to what was going on last quarter or a year ago, looking for minute changes, that's something that takes human judgment. That's something I don't want AI to do, because I find it valuable to do it myself and to feel like I know what's going on in the numbers; I don't just want them auto-filled in my model. A lot of other people will feel differently and will feel like they get a speed advantage by filling in the model, and then they can look at the numbers later if they deem it necessary. But I think it's going to be a situation where I would flip your matrix and say, "Are you looking to spot the errors-
Will Page:
Interesting.
Richard Kramer:
... because you decided that the error rate matters a lot?" Or do you not really mind if there are a lot of errors, because you don't think they matter at all? So if there are minute changes on the balance sheet of a company, well, you know what? Who cares? I don't need to ask questions about that. That doesn't really matter, because there's a bigger picture that I'm looking at in the company's earnings that doesn't have anything to do with the balance sheet.
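For readers who like the mechanics, here is a minimal sketch of the eyeballing step Richard wants to keep for himself: compare this quarter's line items to last quarter's and flag anything that moved by more than a threshold. The line items, figures and threshold are made up for illustration.

```python
# Flag quarter-over-quarter line-item changes that exceed a threshold.

def flag_changes(prev: dict, curr: dict, threshold: float = 0.05) -> list:
    """Return line items whose quarter-over-quarter change exceeds the threshold."""
    flags = []
    for item, prev_val in prev.items():
        curr_val = curr.get(item)
        if curr_val is None or prev_val == 0:
            continue
        change = (curr_val - prev_val) / abs(prev_val)
        if abs(change) > threshold:
            flags.append((item, f"{change:+.1%}"))
    return flags

# Hypothetical balance sheet extracts, in billions.
last_q = {"inventory": 4.9, "accounts_receivable": 28.2, "cash": 23.6}
this_q = {"inventory": 6.3, "accounts_receivable": 27.9, "cash": 24.0}
print(flag_changes(last_q, this_q))  # inventory jumps roughly 29%, worth a question
```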
Will Page:
Oh, you're looking for the errors. It reminds me, as a first-time author, of the question: did you employ a fact-checker? And if you didn't, why? Because you didn't care about the errors? I certainly did, so I employed a fact-checker. So yeah, I hear it. Before we get to the break, a few more examples of where I think this can dive in, and you tell me whether I need to dive out or not. Firstly, the editorial issues that go with these analyst notes on that Bloomberg 1994 interface. Surely we could be seeing something akin to a Spotify playlist here, where ChatGPT could tag and link these notes together. I'm interested in a bunch of things; I don't want to read a bunch of independent PDF files. I want you to grab the key synthesis from each and piece them together, beads on a string, which is what a playlist is.
Secondly, what about enriching the notes, illuminating the links, cutting through a lot of the clutter that's in those notes, using ChatGPT? I've got to get this in a tweet-length sentence, not a 95-page report. And thirdly, what about a Copilot-like experience here, where you can resurface information from past notes? I'm just thinking about enriching the analyst-notes experience for one primary reason: I always remember meeting the chief operating officer of a large bank, a very large bank, who said to me he spent $110 million a year on research and 97% of it went unread. Surely, by enriching the notes experience, more of what gets produced gets read, at less cost.
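As a sketch of the "playlist of analyst notes" Will imagines, here is a toy version that tags notes and assembles a bespoke reading list from a reader's interests; the note titles, tags and reader profile are hypothetical, and this is not a description of anything Bloomberg has built.

```python
# Tag each note with tickers and themes, then rank notes by overlap with one
# reader's interests. All data below is invented for illustration.

notes = [
    {"title": "Apple FQ2: services margin inflects", "tags": {"AAPL", "margins"}},
    {"title": "Ad market check: CPMs still soft", "tags": {"META", "GOOGL", "advertising"}},
    {"title": "Semis inventory digestion update", "tags": {"semis", "inventory"}},
]

def build_playlist(notes: list, interests: set) -> list:
    """Rank notes by overlap with the reader's interests; drop the irrelevant ones."""
    scored = [(len(n["tags"] & interests), n["title"]) for n in notes]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(build_playlist(notes, interests={"AAPL", "advertising"}))
```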
Richard Kramer:
Yeah. And one of the big questions will be: do you have access to the data sets? The quality of the answers you get from any AI program will depend on the quality of the data being fed into it. So if you have access to all of one specific large investment bank's research, you can tag all of the things that they're saying and/or got wrong. You can say, "Can I please look at all the recommendations that haven't worked, as opposed to the ones that have, and try to understand why it is that they rate 90% of stocks as buys when half the stocks in the market underperform."
But to really understand the entire conversation of the market, who was most accurate, who saw issues a year or six months ahead of them coming to the fore, you'd really need access to the work of all 50 analysts covering the companies, and that may not be available to you; you'd also need to be able to put all of that data into a machine-readable, similarly usable format. So if there's a small independent analyst that you're not subscribed to, well, maybe they have the best information or some of the best insights, like I'd say our firm does, but they may not be as readily available if you're not subscribing to their work the way you're being given the conflicted investment bank reports for free.
Will Page:
So I hear it. Last question before the break. Just extend that out a little bit to where I think we're on different sides of the street in this conversation, which is back to that point about search. You gave a very eloquent account of what this could do to search. My question abstracts the situation and asks, "Will you need to use that Google search page at all?" Similarly here, in terms of what this can do for analyst notes, my question is: most of those analyst notes are going unread, so you could put on the shelf the most perfect piece of equity analysis of your career, but if no one's picking it off the shelf to read and act on it... That's my issue.
So the last question is: if you take my world of media and entertainment and your world of finance, and how we have albums and playlists, playlists dedicated just for you, do you see a role for ChatGPT which means I don't have to visit the Goldman Sachs note, and I don't have to visit the HSBC note, and I don't have to visit the Credit Suisse note, if Credit Suisse is still there, that is? I just get my notes; everything is bespoke, just for my personal taste. A kind of Discover Weekly of analyst notes. Do you think that's what Bloomberg are busy cooking in their kitchen?
Richard Kramer:
Look, Bloomberg has its own analysts and wants to surface its own news stories and its own analysis of things, and may have a conflict of interest in surfacing that information versus the banks'. And the banks, of course, all publish that research as advertisement for their financial services; they want to sell companies. So everybody's got their own ax to grind, and everybody has the same problem of making their work visible to fund managers, who are bombarded with choices of analysis and have to choose whether to go with unconflicted work from independent research firms, or look at the banks' work because they feel the banks are very close to the companies, because they might be working with them on financing deals, or work with Bloomberg, because they are very familiar with the user interface and are logged into it every day. So there are lots of reasons.
I would throw it back to you: do you believe that AI has been able to assemble a better playlist or a better mix than the Will Page mix on Mixcloud, which has got 37,000 listens? Do you think AI can understand all the different genres of music you want to reflect in your own interest in music and assemble them better than Gilles Peterson or some DJ that you've listened to and respected for 30 years? I think there's still a big gap there. And tell me, how do you think it's going to work in the music industry with playlists?
Will Page:
Let me close out part one with the way I answer that question for music, but apply it to finance, as we meet each other in the middle of the road. The way I always describe the role of the algorithm, of AI, in curation, compared to that of the human playlist creator or even the human DJ, is that in a world where you have so much information, let's call it financial information, the algorithm grabs the seeds in a hand far bigger than any human hand and throws those seeds much farther than any human could throw them, and then the human can observe which of those seeds take root.
I think that's a really nice way of comparing the role of machine learning versus human instinct: we are dealing with a quantity of information, a supply of information, that we've never dealt with before, and managing it needs a balance of both. It takes two to tango. You need the machine learning to throw the seeds out, and you need the humans to work out why they're taking root or why they're not, and what the causes and consequences of that are. We'll come back in part two and get into the subject further. I sense some compromise in our positions here. Let's see if ChatGPT can write up the transcript notes for part two. Back in a moment.
Richard Kramer:
We're back with part two of Bubble Trouble where Will Page and I are debating the impact of AI and programs like ChatGPT on the worlds of finance and music and entertainment. And I want to throw Will down the rabbit hole for a moment and ask him to elaborate a little bit more on how ChatGPT might impact his beloved industry of music. How long is it going to be until ChatGPT writes a top 10 hit, Will? How long is it going to be till AI musicians take over from the real thing and play that funky horn section like you never imagined a human being could? What do you think?
Will Page:
Well, firstly, a shout out to the pioneer in this field, a chap called Ed Newton-Rex; his website by the same name and his TED lecture are a must-watch for our listeners, and he's going to be coming on the show in a couple of weeks. So we'll have the expert, the global expert, on this topic to handle that question. But I am reminded of two things, one on fraud and two on the role of drum machines; let me break them down.
In the 1960s there was a famous record label exec who signed an artist called Jimi Hendrix when he was here in the UK. Jimi Hendrix, by the way, came to Britain to find his rhythm section. And back then the exec said, "Once we get computers to replace the role of drummers, then we're going to have a music industry," and that was 1966. That's quite an interesting claim.
Then came the drum machine. We still have drummers, but we have a lot of music which is produced using drum machines. Now, that took a good few decades; how many years or months will it take to get us to the point where the other instruments can be created by machines as well? I think that's an interesting way of putting the revolutionary aspect of AI, the knee-jerk reaction to it, into context. So let's see where the computer takes us. Where I think we do have a huge issue just now is with stream fraud. Just to recap, in our industry, and I want to make this relevant not only to music but to many other industries, there's a big pot of cash that needs to be allocated every month from streaming services, and if you can dictate where that streaming activity goes through illegitimate means such as fraud, you can grab a bigger share of that big pot of cash.
Now, there are three types of fraud. There are click farms: hundreds, thousands of people dedicated to streaming music, often in countries like Indonesia and the Philippines. There's account hacking, which is where I hack the account of Richard Kramer, which is dormant, and deliberately dictate its monthly streaming activity towards music that I get compensated for. But the third one is the interesting one, which is carbon copies. If Lana Del Rey's new album, which is fantastic, gets out there, but a leak happens, or somebody gets access to those files before release and creates a carbon copy of those songs, and the algorithm doesn't recognize that this one is by a human being with a pulse and that one is a carbon copy generated by AI, then the playlist additions and the streaming activity can go in the direction of the carbon copies.
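A minimal sketch of one way the carbon-copy problem Will describes could be caught: compare a new upload's fingerprint against fingerprints of known releases and flag anything suspiciously close. The feature vectors and the threshold here are toy assumptions, not a real fingerprinting system run by any label or streaming service.

```python
# Compare a new upload's toy feature vector to known releases via cosine similarity.
import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

known_releases = {"Lana Del Rey - track 1": [0.91, 0.10, 0.42, 0.33]}
new_upload = ("LanaDelRey - track 1 (re-upload)", [0.90, 0.11, 0.43, 0.32])

for title, fingerprint in known_releases.items():
    similarity = cosine(new_upload[1], fingerprint)
    if similarity > 0.98:  # threshold is an assumption; real systems tune this carefully
        print(f"Flag '{new_upload[0]}' as a likely carbon copy of '{title}' ({similarity:.3f})")
```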
I think this carbon-copy language, getting again out of my niche area of music, which I've got to do for your audience here, is really important, because you can apply that carbon-copy aspect even to your industry: who actually wrote that analyst note, who is the source? I guess we're actually leaning into the language of deep fakes here, Richard.
Richard Kramer:
Well, when I think about that perversion, if you will, of the revenue stream between the listener and the artist, you've just given me three clear ways in which money that should flow to the Ed Sheerans or the Harry Styleses of the world, the Lana Del Reys or the Billie Eilishes, the artists who have created these iconic new expressions of music, is being siphoned off by nefarious actors. And shouldn't AI have a big role in stopping that?
Will Page:
Oh, I see your thinking.
Richard Kramer:
Why can't a company like Spotify, which has proudly launched its own AI DJ, why can't it unleash AI on the 100,000 tracks that get uploaded every, what is it, every week?
Will Page:
Every day?
Richard Kramer:
Every day. Separate the wheat from the chaff and find all of those carbon copies, because they don't have the blockchain signature of the large record label that these artists are signed to, and just knock them off the system. Aren't they getting any better at playing whack-a-mole and getting rid of these illegitimate diversions of money away from these artists that we cherish?
Will Page:
It's a commendable point, and I'm going to use the terms poacher and gamekeeper here, because what you've inspired in my head, at least, is this: why is it that the illegal application of these technologies always seems to move first? We know we have a problem in our industry of carbon copies getting onto the platforms and scraping away streams and dollars that shouldn't really belong to them. But your point is, well, why can't AI become the gamekeeper and reduce this type of activity? That's interesting, and it reminds me of somebody who's hopefully going to be on our show next week, Lucky.
His name is [inaudible 00:26:52] Lucky, good luck on that. But his company is Muso.AI, and he was saying to me that the application of AI in something like journalism, which is not that dissimilar from the person writing the analyst note, does feel like a bit of a Napster moment: is it the medium or the message? Do I want Richard Kramer's take on this company or this financial development, or do I like Richard Kramer's take but not care who wrote it? Is it the medium or the message? And I think that's where, similarly, you need, not regulation, I don't think that's the right word, you just really need to think through the causes and consequences of having this technology enter a market where human curation is at stake, be it you writing up an analyst note or me writing up a song.
Richard Kramer:
Well, if you look at the $800 billion digital ad industry, there's an enormous amount of ad fraud out there. There are sites called made-for-advertising sites, which are created just to churn out numbers of impressions, and they get mixed in with the billions of impressions that get bought legitimately on the larger sites in the world, whether it be any of the Meta Facebook properties or Google or any of the large publishers. And yet, because of this quality-of-data issue, we haven't been able to develop AI programs that say, "Well, hang on a second, you shouldn't pay for those ads on those sites, because no one's actually watching them. No one's actually seeing them. Those are sites entirely devoted to generating impressions, not to attracting audiences." And the amount of ad fraud should be large enough, since it's in the billions, to motivate people to use those sorts of AI programs.
But it's hard work, and there is always something I call FOFO, fear of finding out. No chief marketing officer wants to find out that when they went to their agency and spent millions and millions of dollars on advertising, a big chunk of it ended up running on these bogus sites that no one watched. So in the same way, for a Spotify it might be extremely embarrassing to discover that when Lana Del Rey drops her new album, some clever fraudsters have already cloned it, and half the time when people search for Lana Del Rey but mistakenly forget to type a space between Del and Rey, there's a Lana DelRey that points them to an entirely different artist's work, if you want to call it that, which is a carbon-copy clone of Lana Del Rey's songs, but paying royalties to somebody other than Lana. And I ask: why isn't AI being developed to help the good guys as opposed to enable the bad guys?
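Here is a toy sketch of catching the specific "Lana DelRey without the space" trick Richard mentions: normalise artist names so that look-alike spellings collide with an established act and get held for review. The catalogue and the check are invented for illustration, not a description of Spotify's actual systems.

```python
# Flag new artist profiles whose normalised name collides with an established act.
import re

def normalise(name: str) -> str:
    """Lower-case and strip spaces and punctuation so look-alike names collide."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

established = {"Lana Del Rey", "Billie Eilish", "Harry Styles"}
normalised_catalogue = {normalise(n): n for n in established}

def check_new_artist(name: str) -> str:
    match = normalised_catalogue.get(normalise(name))
    if match and name != match:
        return f"'{name}' collides with established artist '{match}': hold for review"
    return f"'{name}' looks distinct"

print(check_new_artist("Lana DelRey"))
```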
Will Page:
So this is great. Let's draw up another two-by-two matrix here; you're inspiring some ideas in my head. On one axis: in a market, are there efficiencies which AI can enhance, or inefficiencies that AI can solve? And on the other axis: are the incentives to do good outweighed by the disincentives of being exposed for how bad things originally were? So it's: is AI the poacher or is AI the gamekeeper?
And a lot of it is going to depend, like you say, on that chief marketing officer. The first thing a chief marketing officer has to do every financial year is retain their budget to do marketing. It's not to spend money better; it's to ensure they have money to spend. And is there a disincentive to say, "Geez, there is all this dead wood and I could actually do a far better job with one third of the budget I had"? "Can you just kick this AI sandbox experiment in the company to the curb, please? I don't want to know the answers; I don't want to know about the efficiencies it could bring."
Richard Kramer:
Well, it has always been the case in technology, as I've observed it for the past 30-plus years, that the leading-edge technologies are universally deployed first by the fraudsters and criminals. Across all media: the toolbars, if you remember those toolbars on AOL, how many of them turned out to be browser-intercept schemes and malvertising schemes; mobile games and mobile advertising; and all the utility apps that would fire off 50 ad calls in the background when they were supposed to be cleaning out the memory of your phone.
The fraudsters have always been on the bleeding edge of figuring out how to stay one step ahead of legitimate companies and siphon off a portion of the spend. And whether it's pirated video or pirated music, you name it, there have always been efforts, whether out of anarcho-syndicalism or sheer larceny, to break the grip of DRM and the music labels, as you know very well. So it's funny, because it comes down to incentives. The incentive for someone effectively trying to steal money is so much greater than the incentive of someone trying to prevent a small amount of their money being stolen.
Will Page:
I hear it. And maybe we should cite Adam Smith, who I'd love to say is a friend of the show, but he's long gone; he often talked about the smuggler.
Richard Kramer:
He's Scottish, so he must be a friend of the show.
Will Page:
But he talked about smugglers: is a smuggler an entrepreneur, or are they committing an illegal act? A smuggler captures a market where a market previously didn't exist. I always remember that Adam Smith came from Kirkcaldy, not far from a town called [inaudible 00:32:20], and when Scotland had its first ferry from [inaudible 00:32:23] to mainland Europe, I think it was to Zeebrugge in Belgium, the primary use of that ferry was not people going on holiday; it was people bringing cheap alcohol and cigarettes back to the Scottish mainland, doing exactly what-
Richard Kramer:
Nothing's changed in 200 years.
Will Page:
Doing exactly what Adam Smith had predicted from his hometown. Irony, come on in.
Richard Kramer:
So look, let's get back to your smoke signals. When you hear all of this incredible hype that we've had about AI in the past three to four months, since OpenAI came out with ChatGPT, since Microsoft embraced it, since it became a talking point on the lips of politicians and ordinary people, not just tech nerds, I want to hear: what are the couple of things that really make you go, "Pump the brakes here a little bit"?
Will Page:
Well, completely unscripted and thinking live on the hoof here. You've given me a lot of food for thought in this conversation, and my appreciation for that, which I'm sure our audience shares. But one thing that comes to mind here is the cottage industry of consultant culture. I'm sure every firm is hiring independent consultants just now to ask questions, and answer them, about how this technology is going to work. And there is an incentive in the consultant cottage industry to give as big an answer, as bloated an answer, as practically possible. So I am worried that firms might overpromise and overspend on this technology based on independent advice which was incentivized to produce that result anyway. So maybe just throw that one out: who knows how big this industry is going to be in terms of advising firms which haven't got a clue which way to look, which way they should be looking. Do you see that as a problem?
Richard Kramer:
Well, I see the one immediate practical application of AI as replacing the sort of scut work, if you will, at the low end of most firms, whether it's Indian IT outsourcing firms maintaining software packages for Scottish insurance companies that were written 20 or 30 years ago with a bunch of lousy old-style code. Well, you can teach an AI program how to code in Fortran or COBOL or one of these old languages, and let it do the code maintenance as opposed to having a team of people in India do that.
It could be the low-level auditors checking invoices at Deloitte or KPMG; there are literally hundreds of thousands of people doing this very boring scut work. I know people who have taken jobs as training consultants in AI for law firms, where rather than rewrite every commercial real estate contract from scratch, they'll have a repository of every contract the firm has ever written up, and they can use that to do the low-level drafting you'd otherwise be paying a junior lawyer or a paralegal to do. So it's going to start replacing tasks from the bottom, the simplest tasks first, that kind of scut work that someone's got to do as maintenance, the very base-level operational jobs in firms. That's what's going to get replaced first, not least because of the conflict of interest: senior management don't want to replace their own roles as CEOs or CFOs [inaudible 00:35:30].
Will Page:
Did I say something about turkeys voting for Christmas in part one of this podcast? Yeah, just building that point out. Look at the GitHub Copilot controversy: there were copyright lawyers wading into that debate at the early stages. But back to that visualization of digital disruption, of AI being like a rising tide at your ankles: is it going to come up further or is it going to settle at your ankles? What I'm hearing now from engineers back in the Valley is that it's proving enormously successful. I've got some engineers saying on record that they're two, three times as productive writing code thanks to GitHub Copilot. So from a threat to an opportunity: in comes this technology, and is it going to displace the role of engineers or is it going to make them even more productive? Is it going to force engineers to raise their game when coding versus what they were doing previously? I think that's an interesting application too.
Maybe just to round out smoke signal number one: we have the language of deep fakes, which begins with D; so does the word dark. It's a bit of scary terminology. I think what's going to be interesting is when we get the application of AI that gets it right when the humans all got it wrong. I think that's going to happen; the laws of probability suggest it will. It doesn't mean humans are now surplus to requirements, it's got nothing to do with that. I'd just be really interested in the knee-jerk reaction to learning that this AI machine picks stocks better than Richard Kramer, or this AI machine wrote better music than Will Page, or whatever the application, wrote better code than a coder.
I think we are going to get a headline, and you know how hungry for headlines journalists are, where ChatGPT got it right and the humans got it wrong. And my worry, the smoke signal from the heart here, Richard, is that we all jump on that bandwagon. What is that famous label exec quote? What's the most important form of transportation in America? The bandwagon, because everybody wants to get onto it. I worry about the bandwagon reaction when we have that one outlier example of AI getting it right and humans getting it wrong. What's your smoke signal?
Richard Kramer:
Yeah. Well, if there's one thing I'm concerned about, it's that we've already discovered that AI is prone to hallucinations. There was a very interesting article at the end of last week about the Washington Post and other leading publications, with journalists saying, "Well, hang on a second. We asked AI about certain topics and it cited Washington Post articles that didn't exist." It just made them up, which is a tried and tested method students have used for years and years: instead of citing sources, it's always easier just to make stuff up, stick it in, and see if no one notices.
Will Page:
I'm innocent until charged, denial, denial, denial.
Richard Kramer:
There is a risk that AI hallucinations are accepted as reality before they're debunked as hallucinations. We know AI will and can make stuff up and make it sound incredibly plausible. And we know that, in a world full of rank conspiracy theories left, right and center, stuff that sounds plausible, those sorts of hallucinations, is going to become accepted as fact. And my smoke signal, or concern, is that we'll need another rather thick layer of due diligence, which we've seen time and again is lacking.
I was reading something this morning in the Financial Times about how many of JP Morgan's deals are being examined, because they bought one company that was supposed to have 4.65 million users for its product and it turned out to have 300,000. We'll need another layer of due diligence to cut through the noise and to debunk these hallucinations as they're being injected into the bloodstream of our human communications constantly. And it's the ability to spin out the deep fakes so quickly, to spin out these plausible but factually incorrect stories so quickly.
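As a sketch of the extra due-diligence layer Richard is calling for, here is a toy citation audit: check each citation a model produces against an index of articles known to exist, and quarantine the rest. The index entries and cited titles are invented; a real check would query the publisher's archive or a DOI or URL resolver.

```python
# Split a model's citations into verifiable ones and possible hallucinations.

published_index = {
    "how jpmorgan vetted a 4m-user startup",          # hypothetical known article
    "the rise of made-for-advertising sites",          # hypothetical known article
}

def audit_citations(citations: list) -> dict:
    """Return which citations match the index and which cannot be verified."""
    verified, unverified = [], []
    for c in citations:
        (verified if c.lower() in published_index else unverified).append(c)
    return {"verified": verified, "possible hallucinations": unverified}

print(audit_citations([
    "The rise of made-for-advertising sites",
    "Washington Post: AI writes the perfect term paper",  # invented by the model
]))
```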
Will Page:
This checks out.
Richard Kramer:
Because obviously an AI can write dozens of stories in the time a journalist is just starting to sharpen up their pencil and put pen to paper.
Will Page:
So back to your poacher and gamekeeper example from earlier. When I speak to people in the credit card industry discussing financial fraud, they always try to optimize the time it takes to detect. Not "we're declaring war on fraud, there's a war going on and our forces are winning it." No: focus on the time it takes to detect. If you can optimize for that, then all the other problems you need to solve in financial fraud become easier. And I guess what you're saying there is, if AI can be used as a force for good in reducing the time it takes to detect deep fakes, conspiracy theories, misrepresentation or just outright lies, then we've got a solid application to move forward with.
Richard Kramer:
Well, we do. But then you have to ask the question: are people willing dupes? I mean, you have enormous percentages of the US population that believe in UFOs or believe in these crazy conspiracy theories. Do people want to be convinced by actual facts? Do they want to live in a world governed by actual facts? We're not entirely sure about that just yet. We'd like to think it's the case, but we can't say with certainty. So who's going to want to apply those AI tools to actually get to the truth of the matter, and who is going to be happily duped in nanoseconds by an AI story that sounds so outrageous it couldn't possibly be true, but maybe it is? Is that a third head I see growing behind yours, Will, behind the second head? Don't all Scottish people have at least three brains? Whatever story it's going to be.
Will Page:
[inaudible 00:40:50] my university exam marks would definitely say not. But yeah, it's definitely there. And as we close out this week's episode of Bubble Trouble, it's worth acknowledging that the number one show on Netflix just now is about UFOs, Unacknowledged.
Richard Kramer:
Right, and we are all prone to the incredible and outrageous claims that underlie those conspiracy theories, because we want to feel like we're slightly cleverer than the average Joe in having unpacked all that stuff. And the ability of AI to accelerate that sort of divergence from reality is going to be something we're going to need a lot of due diligence to protect against.
Will Page:
And before you close out the show, we should just announce that next week, myself and yourself are on vacation. There'll be computers interpreting our voices, doing our show for us. Is that correct?
Richard Kramer:
Oh, dear. It'll be a much better show. With that, it's been a really interesting discussion. We're just scratching the surface and getting warmed up for a long series of guests to talk about how AI might or might not impact society. We did that before with the metaverse, crypto and a few other topics, and this one I think will run and run. So with that, I'd like to thank my co-host, Will Page. I'm Richard Kramer, and thanks again for listening to Bubble Trouble.
Will Page:
If you are new to Bubble Trouble, we hope you'll follow the show wherever you listen to podcasts, and please share it on your socials. Bubble Trouble is produced by Eric Newsom, Jesse Baker and Julia Net at Magnificent Noise. You can learn more at bubbletroublepodcast.com. Until next time, from my co-host Richard Kramer, I'm Will Page.