We continue our exploration into the dinner party topic of conversation on everyone's lips: AI, with the first of many very special guests on the topic, Professor Chris Speed. This week, we take a design lens to the problems (and the solutions) that AI presents us with.
Richard Kramer:
Welcome back to Bubble Trouble, conversations between the independent analyst Richard Kramer, that's me, and the economist and author Will Page, that's him. And this is what we do for you: lay out some inconvenient truths about how business and financial markets really work. And we're continuing our exploration into the dinner party topic of conversation on everyone's lips: AI. With the first of our very many special guests, Professor Chris Speed, this week we take a design lens to the problems, and potential solutions, AI presents us with, and we don't dare to get onto whether AI is the next bubble or if for humanity it spells trouble. More in a moment.
Will Page:
Professor Chris Speed, thank you so much for coming on the show today. We got to meet each other a long time ago during the heart of lockdown and then more recently back in my hometown of Edinburgh where you're based. But if I just give you the microphone for a second, it'd be great for you just to introduce yourself, your work briefly, and also really important, how can our audience follow your work?
Chris Speed:
Well, thanks for inviting me, both. It's terrific to get some airtime, if you don't mind me saying. But it's also personally fascinating because I've been... Well, I trained as an artist, I worked as an artist, but it turns out that through art practice you become very adept at navigating what we've called the digital economy. It turns out you need to be pretty interdisciplinary of mind, be prepared to talk to economists through to social scientists, and even to colleagues such as those involved in AI in the School of Informatics. So upon arriving in Edinburgh, I guess way back in the late Noughties, the big project was to align a design school with a big university. And the most powerful department we've seen at Edinburgh has been the School of Informatics. Hence setting up Design Informatics, which is the interdisciplinary space between design cultures and informatics. As Will knows, more recently I've been directing the Edinburgh Futures Institute, which takes even more disciplines and places them in a research and teaching melting pot.
Will Page:
And requires a rubber desk, because you're dealing with so many academic disciplines, you're banging your head against it all the time. And we'll come to the Futures Institute and that incredible development in part two. But in part one, what I wanted to do was start off with this topic of design. Let our audience get their heads around what design means. I like the expression, "Are you a solution in search of a problem, or a problem in search of a solution?" And that takes some mental juggling to understand what that expression means. But my hunch is that what design does is solve that problem. What is the problem you're trying to solve? If you start with design and then travel down to informatics, you get to a much better destination than if you start with informatics and end up at design. That's my guess. But you tell us what design means, for our audience's benefit.
Chris Speed:
Well, look, I think you're not wrong. It's just that design means different things to different people when we think about what it does. I think at the end of the 20th century, design was perhaps best understood to be instantiated in things, in artifacts. Whether it was a Dyson vacuum cleaner, a Heatherwick building or a Jony Ive iPhone, right? It was the artifact. Now, more recently, I think through SaaS we begin to understand that software shifts where the thing is. Literally, if you think about where your iPhone is: you used to think your iPhone, or everything that was valuable, was in your hands.
Actually, everything valuable is now in the cloud that's associated with the thing in your hands. So I think what the last 20 years has been about is understanding a shift from an object-dominant logic of design, in which the value resides in the artifacts, to a service-dominant logic, in which the things that you have, which might be physical or might be software, let you get the jobs done that matter to you: healthcare, welfare, education, learning, and so on.
Will Page:
Wow, I know I'm on to a good podcast when I learn so much from the first question. Richard, over to you, sir.
Richard Kramer:
I'm going to go completely off our script because I want to throw out something I hear far too often, and we'll get to talk about AI, but this thing about user experience. I remember, probably a decade ago, when people started talking about the internet of things, and I just laid out all the little gates and user experiences I would have from leaving my building at work to unlocking my bicycle to get home. And for all of these little things, I'm still waiting for the user experience of unlocking my bicycle to somehow be improved by software. Haven't we over-promised what software can do with these user experiences when, frankly, the muscle memory and our human ability to change behaviors is fundamentally slower than what the software can propose?
Chris Speed:
Okay, I don't think so actually, Richard, because I think what happened was that many of those things you were talking about there, from grabbing the keys from the bowl, to getting to the door, to getting to the car or walking to the train station, all of those things were developed in what I would call a 20th century model, where software wasn't required. What we do know is that now we have constant feedback loops from the software and the interaction with things. There were no feedback loops between you getting from the bowl of keys to the door to the car. There were just none. And actually, all of those products were sold without any feedback loops. There was no data feedback trying to improve them.
So they were the best things we could possibly imagine, because the market decided. The particular Yale lock, the way we twist it, is probably common to my house and your house because the market decided, literally through buying and testing out. Now that buying and testing out in software takes place in a snap. The feedback loops from the apps on your phone are feeding back and negotiating at high speed across huge data sets. So the critical thing is that shift to SaaS, as you begin to think about those loops around things. Now, having said that, there's nothing wrong with everybody knowing how to open a front door from behind the front door. It's pretty good. I'm not convinced we need software to come in and fix something that everybody knows, but there are lots of other technologies in the way now where software as a feedback loop does help.
Richard Kramer:
And that's where I was going with the next question; it's a great segue. Haven't we been capturing intelligence in silicon and software for basically the last 50 years now? And if we want to demystify AI, we can call it a subset of machine learning, which we can call a subset of linear algebra or regression modeling. And eventually, on average, after trying something a million times, you'll get a pretty good idea of how it works or how often it fails. And maybe we're just applying these probability machines everywhere we look. So I guess my design question to you is: what are we solving for? Are we trying to make a lot of little things more productive, cheaper, faster ways to do stuff? Or are we trying to solve for creativity, to try new combinations of things, to try solutions that we might never have imagined before? Where are you suggesting design focus its efforts? On optimizing what we do today, or on inventing a new future?
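To make Richard's "probability machine" image concrete, here is a minimal sketch, not from the episode: run a noisy process enough times and its failure rate becomes predictable. The failure rate and trial count are hypothetical, and only the Python standard library is assumed.

```python
# Illustrative sketch: after enough trials, "how often it fails"
# is just a stable estimate, the probability-machine view of ML.
import random

random.seed(42)
TRIALS = 1_000_000
TRUE_FAILURE_RATE = 0.03  # hypothetical: the process fails 3% of the time

failures = sum(random.random() < TRUE_FAILURE_RATE for _ in range(TRIALS))
print(f"Estimated failure rate: {failures / TRIALS:.4f}")  # prints ~0.0300
```

After a million trials the estimate sits within a small fraction of a percent of the true rate, which is much of what applied machine learning amounts to: estimating how often something works.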
Chris Speed:
Good question, because when you speak of design, I'm going to have to speak for the design industry, right? I'm a design academic. I won't speak for the terrific people at DeepMind who will consider themselves to be the designers of AI. For design, I think [inaudible 00:07:48] a few things here. One of the biggest design disruptions we saw in the last 150 years was of course Fordism, this entire economic remodeling of how we moved from craft-building cars, where we would all work together in a small team to build a car one by one. Fordism comes along as an economic solution that knocks out a whole way of organizing the economy. And I think, let's be honest, behind every use of AI, the only reason you and I or your listeners are going to adopt an AI is to save time or money.
Even if it's my student who thinks, "I really want to go out tonight, maybe I'll just try ChatGPT and I can get that essay kick-started." Even if they're an A-star student and they have no intention to plagiarize, it's still a business solution. It's a way of saving time and money. So what I find fantastic is this: Fordism was a generic solution for large-scale industries, which we all know transformed the 20th century. What we're seeing now with these platforms, these products, comes down to the individual. Every single individual who chooses to use ChatGPT, Midjourney or DALL-E is choosing it, really, as a labor-saving option. That's radical, right? We never used to think about personal business models, but now we're going to think about personal business models, about labor saving.
That's a problem, because not everyone can afford to jump onto the premium version of ChatGPT. I pay for Midjourney and I'm getting astonishing images, and I pay for them. So there's better work we've got to do, because we're going to go big on this. It's going to sort out those who can get access from those who can't. Most people will get access through the web browsers, won't they? Through search engines, whether Bing or Google, and so on. So it's going to lift every [inaudible 00:09:49] proportion, but we're still going to find some exclusive users. So that's a bit of a challenge. Personally, I think people inside the academy are worried about some of the privilege, again, that it's going to offer.
Richard Kramer:
And just to make sure we're being clear, you think most of what AI will be used for will be that optimizing?
Chris Speed:
I think it's-
Richard Kramer:
That, "Let's do things better."
Chris Speed:
Or cheaper. Look, it's going to increase value creation whether you place value on economy, on time, or on your social status, because you'll use it across social platforms, in your marketing. Again, the problem we have in capitalism is that it leans toward an economic bottom line rather than necessarily a social or environmental one. I'd throw my own thoughts in: I was telling Will the other day that, as a designer, I confess to having an aspect of dyslexia, and I get paralyzed when I start a new Word document. Apple-N is terrifying for me. But now I can start Apple-N with a whole bunch of words, whether it's to write a university strategy document or to be at the start of an essay. I'll work my hardest to develop it according to my narrative, but it does bring everyone up to a level. I mean, there are no more Cs, Richard, everything's a B-minus. Everything is a B-minus. And then it's a question-
Richard Kramer:
That would've saved my college career.
Will Page:
You've just given us a headline for this podcast. Now, Professor Speed, I know Richard's keen to ask from a student's perspective what to make of all this. So just before he does, and before we get to the break, just very quickly: where are we in the hype cycle of this right now? I remember the DeepMind hype cycle. Everybody was talking about DeepMind; now nobody's talking about DeepMind. Everyone's talking about ChatGPT. Are we at the beginning of the cycle? Are we peaking out, and it's going to go back into its own little cave and we'll see what happens three or four years down the line?
Chris Speed:
My hunch is we must be on the uptick. What's interesting is seeing the different areas. Every sector now is ingesting, digesting and thinking through the implications. I mean, it is fun watching Twitter, isn't it? Because every single sector is processing the implications, and then you'll find every single sector, or a lead user in the sector, is offering prompts. And then you're beginning to think, "Oh, hang on," and you're seeing this ripple effect. So I think we're on the up. I really do.
And I don't think every sector has processed it. You've got some lead sectors, lead users, experimenting with those prompts, and then we're going to either fall off a cliff or find out that it just normalizes and everything comes up. I think we went through this with calculators. That's my guess. My father was a maths teacher. I suspect he was terrified when, suddenly, everyone was using a calculator. They turned up on the wrist, and then at some point an exam board said, "Oh look, just take them into your exams. We'll just write harder questions for you to use the damn calculator." So...
It's the normalization that we're fascinated by, isn't it? So I suspect we're on the up, and let's see how it comes down.
Will Page:
It is that time of year when the students are filling out their UCAS forms. I can just imagine so many of them buzzing about and studying informatics. Richard: kick it, sir.
Richard Kramer:
Yeah. I guess to wrap up, or come to a conclusion of the first part: you've got all these students coming in fresh-faced, 18, 19 years old, having got their three A-stars, brilliant kids. And where do you point them? I mean, I know how AI, or if you want to call it more broadly, that optimization function, has worked in the corporate world for the last 50 years, which is: you wake up in the morning and try to automate what you did yesterday, whether it's finding a fault in the network, preventing a cyber attack or finding a pattern in some data to figure out where new customers are. But when you have kids coming in fresh-faced, wanting to change the world, wanting to design new systems soup to nuts that'll be more efficient or better for the planet, where do you start them? I mean, assuming they'll get to that point where AI is writing their term papers while they're off in the pub.
Chris Speed:
That's a good one. I mean, look, this has knocked us sideways. As a sector that is stood up-
Richard Kramer:
You mean academia.
Chris Speed:
Academia.
Richard Kramer:
It's knocked academia...
Chris Speed:
Universities. This came in. Let's be honest, we saw this coming and we didn't pay any attention. ChatGPT came in the middle of last semester, so before Christmas. Bang. Who knew it would have quite that ease of adoption? The fact that you can just get on the chat and get to it. So look, imagine that by Christmas, the exams and coursework were just... I know I shouldn't declare, but my son was using it in his submission [inaudible 00:14:43] university. And everyone in January, as we were processing, began to try to find a lexicon for spotting it. And then of course those detection tools came out in January, because you can find patterns where there's evidence that ChatGPT, being the dominant tool, had been used. So we're learning incredibly fast. We've had to put out a statement from the university trying to declare almost a moral position, because we can't prevent it; we have no way of reaching every use.
We also knew, to be honest, that in the past applicants had access, to a lesser or greater extent, to scripts written by external people. There have been many essay factories, right? For many years. So we've always known that privilege allows people to benefit where there is money. All we're doing now is working fast with the students. We're speaking to them constantly through the staff-student liaison committees, where we bring them into the governance structures, and for the last six months the dominant conversation has been around what we are doing. So we put out the requests and urge Edinburgh students to take a critical position.
We're pretty convinced that we can still identify criticality. And criticality is this broad term which asks: to what extent can a student take the knowledge they have from the world around them and begin to construct unusual organizations of knowledge? Unique organizations of knowledge with which you can really feed back. We push seminars. I think we'll find a turn back toward the value of a seminar. My hunch is that the value of a university education will be for students to return to face-to-face after COVID. There's nothing like a face-to-face conversation, which allows me as a tutor to pull in a myriad of references, visual and textual. And in that you find what we call the criticality emerging. Trust me, there's a bell curve. It isn't in all existing students yet, but I'm excited about it as a complementary tool.
Richard Kramer:
Interesting. And just to make sure you answer my question specifically: Will Page has decided to toss in his career as a rockonomist and author and retrain in design at Edinburgh. He goes to the institute and he says, "I want to get started." Do you tell him, "Look for one of these thorny problems," like, for example, redistributing the rights to music between rights holders and songwriters? Do you tell him to imagine a future and work backwards? Where do you start these students on a journey when they've just been given this plutonium-powered super rocket to travel up to Arthur's Seat?
Chris Speed:
That's a great provocation and question. What I hope, and what I do know, is that the intractable problems of the present, social and environmental, do require multiple lenses. There's no point embarking on any of these projects on your own. So we urge you: surround yourself, dear Will, applying to Edinburgh, surround yourself with lawyers, informaticians, moral philosophers and creatives and biologists and geoscientists. Surround yourself with as many different epistemic starting points as possible and begin to listen, and then see how that changes the prompts that you would ask if you then enter through just one of those lenses.
So we might know Will's a rock [inaudible 00:18:10], is that what you called him? But what might they say if we then introduce him to law? Now, I know Will's pretty good at law, but what if I then throw him into the philosophy school and then pull him out through the music school, and then begin to think about how those things are informing him, going back to the economists and thinking, "What did he learn? What did he pick up on the way to ask questions?" Whether it's using assistive technology or not, to begin to think about where the industry needs to go. Because I think it's through the multiple lenses that we really do tackle the intractable problems, the wicked problems. You're not going to be able to assess those on your own. You need to be with people.
Will Page:
I hear it. I hear it. Let me close out part one by saying, for the record, had you been my professor at Edinburgh University back in 2002, I might have actually paid attention and not bluffed my way through my exams. But that's a wrap for part one. Back in part two to go down a rabbit hole on this fascinating topic. Back soon.
Richard Kramer:
Welcome back to part two of Bubble Trouble with Professor Chris Speed from Edinburgh. We're going to go down the rabbit hole a little bit and talk about the dinner party conversation I had last night. It seems you can't avoid the risks posed by AI, the call for a pause in AI research from senior figures, and the approach to this question of regulation as if the politicians understand it better than the computer scientists or designers do. There was some interesting data in the Financial Times about how only 2 or 7% of the staff at DeepMind or OpenAI are working on what's called alignment, i.e., making sure these systems aren't used for nefarious ends, so that you can't ask an AI to harm someone or find a way to wreck a network of some sort. Now, how do you as a designer approach tools like this when you know they can be potentially harmful? Are you getting more than 2 or 7% of your students to think about alignment, and about making sure that AI isn't used in the discriminatory or harmful way that everybody seems so exercised about right now?
Chris Speed:
Yeah, I have to say, I don't think it was my idea, but the appointment of Shannon Vallor, who came from Google actually, was just stunning. That was probably three years ago now. And I think standing up a moral philosopher, introducing a moral philosopher who understood in depth the technical and the ethical implications of this technology, was the fastest move an old university has made. So I'm so proud and pleased we've embedded Shannon's work and her team, the Centre for Technomoral Futures, right across the curriculum. I think if you Google Edinburgh now, you'll soon find an association with Shannon. It could have been a real misstep: we could have pursued what we call a techno-determinist route, which is to lead with one of the best AI schools in the land, but instead we stood something up to complement it. So in every debate we have now, Shannon, Gina and the team are all there, and their PhD students are absolutely stunning.
And let's be clear, a PhD student these days, in the context of the Futures Institute, must be multidisciplinary: from design and philosophy to AI to law. Now they're all under an umbrella, but they're really leading. So we've embedded her core course. Every student must take her course in AI and ethics, and she's also just had a master's program approved, which kicks off in September. So I'm really pleased. In terms of that leg of the stool of what we're doing, it's there, and as strong as I could possibly have hoped for. So no, it's present, and it's incredibly conscious and responsible.
Richard Kramer:
But let me just flip back to the realities, the realpolitik, if you will, of both private companies and nation states. I'm delighted that Edinburgh and other universities are standing up and recognizing that there are some moral dilemmas posed by these systems. But how do you constrain the behavior of some of the largest companies in the world? And indeed, when the race for AI, or AI supremacy, is couched as a clash of civilizations between the West and China, or the U.S. and Europe, and lots of other nation state actors, how do you get academia to infect the thinking of all of these self-interested private actors?
Chris Speed:
It's a good question. I'd like to think the agency of the university is to support the shareholders' assessment of a business. I mean, Cambridge Analytica was entirely flawed as a business because it didn't support what we might call the active moral philosopher within the business proposition. So if we had the hacker, the hustler and the hipster, and now we're adding a moral philosopher to mitigate against a business making a misstep, then I'm hoping that universities are places where organizations' shareholders encourage the C-suite to go.
I mean, Shannon works very closely with the Scottish AI Alliance. They've just won a large award to support and steer towards Westminster. It really does depend on a company wanting to lean in. If it wants to lean away... There's a great phrase in design, sorry to pull it back to design: the best defense against the dark arts is the creatives. And I might extend that to the humanities. So I repeat that: the best defense against the dark arts is leaning into the humanities. Now, if you want to pursue a dark art, if you have a dark business, I'm sure you have every power and intention to move away and not engage us, even just through conversation, let alone consultancy. But what can I do, Richard? Appoint the best people at my end, and encourage shareholders to encourage businesses to stay alive by leaning into the moral opportunity.
Will Page:
I want to come off the back of that, because it's so encouraging to hear that you've done that at the University of Edinburgh, in that I often say you can't study economics without philosophy. An economics degree without one class in philosophy is a degree in regurgitation. It doesn't teach you how to think, or "How did the professor learn how to think?", or "How did David Hume see a billiard ball hit another billiard ball and know what that billiard ball was about to do?", or Karl Popper with Conjectures and Refutations. It's so easy for students today to just circumvent how we think in their journey through university, which is all about thinking. My heart skipped a beat when you talked about that appointment.
I want to swing it back to productivity, something you delved into in part one; going completely off-piste here. But before we get to our smoke signals, let me quote what Gus O'Donnell, the most senior civil servant in the British government, known by his initials as GOD, used to say to me. He said, "Productivity comes in three forms. You can either do more with the same, you can do the same with less, or you can do more with less. Here's Tom with the weather." That's a great lecture in productivity in less than 40 seconds. And thinking about your reference to productivity in part one: if you're going to do more with less thanks to this, what is that more and what is that less? Can you label them for me?
Chris Speed:
Wow. Okay. The first thing I would do, if you don't mind, is refer to Kate Raworth's Doughnut Economics...
Will Page:
Well, great book.
Chris Speed:
... in which we can imagine, listeners, a doughnut. Literally a doughnut. The ring of a doughnut. Take that ring as representing nourishment: the cultural nourishment, the economic nourishment of keeping jobs, which we know are powerful and produce cultures and economies. Staying within the doughnut is incredibly enriching and nourishing. If we exceed the doughnut with a series of economic propositions which become too much, and I'm trying to recover your language, Will, from the question, then it becomes excessive. We have too many things. Are we doing too much? If we return to the center of the doughnut, there's nothing there. If I think of Edinburgh without the festivals, there's no nourishment. Now, I know we need to do more with less in Edinburgh, because crikey, on a bad summer in Edinburgh there are too many hotels and not enough taxis. There's too much comedy; there's just too much excess.
Will Page:
It's no laughing matter.
Chris Speed:
And my hunch is that somewhere there is a sweet spot, where we take on a quadruple bottom line: doing just enough to enrich our lives with culture, just enough to make sure the carbon use isn't excessive, just enough to make sure that we have work for everybody and that it's appropriate and not exploitative and extractive. So I'm not sure I'm answering your question very well, but rather than a binary assumption of too much or too little, we do need to find a place where humans are nourished. I mean, of course I work at a university, a place where people want to meet and be thoroughly nourished by bumping into philosophers and economists like yourself. So I'm not going to suggest it's a less or a more; there's somewhere in between, a series of rhythms where we need to get it right. We are facing a climate crisis, so I'm very keen personally to find ways that we develop economics such that it isn't less or more, but it's responsible. Does that help? Is that right or wrong or...
Will Page:
Amen to that. I feel that a lot.
Richard Kramer:
I'd like to just touch on one really difficult point I'm having with AI and creativity. And that is that if you think about machine learning, if you think about studying a dataset, you're unlikely to get an answer from outside that dataset when you run extensive recursion models and regression analysis on it. So how do you reconcile what is fundamentally a narrowing of perspective, a combing of the data for patterns, with that fundamentally unpredictable, improbable event of creativity? The design inspiration where someone says, "You know what? Let's do something that's completely not on the whiteboard, that's not on the flip chart; we just had this brilliant light-bulb moment." And will the adoption of these tools narrow people's perspectives such that they give up some of that creativity as a trade-off for the productivity gains they're getting?
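Richard's worry has a textbook illustration: a model fitted to a dataset interpolates well inside it and extrapolates badly outside it. A minimal sketch, not from the episode, with made-up data and only NumPy assumed:

```python
# Illustrative sketch: a cubic fitted to noisy sin(x) samples on [0, 3]
# answers well inside that range and wildly outside it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 200)                 # the "dataset" lives in [0, 3]
y = np.sin(x) + rng.normal(0, 0.05, 200)   # noisy observations

coeffs = np.polyfit(x, y, deg=3)           # fit a cubic to the seen data

for point in (1.5, 9.0):                   # inside vs. far outside the data
    print(f"x={point}: model={np.polyval(coeffs, point):+.2f}, "
          f"truth={np.sin(point):+.2f}")
# Inside the data the two agree; at x=9 the cubic is nowhere near sin(9).
```

The model never proposes behavior it has not seen evidence for, which is exactly the narrowing Richard is pointing at.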
Chris Speed:
Well, look, we had a great early AI visual project, in 2016 actually. The proposition, to zoom back a bit, was: can things design things? I'm getting tired of the Heatherwicks; I'm getting tired of the solo designer being our superhero. I think the Dysons and the Ives have gone. I think we know we need to design with others to make sure we're listening and learning about inclusivity. The further question is this: count how many people are in our collective rooms, and listeners, think how many people are in your rooms, and now count how many objects are in your rooms. The objects beat the people in the rooms we're all sitting in a hundredfold. What if you could ask the experience of all of those things how they would like to occupy the world?
So, for example, we tried to ask this question of a fork. What would a fork like to be? If you ask humans, 99.9% of them say a fork is good for eating. If you then run an image search, it turns out some people use forks as a splint when they've broken their wrist. Who knew? Turns out the AI spotted that. Turns out that you can tie bows with a fork, because you can tie the ribbon around the middle prongs and you get a great ribbon-tying tool. Turns out if you do small amounts of gardening, a fork is good. Now, the dominant idea is that people generally think forks are good for one thing. I've just taught you three more things that forks are good for. When we did the search, it was an image search across the Google catalog, and that's what the AI gave us.
So my suggestion is: don't assume, just because you have Instagram and humans providing and reinforcing your assumption of what forks are good for. Trust me, if you have a very large data set, an AI is more likely to bring forward radical human uses of forks than I can possibly find from my various consumption spaces. So I think it's a tool. I honestly think that if we push it, let's not let product design, the high street and humans dictate what things are good for, because we might be missing something; we might not have the skills to think beyond the market.
Will Page:
Richard, I'm tempted to say that a fourth use of a fork is getting a large piece of bread out of your toaster when it's stuck. But seriously, kids, don't try that at home. Richard, we've got a few minutes for some smoke signals, so time to light up a flame.
Richard Kramer:
Yes, we have a tradition on Bubble Trouble where we ask for the uh-oh moments, the smoke signals, the things that make you worried. And with all this incredibly fevered talk about AI and, in the future, of course, AGI, the artificial general intelligence which is going to take over humanity and effect its demise, we've got to ask you, since you're steeped in this stuff and in the design world: what are the kinds of things you hear, the couple of things that make you go, "Uh-huh, no, that's really not what we're talking about here"? The things you'd caution listeners to raise the skepticism flag about when they hear them, and to probe a little deeper?
Chris Speed:
Let's have a think. I'll try to do three very quickly. So, the business model of the individual using ChatGPT tonight, tomorrow, next week is irresistible, because it promises you a labor-saving option, but you're just not aware of how much carbon it's using. So yeah, I can use that tool, but we are just not getting, I don't think, a feedback loop on how much carbon it's using. I've pushed it away; it's another problem. Two-
Richard Kramer:
If we can pause there for a second. It is absolutely true that we don't really know yet what this is going to cost...
Chris Speed:
Exactly.
Richard Kramer:
... What a personal subscription will cost, what more complex searches will cost, how we're going to price it. No one's really talked about that yet. It's just been thrown out there for people to play with.
Chris Speed:
Yeah. Precisely. And that energy displacement, economic displacement, has not been rehearsed. Two: I understand that there are over 300 languages in Africa. And again, I'm happy to be corrected by listeners, but how many languages are going to be introduced into these large data sets that allow us to explore language doing more than just Northwest American English? So I'm acutely aware that it's perhaps a colonial project. It's another West Coast American colonial project, and in the end I'm just so worried about the representation of other cultures, which might help us understand what it is to be human.
Richard Kramer:
And again, there it's so fascinating to see that Facebook has large language models which can translate between 200 languages without going through English in between. So they can do Japanese to Swahili; they can do Urdu to a Turkic language they're speaking in Hungary or Uzbekistan. So you are starting to see that universal translation device, that tricorder we all saw in Star Trek when we were kids, start to be a reality.
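For listeners who want to poke at this themselves, Meta's NLLB-200 models are publicly downloadable. A minimal sketch, assuming the Hugging Face transformers library, the facebook/nllb-200-distilled-600M checkpoint and its FLORES-200 language codes; an illustration, not the podcast's own example:

```python
# Hedged sketch: direct Japanese -> Swahili translation with NLLB-200,
# with no English pivot in between.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="jpn_Jpan",  # FLORES-200 code for Japanese
    tgt_lang="swh_Latn",  # FLORES-200 code for Swahili
)
print(translator("こんにちは、世界。")[0]["translation_text"])
```

The same pipeline covers any of the model's 200 languages by swapping the two codes, which is what makes the direct, non-English pairing Richard describes possible.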
Chris Speed:
And the third, I guess, being a creative, is attribution. It's really interesting to play with Midjourney as a tool, but gee whizz, I know that I'm just pulling from that creative economy. Now, of course, Will and I might share an interest in whether smart contracting can support attribution from something like Midjourney, with a micropayment all the way back. I'm certainly paying 10 bucks, I think, for Midjourney, but is any of that reaching back to the artists? Of course not. So there's some diversion there in value creation, of which I'm acutely aware, and I wouldn't dream of using Midjourney to produce images for my own marketing needs, let alone the university's. But even in that idea generation, I'm acutely aware I'm obfuscating the origin.
Will Page:
Professor Speed, in wrapping this one up, I have to say that a future guest of our podcast, and somebody who wants to work with you at your fantastic Futures Institute back home in Edinburgh, who runs Miso.ai, said to me, "If you really think about it, we're right back to where Napster was in the music industry in 1999, 2000." And if you say "the Wild West" it sounds cringeworthy, but it really is. What you've given our audience, I think, is a great understanding of how to think about the productivity of AI: the pros of productivity gains, the cons of productivity gains, and the philosophical questions about productivity gains.
And I want to applaud your Futures Institute back home in Edinburgh as one of the first real cross-disciplinary institutes, at least on British soil. If ever there was a wake-up call for universities to break down those silos of departments that have been there for hundreds of years and start collaborating, it's here, it's now, it's artificial intelligence. So Professor Chris Speed, thank you so much for joining us on Bubble Trouble, and we would love to get you back, because this subject has got a lot more runway that we need to explore.
Chris Speed:
Oh, thank you. Thanks very much, guys. And apologies for the deviation down forks, what would [inaudible 00:36:01], Yale locks in houses, but it's been a pleasure talking to you. Thank you very much.
Will Page:
Thank you.
If you're new to Bubble Trouble, we hope you'll follow the show wherever you listen to podcasts. And please share it on your socials. Bubble Trouble is produced by Eric Nuzum, Jesse Baker and Julia Natt at Magnificent Noise. You can learn more at bubbletroublepodcast.com. Until next time, from my co-host Richard Kramer, I'm Will Page.