
Episode #155   Data is the New Plastic! Ethics, Accuracy and AI with Dr John Collins of Machine Intelligence Garage


In a world where it takes as much power to post one image on Instagram as it does to make one 330ml plastic bottle… how are we going to turn the massed ships of big business from ‘profit at all costs’ to something actually sustainable? With Dr John Collins of Machine Intelligence Garage and Digital Catapult.

Dr John Collins worked for the UK’s Central Electricity Generating Board in the days when such things were nationalised industries. His PhD involved creating a real-time dosimeter for workers in nuclear plants so they didn’t have to wait 2 weeks to learn the results of the film-based dosimeters that were in use. In doing so, he saved the CEGB considerable amounts of money – and, more importantly, saved the lives and health of the men and women who worked there.

Thus began a lifetime working at the leading edge of business where innovation meets ethics and morality so that now, he is the Ethics and Responsible Innovation Advisor at Machine Intelligence Garage and on the Ethics Advisory Board at Digital Catapult. He is writing a book called A History of the Future in Seven Words.

With all this, he’s an ideal person to open up the worlds of business, innovation and technology. In a wide-ranging, sparky, fun conversation, we explore what might make AI safe, how a future might look with sustainable business, whether 1.5 is ‘still alive’ and if that’s even a useful metric – and how much power does it take to post an Instagram picture compared to making a plastic bottle (spoiler alert: it’s the same power and the same CO2 generated – assuming both use the same power source and *if* the image is stored for 100 years… which the way we’re going, might not happen. But still… ).
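For anyone who wants to sanity-check the Instagram-versus-bottle comparison, here is a rough back-of-envelope sketch. Every constant in it is an assumption for illustration (a typical image size, the five-or-six-server replication mentioned in the episode, an assumed datacentre storage cost per gigabyte-year, and an assumed embodied energy for a 330ml PET bottle), not a figure from the conversation:

```python
# Back-of-envelope: energy to store one Instagram image for 100 years
# vs the embodied energy of a 330ml plastic bottle.
# All constants below are assumptions for illustration, not measured figures.

IMAGE_MB = 3.0          # assumed size of one uploaded image
REPLICAS = 6            # "stored in five or six servers all around the world"
YEARS_STORED = 100      # the episode's 100-year storage horizon
KWH_PER_GB_YEAR = 1.2   # assumed storage energy cost, incl. cooling/overheads
BOTTLE_KWH = 2.0        # assumed embodied energy of a 330ml PET bottle

def image_storage_kwh(size_mb: float = IMAGE_MB,
                      replicas: int = REPLICAS,
                      years: int = YEARS_STORED,
                      kwh_per_gb_year: float = KWH_PER_GB_YEAR) -> float:
    """Total energy to keep all the image's replicas stored for `years`."""
    total_gb = size_mb * replicas / 1024
    return total_gb * kwh_per_gb_year * years

energy = image_storage_kwh()
print(f"Image over {YEARS_STORED} years: {energy:.2f} kWh")
print(f"Ratio to one bottle: {energy / BOTTLE_KWH:.2f}x")
```

With these particular assumptions the two come out within a factor of about two of each other, which is the spirit of the claim; change any constant and the ratio moves accordingly.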

In Conversation

Manda: My guest this week is someone that I found on LinkedIn, one of those social media that I don’t fully understand, but where people who are not part of my normal bubble have a tendency to impinge on my awareness in ways they don’t on any of the other social media that I use. And so I came across Dr. John Collins when I was looking for someone to help me, and us, get to grips with the ethics particularly of AI, but of all digital and disruptive innovation. And John is the ethics and responsible innovation advisor for the Machine Intelligence Garage, or garage I guess, if you live in other parts of the world. And is on the Ethics Advisory Board at Digital Catapult, which, I am awed to discover, is something that the current UK government has set up to promote sane, intelligent, responsible, ethical innovation. So anyway, John seemed to me to be an extraordinarily good person to talk about things where I really have not enough knowledge even to know what questions to ask.

And it definitely proved so. And in the long run, we ended up dragging the conversation into areas of my obsession, possibly areas of his obsession. We had so much fun, just riffing off things that we take for granted that may or may not be true. Certainly things that I take for granted that may or may not be true, and that therefore the podcast takes for granted that may or may not be true. Yes, I do conflate myself with the podcast, although we do have rather a wonderful team and I’m not conflating the rest of the team with the podcast, honestly. So here we go. It was a fun and inspiring conversation. People of the podcast, please welcome Dr. John Collins.

So Dr. John Collins, welcome to the Accidental Gods podcast. Thank you for making time in what seems an extremely busy schedule. And I found you on LinkedIn, which I don’t really know how to work, but it’s an amazing place to discover people who aren’t part of my normal bubble. And we’re stepping very much out of my normal bubble on this, which I think is a good thing. Stepping out of bubbles is always good, but it means that this is not an area where I have any kind of expertise. And so, with apologies to the listeners who do have expertise in this area, I’m likely to ask really stupid questions. So I apologise for that in advance, and welcome to the podcast.

John: There’s no such thing as unintelligent or daft questions. I mean, sometimes what we consider the simplest of questions, like when a child asks you ‘why?’, are the most taxing, actually. Because more often than not, what they expose is what we don’t know and what we can’t describe or explain. Explanation is probably too strong a word for describing why something happens, why something is, why something isn’t. So, you know, feel free to ask whatever questions. And actually, the more searching they are…

Manda: All righty. I will give my inner five year old free rein. Thank you. Okay, so with that in mind, thank you. You’re the ethics and responsible innovation advisor for the Machine Intelligence Garage and on the Ethics Advisory Board at Digital Catapult. And I spent a little while exploring YouTube videos from both of these, so I have a slight edge on people. But can you explain what these two things are and then how you fit into it?

John: Well, the Digital Catapult is a UK government agency that brings together big business, small business and start-ups in the tech world, to help them grow and scale and do things right. And as part of doing things right, five or six years ago they created a group called the Machine Intelligence Garage. We’re an advisory committee of, I think it’s about 15 to 20, ethics and responsible innovation advisors; people who’ve worked in ethics or know about ethics, or think they know about ethics. We come together to help advise companies, in particular start-ups, about how really to embed ethical thinking and approaches from the very start of their start-up. Ideally from before they start up; having deep conversations that explore how to better develop their products and processes, so they’re mindful of the rights of others and ethics and everything that goes into actually running what I would call a proper business in a responsible way, and actually using tech responsibly.

Manda: Brilliant.

John: It’s very important. Because most people don’t even consider what they’re doing before they start doing it. I’m a scientist by training. We tend to do experiments.

Manda: Don’t we just. Let’s interrupt there for a moment and get a little bit of your background: how you then came to be invited to be an ethics adviser to business. From a government! I have to say, this reveals my great surprise that they actually got themselves together enough to do that. I’m quite impressed; I thought they were too busy playing musical chairs at the top. But they wanted proper ethics, or at least they were advised that that would be a good idea. How did you come in? At what stage did you come into this, and how did you come to be someone whose name was put forward, or who snagged their attention?

John: I have a background in developing disruptive technologies. Those technologies that actually change the shape of the world in some way, shape or form; financially, environmentally or otherwise. The first time I really started doing this: I did a PhD in nuclear physics and semiconductor theory that was funded by the Central Electricity Generating Board at the time. Electricity generation was nationalised in the UK and we had lots of nuclear power stations. And I did this thing called a CASE award, which is a degree that’s funded by industry, by the CEGB as it was, and it was to develop a personal dosimeter, which is a dose meter for particular types of radiation that were very hard to detect in real time. In fact, impossible to detect in real time, in terms of people being able to carry it around. So that people working in the nuclear power industry didn’t get exposed to radiation they didn’t know about until a couple of weeks later, when this film badge that they used to wear, which used to fog up according to the amount of radiation that they’d been exposed to, was finally developed. And it was terrible. It used to cost the CEGB about 3% of their yearly earnings in downtime and loss of people being able to work, sometimes ever again. Sometimes people would find: wow, I’ve had a dose that’s huge. Where did it come from? No idea. So I developed this little sensor that was portable and worked in real time to detect this particular radiation. And it saved the CEGB about 1% of their annual earnings.

Manda: And presumably also it meant that they could find out where the leaks were.

John: Exactly.

Manda: In real time rather than waiting two weeks and trying to track back where some guy had been.

John: I’m not saying there’s loads and loads of leaks, but there’s often… You’re working in a place that has a background radiation that is higher than you want to be exposed to. But that’s the nature of the nuclear reactor. But the length of time you spend there should be commensurate with the potential exposure level. And it’s knowing that flux. And it gives you an idea, right: you’ve had your 3 minutes or you’ve had your 3 hours. Now, you can’t go into this area again for another two weeks or for another week. Not, you’ve had five times what you should have.

Manda: And I’m guessing you saved lives, or certainly saved people’s health. Much as I love the CEGB, it’s good that you saved them 1% of their gross product, but the fact that you were saving people’s lives and helping their health is also a really essential part of what you did.

John: Absolutely. In fact, that’s the more important part than the money side of things. You know, justification for funding is often an economic justification or an emotional justification. Whereas most of the time it should be… I mean, it’s a little bit like our activities towards abating climate change. You can go for funding for various projects, tech or otherwise based, and the funders always ask, what’s our return on investment? Well, it’s a greener, cleaner planet, you know? Well, no, how much money is it going to make us, John? And it’s soul destroying. So yeah, you’re absolutely right. It’s: let’s help save people.

Manda: I want to come back to how you got from there to being at Machine Intelligence Garage. But before that, I live in a world where everybody is aware that we’re in the middle of a multipolar, multi systemic crisis and that everything needs to change. And that one of the things that needs to change is the profit motive. I’m hearing you and thinking you live in a world where the profit motive is still God. Is that the case?

John: Every start-up I talk to wants to make money. That is their main driver most of the time. Right. I mean, in the tech world, all we ever hear about are tech bros creating a new crypto token. I call it crypto shit because I think they really are bad for everybody and they really need to be got rid of. But yeah, money is the old god still. There’s more money awash in the City of London for tech investment than there’s ever been. And there are companies now… Ten years ago the idea of a unicorn company was out of this world. Now everyone’s trying to be one.

Manda: What’s a unicorn?

John: A unicorn company is one that’s valued before they’ve even made money. Before, oftentimes, they’ve even got going properly. They’re valued at £1,000,000,000 or $1,000,000,000 or more.

Manda: Wow. OK…

John: Yeah. Can you imagine being a three year old company that suddenly has a valuation of $1,000,000,000 or more? Mostly on the fact that they’ve had huge investment. So you get these young start-up companies, five, ten people, who suddenly get £20 million investment, and then they get £200 million investment. And then, on the back of virtually nothing, they’re suddenly valued, because of that investment, at £1,000,000,000 or more. Then they become very attractive. Then you go through your IPO, your initial public offering, and lots of people make lots of money. Even though they might not have done anything. They might have actually done something really bad. But we just don’t know.

Manda: It’s black orchids all over again, isn’t it? Only they’ve got the potential to destroy the planet in ways that black orchids probably didn’t have. Black tulips. Sorry. Black tulips. It’s that everybody says something is worth something, therefore it must be worth something, which isn’t necessarily the case. Last thing I heard, everybody wanted to go from start-up to acquisition by Google or Amazon or Facebook or Apple, because then you can just stop working once the bigger companies have bought you up. But let’s leave that for a moment; go back to your working for the CEGB. You’ve created a real-time dosimeter, which sounds absolutely amazing. I remember when I was a vet, we used to have the little bits of film on the badges, and we were only being exposed to x-rays. But even so, you’d come back and somebody would go: right, you’re not going in the radiology room again for the next six months, sorry. So give us a very edited highlight of how you hopped from there to machine intelligence.

John: There were a couple of things in between doing that and being headhunted by De Beers. And I was headhunted to go in, basically at 28 years old, given 5 million quid, three people to work with and a big lab, and told: John, go play.

Manda: Wow.

John: What was I playing with? I was playing with the notion that we could create, develop, invent a technology that would make manmade diamond that was indistinguishable from natural diamond, in an easier fashion than the high pressure technology that had been developed back in the mid 1950s. That is still used to create pretty much 80% of the world’s diamond, which is manmade, it’s synthetic, and it’s used for industrial uses. You know, having a manufacturing process, you can create identical sustainable materials. And the same is true for diamond. Anyway, I was particularly successful. I created, invented the technology to make lab grown diamonds. Also called synthetic diamonds, because they’ve been synthesised. But they’re still carbon, they’re still diamond. And I did all sorts of wacky stuff. And it was because I was given the opportunity.

Manda: So did you make the Koh-i-Noor? Or could you make the Koh-i-Noor? I mean, obviously someone has to cut it, but you could have created a synthetic Koh-i-Noor? Doesn’t that crash the value of diamonds somewhat?

John: No. You see, this is the thing. In theory, you could make the Koh-i-Noor, but I don’t think anyone would really want to, because it wouldn’t have the same value. And part of the purpose of this project was to show just how low cost you could make these things, and just how high quality. And now you can look at Pandora: all of the diamonds that they use are lab grown diamonds. And there’s a really good reason, the best reason, which is why I actually started doing the project; which wasn’t known to De Beers until I left. A few years after I left, I finally told them, or they got to know. They wanted to do it to protect their market. The diamond market is huge. It’s created, it’s marketed. Every diamond is unique. 86 million carats a year of natural diamond are mined and sold. This huge number. So they’re not that rare. I mean, you’re not talking in terms of tens of thousands. You’re talking about millions, and over the last hundred years, tens, hundreds of millions of carats of diamonds effectively have been mined and produced. Huge, huge number. And will that continue? All this time, we have mined diamonds. But I have a problem with mined diamonds. I mean, the Kimberley diamond mine was the only manmade structure visible from space. And when I was a kid, I saw pictures of this, and I saw pictures of children in Angola and other places, opencast mining.

John: I just thought, you know, everyone’s saying you can see the Great Wall of China; you can’t see the Great Wall of China from space. You have to have quite a high resolution telescope to do that. Even from the space station, what people were seeing was the Yangtze River, which is much, much bigger, and they thought: wow, that must be the Great Wall. Gee, what a great wall, and all that. So I just thought, it’s just obscene to have a diamond mine that big. And that was one of many. You know, there were dozens of these things, enormous craters in the earth. They use enormous amounts of land that could be used for regenerative agriculture. They use a huge number of people. Great, that we employ a huge number of people, but they could be retrained in agriculture, regenerative agriculture, something. They’re all in places where that would be an ideal technology to implement. They use huge amounts of water. Resources, should we say? And ethically it’s awful, because there is this drive towards mining: there’s illegal mining, there’s artisanal mining. There are lots of women’s artisanal diamond mines all over Africa, you know, and they need to be able to operate and not be forced out of operation by the really big boys. But the key to lab grown diamond, and the ethical bit, is that they’re environmentally friendly.

John: They can be created from renewable energy sources, electricity in particular. Most importantly, they’re conflict free. They’re not blood diamonds. We can trace their provenance back to the very gases that they were made from, and we can actually produce a nice sort of table of outputs about just how ethical they actually are. And it’s really important, because with mined diamonds, no matter what, finding their provenance is like finding the provenance of old artworks. I mean, it’s all done on a basis of trust. And there are some industries where that trust is lost very easily, particularly when you have large companies that can be divisive about their marketing. Not saying that De Beers was like that. Actually, De Beers did phenomenal stuff. They sold all of their diamond industry interest to Botswana for nothing. And then they created Debswana, which has actually made Botswana a country that can be self-sustaining and grow economically. It has a massive health service, great architecture. I mean, it’s amazing what the diamond industry and the mining industry did for South Africa. But it’s time that changed. So I got into the diamond thing to grow this alternative, to stop mining, to provide an alternative source. It might take 60 or 70 years to do, but it’s the way to go.

Manda: So I want to come back to your timeline, but I have two ancillary questions. First one is: I’m going to interview a gentleman called Dr. Simon Michaux on the podcast early next year, and he’s a material flows specialist. He sent me some YouTubes of him talking about material flows. And in one of them he’s with another gentleman who specialises in rare earth mining. And I was totally gutted by the videos of something simple like graphite, not even the rarest of earths, in China. And the acreage, the thousands of acres of polluted land and polluted water. And I’m wondering, are they now visible from space? Is the horror – it was truly the ultimate dystopian landscape – that I saw in those videos… is the De Beers – or not necessarily De Beers, but the mining industry – worse than that? Or is it just that other mining industries have risen up in the last couple of decades, since rare earths became so useful?

John: The whole problem with mining is sheer scale. And it is driven by the sheer scale of our consumerism. So electric cars, for example, need batteries. Battery technology has progressed a fair bit, but it’s still a bit barbaric in many ways. It’s still a bit basic. We’re still extracting all sorts of materials to make batteries, as we are for everything. Everything comes from some sort of extraction, whether it be water, whether it be lithium, whether it be gold, you know. And it’s all taken out as the individual material. It doesn’t come ready made. You know, I’m not mining batteries; I’m mining the materials for batteries. And if you’ve got ten different materials, you’ve got ten different mines, probably in ten different places, and they’ve all got to be brought together. So there’s the massive environmental impact, both of the mining and the sheer scale of it. I maintain one of my life’s purposes is to help people understand scale. Understand, you know, what it means. What a million means, what a billion means. I mean, I maintain that people in the UK didn’t know what a million was until the lottery came along. They really hadn’t heard it very much other than from government. They certainly hadn’t heard about it as individuals and people, 30 odd years ago, whenever it was. But I first understood a million when I was about eight, nine years old.

John: I was at school, and one of our new classmates was a guy from Sheffield whose family had moved down after his dad had a foundry accident and lost both his legs. And he got one of the UK’s largest payouts at the time, which was more than £1,000,000. Now, no one believed him in the class. This class of junior schoolkids all said: no, you can’t, it wouldn’t fit in your house. They didn’t understand about banks, you know. We didn’t talk about money like that at home or anything else. So: it wouldn’t fit in your house. You know, it’s not in the house, but yes, it’s more than £1,000,000. But no one believed him. So it was a great source of disturbance in the classroom. So my teacher, Mr. Burke, said: look, here’s a £5 note. And he got a micrometer out and he showed us how to use a micrometer to measure its thickness and measure its length and measure its width and calculate its volume. And then: what’s the volume of £1,000,000 of £5 notes? Yeah, a £5 note, because you know, not many people carried around £10 notes. Certainly didn’t carry around guineas or anything else. Anyway, we calculated it was about a cubic metre, a cubic yard or thereabouts.

Manda: All right. It would fit in the house.

John: So, yeah. But that gave us a first idea of what £1,000,000 was like. And now you can put £1,000,000 into a holdall in £50 notes. So it’s comparatively easy. And you could put it into your socks with €500 notes, you know, that sort of thing. And that’s why they had to get rid of them. So, understanding the scale of things. When people say, oh, we want 100,000 toothbrushes per day to be manufactured… Well, mobile phones is a better one. Mobile phone factories might knock out 100,000 to 1,000,000 phones a day. Just imagine how many that is every second! Or Coke bottles. Coke cans: 20,000 per minute going round on a slow conveyor.
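Mr Burke’s classroom exercise, and the phones-per-second point, can be checked with a few lines of arithmetic. The note dimensions below are assumptions (roughly those of an old paper UK £5 note); with these figures the stack comes out nearer a fifth of a cubic metre, so the classroom’s “about a cubic metre” presumably reflects rougher measurement and the air between crumpled notes:

```python
# Volume of £1,000,000 in £5 notes, measured the way Mr Burke did:
# length x width x thickness of one note, times the number of notes.
# Note dimensions are assumptions, roughly an old paper UK £5 note.

NOTE_LENGTH_MM = 135.0
NOTE_WIDTH_MM = 70.0
NOTE_THICKNESS_MM = 0.11   # a micrometer reading on a single flat note

def million_in_fivers_m3() -> float:
    notes = 1_000_000 // 5                      # 200,000 notes
    note_volume_mm3 = NOTE_LENGTH_MM * NOTE_WIDTH_MM * NOTE_THICKNESS_MM
    return note_volume_mm3 * notes / 1e9        # mm^3 -> m^3

def phones_per_second(phones_per_day: float) -> float:
    """John's scale point: a factory's daily output expressed per second."""
    return phones_per_day / (24 * 60 * 60)

print(f"£1,000,000 in fivers: ~{million_in_fivers_m3():.2f} m^3")
print(f"1,000,000 phones/day is ~{phones_per_second(1_000_000):.1f} per second")
```

A million phones a day works out at more than eleven phones every second, around the clock, which is the kind of number the exercise is meant to make tangible.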

Manda: And then where do they all go?

John: And so, yeah, when you’re mining 100 different elements for 100 different uses, or you’re mining a thousand barrels of oil a minute from somewhere, understanding what that means is really very important. So yeah, the mining industry generally is bad news for the planet. Good news for us. And we can’t just stop it. There’s too much and it’s too interconnected. It’s like I was saying: you’ve got ten components, that’s ten degrees of separation on your complex system. Just for that one item.

Manda: So we’re definitely still going to come back to your timeline, but we’re now into Accidental Gods territory. Of: we’re in our multipolar crisis. Exactly as you’re saying. Everything is very big. Nobody is going to stop using computers tomorrow so that the assembly lines can stop. Do you have a personal vision? Nothing to do with your employment by the government. First of all, how long do you think we’ve got before we hit the material flow crisis that Simon Michaux is predicting? And I have to say, he thinks not very long for crucial material flows. And then, before we hit a more critical planetary boundary crisis that slows down our consumption, whether we like it or not. So that’s the first thing: do you have a sense of timescale? And then, within the timescale that you inhabit, do you have a sense of any way that we could begin to shift away from the destructive habits that we’ve got into?

John: I think, for me, part of the issue, and my vision, is actually to get people to understand that we’re addicted to electricity. We burn more fossil fuel than we ever have before. We’re using all our renewable energy sources not to offset our fossil fuel use, but to supplement it, so that we can do stupid things like mine cryptocurrency, using a huge amount of electricity. It doesn’t democratise money, doesn’t do anything any good, except allow the six or seven major shareholders in the world, who are already exceptionally, obscenely wealthy, to make more money. They’re giant Ponzi schemes. You know, that addiction to electricity is getting worse. The more we call to burn less fossil fuels, or to use fewer extracted materials, the more we’re doing it. So we’re actually burning more fossil fuels than ever before. We’re creating more electricity than ever before. And the way that we’re creating most of our electricity: we might be producing as a world ten, 15, 20% renewable electricity, but a lot of the rest is from burning wood pellets. How environmentally destructive is that? Just because it’s biofuel or biomass doesn’t make it any more environmentally friendly. In fact, it’s less environmentally friendly in many respects. And the major thing that has changed: we’re not manufacturing very much more. In fact, there’s been a manufacturing downturn globally, because consumerism has gone down because of economic problems. But we’re making more electricity to do more what? Computing. We’re using the Internet more. We’re creating tons of data. And data is the new plastic, right? It is not the new oil. That’s been my mantra for the last 15 years: single use data is worse than single use plastic, because you can’t recycle data. You can reuse it sometimes. Most of it isn’t reused. Most of it’s just created. The Instagram pictures that I take are stored on five or six servers all around the world, whether I like it or not, and they’re never deleted.

John: I create these digital things and… Whereas we can turn analogue signals, voltages, into digital bits, ones and zeros, we can’t take those ones and zeros and turn them back into voltage. We can’t turn them back into electricity. It’s a one way street. It’s a sort of diode-like behaviour. So, getting people to understand that no one can do everything, but everyone has to do something. And this crass argument that, oh, it’s down to the individual to reduce their carbon footprint. Well, yeah, it is in small part. But it’s down to all of us to do that. The whole carbon footprint thing is a bit weird, but we all need to be educated. And my vision is that everyone understands that switching a light off when you leave the room, or turning the heating down by one degree, might not seem like very much. But when you’ve got a billion people doing that, you’ve suddenly got the enormity of use reduced by a bit. And if we can do that year on year for the next ten years, we will reduce our overall consumption, without even knowing it, by a substantial amount. And that will help us achieve, or move towards achieving, many of the goals that we’re trying to achieve. But asking people to just stop oil, or just stop doing something, is impossible. Everything is so interconnected. Our lives are interconnected, but our consumption is interconnected as well. We can’t just stop doing stuff. So the vision: understand the scale of stuff, and then understand that you don’t have to reduce that whole scale yourself. But we all have to collect together.

Manda: Brilliant. This feels like we’re into territory where I can have an opinion. I might know something about it. So first of all, we’re still working on a timescale of ten years, perhaps, I think I heard you? But let’s go back to that. There’s a thing called Jevons Paradox, which is basically that he pointed out that if you reduce the cost of an item, whatever that item is, people don’t use less of it; they just use more to fill the gap. I hear you on: if everybody turns off the lights, turns the heating down by one. But quite quickly we reach the kind of bare minimum that we did with water this summer, when we stopped taking showers and had sponge baths – probably too much personal information – but we can get down to a minimum. There comes a point where our water consumption is what’s needed to keep the animals alive and keep us alive, and there is no more left to cut. And yet, if somebody switched off the water, we would end up using less even if we died. So it seems to me that either we’re going to hit planetary boundaries and we’re going to be forced to stop consuming easy oil, whatever else; or we have to find ways that don’t take us down the Jevons Paradox route, but take us to something else that isn’t based on massive material consumption and everybody wanting to chase profits. And you seem to be right at the leading edge of this question. And I ask everybody what their vision is, and everybody gets a bit lost, because it’s really hard. We do live on a planet where it’s easier to imagine the total extinction of humanity than it is to imagine an end to predatory capitalism. But I wonder, have you got an image of what the world looks like if we manage to reduce everything to a point where we’re not destroying the biosphere?

John: Yes, I do. As a kid, I heard Carl Sagan speak, and he said we’re basically just a spaceship travelling through the universe. We’re a closed system. So all the time we’re using something, it’s not disappearing. It’s changing its form. And my vision for how we might slow that degradation of the planet down, and slow ourselves down: it’s like I say, we don’t have to remove everything 100%. Because we’re not doing that. We’re just shifting where it is. I mean, the amount of water there is on the planet is the same this week as it was last week, plus or minus a bit. You know, it’s a closed system. And we have a variety of technologies. I’m not saying that technology is the solution by any means; but actually, if we reframe our imagination to think: okay, so I’ve got these lithium batteries; rather than mining more, I’m going to repurpose them or recycle them. We focus far too much on extracting, converting, using and discarding. So I was brought up in that generation where my dad would repair the twin tub washing machine 15, 20 times over its 20 odd year existence. We would have bought it second hand, and he would have repaired it and patched it up until it could no longer be repaired.

John: Until it was absolutely impossible, other than by replacing components, which was going to be cost-ineffective, and actually you’d just have to repair it again. So he would then say: right, we’ve got to buy a new washing machine. This one’s done its time. You amortise its cost and the effort put into it, and it’s pennies per day. But he would still keep that twin tub washing machine in the garage, because those bits might come in useful. Or I’d go down to the scrap merchant and we’d take various bits down and exchange that scrap metal for money, so it could be reused and reprocessed. Just like the aluminium foil caps off milk bottles we used to collect for charity, for school, or the newspapers and cardboard and everything else. We were really into that: let’s refurbish, let’s replenish, let’s have a maintenance schedule to maintain our technology, even if it is really big. Then we move on to my generation. I can repair our washing machine, and I have done. I expect it to last 7 to 10 years, and if it doesn’t, then I complain bitterly. And I will repair it whenever it needs to be repaired, and can do that. But I will go more quickly towards the replace rather than the refurbishment aspect, because I can. And because actually life can be a bit too fraught when you’ve got three kids under the age of 11, one of whom is a baby and you have to wash nappies every day; it really used to get me down.

John: So after 7 to 10 years I will amortise how much value I've got out of this washing machine, and then I'll buy a new one. But I'll have the other one safely disposed of. You know, it's sent off to the great washing machine graveyard in Wales, and there it will be stripped down, or it will just be stored for years and years. Maybe it'll be stripped down and the metals reused, or it will wait for a time when a technology can reuse them. And now we're on the current generation, where: oh, my phone screen is cracked, I need a new phone, or I need to get it refurbished, which is great. Or: I want the latest of this. So after six months or a year, even though it's still working, it gets put in a drawer or maybe sent somewhere else. So we've progressed all along this line, further and further away from the real activity that we want, in order to at least sustain, and hopefully better maintain, the planet.

Manda: Okay, so time is flowing past. Thank you for that. It's tremendously encouraging. I'd like to shift towards the leading edge of where tech is taking us. Because yes, it is taking us to people wanting the latest iPhone rather than the one that they bought six months ago; but with any luck, as we change people's views, that will change. But it seems to me that you are working at the edge of technologies that the rest of us don't know exist, that presumably are not just there to extract value from people and the planet, but at least some of which are explicitly designed to help us make a transition to a more regenerative future. And I wonder if you could introduce us to some of those, and let us know how we can help them along.

John: Yes, it's a fascinating sort of journey. So if we come back to the ethics side: for nearly a decade I worked in this area called synthetic biology, or engineering biology. I say it's the engineering of biology to do useful things and make useful stuff: to heal us, feed us, fuel us. Most imperatively, to sustain us, to sustain life on earth. And it is the engineering of biology to do good stuff. Its aim is to create more resilient organisms; to use bacteria to eat plastic, maybe, or convert plastic into energy; all of these sorts of things that you read in Ursula Le Guin books and Margaret Atwood books. And most people have a dystopian outlook on it. Actually, it can be really, really useful. Of course it's fraught with ethical problems, but it is one of those technologies that really can help maintain a lot of the ecosystems that we've got, and actually be generative for them. And we can use some of the things we've developed, some of this biotechnology, to create some of the goods that we use all the time, that at the moment we get from extractive technologies but really should be getting from regenerative technologies, let's put it that way.

John: And so my journey in the ethics world and the tech world has gone all the way from power generation, electricity generation, through to materials generation with the man-made diamond, through to biological materials generation with engineering biology. And one of the key features of engineering biology is the fact that we can model and simulate and hopefully predict how we're going to create new materials. Drugs, for example. Or recreate materials that exist in nature, that we don't want to scavenge from the planet. There are all sorts of protection mechanisms put in around that scavenging now, but it still happens. We still farm, you know, we still grow potatoes. You'll see that one other area I've been associated with has been lab-grown meat; like lab-grown diamonds, and various other lab-grown things. Lab-grown meat is potentially a really good part-solution for the generation of greenhouse gases by agriculture, in particular livestock, poultry and aquaculture. If we can make these things more environmentally efficient and friendly, then…

Manda: We're looking now very explicitly at the ethics, and you said right at the start that you wanted to help start-ups to start with an ethical base. How do you reach agreement across the board, between you and any hypothetical start-up, as to what is ethical? What are our parameters? Because everybody has a different framing of the world, and let's say the ethics of an indigenous tribe in the middle of the Amazon might very well be different to the ethics of the tech pros in California. How do you find commonality that is still useful?

John: It's a big issue to face. It's all done through discussion, and everyone's ethics is different. But I think the universal approach to ethics must be: 'Is this needed? Is this going to do good in the world, or is it going to do nothing? Is it going to use more resources than it should do? Is it going to cause harm? Is it going to impinge on someone else's human rights in some way, shape or form?' Because the Bill of Human Rights is generally accepted around the world. There are some aberrations, shall we say, and there are certainly a lot of people who stretch it a lot further than it should be. But in general, when you're talking about seven or eight billion people on the planet, I think nearly everyone would agree the Bill of Human Rights is mostly right for them. So all our ethical thinking should really be based around our understanding of what human rights are and what freedom of speech is, and how we have to acknowledge, or I believe we have to acknowledge, that if we're going to enable freedom of speech, it's freedom of speech for everyone, even if we don't like it. We might not like what they're saying, but we should love the fact that they can say it. Now, that gets stretched to a point where, if they're starting to say or do things that stretch that human rights belief, then we all have to say that this is outside our basic Venn diagram, enclosed by the Bill of Human Rights and what we generally accept is good for humankind. And more than humankind as well; we mustn't ignore that. It's very, very important, because we anthropomorphise everything. But that takes away the rights of buildings not to be degraded by acid rain, who knows? The rights of animals to have clean air as much as we have.

Manda: So how do you plan forward? Because it seems to me that second-order impacts are really hard to predict. If we'd gone back 15 years and talked to the developers of Twitter and said to them: at some point in the future, a bloke who's got enough money to send his own private rocket to the moon is going to take it over and tweak the algorithms, such that the free speech of a certain sector of the community will be amplified and the rest may end up finding themselves diminished, and nobody knows what that will do to the future of democracy. Or even if we were to say that Twitter will become one of the tools through which trust will be undermined to the extent that democracy is under threat. That would have been incredibly hard to predict, when social media hadn't happened and we hadn't seen that it brings out the worst in everybody. How does anybody ever get anything moving off the launchpad without paralysing themselves with what might go wrong?

John: Well, take the example of start-ups working in tech. Suppose we were talking to the start-up that created Twitter all those years ago, and we want to think: what's this going to be? We have imagined already the name Twitter, and we put a whole narrative around it; it's wonderful. We use our imaginations all the time. It's what we're imagining that is often at fault; that's my view. We're always imagining bad, or we're imagining a false good. We're creating a need to do this because it's going to help all these people. We need cryptocurrency because people don't have access to cash money, and that's going to disappear soon. And everyone wants ownership of their data. A lot of these are entirely false imaginations, put together into a narrative to help us accept the fact that what we really want is actually something very different. So we've gone into the start-up of Twitter. From the off they've imagined that this system can occur, then they've gone through the activities to create the code that enables them to do what they think it should be doing. And at every step of the game, what we should be doing is helping that company to consider: what are the consequences of doing this to me? To my company? To society? To other people's societies? So I think although it's not possible to predict, it is possible to imagine the outcomes. And by imagining those, you can imagine ways of not getting there, or ways of getting there quicker in the case of the good things.

John: And that's what some sort of ethical introspection really needs. It needs people to sit down and imagine. So on the tech side, I take people through things like algorithmic bias. I say, well, the algorithm itself isn't biased; the bias is actually in how it selects data, and the data might be biased as well. So you've got bias times bias when it comes down to it. But how do you detect that? So I thought: well, imagine you are data. You're a PDF or a JPEG image of yourself. Imagine what happens to you as you go through that algorithm; go on the journey of being the data, how it flows through that algorithm, and line by line, what's happening to me? And then you can see: oh, I've been excluded. Why have I been excluded? What is it about this algorithm that selected a one, not a zero, this bit of code? What is it about the algorithm that has done this? And how can I understand that better, so that I can inform the data supplier that the data set needs to be selected differently, or inform the programmer how to select differently? See what I mean? It's that sort of understanding; rather than just writing code, it's: what is the code actually doing? And what does the data actually mean? What does it actually contain?
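[For readers who code, John's "imagine you are the data" exercise might be sketched roughly like this. Everything here is a hypothetical illustration invented for this transcript – the rule names and the age cut-off are not from any real system – but it shows the idea of tracing one record through a selection pipeline to see exactly which step excluded it.]

```python
# Toy sketch: follow a single record through a selection algorithm,
# logging the line-by-line decisions so a hidden bias becomes visible.

def trace_selection(record, rules):
    """Apply each (name, rule) pair in turn, recording kept/excluded."""
    journey = []
    for name, rule in rules:
        passed = rule(record)
        journey.append((name, "kept" if passed else "excluded"))
        if not passed:
            return False, journey  # stop at the first excluding rule
    return True, journey

# Hypothetical rules: the second one quietly encodes a bias
# (a hard age cut-off) in the data selection, not in the maths.
rules = [
    ("has_photo", lambda r: r.get("photo") is not None),
    ("age_filter", lambda r: r.get("age", 0) < 40),
]

record = {"photo": "me.jpeg", "age": 52}
selected, journey = trace_selection(record, rules)
print(selected)   # False
print(journey)    # [('has_photo', 'kept'), ('age_filter', 'excluded')]
```

The journey log is the point: instead of just seeing "excluded", the data (or the person it represents) can see it was the `age_filter` step that did it, and feed that back to whoever wrote the rule or supplied the data.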

Manda: But is it not possible to have algorithmic shifts, such that you're not promoting the people who are going for outrage and the dopamine hits? I think Audrey Tang in Taiwan is doing stuff with a local Taiwanese social media platform that enhances cooperation. So that you wouldn't necessarily have to be policing content, but policing the response to the content at an algorithmic level. Yes/no?

John: Yeah, it comes down to scale, but it also comes down to culture as well. I mean, when you take Taiwan and what Audrey Tang is doing, it's within a cultural mix where the bounds are known, the boundaries are known. And the detectability and discovery of the information is limited to that language and to those people, in almost a closed environment. The problem for me is that the world isn't borderless. It'd be a much better place if it were. But as soon as you cross borders, you cross cultural boundaries, cultural norms, different taboos. What is talked about in one country is considered anti-Semitic in another. So do you exclude that entire country, or most of the people in that country, especially if they've been misinformed? Do you then exclude them? This is the whole policing bit: you can't get an algorithm to police something that it hasn't been able to detect over a long period of time and learn from. Machine learning, remember, is human.

Manda: And yet we have algorithms that are very effectively and demonstrably creating division, creating polarisation, throwing people into tribes to which they cleave more strongly than they did before. Is it really not possible to create algorithms that create more cohesion and cooperation?

John: I think probably it is possible, but maybe that shouldn't be the goal. Maybe the goal should be to define those algorithms that are detecting words in sentences and saying, 'No. In our glossary (and this often happens), this word is something that's inflammatory to these people.' Therefore we need to look at this again, and look to see whether the structure of those words is actually doing something bad or is fairly innocuous. So there's a lot of learning involved in there. We've all heard about some of the instances where people have been excluded on the basis of saying something that is totally innocuous. It just happens that the same sentence has appeared as part of a larger paragraph somewhere else, from someone who's produced something inflammatory or derisory or otherwise. So again, it comes down to the scale of this. You have to remember, a computer on these platforms is typically operating about 100 or 1,000 times faster than the human brain. We've got 86 billion neurones in our brains, operating and firing to produce 1,000 thoughts a second, maybe one or four a second for some people, who knows? An inordinate amount of data to search through. So this detectability is incredibly limited. And if all your algorithms are pointed towards detecting this mass of information, embedded… there are needles in a haystack of needles. You come up against a brick wall where you just can't find it quickly enough, or enough of it, or respond quickly enough.

Manda: So in my own iterative loops, which I suspect are quite slow, we have: single-use data is as dangerous as single-use plastic. I still have an image of the Pacific gyre of plastic, which is four times the size of Texas and can be many metres deep. And I think I'm struggling to imagine data being that dangerous. But it doesn't go away. It's held. The CIA has got several terabytes for every person on the planet. And yet… Twitter, Facebook, social media. I hadn't intended to take this podcast towards social media, but it is proving really interesting. It seems to me that there's the… you said a few words in a sentence and Twitter doesn't like it and now it's banned you for life, and that's very sad. But on the other hand, you could have been the next president of the United States trying to incite revolution and got banned for life, and frankly I'm quite happy with that. You don't get to call fire in a crowded theatre when there isn't a fire. But it seems… my understanding is that the algorithms, particularly in Facebook, Twitter, possibly TikTok, which I don't go near so I don't really understand it, cause posts to be liked and re-liked depending on the number of responses that they get. And that the most inflammatory posts tend to arise from a relatively small number of accounts who are very, very good at posting very inflammatory posts.

Manda: To the extent that the political parties in the US discovered that if they didn't produce posts that were inflammatory, and which fed into their own tribe's sense of 'you need to crush the other side really badly or we're not getting our dopamine hits', then they didn't get any traction. So you have an algorithmic set-up that promotes division regardless of the language, based on the number of responses a post gets. Which seems to me to be driven by the venture capital nature of the funding, and the fact that, as you said, 20% of us could leave but they're still dragging in more people, because they have to not just continue to make a profit; they have to make an increasing profit every year to keep the vulture capitalists happy. Presumably at some point they reach saturation and then they can't grow any more; but then they have to create more division, or change Instagram so that it becomes more of an income stream. Are you seeing anywhere, at the edge of tech where you're working, people trying to create social media that we could all shift to? That isn't algorithmically set up to promote hate between various groups, on a planet that needs us all to pull together to help solve the multi-systemic crisis?


John: There are people producing new platforms, like Web3 platforms, for example. But then again, it's a question of scale, and we all have to do this together. I believe that we can all globally imagine a better future, not a worse future, and we can imagine ways of getting there. It's not going to be easy. They're not all going to work; most of them probably won't work. Some of those ways are things like regenerative anything. We need to learn to stop innovating, in my view, and start maintaining the innovation we've got, and getting rid of some of the rubbish stuff. You know, some of the things that are purely generative and pointless. Getting rid of some of the data generation, or making it more efficient; making our energy use far, far, far more efficient. Low-energy electronics for computing, for example, could save 30% of the world's electricity generation. 30%. That would be an instant hit on the need to burn fossil fuels. And there are people working on those things. They're very specific. And all of them to a T say: look, on our own we're not going to be able to do this, to get to less than 1.5 degrees. We have too many borders, too many boundaries, and not enough collaboration on a global scale.

Manda: Yes. This is a whole new podcast, I can feel. We're already so far over time, John. I don't know what we're going to cut to fit this in, because I think the idea of a borderless world government is a really interesting one. But also, how do you stop it being corrupted, and whose value sets do you give it? That's a whole different set of questions.

John: It's a Utopian dream. I've got a little project that I've had going on for a long time, and hardly progress with because it's not a 100% easy project. It's called A History of the Future in Seven Words. And the seven words include things like 'borderless' and 'regenerative', and other things around what we might consider imagining doing, and how it might change the world. But when you start to dig into it, it becomes a very, very knotty problem.

Manda: We have to stop there. I can feel this is going on and on. We definitely need to come back for part two, sometime in the New Year, because I'm booked up until then. And in the meantime, I hope all of your exciting, disruptive, leading-edge tech companies actually do make breakthroughs that lead us into the flourishing future that has to be possible. Thank you so much for coming on to the Accidental Gods podcast.

John: You’re welcome. It’s been great fun.
