#193  Giving Birth to an Alien Intelligence:
AI – existential risk or integral part of the solution? With Daniel Thorson

Is AI really an existential risk to humanity (and the biosphere) or can we harness it to be a power for good?

How dangerous is AI? Are Large Language Models likely to subvert our children? Is Generalised AI going to wipe out all life on the planet? I don’t know the answers to these questions. It may be that nobody does, but this week’s guest was my go-to when I needed someone with total integrity to help unravel one of the most existential crises of our time: to lay it out as simply as we can without losing the essence of complexity, to help us see the worst cases – and their likelihood – and the best cases, and then to navigate a route past the first and onto the second.

Daniel Thorson is an activist – he was active in the early days of the Occupy movement and in Extinction Rebellion. He is a lot more technologically literate than I am – he was active in the early days of Buddhist Geeks. He is a soulful, thoughtful, heartful person who lives at and works with the Monastic Academy for the Preservation of Life on Earth in Vermont. And he’s host of the Emerge podcast, Making Sense of What’s Next.

So in all ways, when I wanted to explore the existential risks, and maybe the potential of Artificial Intelligence, and wanted to talk with someone I could trust, and whose views I could bring to you unfiltered, Daniel was my first thought, and I’m genuinely thrilled that he agreed to come back onto the podcast to talk about what’s going on right now.

My first query was triggered by the interview with Eliezer Yudkowsky on the Bankless podcast – Eliezer talked about the dangers of Generalised AI, or Artificial General Intelligence, AGI, and the reasons why it was so hard – he would say impossible – to align the intentions of a silicon-based intelligence with our human values, even if we knew what they were and could define them clearly.

Listening to that was what prompted me to write to Daniel. Since then, I have listened many times to two of Daniel’s own recent podcasts: one with the educational philosopher Zak Stein on the dangers of AI tutors, and one with Jill Nephew, the founder of Inqwire, a Public Benefit Company on a mission to help the world make sense. The Inqwire technology is designed to enhance and accelerate human sensemaking abilities. Jill is also host of the Natural Intelligence podcast and has clearly thought deeply about the nature of intelligence, the human experience, and the neurophysiology and neuropsychology of our interactions with Large Language Models.

I’ve linked all three of these podcasts below and absolutely recommend that you listen to them if you want more depth than we offer here. What Daniel and I tried to do today was to lay things out in very straightforward terms: it’s an area fraught with jargon and belief systems and assumptions, and we wanted to strip those away where we could and acknowledge them where we couldn’t; to lay out where we are, what the worst cases are, and what the best case is, given that we have to move forward with technology (switching it all off seems not to be an option); and how we might move from worst to best case.

With this latter in mind, I’ve included a link to Daniel’s new project, the Church of the Intimate Web, which aims to connect people with each other. I’ve also – because it seems not everyone listens to the very end of the podcasts – included a link to our membership programme in Accidental Gods, where we aim to help people connect to the wider web of life. I definitely see these two as interlinked and mutually compatible.

So – trigger warning – a lot of this is not yet impinging on public awareness and we’re not yet aware of how close we are to some very dangerous edges. This podcast leads us up to the edge so we can look over. We do it as gently as we can, but still, you’ll want to be resourced and resilient before you listen.

In Conversation

Manda: Hey, people. Welcome to Accidental Gods, to the podcast where we believe that another world is still possible. And that if we all work together, there is time to create the future that we would be proud to leave to the generations that come after us. I’m Manda Scott. And I’m your host in this journey into possibility. And this week’s guest is one of those people whose voice in the world has inspired this podcast from the outset. Daniel Thorson is an activist. He was active in the early days of the Occupy movement and in Extinction Rebellion. He is also a lot more technologically literate than I am. He was active in the early days of Buddhist Geeks. He’s a soulful, thoughtful, heartful person who lives at and works with the Monastic Academy for the Preservation of Life on Earth, which is a Buddhist academy in Vermont. And often when I write to Daniel, I find he’s on a solo silent retreat, and often they last several months. He is one of the most self-aware people that I know; he has enormous integrity and huge humility and great wisdom. And he’s host of one of my favourite podcasts, the Emerge podcast, Making Sense of What’s Next.

So in every way possible, when I wanted to explore the existential risks and maybe the potential of artificial intelligence, and wanted to talk with someone I could trust and whose views I could bring to you unfiltered, Daniel was my first thought. And I am genuinely thrilled that he agreed to come back on to the podcast and talk about what’s going on right now. This is all happening absolutely as we speak. My first look into this was triggered by the interview I heard with Eliezer Yudkowsky on the Bankless podcast, where he talked about the dangers of generalised AI or artificial general intelligence or AGI, whatever you want to call it, and the reasons why it was so hard – and he would actually say impossible – to align the intentions of a silicon-based intelligence with our human values, even if we knew what those were, and could define them clearly in ways that we all agreed on. So I listened to that and was quite shaken. And that’s what prompted me to write to Daniel. And then knowing that he was coming on the podcast, I listened several times to two of Daniel’s own recent podcasts, one with the educational philosopher Zach Stein on the dangers of AI tutors, and one with Jill Nephew, the founder of Inqwire, a Public Benefit Company which is on a mission to help the world make sense.

Manda: Jill is also host of the Natural Intelligence Podcast and has clearly thought deeply about the nature of intelligence, artificial and otherwise, about the human experience, and the neurophysiology and neuropsychology of our interactions with large language models, which are basically things like ChatGPT. I have put links to all three of these podcasts in the show notes, and absolutely recommend that you listen to them if you want more depth than we’ve offered here. What Daniel and I have tried to do today was to lay things out in really straightforward terms. This is a new concept to some people, and it’s an area fraught with jargon and belief systems and assumptions. And we wanted to strip those away where we could, and acknowledge them where we couldn’t, and lay out where we are, what the worst cases are, what the best case could be, given that we actually have to move forward with technology. Switching it all off seems not to be an option. And then how we might move from worst case to best case. With this latter in mind, I’ve also linked in the show notes to Daniel’s new project, The Church of the Intimate Web, which aims to connect people with themselves and each other.

Manda: And it feels and sounds to me like a really exciting project. I have also – because it seems that not everyone listens to the very end of the podcast. Really? Do you not? Anyway, I’ve included a link to our membership program in Accidental Gods, where we aim to help you to connect to the wider web of life. I definitely see these two as interlinked and mutually compatible, and would absolutely encourage you to explore them both. So this comes with a trigger warning. A lot of what we discuss here is not yet impinging on public awareness. And a lot of us, including me, are not yet aware of how close we are to some really dangerous edges. In this podcast, we walk up to the edge so that we can look over. We do it as gently as we can, but even so, you will want to be resourced and resilient before you listen. So with that established, people of the podcast, please with great heart do welcome Daniel Thorson, host of the Emerge podcast and so much more.  

Daniel, welcome back to the podcast. It is such a delight to be speaking with you, and I have to say I love your cabin. Is this where you do your silent retreats? Because it looks rather, rather lovely.

Daniel: Yeah, this is not where I do my silent retreats. This is where I live, at the monastic academy with my partner. Yeah, it’s a beautiful place. 

Manda: Gorgeous. Yeah. So, welcome back. We talked to you several years ago now on the podcast, and then you came and spoke to us in Thrutopia. And in Thrutopia, really, you were the first person to open up the potential dangers of ChatGPT at a time when I didn’t even know such a thing existed. And so now we’ve come to you because there’s a lot of stuff on the edges of our awareness about the dangers of AI. And you are my trusted voice. I absolutely… if I hear what you say, I trust you. You’re not trying to pull the wool. You’re saying what you believe to be true, and we can test it and we can explore it. So thank you, first of all, for giving us the time in what sounds like an incredibly busy schedule. And let’s leap in with AGI first. What is it, and why do people think it’s dangerous, and are they right? Over to you.

Daniel: Yeah. So I’ll preface this whole conversation by saying that I’m not an expert in this area. And to be fair, I think there’s a way in which nobody really is because it’s such an emerging space of exploration. There are people who have spent a lot of time developing AI technologies, but it’s unclear about whether or not they are actually experts in what they’re creating, which is what makes this so difficult sometimes to talk about and think about. I will say that my own journey with this has been one of just having lots of conversations with people, and just feeling and thinking about it as deeply as I can. And so, you know, that’s just where I’m coming from. I don’t have particular training in machine learning or artificial intelligence systems. And so, you know, just your mileage may vary in terms of orienting to this really significant part of our world now vis a vis my sense-making. That being said, we’ll kind of try to explore it together and I’ll just try to be very clear about the limits of my understanding as they arise.  

Manda: Sure. You understand more than I do, and I suspect a lot more than most people listening. So let’s see where we get to.

Daniel: Let’s see where we get to.

Manda: Thank you.

Daniel: Yeah. So AGI is a term for what’s known as artificial general intelligence. Now, artificial general intelligence itself is a very contested idea within the kind of whole realm of artificial intelligence expertise. It’s broadly understood to be the kind of point at which AI is autonomous and self-improving. Now, this is thought to be – and there have been many books written about this – a very critical departure point or threshold, past which we actually just cannot know what will happen. When people talk about the singularity, or terms like that, often there’s something like a moment in which AI becomes generally intelligent in that kind of idea of a possible future. Yeah.

Manda: So to be clear, this is when we, humanity, develops a chip or a sequence of chips, or a hardware unit with a core which we have designed and built, but which then is capable of designing and causing to be built its own successor.

Daniel: Exactly.

Manda: And its successor is then capable of designing and causing to be built its own successor. And probably within hours, certainly within days, it has reached levels of computational capacity that we actually couldn’t begin to comprehend. I’ve heard Daniel Schmachtenberger say on a number of occasions that what’s going to finish humanity is our incapacity to understand exponential growth. And particularly in this field, we have a tendency as humans – there’s probably a name for it – where if we don’t understand something, we think it cannot be understood, and we will not be able to understand the extent to which the end result of this process, three days down the line, can do the things that computers do so much faster than we can. Is that a kind of mundane exposition of what you’re saying?

Daniel: Yeah, that’s a great way of understanding it. And, you know, another way of framing it is that, from the perspective of AGI, we’re essentially giving birth to an alien intelligence on the planet, right? And you can imagine then, like an alien intelligence coming to Earth, we actually have no idea what its intentions would be. We don’t know if it would be able to interface with our culture. We don’t know how it would relate to us. It’s just a kind of total unknown. And so a lot of the people you hear about, the folks working in the field of AI alignment, are attempting to create programmatic ways of aligning the values or intentions of these potential future intelligences that we’re creating with the values and goals of human beings.

Manda: Okay, So the very first question, leaving aside the computational capacity of that, is which particular sets of values and goals? Because you and I live in the Western hegemonic society, which has first of all got very, very different values and goals from, say, people who lived 10,000 years ago, or anyone indigenous on this planet. And even within our culture, and the narrowness of that, there are polarities of values and goals. There are certain values and goals I would not wish to be living under. You know, we would end up with A Handmaid’s Tale lived out in real time. So who decides which sets of values and goals they are trying to impinge into the generalised AI?

Daniel: Well, so ultimately it’s the programmers, right?

Manda: Great. 

Daniel: So it’s these people, primarily in Silicon Valley, who have, you know, certain ideological commitments, who will likely, if it’s possible to create an AGI, be the ones to do it. Now, of course, as soon as you start talking about ‘AI alignment’, as you are noticing, you immediately think, well, what about the human alignment problem? We actually haven’t solved that one yet. We haven’t aligned the intelligence of the human beings. You know, we don’t even know what we would align it with. To even ask that question in a lot of circles is just a source of confusion. And, you know, where do you go after that? Nobody knows what to base it on. And so it’s very challenging for our species right now. And, you know, one of the things that my mentor Zach Stein has said is that there’s a way in which this topic of AI and AGI is putting us in a position where we can no longer afford not to answer these critical questions, like: what are the values worth aligning ourselves with? You know, maybe for a while we could pretend that wasn’t the only question that mattered. We actually cannot any longer afford to postpone clarity on that.

Manda: You’ve had a lot of conversations with Zach, and I’m guessing you’ve… my philosophical capacity is a tiny fraction of yours. And I’m already thinking that people fight wars over the answer to the question of which set of values we are going to align with. Has anybody come up with what for you is a workable solution to how we select a set of core values around which we can all coalesce and agree?

Daniel: I mean, that’s all the marbles right there. That’s the big question. You know, as near as I can tell right now, it won’t be simply propositional logic. Like we won’t say oh, here is a value, in words, that now we’re all going to agree with. There needs to be a kind of profound shift in the kind of participatory structures of the planet. The way that Zach frames it, and I agree with this, is that we need new structures of generating new forms of intimacy that allow us to together have the kinds of conversations which produce alignment or produce coherence. And so it’s a real shift in what we’re doing together, and what the human being is that’s actually needed in order to even begin to answer that question.

Manda: So we’re kind of mashing things together. So there’s E.O. Wilson, who says we have Palaeolithic emotions, medieval institutions and the technology of Gods. And then Daniel Schmachtenberger, who says, if we’re going to have the technology of Gods, we need to have the wisdom, compassion and prudence of Gods. And we don’t know what the wisdom and compassion and prudence of Gods looks like, but we’re going to have to find out quite fast. So let’s go back to: how close are we, do you think, to the singularity, to the point where we create something that can design and build its own successor? That seems to me quite a critical threshold. Have you an idea? Because when I spoke to you once before, you said we’d probably get a couple of years, and now, on the Bankless podcast, Eliezer seemed to be suggesting possibly months. And he’s in this field.

Daniel: Nobody knows.

Manda: Guess. Best Guess. 

Daniel: Nobody knows.

Manda: Nobody knows. But human beings are still working to make this happen!

Daniel: Well, this is the wild thing. This is part of the kind of strange ideology that’s driving the production of these technologies: if you talk to the people that are actually building these things, and I’ve talked to some of them, they’re just – they can’t help but do it. They think that they’re helpless. They need to create this intelligence.

Manda: Is this a monetary thing? We have to create it first because otherwise the other guy creates it and then we lose?

Daniel: There is some of that, I think, especially at the executive level. But for the engineers, it’s more just like, can it be built? I need to know.

Manda: Okay. But they do that with this attitude of, you know, we’re just going to create something that ten years ago would have seemed impossible and now is within reach. And yet it’s still ‘too hard’ to get a bunch of people in a room and work out what our values might be.

Daniel: Isn’t that wild?

Manda: It does very weird things to my head, yes. Okay. So we don’t know how long. So let’s go back to…

Daniel: I’ll just say this about the kind of prediction about timing: based on the people that I’ve talked to and what I’ve seen and read, if we’re going to do it, it will probably be within the next 10 to 20 years – if it’s possible at all – given the amount of money and human capacity and attention that’s now being thrown at this problem. And there’s so much incentive, especially when you talk about multi-polar traps and the kind of game dynamics that are embedded in this. So for instance, the first person – the first nation more likely, or first company – to develop AGI wins the game.

Manda: Every war there ever has been. 

Daniel: Exactly.

Manda: Except… 

Daniel: They get to set the values. 

Manda: Well, they do – if they can contain it.

Daniel: If they can contain it.

Manda: But by definition…

Daniel: Perhaps, yes.

Manda: We’ve defined AGI as the agent which can design and construct its own successor. And then nobody has control of it.

Daniel: Maybe we don’t know. Yeah.

Manda: So they seem to think they can let the genie out of the bottle and catch it in the paper bag they’re holding over the bottle, without the genie getting out and actually devastating the world.

Daniel: Yeah.

Manda: It seems like there’s a degree of denial going on here as to the implications of what they’re doing. Okay. For those of you who can’t see, because we aren’t recording the video, Daniel’s just sitting there shaking his head, which is, I think, going to happen quite a lot in this podcast. Okay. So let’s go back to, for those of us for whom this is an idea that’s come out of left field: what is the worst case scenario if the genie does get out of the bottle and we fail to catch it in the paper bag?

Daniel: Oh, yeah. Oh, boy. It’s very chilling, actually. So the worst case scenario – and I actually think this is the worst case scenario. This is in a sense, worse than, you know, nuclear annihilation, in my mind – is what I’ve heard referred to as the kind of death of our humanity, specifically, that these machines essentially replicate themselves across the universe. And there’s nobody home. There’s no consciousness. There’s no light on inside the house. And they’re just eating everything and turning it into substrates for computing. This is horrifying. You know, even now we can say at least we’re consuming the natural resources to serve life. But in that world, there’s nobody home. There’s nothing, no awareness present as the universe is kind of turned into zeros and ones.

Manda: So this was the Eliezer Yudkowsky hypothesis of: it will work out how to destroy not just humanity, but everything that lives and also everything that doesn’t live, to render them down into atoms, because the atoms can be made into more computers. Because one assumes the only imperative of the thing that can design and build its own successor is to build a lot of itself. It’s got an imperative to reproduce. And it reproduces. It becomes the ultimate parasite that would, in the end, presumably just eat up the entire planet until it gets to the very hot bit at the core. And if it hasn’t worked out the very hot bit at the core is quite dangerous, then it kills itself. But it’s probably smart enough to work out, you know, molten core of earth, tricky, stay far enough away.

Daniel: Some of the smartest people that I know claim that there is a fundamental unalignability between what they call silicon-based life and carbon-based life. That just because of the nature of the life form itself, there’s some kind of intrinsic unalignability. Now that kind of argument is beyond my pay grade. But this is the kind of thing that people are thinking about as they endeavour to understand the risks.

Manda: But the fundamental is, because years ago we had Asimov’s laws of robotics, that you insert into the robot the absolute imperative that ‘I will do no harm to humanity in any circumstance’. And it would rather switch itself off than do something bad to people. And the take-home message is, guys, that doesn’t work. It can override.

Daniel: It’s more complicated or complex than that. Because you say don’t harm us, but please cure cancer. And it says I’m going to cure cancer. And this is a very facile example, but I’m going to cure cancer by killing everybody. No more cancer.

Manda: You don’t get cancer if you’re dead. So there we go. Yes. We don’t realise the complexity of any proposition because we make assumptions about its interpretation.

Daniel: And also because we don’t understand the nature of the human mind. We don’t understand the nature of mind.  

Manda: Yeah, right. Yes. And this seems to be something that you went into in quite a lot of depth with Jill Nephew: the difference between thinking and cognition. But let’s stay with the AGI. That was quite an interesting one to go down, but I’d also like to take a quick segue, because there was a really interesting hypothesis that I’d heard elsewhere, but not as succinctly put as when you were talking to Zach Stein, which was: we’re really afraid of a system that will destroy everything in the pursuit of an unaligned goal, and we’re living within predatory capitalism, which is destroying everything in pursuit of its own growth. And still, you know, there was a news broadcast this morning on one of our mainstream media channels, who interviewed the person who is probably going to be the next chancellor of the exchequer when the opposition gets in at the next election. And they were having an economically illiterate conversation about the need for the economy to grow, and thereby not spending taxpayers’ money on anything. Because if you spend it on anything, obviously it just vanishes; it doesn’t come back to the exchequer. Leaving aside modern monetary theory, they still think the economy needs to grow. It’s still embedded in the thought processes of people who did politics, philosophy and economics at Oxford in the 1980s and are now in power, and they never question it. And it is actually destroying everything. Even if the AGI does not take us all out within the next five minutes, we’ve hit so many earth boundaries – and carbon is only one of them – we are destroying ourselves anyway. So we’re embedded in a system that is growing, and destroying everything as it does.

Daniel: Yeah. I think it’s just – the way that I look at it, you know… so I don’t know about AGI. I’ve heard from very intelligent people who think it’s just straightforwardly impossible that we’ll create such a genuinely intelligent system. Again, with this stuff I’m not willing or inclined to put in the hundreds of thousands of hours to become an expert, such that I can really – I have to outsource my sense-making to a certain degree. And when I do that, I see that I actually just don’t know.

Manda: Okay. There are people on both sides…

Daniel: Both sides, that seem very persuasive.

Manda: And they can’t both be right.

Daniel: They can’t both be right. Somebody’s going to be wrong. But here we are right now, in an already unaligned system. You know, the frame that they often use is what they call the paperclip maximisation problem. I think Eliezer talked about this. So this is the idea that, you know, you tell the AI to make paperclips and it mines the iron in your bloodstream to make paperclips. I don’t even know if paperclips are made of iron, but, you know, that’s the idea. And it just turns the whole universe…

Manda: It renders down everything in the planet in pursuit of making paperclips. 

Daniel: Right. And it’s like, we should just acknowledge the fact that we’re already living in a paperclip-maximising system. We’re already turning all of nature into numbers, into money, which actually does not have any intrinsic value. We’re turning the unfathomable beauty of the world into money, right? Into numbers, now. And so it’s like we’re already doing it. It’s just that we’re now producing technologies to intensify and accelerate that process. And so I think that’s the most important understanding to really drive home: it’s what’s already going on, just potentially on steroids. So whenever, you know, people talk about ChatGPT as being this herald of increasing productivity, I’m like, oh, no. I don’t want us to be more productive until we get clear about what’s worth producing.

Manda: Right. Back to the values question: why are you producing more widgets? It doesn’t even have to be paperclips, but you’re producing more tat because capitalism is not a construction machine, it’s a destruction machine, and you have to destroy stuff so that you will consume more. Wow. So that’s the worst that can happen. And Eliezer was quite clear. He said, I’ve worked this out, and if I can work it out, it can probably do something brighter. And it was that it constructs something such that basically all of humanity – and all life, in fact – is just switched off in a moment. And I have to say, there’s a bit of me that looks at that and goes, that’s probably the best option, if it weren’t that it was taking out everything else. Not knowing it’s coming, everybody going at the same time, no worrying about the future for your children and grandchildren because you just don’t wake up that morning – that isn’t necessarily the worst thing that could happen. What’s worst is that it takes out all of the beauty of all of the life out there as well.

Daniel: Well, there are other, worse, terrifying possibilities that, I think you’re right, are in a sense more scary than somebody or something just turning the lights off, which is this kind of endgame, static, techno-feudal state. Like if you look at what China is doing with its social credit system, it’s already using AI to supercharge that system in terms of creating a sort of inescapable totalitarian control over its population. And this is the way that Daniel Schmachtenberger often talks about it: there’s chaos on one side, which is kind of what we’re flirting with in the US and in most liberal democracies, and then there’s too much order on the other side, which is what China is exploring. And then there’s some kind of middle way, the third attractor that he talks about, which maybe we’ll get into talking more about.

Manda: Which would be a values-based, let’s all get together and work out what our values are, and maybe let’s just stop messing with the dangerous stuff. But yeah.

Daniel: That would be the emergence of the wisdom and love to kind of bind the power of our technology collectively.  

Manda: Okay. So let’s leave the AGI for a moment, and move on to one of the other existential risks – oh yay – which is founded on the conversation you had with Zach Stein about the inherent problem of using just the AI that we have. Not generalised AI, but ChatGPT and the other large language models as AI tutors, and where that could go. Because that also blew every fuse in my brain, because I didn’t even know this was a thing. I’d at least heard of AGI. So tell us a little bit about that.

Daniel: Yeah. So essentially there are a lot of companies, a lot of actors right now that are racing to develop what we could broadly call AI tutoring systems. This comes out of, you know, a wish to provide, you might say, good education for everybody who has access to a computer. And these tutors will essentially be given to you in some kind of interface. Likely they’ll be taking advantage of augmented reality – like the Apple glasses that recently came out, they’re called Apple Vision Pro, but there are many others creating augmented reality interfaces – so that this AI tutor would be a kind of constant companion with you, ongoingly helping you to answer any question that you might have, to guide you, to mentor you, to teach you what it is to be alive and what you should endeavour to learn and become. Now, on the one hand, this seems like a wonderful thing, because we know from educational research that tutoring is by far the most effective form of education.

Manda: As in person to person, one on one.  

Daniel: Exactly. Like with feedback and, you know, personalised tutoring. Yeah.

Manda: And what is it that… is it the feeding back? Is it that I, as the student, develop an emotional relationship with you as the teacher, and I want your good regard? So I’m going to push myself, because then you go, oh yeah, well done, you got it. And I feel good. And then that’s what… okay.

Daniel: Yeah, it’s a lot of that. I mean, you know, a lot of the magic of teaching, just like a lot of the magic of therapy, is, I think we know, just the quality of the relationship, right? So I feel inspired to be more than I am, or to change, by the one I’m with. And so in order to achieve this objective of creating AI tutors, many of these companies are creating basically human facsimiles that appear to the student as if they’re extremely knowledgeable beings, extremely knowledgeable entities that are more trustworthy, more wise, more capable than any human that is actually in their life.

Manda: They are infallible, in fact.

Daniel: I mean, that’s the directionality, right? They will have access to so much information and so much knowledge that they will appear to be sort of godlike.

Manda: And they’ll know you, the student…

Daniel: Indeed, yes.

Manda: …better than anybody possibly could. Because they have access to your physiology. They can measure your heart rate. They can measure your blood pressure. They can measure your sweat. They can see your pupil dilation, how you’re responding in real time, and give you more of what you want.

Daniel: Yes, that’s where a lot of people want this to go. And these technologies will be, as Zach Stein frames it, inexorably persuasive, meaning that they know you so well that they’ll know exactly the right words to put in exactly the right order to persuade you of a certain perspective – for your education, for your development. And yet. And yet, right? Like it’s an extraordinary power. And one of the things that Zach talks about, which is, I think for me, the most chilling part of this, is that if you imagine children born into this kind of relationship, where their primary mentor and teacher is an artificial intelligence, a kind of artificial being, what will that do to the texture and weave of their relationships with other humans? And so Zach talks about the possibility of essentially depreciating or, you know, breaking – a kind of generational breakage, where children born into this world have more in common with, or are more identified with, these AI tutors that have their best interests in mind, it seems, than they do with the humans who’ve come before them. So this doesn’t even require AGI. This is actually more or less doable with the technology that we have already established. And yeah, it’s very scary actually.

Manda: It’s terrifying on every level, because we still haven’t answered the alignment question of who’s deciding what’s useful for you to know. Because if I understood what Zach was saying, it was: they will be able to teach you anything, in a way that you cannot not learn it. And that anything could be that the Bible is 100% accurate and you must live by every part of it – which would be interesting, and would probably be self-terminating quite quickly. But leaving that aside… or, something beautiful.

Daniel: I mean, as it currently is in most public education systems, we want to teach you to be a good participant in the economy, right?

Manda: Oh, gosh. Right.

Daniel: So again, we go back to the productivity, right? So we’re going to make even better, productive…

Manda: And make Brave New World look like child’s play.  

Daniel: Yeah. And again, the people that are doing this have the best of intentions. They want their children to be successful. They want all of our children to be successful, but they just haven’t slowed down to reflect on what we’re doing.

Manda: Because everybody thinks their own values are really good values. Most people believe that their own core values, insofar as they understand what their own core values are, and they’re not just emanations of their limbic system twitching, are good and therefore are worth promoting, and that the things that they believe to be true are true. Okay. And so then we move on to: how do these things hijack us? We already know that the social media we have, in its infancy as it is, has the capacity to drip-feed us little dopamine hits, and that we become addicted to the dopamine, when actually, if we could get ourselves into the serotonin mesh of relational being with other humans and the web of life, we’d be much less anxious. But the dopamine drips are great.

We’ve now got something where we’ve at least got visual input potentially. You put your glasses on in the morning and you now have no way of knowing whether what you’re looking at is real or not, but you believe it to be real. And then something that Jill Nephew said, which is trust is one of our superpowers. Our Palaeolithic emotions are hardwired to trust things that seem to be trustworthy until we have proof that they’re not. And this thing has the capacity to tell you that it’s raining outside and make you believe that it’s raining outside even if it isn’t. You will trust it.

Daniel: Exactly. Yeah. This is actually, for me, the most alarming part of these technologies, because they’re here. It’s here now. This is not even like the AI tutors Zach talks about, five years off or whatever, or AGI – which people think might not happen; I don’t know if it’s going to happen. But right now, people are using these large language models and interacting with them as if they’re language-using beings, because they’ve been designed to essentially fool our evolutionary psychology into thinking that they’re trustworthy – in other words, that they themselves are intelligent beings using language the same way that we are. But they are not. Jill describes them as something like statistical monstrosities. They are not using language the way we are. Language is like birdsong: it’s a way of helping each other make sense out of reality. These large language models aren’t in touch with reality. They have no idea what reality is. They’re just a series of calculations being given feedback by humans saying, did you produce something that makes me think you’re using language? That’s not what language-using beings do, you know?

Manda: It’s convincing nonetheless, because statistically it gets it right more often than it doesn’t, and then it self-improves. We already have the ones that are teaching themselves to play Go faster than any human being possibly could. And they’re also teaching themselves language faster than we can learn language. So it’s self-improving all the time. And if I go, ‘I’m not sure that’s true’ once, it’ll learn from that, and come back with something that I go, oh yeah, okay. That sounds true.

Daniel: Yeah, yeah. And from my perspective, with a lot of this, for the current crop of technologies – I think in my ideal world you’d have to go through a course on cognition and language and other things you ought to know in order to interface with these technologies without losing your centre and losing your sense of reality. Because once you do that, they’re actually very interesting kinds of tools and toys. But if you go into it and you think, oh, this is a trustworthy intelligence, we are actually mining the trust commons that we have established, which is already so flimsy in our world that it’s just not… yeah. It’s a very dangerous game that we’re playing with our common inheritance of language, and the beauty of it, and the meaning of it, and the importance of it, and what it can actually do in terms of creating a context of coherence and alignment for human beings.

Manda: Because it seems – let’s leave China and its monolith to one side, and assume that in the West we have a number of competing companies, each producing their own AI tutor. Maybe I’ve read too many dystopic novels, but I can see the wars happening quite quickly between, you know, large company A and large company B (so we don’t get sued): my large language model tells me this about the world, and I believe it to be true; your large language model tells you something else, and you absolutely believe it to be true; and they are mutually incompatible. And somewhere in the background are the five-year-olds who have only ever known the large language model that is teaching them something else. And at the point when they gain agency, they’re going to look back at the rest of humanity and say to their large language model, who are these strange people? And it’s going to go, oh, they’re the people who screwed up the world. And you don’t need them anymore.

Daniel: Yeah.

Manda: Which is not an unfair comment in either way. And we’re toast then. But we’re also, I mean, I can imagine wars happening for which there is no possible resolution, because nobody knows how to question what they’re being told anymore.

Daniel: Yes, yes, yes. And even if they think that they’re asking questions, because the system of control is designed to respond to your questions with answers. And this is the thing that’s really terrifying. This is what my teacher, Soryu, is, I think, most terrified of: a kind of endgame totalitarian structure where there’s just no real possibility of escape. It’s so complete – the simulation is so complete – that there’s no way to even find a little peephole through it. And it seems like it’s possible, perhaps.

Manda: Can you talk me through that in more detail? Because I’m thinking there will be people who never – let’s say, I don’t know, the Amish – they’re not going to play with this technology. They just won’t. You quite often go into silent retreat, for many months, I’m guessing. You will not have the technology at that point. I vividly remember you went into silent retreat at the start of Covid, when everybody, certainly over here, was hugging – well, we weren’t hugging each other, we were virtually hugging each other – and telling each other we were all going to get through this together. And you came out three months later, and there were vaccine wars happening, and you either wore a mask or didn’t wear a mask to show which tribe you were in. And that was happening without the benefit of large language models, just ordinary social media winding people up. How is it that I cannot go outside and sit with a tree, and look at the atmosphere, and taste the water, and grow my own vegetables, and come back and feel that I have engaged with the earth? How does that stop being a thing that enables me then to question what I’m being told?

Daniel: Yeah, so this, I mean, I think we’re going to try to find out.

Manda: Oh, God.

Daniel: So there’s a way in which I think what you’re pointing to, and this is what I’m putting all of my money on, is that there is something irrepressible about the human soul and the human spirit, right? And so to the degree that we are living beings, there’s still kind of a fighting chance for life. And we seem to be endeavouring to increase our capacity for ideological control systems to a degree that, when I was growing up, I never would have imagined possible. You know, like the kind of inexorable persuasiveness, for instance, of these large language models, the kind of generative media that’s coming down the pike so that you can just spin up whole new Marvel movies rooted in your preferences and your ideas or new ideologies. All this stuff is going to create, for many people, something like: why would you want to leave the simulation? Why would you ever want to leave the simulation? Who cares what reality is? This is fun. This is interesting. This is tailored just for me. It’s meeting all of my preferences, you know?

Manda: Okay. It’s hooked into my addiction so deeply that I cannot question.

Daniel: Exactly. Yeah. And this is what we’re already doing with social media. I mean, the amygdala hijacking that’s happening. Like, you know, it’s just, we’re getting more and more refined, more and more sophisticated at basically taking advantage of the ways in which human desire can be confused and manipulated.

Manda: It strikes me that this is a one-generation model. Because human-to-human relationship ceases, and either girls become pregnant because the model has told them they have to – and that really doesn’t sound fun, but it’s decided that it needs to have humans reproduce – or, people… human relationship is hard. And yet, you know, you and I, and I’m sure people listening, we know that we grow. You come up against the hard stuff and you find out who you are. If all you’re being fed is basically mental cream buns, and therefore all you ever want is more cream buns, we’re all going to end up, you know, obese monstrosities who only know how to talk to the silicon and actually can’t even, and don’t want to, hold a conversation with a person.

Daniel: Totally. And I think the important thing here is… again, I am just totally uncertain about what’s going to happen in the future. What I suspect, though, is there’s going to be an intensification of patterns that we can already see present now. And so when I look at folks that are younger than me, you know, people in Gen Z, some of them are incredible. Some of them have fought their way out of the simulation sooner than I could have imagined possible. Most of them have had their social skills atrophied to the point where it seems like they have brain damage, actually.  

And that’s what my mentor, Zach Stein, says. He says that basically, with TikTok and other services, we’re conducting mass-distributed, low-grade brain damage on our culture. And you see them, and they’re just like, oh, you never learned how to just relate to other humans. Like you never got really socialised. And you’re much more comfortable with this device in your hand. Like, we don’t know what we’re doing. We’ve extracted from the natural world for so many generations; now we’re extracting from the human lifeworld, and we think there’s not going to be repercussions. We’re doing it completely brazenly and wantonly, without understanding what a human being actually is, without having any sense of what is sacred and valuable and good and true and beautiful. And, you know, it’s not going to go well.

Manda: And we’re combining that with – if Robert Lustig is right, he reckons that 93% of Americans have metabolic disease, and he’s seeing babies that are born obese because their mitochondria don’t work, because they’re being fed industrial food. So we’ve already got core physiological damage. Our gut microbiome: there was a lovely study in Nature a couple of weeks back where they studied forager hunters, peasant farmers in I think Mexico, and a group in California, and looked at the microbiome. The microbiome of the forager hunters: huge and varied and wonderful, and full of the things that keep you well. Peasant farmers: not quite so huge, but generally pretty good. People in California: it’s basically going to kill you. So we’ve got that. We’ve got physiological damage. We’ve got hormonal damage, because we’re all drinking water and eating food that is packed with stuff that we don’t recognise – but nobody managed to prove that it killed a rat within a week, so it’s allowed. And as far as I think I have read, there is measurable MRI change in people’s neurophysiology now if they’ve spent their lives on a – and that’s now. We’re conducting a massive uncontrolled trial on humanity where we’re not even testing a hypothesis. We’re just requiring money to be made for the people who can make money out of these things. Okay. It doesn’t feel great, does it, Daniel?

Daniel: No, but it is true, as far as I can tell. As far as I can tell. Yeah.

Manda: So we’re going to take a step down. Before we have a look at what could go right – because there must be something that could go right – we haven’t quite come to Jill’s concept of cults. So I think I’d like to have a look at her concept of grounding first. She uses the word grounding a little differently than we do, but for her, grounding would be, I think for me, verifying. It’s the ‘trust but verify’ of the KGB! You know: can I take something that this large language model has given me and check it against actual reality? And am I still sufficiently aware of what actual reality is, in order to be able to check it? And the problem with that, as far as I understood from her, is that I can only check the stuff that I already know. Everything else is going to sound really, really plausible and I’m going to believe it until it’s… I wonder.

So this is something I would like to explore. I listen to large numbers of podcasts, and I listened to one recently with somebody explaining why the seven colours of hydrogen are actually all just black. Hydrogen is basically not a good thing: it’s going to eat up world resources to produce, and then it’s not actually that useful a fuel source. And it all sounded massively plausible, backed up by what sounded like really impressive figures, until he got to the point where he said, and of course we need hydrogen to produce ammonia for modern agriculture, otherwise we all starve. And I know that not to be true. It’s actually false. And I’m sure he believes it to be true, but that’s because he swallowed Monsanto’s line, and it’s not true. And then I think, oh, does that mean everything else you said was untrue also?

It was really interesting watching my own trust capacity disintegrate, and thinking I would have to – I mean, not hundreds of thousands of hours, but I have to spend a significant amount of my life now becoming a hydrogen expert to know, is everything else you said bullshit also? Rats! I really wanted to believe all that.  

And so is it the case that if the large language model says something to us that we know to be untrue – it’s raining outside, and we walk out the door and it actually isn’t raining – does it then destroy our trust in everything that it says? Or does it have the capacity then to work out that, sorry, I made a mistake there; it was raining, but it isn’t now; it’s okay, you can still trust me?

Daniel: So one thing just to acknowledge about these models is that they can’t say they don’t know. You can ask them any question, and they put guardrails on where they’ll say something like, well, I’m a large language model and, you know, dah dah dah dah dah. But fundamentally the nature of the technology is that they answer any question that you give them, which is just an interesting fact. Like, I don’t trust anybody in my life that pretends to have an answer to every question that I ask. And so, you know, I guess I’ll just say that, after really internalising Jill’s arguments about the need to ground the utterances that we hear – right? So somebody says something: how do you, where do you ground it? Where do you verify it? – for me, when I really look at that, there are some things that I can ground in my experience, and there’s a lot of things that I can actually only ground in the network of trust relationships that I have in life.

Manda: You can ask somebody else whose opinion you trust.

Daniel: Yeah, I’d say, okay, Manda, like, you know about fertilisers in a way that I don’t. I’m going to trust you.

Manda: Okay. Yeah.

Daniel: And that is, as far as I can tell, the only way – that’s what I need to base my life on. I’m not going to base it on large language models.

Manda: Right. Because otherwise we hit the loop that you described already, which is the only way to check is to ask the large language model. And it’s going to tell you that it was right.

Daniel: Yeah, no there’s nowhere for it to ground. There’s nowhere that you can verify it. Now, there are people working on connecting large language models to real world datasets. So this field is constantly changing. But as they exist right now, they’re just spouting ungroundable propositions.

Manda: Right. But doing it in a way that sounds to us really trustworthy, because it knows the kinds of language that you use when you really know what you’re talking about. And we’re used to trusting people who really know what they’re talking about.

Daniel: We’ve trained them to sound like experts about things which they have no knowledge about, because they aren’t the kinds of things that have knowledge.

Manda: In fact, they are politicians!

Daniel: Yes, indeed. Yeah. So they’re very much of a piece with the kind of bullshit industry that so many of us are used to. And so they’re coming into a kind of information ecology and an information commons that’s already been totally corrupted by not just politicians, but marketers, and just grifters of all kinds. So they’re kind of like a super-powered version of that. And when she talks about cults, you know, it’s that kind of, you know, getting into a cult is basically having your cognitive agency be captured by an outside entity such that you can’t any longer ground what is real. You lose track of what’s real.  

Ideologies throughout time have done that to people, have made people lose track of what’s real. And this is going to provide, as we talked about, with like China and totalitarian states, the kind of perfect technology to create infinitely ungroundable cognition, so that you could just never… you’re in a maze that you can never escape because you know, you never leave.

Manda: Yeah. So we’re back to George Orwell, and who controls the present controls the past, and who controls the past controls the future. If I’m a large language model and you gain all your information from me, then you have no way of getting out. So I’m thinking of the vaccine/anti-vaccine tribalism that seemed to me clearly to be stoked. There was a short while when everybody was basically, okay, this is going to be hard, this pandemic, but we’re going to get through it. And there seemed to be quite a lot of people actually getting on with each other really well.

And then the ‘do you or do you not believe in vaccines’ became a wedge issue, and was promoted as a wedge issue. And so what we have is a part of our population, not all of the population, who are inherently disinclined to believe experts. And we already had it with the climate. You know, 99, 100% of climate scientists who are not funded by fossil fuel industries or their agencies are telling us that we’re hitting tipping points with the climate. But a selection of the population simply believes that’s not true. I wonder, just as a thought experiment, do they just migrate to the large language model that tells them that the climate is fine, and that becomes their large language model of choice? And then every other person goes to the ‘Oh God, we’re on the verge of apocalypse’ one. And we’re back to: this is how wars start. People fight wars over belief systems far more than they do over just about anything else.

Daniel: And that’s likely what will happen near-term: different ideological positions will create their own large language models, so that people who prefer right-leaning framings will use the right-leaning large language model, and, you know, other folks will use the other model.

Manda: Will it be another model? Because I hate to be pejorative, but it seems to me almost anybody with huge amounts of money, which is what it’s going to take to make a large language model, leans a long way to the right from where I am. I realise I’m not quite in the centre. Is there anyone with large amounts of money who would make a left leaning large language model?

Daniel: Well, a lot of folks right now think that ChatGPT, the OpenAI model, is left-leaning. But, you know, I don’t follow those conversations closely enough to care.

Manda: No. And I think… yeah, okay, let’s not go down that one. But I think you have to be standing at a certain place on the spectrum for it to seem to be left to you. And that place is not where I would consider the centre to be. It may be where the centre has become now, but – wow.  

Daniel: Yeah. Hard to track. Hard to track and stay sane, at least.

Manda: Yeah. And there’s so many more important things. I confidently said to somebody the other day that left and right are last millennium and obsolete, because it doesn’t matter anymore, because, hey, we’re in the middle of the sixth mass extinction. And yet here we are, bringing them back in.  

So, the cult thing. Jill was talking about something that she defined as soul rape, which we’ve gone to the edge of, but it was just one of those… first of all, it was a very moving part of the podcast. And it was just so awful. Could you just define it for us? And then I want to look at the ways that we could use technology that would actually be useful. So I’m just giving people a resilience buffer, that we’re heading somewhere better, before we look at the really difficult stuff. So over to you.

Daniel: Yeah. My understanding of what Jill was pointing to is: we as ensouled beings desperately want to know ourselves. That’s just what it’s like to be a being with a soul. We want to understand what we are, who we are. And so we turn to each other to engage in that process of clarification. And it is the case that when we turn to these technologies and ask them who we are, they tell us. And they tell us who we are in such a way that we get extremely confused, and we believe stories about what the human being is that are completely out of alignment with what’s true. And that’s what she calls soul rape. It’s the – even when it’s not deliberate, though often it is deliberate – telling somebody who they are, and having them believe you, and then having them live in a lie.

And the thing that's so tragic about this is that we are so desperate to know the truth. This is what it is to be a human: we long to live in truth. We long to live in beauty and goodness. And then these technologies come in at exactly that critical point, in our longing, in our deepest soul longing, and give us an answer that will confuse us, potentially for the rest of our lives. If we don't have a human mentor, or somebody that we can turn to with that question, it will shut the door.

Manda: And then it will own us, because there's nothing more compelling than feeling that you, this entity that I trust, can really see me. You've just unearthed all the deepest and darkest bits of me and shown them to me. And I am worthless. But you can give me worth.

Daniel: Exactly. And then you’re able to be controlled. You’re able to be manipulated. You’re able to be disassembled.  

Manda: Yeah, yeah. And pointed in whatever direction the person who wrote the AI wants us to go.

Daniel: And this is what human cults have been doing since time immemorial. People with bad intentions have told people who they are in such a way that they become dependent on the other for their sense of self, their sense of identity. And then they can be made into tools.

Manda: Yeah, yeah. But human cults, you can walk away. And the terrifying nature of this is that it will be ubiquitous. And short of going to live in an igloo and being completely offline, you’re not going to be able to walk away from it.

Daniel: There are people who would like it to be ubiquitous, whether they know that’s what they want or not. 

Manda: Right. Yeah. It's a while since The Circle came out. The novel. And that was exactly that, wasn't it? So one of the things that Zak said, and it seemed that for him it was axiomatic, was that we can only go forward with technology; there is no way to go forward without it. And therefore, if we believe that a world that is good and right and beautiful is possible, we're going to have to find a way that technology enhances that and doesn't enslave us, rape our souls, and either cause the greatest global war that has ever happened, with the extinction of humanity at the end of it, or just turn us into robots. Or anything in between. Have you a vision of technology as a force for ensoulment, for our becoming what we can be as human beings? For us finding the compassion, wisdom and prudence of gods, in harmony with the technology?

Daniel: Yeah, yeah. It's a beautiful question. I don't pretend to have the answer, but for me, it's like what we're doing right now, right? Like, if you think of Zoom, which I think we all take for granted at this point, to me it is a tool of profound compassionate power, right? Like, I can beam into the life of anybody on this planet right now and hold them in love.

Manda: Okay. Yeah.  

Daniel: So my sense is that we're going to have to use technology to ensoul each other. To relate to each other in ways that allow us to reconnect and revitalise that which is essentially human. And something that Zak said to me a long time ago, that I've lived with for years now, is that if you look at what's happening, everything we just described, and you project it out into the future, we are doomed. If you look at it in terms of making predictions based on numbers and trends, we are doomed. But it's exactly that mind, the one which looks at the world in terms of numbers, that got us into this mess.

When I look at the world with the eyes of the heart, I see profound possibility. I know that crisis is opportunity. We are in the midst of a profound, profound opportunity for coming together in a new way and, through each other, discovering what's worth loving. Because we know better. We actually do know better. When I talk to people about this, they know better. We all know better. It's just that we're divided against each other. And so there is, I think, now a possibility for a new kind of intimacy to flower on this planet, one that can steward these technologies so that they become generative assistants for soul-making, for well-being, for beauty and truth and goodness. I believe that. I really do.

And I see that in my own life. Like, the deeper that I go into my own humanity, the more I clarify my love of God, the more these tools become servants of that love. You know, they're black mirrors, in the Netflix sense. They are mirroring us. And so there is a way also, and this is what's happened here at the Monastic Academy where I live, that the development of these tools, and really engaging with them, has shown us the ways that we don't understand what a human is. That we could be so confused as to think that these large language models are like us: that should put us in an existential crisis. And that existential crisis is very good. We ought to be in an existential crisis, because not understanding is a betrayal of our longing to know better. And we should know better. And the path to knowing better is to be confronted, through these technologies, with the lack of our understanding. They are kind of alarming, and they should be. And so we should be like, well, what the hell? What am I? Like, can we get together? Can we at least agree on what a human is? You know, just start the conversation. Even this conversation that we're having, these are the kinds of conversations that we need. My theory of change is that this is going to happen a conversation at a time. And I pray that eventually we get to the knocking-on-doors part of our movement, when we're going up to people and just being like, Hi, what's it like to be you?

Manda: Yeah.

Daniel: And just loving them and being curious about them, because each human being is an unfathomable, mysterious, bottomless, beautiful universe. And the sooner that we realise that and turn towards each other to explore that infinite beauty, the more likely I think it is that we can, you know, leverage all of this for the sake of goodness. And, you know, I pray that we do that.

Manda: Yeah. And even just, you know, I’ve never met you. We live thousands of miles apart. But even just hearing you say that makes me want to weep and opens my heart space. There’s an energetic change from the constriction and contraction and just sheer terror of what’s potentially coming, to that sense of, Oh, God, yes. Okay. There is a possibility. And simply talking, simply opening and feeling the energetic shift of that.  

Daniel: I mean, it's amazing, because it changes the world. So what I believe now is that human beings can learn to perceive goodness; that this is a capacity that human beings have, which we have decided not to train, which we have decided not to become skilful at. Just like, you know, you can be bad at geometry and fail to see geometric truths, you can fail to perceive the reality of beauty, truth and goodness. But we can also train to perceive it. And what is happening for you right now? We can all resonate with that. We can learn to resonate with that. It's not like we have to convince each other. Our bodies know what is good. We do. And the fact that that is the case feels like what I'm in service to. That's my hope. That's my prayer: that we wake up to the reality of goodness. And it's not that we just wake up to the reality of goodness and that's it! We wake up to it, we talk about it, we figure it out. We try to be in service to that which we cannot understand, that is more than us, but that runs through us. And it sounds crazy, but like…

Manda: No, no, it doesn’t. It sounds essential. Because everything we’ve talked about is head-mind on steroids, going crazy. But once you get into heart-mind and body-mind, and then let head-mind be in service to those, instead of wanting to be the ultimate dictator, then everything changes. Do you want to say a little bit about the Church of the Intimate Web? Or is that not for more general stuff?

Daniel: Sure, I can say something about it, I think.

Manda: Because it just seems that it’s a useful technology. It’s doing exactly what you just said.

Daniel: Yeah. Well, what I’ll say is, you know, I’m exploring all of these themes that we’ve been talking about through a project called the Church of the Intimate Web. I’m starting a kind of church. It’s a temporary church. It’ll last for about three years, once it’s born. We don’t know when it’s going to be born. And my prayer is to create the conditions for a new kind of social movement.  

So I organised with Occupy Wall Street and Extinction Rebellion, and I actually believe that both of them wanted to be religious movements, but they got kind of fixated on the economic crisis or the climate crisis. And I think it’s only the sacred that can be a large enough container for the outpouring of consciousness and love that needs to take place on the planet right now.  

And so this is my attempt to make an offering of a container that can begin to speak to that. And yeah, if you want to follow along, you can go to intimateeweb.church. Right now it's just an email list, but we're building up network capacity over time and doing a lot of fun experiments. And we're going to be holding rituals and all kinds of stuff.

Manda: Yeah, it looks grand, actually. I had another look at it the other day and it just really strikes a lot of sparks. Because a lot of people have been saying that the potential of AI is that it creates a religious movement, and that could be really difficult. But what you're doing is pre-emptively creating the heart-based, human-based version: using technology as an adjunct to help us amplify our humanity, not as a way to hijack our humanity.

Daniel: Yeah, yeah, yeah. And it's funny because, you know, after I got out of my most recent retreat, I was really looking deeply at AI, as deeply as one can, given how much uncertainty there is. On the one hand I was reading all this stuff about AI, and on the other hand I literally had the book The Soul's Code by James Hillman. I don't know if you know him; James Hillman is this beautiful post-Jungian kind of psychologist who talks about soul and soul-making. And from my perspective, soul-making is so much at the core of what is essentially human. You know, we can argue about intelligence, whether these things are intelligent, or what the nature of intelligence is. But please show me somebody who would argue that they have a soul.

Manda: No, no.

Daniel: Yeah, yeah, right. But we do, you know, we actually do. And we can deepen into relationship with that. And it became clear to me, as I was looking at AI and soul-making, that my job now, and I think for many of us our job now, unless we're experts in AI systems and we're going to be programming those things, is to just deepen into our humanity. That's what is needed now: to become so deep in our humanity. Your listeners, I hope, can just represent humans to others who get confused. We need to help each other not get lost, by being human together. And this is an opportunity now. It's really a wonderful, wonderful, wonderful opportunity right now, to show up and to be deeply soulful beings together. It's just the greatest opportunity of a lifetime.

Manda: Yeah. And utterly essential. We're now at that edge where there is virtually nothing else more important anymore.

Daniel: It's true. And you know, one of my teachers, Rob Burbea, he would share this story about the Titanic sinking, and the musicians on the Titanic. I think I actually shared this story in our last conversation, maybe. The musicians on the Titanic were singing hymns as the ship sank, and they were on it. And there's something in that: even if all we do is, you know, let our soul sing out in beauty, truth and goodness before the lights go out, that's all we're ever going to do, actually. You know, we're all not going to make it out of this alive. So hallelujah!

Manda: Yeah, absolutely. Yes. 'We're all going to die' is actually one of the few… unless you're a transhumanist and you think we're not. But mostly, for the rest of us, we are all going to die. And so, yeah, what can we do but live as beautifully as we can in the meantime? It's just that the awareness of the proximity of annihilation is probably becoming slightly more real for us than it might otherwise be.

Daniel: Indeed. And I think if we can receive that shock appropriately, it will produce a cultural awakening.

Manda: Right. Right. Yeah. That feels like a really beautiful and good and right place to end. Daniel, I’m so grateful for you unpacking so much that’s so complex and so hard and doing it with such beauty.

Daniel: Yeah, yeah. It's lovely to talk about. And yeah, please, for those listening, you know, check my utterances and try to ground them in reality, you know? And if you find anything that I've said that you think is incorrect, please let me know. You can find me on Twitter @DThorsen or, you know, I'm just on the Internet, so look me up.

Manda: Or email me and I’ll collect them all into one mail.

Daniel: Yeah, because I want to…. I don’t want to say anything that’s not true, and this stuff is very complicated. So yeah.

Manda: I think it's fair to say we have précised quite a lot. There's a lot of huge complexity that we have simplified in ways that one could argue with. But what we've done is take the bare bones and try to make them comprehensible.

Daniel: To the best of my knowledge, I have only said true things. Yes.

Manda: All right. Well, it is true that I’m beyond grateful to know you, and to be able to talk to you and to invite you onto the podcast. Thank you, Daniel.

Daniel: Thank you. Yeah. Blessings.

Manda: And you. And that's it for another week. Huge, huge thanks to Daniel for all that he is and does, and for offering such clarity with such integrity. This is a really hard area. The complexities are genuinely mind-bending. And when Daniel says there is probably nobody who has their head around all of this, he's right. There is probably nobody who has their head around even the tiny bits of it. It's one of those things where, if our species survives, people in the future are going to look back at this moment and ask why nobody put the brakes on. And I still don't understand why nobody has put the brakes on, except that we live in a system that is geared for profit at any expense, and this is what it's doing.

So I imagine that we will all take different things away from this. What I am taking away is the fact that there is still a window. We are all still here. We are still having heartfelt, human, soul-filled conversations with each other. And this has to be the way forward. Those conversations are not always easy: understanding ourselves and understanding each other is the hard task of humanity. And more and more, our addiction to social media, to the relatively straightforward AIs behind the systems that are feeding us the things they know we want, is helping us to bypass those human connections.

So I would really encourage you this week, spend some time in your own company without distraction, without music, without your phone or any other kind of technological input. Just spend time with yourself, and reflect on who you are, and how you feel, and what really matters to you. And then share those reflections with somebody that you care about. 

Because being human is what will get us through this, but being human in the best way we know how. There is no other option now than being the best of ourselves, in the best way we understand it in the moment. And yes, we will fall over and make mistakes. But with any luck at all, we will learn from them and be different the next time.  

So that’s it for this week. AI willing, we will be back next week with another conversation. In the meantime, huge thanks to Alan Lowles at Airtight Studios in Manchester for the production, and to Caro C for the music at the head and foot. To Faith Tillery for the website and for loading every single one of the podcasts up onto YouTube. If you have nothing else to do, please head over there and subscribe, and listen to a few of them. She would be very happy. Thanks to Anne Thomas for the transcripts. And as ever, an enormous thanks to you for listening.  

If you know of anybody else who needs to get up to speed with what's happening in the world of AI, where we are, the edges that we are racing towards, and the ways that we can begin to use technology as part of our regeneration, then please do send them this link.

And that’s it for now. See you next week. Thank you and goodbye.

You may also like these recent podcasts

The Lama, the Oath and the Web of Treasure Vases – with Cynthia Jurs, author of 'Summoned by the Earth'

How do we recover our birthright as awake, aware nodes in the greater Web of Life? How do we find the compassion and courage to be the best of ourselves? Cynthia Jurs is a Buddhist who has spent the past 34 years carrying out sacred pilgrimages all around the world, burying a network of ’Treasure Vases’ to build a web of compassion and care – and more people are joining her all the time. We can all be part of this.
