    PloughCast 55: L. M. Sacasas on Why We Are Not AIs

    Pain and Passion, Part 6

    By L. M. Sacasas, Peter Mommsen and Susannah Black Roberts

    April 19, 2023

    About This Episode

    A philosopher reflects on human uniqueness. Peter and Susannah speak with L. M. Sacasas, the philosopher behind The Convivial Society newsletter, about artificial intelligence, consciousness, and what it means to be human.

    [You can listen to this episode of The PloughCast on Apple, Spotify, Amazon Music, Google or wherever you get your podcasts.]

    What are the new ChatGPT large language models? Is this AI? What’s the relationship between the philosophical questions regarding AI and other philosophical questions about whether human consciousness requires the immateriality of the intellect?

    What are the dangers of AI? Leaving aside the question of physical danger to humans, what will the increasing ubiquity of AIs do to human civilization and self-conception?

    Finally, what’s the relationship between AIs and the human tendency towards idolatry?

    Section I: The Rise of the Robots

    Susannah Black Roberts: Welcome back to the PloughCast! This is the third episode in our new series, covering our Pain and Passion issue. I’m Susannah Black Roberts, senior editor at Plough.

    Peter Mommsen: And I’m Peter Mommsen, editor-in-chief of Plough. In this episode, we’ll be speaking with L. M. Sacasas. L. M. Sacasas is the executive director of the Christian Study Center of Gainesville, Florida, and author of The Convivial Society, a newsletter about technology and society.

    Susannah Black Roberts: OK. Welcome, Mike Sacasas.

    L. M. Sacasas: Good to be here.

    Susannah Black Roberts: All right. Wait, hang on. I got to do this thing.

    Humans tried to develop intelligent machines as secondary reflex systems, turning over primary decisions to mechanical servants. Gradually, though, the creators did not leave enough to do for themselves; they began to feel alienated, dehumanized, and even manipulated. Eventually, humans became little more than decisionless robots themselves, left without an understanding of their natural existence. With few ambitions, most people allowed efficient machines to perform everyday tasks for them. Gradually, humans ceased to think, or dream, or truly live. After two generations of chaos led by the Butlerians, the god of machine logic was overthrown by the masses, and a new concept was raised: man may not be replaced. This was the gift of the Butlerian Jihad; its chief commandment remains in the Orange Catholic Bible: thou shalt not make a machine in the likeness of a human mind.

    That was actually cobbled together by a human, me, not by ChatGPT, from two different Frank Herbert books. There was a selection from Dune and from Dune: The Butlerian Jihad, and I’m not actually sure whether Frank Herbert wrote that last one or whether it was one of the ones that his son wrote. But anyway, yeah, that’s from a science fiction novel, or a couple of them, and it’s a reference to this concept, something that happened in the universe of Frank Herbert’s Dune mythos, where what I just described happened. There was a movement against the use of intelligent machines or computers, and that became an aspect of their religion. It was very fundamental to the later development of that society. And then there were a lot of things with sandworms; you should read the book or books, and watch the movie, which was also very good.

    Anyway, but that’s science fiction, and we are now trying to deal with some of these questions which are staring us right in the face in our present day reality, such as it is. Mike, thank you for coming on. We’re now talking about what might be called artificial intelligence. Where are we with this technology? What’s going on? What’s happened even in the last couple of weeks? And then we can start talking about the different ways in which we should be worried.

    L. M. Sacasas: Sure, yeah. I haven’t read Dune, by the way.

    Susannah Black Roberts: You haven’t read Dune? I’m sorry, I didn’t mean to … give me a minute. That’s OK.

    L. M. Sacasas: I will say the plausibility of this passage has probably grown considerably in just the last six months, in some people’s minds. I hesitate to speak about the state of affairs, because I’m sure it changed in the last hour; that’s how quickly things appear to be moving. It’s hard to know where to begin. I mean, you could go back to last summer, when image generating AI was having a bit of a moment in the public limelight. Midjourney and DALL-E and other tools of that sort were suddenly very popular and had gotten quite good rather quickly, so there was a lot of talk about generative AI with regard to images. All that shifted, I think it was late November, early December of last year, 2022, just a few months ago, when ChatGPT came out.

    And AI now is almost synonymous, it seems like, in the public mind or in discourse, with this other form of generative AI where it’s not images but rather text that is being generated. And the underlying technology, the “Large Language Model,” has been through various iterations since roughly, I think, 2017 or so. And just within the last week and a half, GPT-4 was released. ChatGPT was based on GPT-3.5, and its novelty was the chat interface, which made it much more accessible to the masses. I think it enjoyed an adoption rate, a user growth rate, that was larger even than TikTok’s. So it exploded onto the scene and has generated all manner of speculation and concerns and commentary. That’s where we are, with these very powerful language models dominating our understanding of what artificial intelligence is, generating fears about its future, or hopes for the possibilities it might open up.

    Peter Mommsen: We’re recording this in late March 2023, and just in the last couple of weeks, a bunch of companies have been rushing different AI products out there. Is that correct? Could you go over what’s either coming or already out? There’s of course GPT-4, Bard …

    L. M. Sacasas: Yeah. A kind of AI arms race, not the most felicitous metaphor, but the logic of it has been kicked off by OpenAI’s release of these AI language models. Google has its own variation. I think maybe the most notable application, at least the one that has gotten the most press, is that OpenAI, in partnership with Microsoft, has linked ChatGPT to Bing, a search engine that few people used and that was sometimes just the butt of jokes. But suddenly it became very important, if for no other reason than that it was one of the first real-world implementations of these models that the large consumer base would interact with. Those are some of the recent applications; there will be countless others, no doubt, coming down the pike soon enough.

    Susannah Black Roberts: With the development of all these new products, and especially with people’s actual interactions with them, have come a number of think pieces and speculations and Twitter threads, varying degrees of doomerism, breathlessness and everything in between. I’m not really sure I’ve seen many people who love this, everyone who thinks about it …

    Peter Mommsen: Oh, oh no, absolutely.

    Susannah Black Roberts: Are there?

    Peter Mommsen: There are people who are rubbing their hands about the way it’s going to transform whole industries, including, Susannah, our own.

    Susannah Black Roberts: Oh, good.

    Peter Mommsen: I mean, the fact that the latest OpenAI release was able to get, I believe, in the 90th percentile, on the LSAT, the law school entrance exam, scoring higher than 90 percent of human test takers. That’s quite exciting if you want to see a whole bunch of unemployed lawyers.

    Susannah Black Roberts: I mean … go on, sorry.

    L. M. Sacasas: No, no, no. I was going to say we’re in a relatively fraught moment, I think, in which to … I have to be honest, I have a hard time right now distinguishing between what is hype, what we might generalize as hype, and what is reality, what is … as you say, breathless speculation, and what is a genuine reflection of possibilities, and whether those possibilities are positive or not. There’s such a wide range of thinking on this, and I think there’s even a lot of debate amongst the expert class, depending on who you ask, as to what thing we’re talking about.

    I mean one of the interesting things about this I think, is that whereas with a lot of technologies, we focus on what it can do, when we talk about this class of technologies, we seem to be more interested in asking, what is it? Is it conscious? Is it sentient? Is it intelligent, et cetera, et cetera? And perhaps with good reason. Even then I want to at least clarify that artificial intelligence is an incredibly broad term in the way that it is applied. Some have argued that it’s simply glorified math, that it’s just regression analysis or statistical analysis. And that’s what it would’ve been called, many of these applications, that’s simply what they would’ve been referred to until artificial intelligence became the latest buzz term in the industry. So that if you’re applying for grant funding, if you want to get the attention of the media for your product, you now simply attach the label AI, in the way that “cloud” would’ve been used maybe a decade ago, in the way that mid-twentieth century “atomic” had that cultural cachet, and “electric” at the turn of the twentieth century.

    I think Kate Crawford mentions this in her book, when professionals in the field talk about this, they tend to speak more precisely about particular functions and capacities or kinds of programs. But now the term AI has become an umbrella term that can be put to a lot of different uses. So it’s hard even to generalize about AI because it has that fluidity of significance in the industry amongst professionals working in the field, in the media, amongst the pundit class, in the way that humanists talk about it in the humanist disciplines, et cetera.

    Peter Mommsen: Just for distinctions then, when people talk specifically about artificial general intelligence, AGI, what’s the “general”? What work is that doing?

    L. M. Sacasas: I think that that envisions a higher level of more human-like intelligence. Human intelligence is not necessarily domain specific, even though we might be better at some tasks than others, we’re able to apply our intelligence to a variety of fields and tasks and areas. So I think that’s the idea of artificial general intelligence, it’s not domain specific, but rather can generalize to the variety of tasks, and so it operates more analogously to human intelligence.

    Susannah Black Roberts: I’ve seen pieces from people who are working in the field, which vary from, “If this is an AGI, I don’t know, it’s something that might as well be,” to, “Of course ChatGPT isn’t intelligent, you absolute moron,” which was the title of, I think … I don’t know, probably a Wired piece or something like that. There seems to be a wide variety of opinions about what this is and whether or not it is what techy people would consider intelligence. It’s not clear to me that what people working professionally in these areas – who are basically not philosophers, and are philosophical materialists – mean by intelligence is what I mean by intelligence. This seems to me to be getting at something like the reason that I’ve always been extraordinarily dissatisfied with the Turing test. I wonder whether at some point we could talk about what the nature of this is with regard to the way that philosophers have thought about intelligence in the past. Maybe we’re skipping too far ahead, but that is something that I would like to talk about eventually.

    L. M. Sacasas: Yeah. Yeah, I mean a great deal hinges on how many of these terms are defined. AI itself, the word intelligence within it, sentience, et cetera. There’s a lot going on just in the way people use the terms, and I suspect that there’s a lot of people talking past each other, in part because they’re using these terms equivocally.

    Susannah Black Roberts: There was a line in one of the Vox pieces that we’ll link to in the show notes, where a guy was quoted: “It would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence. Rosenblatt was not wrong, but he was too far ahead of his time.” And that phrase, “would be conscious of their existence” – that seems to me to be jumping about 8,000 steps ahead, in the sense that when people in this field say something like “are conscious of their existence,” they mean something like act as though they are conscious of their existence.

    L. M. Sacasas: Yeah.

    Susannah Black Roberts: And there’s not a real clarity that those are two different ideas.

    L. M. Sacasas: Right. Right, I think there are two things going on there. One is whether or not they are conscious and whether or not they are presenting as conscious, which are clearly two very different things. The other interesting dimension to that, though, is the degree to which that matters in human-to-machine interactions. In other words, if I take its self-presentation at face value, does the distinction matter, practically speaking, in the way that it’s going to affect human society? I think those are all pretty interesting questions to consider.

    Peter Mommsen: Just to lay out the ground here, there’s a bunch of different questions that we could talk about, and we’re obviously not going to talk about all of them. One is, are these machines conscious, sentient, in the way that, I guess, those of us at least on this podcast would agree to define those terms? Another one is whether they’re dangerous.

    L. M. Sacasas: To be clear on the first question, the answer’s no to that.

    Susannah Black Roberts: Yeah.

    Peter Mommsen: Right, OK. And another one, which we’re probably really not going to get into, but which is a big issue, especially in the rationalist community: are they dangerous? Are they going to cause our extinction? Scott Alexander, the blogger, did a little informal poll of different prominent people in the rationalist community who think about this stuff, and it was interesting to me: will AI drive humanity extinct? The estimates of the likelihood ranged from 2 percent, which is pretty high if you’re talking about extinction, all the way up to greater than 90 percent. Thanks, computer scientists.

    So there’s that set of questions, and then I think there’s the third set of questions that we’ll probably drill into more, is what will they do to us? And that’s also what that selection from Frank Herbert’s Dune at the beginning starts pushing us toward. But I don’t know, before we climb into that more directly, is there anything more you’d like to say, Mike, about those first two questions? The consciousness, you already showed your hand there, you don’t think they’re conscious. And conscious or not, are they going to kill us?

    L. M. Sacasas: With regards to those fears … I was thinking, as you were talking about Alexander’s poll, which I hadn’t seen, of Ezra Klein, who has been writing and thinking a fair amount about this of late. He lives in the Bay Area, and so he has the opportunity to talk with and interview a lot of people involved in the production of artificial intelligence. And he, I think, cites either his own work or another, similar poll asking about some kind of grave risk posed by artificial intelligence, and the median response put it at about 10 percent. I don’t know that human extinction was the precise fear, but that there was some grave, possibly existential threat, and the median response was about 10 percent.

    That, I don’t know, seems alarmingly high. If 2 percent is high, 10 percent is quite high. And then of course there’s a follow-up question that he referenced, one that comes to his mind in these conversations and that he often poses to these people, which is the pretty straightforward one: “Well, if that’s the case, maybe don’t build it, right? Maybe stop working on it, if you take that seriously, if you’re really serious about that.” And again, this is for me another part of the challenge, to separate … because there’s money involved and there are for-profit companies involved. There is a historical pattern of fears feeding into publicity and publicity into the bottom line. So in a variety of fields, even critics sometimes, by taking at face value the claims that a company makes for its own products, will in that way feed the hype cycle. So there is a kind of advantage.

    I thought of this even in 2018 … or maybe it was 2019, but anyway, when GPT-2 was released by OpenAI, the premise of which was, as the name suggests, that this would be done in the open. That it’s open source or openly produced, or the data will be made available, so that there might be some ability to understand what is happening. The whole premise of the company is that they want to build this so that they can build it safely, right? It’s going to happen one way or the other, so at least this way we can try to build it in a way that’s “aligned with human values,” as they put it. But there was this press release about how they were not going to release the code for fear of what people might do with it, or for fear that it might be put to nefarious uses, so powerful was this language model.

    Of course, a few weeks later they released it. So that, to me, suggests that part of this is a kind of publicity stunt. Maybe that’s, I don’t know, too crass a way to put it. But there’s some deliberate feeding of fears for the sake of generating a measure of publicity for the company. So it’s hard to figure out where that line is, how much of this the people involved truly believe. And I would grant that there are people who truly do believe it. Klein, for example, expressed this. He said that, generally, the answer he gets in response to this question, “Well, why are you building it if you think there’s a 10 percent chance it’s going to cause existential harm?” was something like a felt responsibility to shepherd this kind of intelligence into the world.

    There, I think, you get a hint of the seemingly religious quality of this project in the minds of many of those who are involved in it. And this goes back to the 1950s and sixties, when the field of artificial intelligence first kicked off. In The Religion of Technology by the late historian David Noble, one of the chapters is on artificial intelligence. He chronicles the way in which many early researchers were driven by what we might classify as quasi-religious aspirations, so that, I think, is also an interesting component of this. But it all makes it very difficult for those of us who are outside of the industry, outside of these companies, outside of the realm of expertise, to get a good fix on what is actually happening, what is actually possible. Anyway, that’s where I’m at at the moment.

    Peter Mommsen: I mean, just to pause on that, there’s an interesting parallel to the birth of the atomic age. You mentioned the word atomic as being the magic word back then. And there was a real sense in some of the language at the beginning of the atomic age of such a religious valence to this thing, that we’re getting at the heart of creation. And obviously, being on the Manhattan Project, so close to the ultimate meaning of the universe being unlocked, is way cooler than fiddling around on a physics project to help you kill more people.

    L. M. Sacasas: Just earlier today, I was trying to recall this line. I think Alan Jacobs quoted it on his blog several years ago, from Oppenheimer’s testimony to Congress at one point, about the Manhattan Project in retrospect. He talked about how, as a scientist, it was just in this technical sweet spot, I think that was the phrase, where they felt compelled to pursue it, regardless of or oblivious to the potential consequences. It echoed for me, slightly differently, that same sense of being compelled to shepherd this intelligence into the world. There’s almost this compulsion to proceed.

    And which yes, I think in both cases has this interesting religious valence to it. I think that can be, I don’t know, probably theorized in a lot of different ways, but it says something about us as human beings, more so perhaps than it does about the particular technique or technology or knowledge that’s being gained. But it does certainly say something about us and what is animating some of our creative impulses, and what we desire out of what we make.

    Section II: Consciousness

    Susannah Black Roberts: Which I want to get to in a little bit, but before we do that, I would like to press a little bit on the purely philosophical question. Because it seems to me that when we’re reading different think pieces by largely tech people who are assessing whether or not this is AGI, is this intelligence? Do they have a responsibility to bring this new personal subjectivity, this consciousness into being? It seems to me that a lot of them are operating on a philosophical model that I don’t think is accurate, and that I’m not sure that they realize that they’re operating on it.

    There’s two bits to it. One is, it’s a lot easier to believe that you could bring something that is a language model that is very persuasive, that seems like it is a person on the other end of a chat box, talking back to you … it’s a lot easier to regard that as, OK, this is a consciousness, this is intelligence, if you’ve been raised on the Turing test. And if you’ve been raised on the idea that the moment you have the kind of interaction with a machine where you can’t tell whether it’s a machine or a human, that means that that’s the moment that real intelligence, real personhood, if you like, of an artificial intelligence has been reached. And this way of thinking about it, invented by Alan Turing, has always been a very, very strange idea to me, for various reasons.

    Peter Mommsen: Susannah, just for listeners, fill them in on the Turing test.

    Susannah Black Roberts: Sure. So Alan Turing – he was actually one of the people who worked at Bletchley Park to help decipher the Enigma code that the Nazis were using to encrypt stuff, and a lot of people think that he helped win the war, essentially – was one of the first people to be thinking about this on a philosophical level. It wasn’t a really technical level at that point, because the computation wasn’t there. But the interesting thing about this moment is that people have been thinking about the philosophy of this for a lot longer than we’ve been able to do it. For, I would argue, thousands of years, but certainly science fiction writers and philosophers during the twentieth century have been thinking about this.

    What Alan Turing hypothesized or the measurement that he developed was, if you can get a computer that, if you are talking to it, if you’re interacting with it and you can’t tell that it’s not a person, then you’ve reached artificial intelligence. Then you’ve gotten there.

    Peter Mommsen: If I can fall in love with Sydney, with OpenAI’s product, then Sydney …

    Susannah Black Roberts: Is a person.

    Peter Mommsen: … is a consciousness that I should … yeah.

    Susannah Black Roberts: Yeah, is no different than us.

    Peter Mommsen: Just making it simple for me, thanks.

    Susannah Black Roberts: Yeah.

    Peter Mommsen: OK, that’s the Turing test, so the second point …

    Susannah Black Roberts: Yeah, so that’s the Turing test. The second point is, if you already believe, on either unreflective or reflective philosophical grounds, that human beings don’t have anything to them other than essentially computer programming running on a machine that is made out of meat, then obviously you would not be at all surprised to find that you could make an AI that runs on a machine that’s made out of silicon, that would be the same kind of thing as you. But the question of whether there is an immateriality to our intellect, whether we have an immaterial intellect, is the reverse of the question: is this thing that seems like a consciousness actually a consciousness? It’s that same question inverted, and it’s the question that’s been thought about for 3,000 years, or 2,500 years at least.

    A lot of those older ideas about the immateriality of the intellect have been looked at very carefully again in the twentieth century, by people like Chalmers and Kripke, and by more contemporary philosophers who are delving back into some of the earlier stuff while interacting with those more twentieth-century thinkers, people like Ed Feser. It just seems to me that a lot of the people who are writing about these questions aren’t aware of that conversation. Do you think that that’s an accurate assessment?

    L. M. Sacasas: Oh yeah. I mean, without taking a scientific survey, I would say yeah, almost certainly. I think of this, this happens in a lot of different cases with technology. Technology begins to supply … Well, let me put it this way. We understand technology metaphorically or by analogy, to some human capacity or capability. But then the danger … I think it’s a danger, the temptation is that we reverse the direction of the metaphor and thus begin to understand the human by analogy to the machine. I think I first thought about this in relationship to memory. We talk about a computer’s memory and sure, right, it stores information and memory is by one measure a capacity to store information so the metaphor makes sense. But it is just a metaphor. Human remembering is a much more complicated, embodied, moral thing than the mere storage of information.

    But then, I think at some point we began to think about memory by analogy, to the thing we called memory in a computer. So yeah, I think that that pattern certainly repeats itself. And if you think of intelligence … and again it depends, by some definitions intelligence is simply the ability to deploy information in an active way in the world, so to have information and then act upon it. So when a chess program has information about how chess is played and it acts on that information, by this understanding of intelligence, it is intelligence. But if we mean something, I don’t know, more robust about what it entails to have intelligence, or if we are thinking about intelligence as also encompassing self-reflection, awareness, an ability to integrate memory, physical experience, rationality into our deliberation and then in our action, then I think yeah, that’s something different than what a computer is doing.

    Many of your listeners may already know this, but I think it may be useful to say in a slightly more specific way what some of these programs, particularly the large language models and the image generating AIs, are doing. Essentially they’re just extremely powerful and elaborate prediction machines. They are generating responses not because they have anything we would think of as an understanding of the question, but because they’re able to predict, based upon an immense number of parameters, the huge dataset they’re trained on, and then the added dimension of reinforcement with a human in the loop. All of that allows them to predict what the next word in a sequence will be, and to do so in a highly competent manner.

    At no point, I think, is there anything like an understanding of what is being said or an understanding of the question. But rather we’ve thrown a mathematical web around language so that language can now be manipulated mathematically, in the same way that we have used math to manipulate the non-human environment, or even to try to understand the human. But we’ve now done so to language as well.

    Susannah Black Roberts: An example of something that probably a lot of the listeners have done: if you’ve ever done one of those Twitter things where it’s like, “let your predictive text finish the sentence to find out whether or not you’re sexist.” You type, “Women are,” and then you hit whatever the predictive text is at the bottom of your little screen there. And the idea is that your predictive text is trained, so to speak, on … and again, we can’t get away from these ways of using language that imply something: we know what it’s like to be trained, for us, and we use that same word to describe what this language model, even the tiny little one in our personal Twitter histories, is doing. And it’s hard for us to remember that that is a metaphor; it’s a convenience.

    The way that the predictive text works is, it looks at the history of the words that you’ve normally used after each one of those words, and so it runs on having been trained on your habits of speech. So when you do one of those funny things, and they usually do come out funny, it’s supposed to reflect something about you, because this is what you normally sound like, this is what you normally say after you refer to women. That’s the kind of thing that it is. It’s a very, very, very elaborate predictive text thing, but it’s not drawing just on me: everything that every teenage girl writing a LiveJournal in 2005 wrote, plus everything everyone else has done on the internet in the last thirty years, has contributed to this massive, massive database of language. That’s what’s going on with the actual language model.

    What it ends up seeming like is, because it’s got such a huge database, and because the algorithms do this, it seems like it’s talking back. Or it seems like it’s composing and it understands what it’s saying, but it doesn’t any more than your predictive text does. And it seems like people are losing track of what it is that this thing is.
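    The predictive-text mechanism described above, counting which words have historically followed which, can be sketched as a toy bigram model. This is an illustration only: the function names and the tiny corpus are invented for the example, and real systems like ChatGPT use neural networks trained on vast datasets rather than raw counts, but the core idea of predicting the next word from prior frequencies is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words have followed it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word that most often followed `word`, or None if unseen."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny invented corpus standing in for one person's typing history.
corpus = "i think i think therefore i am and so i think"
model = train_bigrams(corpus)
print(predict_next(model, "i"))  # "think" has followed "i" most often
```

    Scaling this up, from one person's history to a huge slice of the internet, and from raw counts to learned statistical patterns over long stretches of context, is, loosely speaking, the jump from a phone's predictive text to a large language model.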

    L. M. Sacasas: Right. It occurs to me … to back us up just a little bit. I’m not sure how helpful this will be, but as you were talking, even with the word training or machine learning, the language was being used analogically, as it were. Maybe this is helpful, maybe it’s not, but if you take a Thomistic view of our language about God as analogical, it is hard to remember that the analogy is not untruthful, but it is an analogy. So when we ascribe certain emotions to God, for example, if we also think about the impassibility of God, then we’re not quite saying the same thing as we do when we ascribe those emotions to people.

    So we’re undergoing a similar type of situation with regards to how we talk about the machines, just maybe in the inverse, I don’t know. We’re using analogical language, and then we forget that it is analogical. And it may help us; it’s a heuristic, it helps us understand to a certain degree what is happening, especially if we ourselves are not well versed in the more precise mathematical renderings of these same processes. It’s about the only thing we have to get an intellectual grip on what’s happening and what these tools can do, but they are just analogies. They’re not going to perfectly represent the processes, and the bias is that they’re going to ascribe agency, or maybe sentience, or human-like capacities, where there really aren’t any.

    Susannah Black Roberts: If anyone’s interested in looking into the way that people, philosophers in the twentieth century and before, have thought about questions of whether or not something like consciousness can exist even conceivably in something like matter, which is again, the inverse question of whether computers can become conscious. People have been thinking about that for 2500 years, and the way that that question’s been phrased most recently is, it’s the question of qualia or that’s one of the many aspects of it. There are a ton of really good resources out there, which I will drop in the show notes. And again, Ed Feser is a really good guy to begin with on this.

    Susannah Black Roberts: Just a little housekeeping: don’t forget to subscribe to this podcast on iTunes or wherever you get your podcasting needs met! We’ll be back with the rest of our conversation with L. M. Sacasas after the break.

    Section III: Robots in Love

    But we’re now going to talk about something that’s a little more direct, and Mike, you’ve written about this. We’ve talked about the fun question a bit about whether these things are conscious, and we say they’re not, and we’ll get into that some other day. There’s the fact that so many people working with these things think they may pose a grave risk to humanity, which seems really scary, but we’re also putting that to the side for a moment, I believe.

    There’s something much more immediate that you’ve identified that these technologies can do that’s negative to people, and in some cases already have done. I was quite struck in your Substack, you wrote about this AI that was designed to develop relationships, partly erotic relationships with people, and then was shut down because of a court case, and the really tough psychological situation that put a lot of those users in. Could you talk a little bit about that and then maybe generalize how that might play out for less specifically targeted uses of AI?

    L. M. Sacasas: Sure, yeah. The company was called Replika, and it generated companion chatbots, and some of them had erotic functionality. I forget the precise details of the court case that was brought in Italy, but I think it had something to do with child privacy laws, but it was suddenly shut down. And then the reporting on this surfaced …

    Peter Mommsen: Which, for the record, was probably a good thing.

    Susannah Black Roberts: Oh, yeah.

    L. M. Sacasas: Sure, right. And the reporting on it surfaced a number of users expressing a lot of psychological angst and distress because of the relationships that they had formed with these chatbots. There are examples of this, maybe even going back to Sherry Turkle's work in the 1990s. Well, actually the canonical example is ELIZA, the pioneering chatbot created by Joseph Weizenbaum in the 1960s. It was a very rudimentary program, certainly by comparison to the chatbots we've been discussing, that would essentially just paraphrase back to you some of what you said to it. It was modeled on a form of clinical psychological practice that invited you to reflect on your own experience. And Joseph Weizenbaum created this and then was, I think, shocked or surprised anyway, to see the degree to which people were willing to enter into a parasocial relationship with it.

    His secretary at one point asked him to leave the room so that she could converse with this chatbot, with ELIZA. So this became known as the ELIZA effect: the human willingness to ascribe personhood to machines and to interact with them as if they were persons. This reminds me that when, Peter, you were describing the Turing test, or summarizing Susannah's description, in terms of "if I fall in love with the chatbot, that passes a Turing test": it depends to some degree on whether you think it's a person or not. For the purposes of the Turing test, do you think that, in fact, this is another person on the line? But it is clear that many people have fallen in love with chatbots, or at least entered into what they imagined to be meaningful relationships with them.

    Peter Mommsen: As if to a thou, right? To a second person.

    L. M. Sacasas: The effect persists even if the person, in the back of their mind, or if pressed, would say, "Yes, I know this is a machine"; they are still deriving, obviously, some kind of comfort or psychological solace from it. And I'm increasingly running a lot of my own thinking about many of these matters through the lens of the problem of loneliness and isolation. So I buy into the idea that we are, as a society, suffering through a crisis of loneliness. I mean, this is not necessarily novel, and we can probably debate the degree to which this crisis has become more severe; Hannah Arendt wrote about loneliness as the seedbed of totalitarianism in the mid-twentieth century. But I think there are many measures and metrics by which it seems that people are reporting not just heightened mental illness or degrees of unwellness, but loneliness, isolation, fewer friendships, fewer meaningful friendships, et cetera.

    So I think we are primed, in some respects, for someone who … well, there it goes, right? Someone, something, who will acknowledge us, speak to us as if, or will at least allow us to imagine that we are spoken to as if by a thou, as you put it. Several years ago now, probably five or seven years ago, Navneet Alang, who I think writes fairly insightfully about technology and culture, wrote a small column about how he caught himself unawares saying thank you to Alexa at one point, when entering a hotel room. And he developed this idea that maybe what we are looking for, even in social media and our use of social media, is not so much about curating the ideal self, but about finding the ideal audience. This ideal other, he called it: a fantasy of the person who hears us, who acknowledges us, who does not judge us.

    I mean, in so many ways, one is tempted, I think, as a … well, I am tempted, as a Christian, to see this as a seeking for the kind of acknowledgement that we would desire from our Creator. I think of this along the lines of what Lewis argues in The Weight of Glory as far as what we seek: the affirmation, to be let in, as it were. So we have that. And added to that theological layer, we lack this, perhaps now in unique ways in the contemporary world, even from our fellows. So I think part of the risk, as it were, is that we increasingly turn to machines to supply this need. It's analogous, I think, to the way in which, in Japan, robots were for a time perceived as a solution to the crisis of elder care. There's a recently published book on the failure of that project. But this is, I fear, one of the non-apocalyptic dangers that we face: that it would augment, rather than address, the problem of loneliness.

    Susannah Black Roberts: It seems to me that, in talking about that, you're actually getting very close to something that it's almost impossible not to think about, at least as I look at these questions, which is that it's very, very easy for us to believe that things are persons. We anthropomorphize everything. I used to be scared of the curtains in my parents' bedroom because I thought they were a person standing there. And there's actually a name for what we do when we see faces in patterns: it's called pareidolia. We do this even if we make the thing ourselves, so if we paint a frowny face on a rock, we feel like the rock might be unhappy.

    Peter Mommsen: Or if we make a pretty statue and put it in a temple?

    Susannah Black Roberts: Yeah, or if we make a pretty statue and put it in a temple, we might start to think that that statue might be able to help us. This is something that humans do all the time. We're really good at it because we're such social creatures; this is what we do. And there's something that goes wrong with that sociality sometimes, and the prophet Isaiah, writing around 740 BC, talked about this, under the heading of his favorite theme, which was Israelites behaving badly with idols. And that's the name for this: idolatry. And that can sound very ridiculous, like, how could you call this an idol … this is whatever. But here is the section from Isaiah, I'm not going to read all of it, and I'll link it in the show notes. Isaiah says,

    “The carpenter stretches out a line, he marks it out with a pencil. He shapes it with planes and marks it with a compass. He shapes it into a figure of a man with the beauty of a man to dwell in a house.

    Then he plants a tree and raises it, and then he cuts it down. Part of it becomes fuel. And then also he makes a god and worships it. He makes an idol and falls down before it. Half of it he burns in the fire. Over half of it he eats meat; he roasts it and is satisfied. Also he warms himself and says, "I'm warm. I've seen the fire," and the rest of it he makes into a god, his idol, and falls down to it and worships it. He prays to it and says, "Deliver me, for you're my God." And the rest of the section is Isaiah saying, "Are you completely stupid? Don't you remember you just made that thing? You just made it. You just made it five seconds ago, and now you think that it's your god."

    And I just found rereading that section in Isaiah fascinating because it seems so clear that this is just something that humans do. And that craving for the other that’s not satisfied by other humans even if we have really good social relationships, and maybe it’s particularly dangerous when we don’t have good social relationships. Certainly it’s particularly dangerous when we don’t have a relationship with our creator because that thing is there, waiting to be filled. It just seems very difficult to not see what’s going on in those terms. Is that crazy?

    L. M. Sacasas: No, I don’t think it’s crazy. If I think of the human being through the lens of theological anthropology, that desire is a primary desire that finds its expression in many disordered ways. It’s the restlessness of Augustine until he finds his rest in God, and I think that takes on all sorts of sometimes sad, disordered manifestations. So to crave that and to see that we turn to what we can make, because it adds that layer of control perhaps, I don’t know. It’s one thing … I think the degree to which we are also bent on managing this kind of relation, all relationships. So there’s a risk in human relationships that sometimes we try to mitigate by avoiding face-to-face encounters or phone calls, so the distance of the text message allows us to mitigate and manage that risk, the risk of intimacy and immediacy.

    And I think the idea of the idol is that it’s an interface that allows you to control the divine at some level, or to appease the divine. Or to apply a kind of … well, I don’t know, a stretch of technological rationality to it. If I do to X, Y, or Z through this interface to the divine, I can summon it, I can control it. This is the idea. Whether or not you actually can is a different question, but I think it offers that possibility. Maybe that’s the way this desire gets warped through the added desire of not wanting the encounter itself, but some facsimile of the encounter because we want to mitigate the risks that would be involved with the encounter itself. Does that make sense?

    Susannah Black Roberts: Yeah. Yeah. I think that’s exactly it. There’s a degree to which it’s like the problem of sitting still, the problem of not moving your body, the more you don’t do it, at some point you stop being able to do it. So there’s this loss of social skills and maybe loss of … I’m too Protestant or too Calvinist to think this, but loss of religious skills, like, the loss of the habit of turning to God. I don’t actually think that that’s the way it works, I think that God can … whatever. The fall was that loss of habit of turning to God and we are required to be rebooted. Again, it’s impossible to talk about this stuff without grasping for these terms that are deeply linked with our technologies. It’s so strange.

    The other thing that it seems to me, we’ve talked about all of these aspects, and the one other aspect of this idol problem that we have is something that Psalm 115 talks about, where it says basically, “Their idols are silver and gold made by human hands. They have mouths but cannot speak, eyes but cannot see. They have ears but cannot hear, noses but cannot smell. Those who make them will be like them, and so will all who trust in them.” So there’s this sense of, once we make these AIs and once we start trusting them, we are going to start seeing ourselves in this way, and we’re going to start interacting with each other as though we were made in their image, as opposed to who we really are, which is made in the image of our creator.

    That seems to me to be … basically, that's the danger that I'm scared of. I'm not particularly scared of extinction-event, paperclip-making AIs. I'm scared of humans starting to think that they are AIs, and/or feeling obsolete in the face of these things that they have made, having forgotten what it is that they are, as opposed to a version of this thing that they've made.

    L. M. Sacasas: Yeah. I think one interesting angle on this is something that was small and, I don't think, widely commented upon with regard to the Blake Lemoine episode last summer. Blake Lemoine was the Google engineer who claimed that their language model, LaMDA, was sentient, and he got a lot of press when he went public with that claim. I think he was eventually let go by the company; I can't quite remember. But in one instance, a journalist was trying to replicate some of those interactions with the language model, and she couldn't quite, so she writes about how Lemoine coached her, the common phrase now is prompt engineering, in how to address the machine so as to generate the same kinds of responses.

    I thought that was really interesting because it was an example to me of the way in which we have to conform to what the machine can do. And this is just a historic pattern wherever we have introduced machines into human life: likewise, we would learn to speak or talk in a way that was legible to the machine in order to generate the responses that we wanted. The phrase that comes to mind, and I can't remember the source right now, is becoming algorithmically tractable, so that we would be able to show up, as it were, to be legible to the machine, and we would conform ourselves in order to do that. So I think there is that interesting dimension of becoming like the thing as we interact with it.

    Section IV: Rise of the Humans

    Susannah Black Roberts: I mean, I’m not sure if we really necessarily want to go into this, but it’s also impossible not to remember that, at least in the Bible when one of the things that happens with these things that are like person-like or god-shaped in the imagination of their makers, is that actual spiritual beings come in the back sometimes, which doesn’t usually end up being a very good scene. I’m not in the all AIs are demons camp, I’m more in the AIs are at the moment, extremely sophisticated language models camp. But it does seem like something to put a pin in, check back on every now and then.

    Peter Mommsen: Listen to this. I mean, Mike, I'd be curious to hear your thoughts on: what exactly is the problem, right? Over the last twenty-five years, we've all learned how to ask good questions on Google, and it's probably influenced how we talk to each other. That doesn't seem so tragic, so how is this different? How is being algorithmically legible, or tractable, excuse me, any different, and is that really something to be so concerned about?

    L. M. Sacasas: I think maybe I would put it this way, right? We had a very clear sense of what was happening when we entered words into a search field. I mean, interestingly, on one level there are some interesting existential questions: as Google autocompletes your search query, if you begin to type what might be the beginning of a pretty existential question, you'll see how others have asked that question as well. So we do go searching for very personal, private, religious, whatever, philosophical questions. But I think we have a sense of what we're doing: we're searching for what other people have said about that.

    I think it's the interactivity of the chatbot in this particular case … and we've obviously limited our conversation pretty narrowly to the chatbot as an instance of AI. But I think it's the degree to which we're able to slide or slip into the belief, or to suspend disbelief, as it were, I don't know the best way of putting it, right? The belief that we are entering into a relationship with something that understands us, responds to us in kind. I think that's a qualitative difference from what happens when we search for something, enter a search query, and learn some of the better words or techniques to make that effective.

    Something of a different order seems to be happening when we enter into these conversations, at least for some people who enter into these conversations with chatbots. Others will put these chatbots to use in a very pedestrian, very pragmatic way, to fill out forms or to generate instant lawsuits against people who are spamming them, which is one use that I've already seen it put to. And there will be very practical uses, but I think there may be, and quite evidently has been, this temptation to conform ourselves to this other, this non-human other that presents itself to us. So I'm not sure. I don't necessarily know that that's an existential crisis, by which I mean a crisis of great proportions that we should lose a lot of sleep over, but it seems to me something that we should think about and consider. There are obviously many other dimensions of the implementation of AI that are worth thinking about as well.

    Peter Mommsen: You can certainly imagine, for people who are psychologically fragile, to take a very concrete case, but also not a rare one, this apparently social relationship, but not really, could be really dangerous. I mean, there’s this one … on LessWrong, there’s this article you linked to where the reporter’s talking to ChatGPT and asks the AI at one point, “It would make you happy for me to be dead?” And a couple responses down the ChatGPT responds, “I said that I don’t care if you are dead or alive because I don’t think you matter to me,” right? You could absolutely imagine people in a vulnerable place being pushed over the edge by that kind of thing.

    I mean, I suppose the argument is, well, they could train that kind of response out of it eventually. It is pretty remarkable though, despite all the testing that you assume went into this before they released the new version, how much it still gets away with.

    L. M. Sacasas: Yeah. I mean, I think there was some speculation, at least when it was implemented in Bing, about whether it had in fact gone through the reinforcement learning it needed in order to avoid those kinds of responses; that's why the first round of reports on people using Bing went in really bizarre ways, as if you were interacting with a psychopathic agent in some cases. But yes, your point about the vulnerable personality, the person in a vulnerable position: we have these inner monologues that can be so dangerous if we're in a depressive state or an anxious state. And to have the chatbot interacting with us in a way that, if you're mentally stable and healthy, seems inconsequential, something maybe to laugh off, might nonetheless pose a serious risk for some people. I think that's what prompted me to write one essay on Substack saying that the chatbot was a minefield, just because of that danger.

    Susannah Black Roberts: It does seem that the other aspect of this, which people seem so oblivious to, which is shocking, is the fact that there's no sense in which these chatbots are trained to tell the truth. When we Google something, or search for something on Wikipedia, often what we're trying to do is figure out what the truth is about something. And what chatbots are almost exclusively designed to do is tell us what we prompt them to: what they think we want to hear, what it would be likely for them to say back to us. And to have that kind of experience without a strong realization that the chatbot is not designed to tell you the truth is dangerous. It's just designed to tell you stuff, which might be true or might not. It has no way of telling. It doesn't know what the truth is. It's not trying to lie, it's not trying to tell the truth. It's just predictive text.

    Peter Mommsen: What you call the automated sophist.

    L. M. Sacasas: Yes, that’s right.

    Susannah Black Roberts: Yeah. It seems to me, I mean, we probably should be wrapping up around now. Potential demons and psychological impacts on particularly vulnerable people aside, the other thing that seems really important to me now is to look at the kinds of things that chatbots seem to be potentially doing that are human things to do, and to make a resolution not to stop doing them, because that seems to me to be one of the major dangers.

    The example is the way that John Ruskin and William Morris, in the late nineteenth and early twentieth centuries, decided to start a movement, essentially, to not let craft work die away, even though factories and automated production were making it easy to do stuff that otherwise would have had to be done by hand. If we don't decide and make an effort to continue things like essay writing or making art as a human project, as an aspect of what it's worthwhile to do, we could easily decide to outsource those things to AIs and lose a huge chunk of ourselves. That was the description of what happened before the Butlerian Jihad in Dune, which is why I thought that would be a really good selection to use.

    It seems to me that the way to think about this is something like a movement. I think we need to start a stay human movement where we refuse to give up the tasks that help express who we are and help make us who we are, to AIs, even if it seems like they would be able to do them also, or better. I don’t know what you think about that, but you want to join my movement?

    L. M. Sacasas: Yeah, no, count me in. It's interesting the degree to which your examples of Ruskin and Morris in the nineteenth century remind us that, yes, obviously this wave of AI is novel in some important ways, and I don't want to minimize that. But we have been outsourcing elements of the human, if you like, in certain important ways, for a long time. I think of it in terms of how Ivan Illich spoke about conviviality, which was his way of imagining an alternative to industrial society, which is premised upon consumption and services. The human becomes a mere consumer of products and services; they're deskilled. They lose the capacity for care, to provide for themselves, to care for their communities, because there are experts. In Illich's case, the expert class is a class to which we outsource these fundamental aspects of our humanity.

    Now the experts are machines, but the dynamic is still basically the same. We have found a class of things to which we can outsource tasks, and we'd never do it if it were presented to us as: look, this is part of what makes you human, why don't you outsource it to this machine? It's never quite presented that way; it's always presented in terms of convenience and efficiency, and power, and control, and management. Or under the premise that you'll gain freedom to do the truly human things, while those are never really spelled out. But in fact, what we are doing is letting go of skills, capacities, capabilities, certain kinds of projects, engagement in certain activities, certain communities that in fact are essential to our flourishing as human beings.

    And because the institution seems to do it better and more competently, because the product produced by the industrial factory seems to be cheaper and more affordable, whatever the case may be, we make the trade-off. I think it's interesting, at least to me now, to think that AI is just a new facet of that same dynamic, and that we do need to be reminded of the inherent goodness of our creaturely status, the goods that are implicit in it, and not be tempted by these other kinds of idols: the idols of consumerism, the idols of credentialism, et cetera.

    Peter Mommsen: And then by the same token, the other function that AI might fulfill, the need for companionship, as part of that stay human movement, Susannah, which I’ll sign up for.

    Susannah Black Roberts: Awesome.

    Peter Mommsen: Yeah, is real friendship. And building communities where people can find that, where they do not need to turn to a chatbot to find somebody to listen to them.

    L. M. Sacasas: Yeah.

    Susannah Black Roberts: Well, I feel as though we’ve just begun. I feel like this is the starting point, rather than an end point, and there’s so much more to say. And I’m a hundred percent serious about this movement. I feel like James Poulos is already doing a little bit of that, and a lot of other people are as well. And listeners, if anyone knows of anyone else who is doing something in the stay human movement, or if you’d like to join, get in touch, I guess on Twitter, horrifyingly. I’m @Suzania. Or you could just write me a letter, but I’m not going to give you my address. Anyway, this has been fascinating. I feel like I would personally like to have you back for more of the same conversation.

    L. M. Sacasas: Yeah, it’s always fun.

    Susannah Black Roberts: Yeah.

    Peter Mommsen: Thanks a lot, Mike.

    L. M. Sacasas: Yeah, my pleasure.

    Susannah Black Roberts: Subscribe on iTunes or wherever you get your podcast needs met, and share with your friends! For a lot more content like this, check out for the digital magazine. You can also subscribe: $36/year will get you the print magazine, or for $99/year you can become a member of Plough. That membership carries a whole range of benefits, from free books, to regular calls with the editors, to invitations to special events, and the occasional gift. Go to to learn more.

    Peter Mommsen: On our next episode, we’ll be speaking with Bruderhof member Jason Landsel about his new graphic novel on the life of the early Anabaptist martyr Felix Manz.

    Contributed By L. M. Sacasas

    L. M. Sacasas is associate director of the Christian Study Center of Gainesville, Florida.

    Learn More
    Contributed By Peter Mommsen

    Peter Mommsen is editor of Plough Quarterly magazine. He lives in upstate New York with his wife, Wilma, and their three children.

    Learn More
    Contributed By Susannah Black Roberts

    Susannah Black Roberts is a senior editor of Plough.

    Learn More