    What Problem Does ChatGPT Solve?

    If we use AI to reduce or avoid the effort of exercising our human creativity and intelligence, those gifts will atrophy.

    By Jeffrey Bilbro

    July 7, 2023

    Perhaps formulating queries will be the last task left to the prompt engineers laboring in the decadent dotage of humanity. If this comes to pass, I would mourn the narrowed range of human responsibility, but at least this task of forming questions would preserve some essential work for those attenuated humans. For asking questions is work. As I hope my students come to realize, there is indeed such a thing as a dumb question, and learning to ask the right questions may be the most valuable fruit of a good education. Many essays on ChatGPT pose various questions to the software, but I want to pose one to the engineers developing these Large Language Models (LLMs) and to the enthusiastic adopters of this technology: What is the problem for which LLMs are the solution?

    If it’s that we’re curious to read a Shakespearean sonnet about climate change, then it seems we’re investing a lot of time and energy into a pretty frivolous endeavor – though I’m as interested as the next person in listening in on a conversation between Elon Musk and Wendell Berry. If it’s that we want to save the time we waste writing emails and memos and assessments, then I’m not sure accelerating the production of pointless text solves the problem of its abundance. And if it’s that we are drowning in a sea of too much data and need a computer to sort through it all and distill the particular information we need, then I don’t see how pumping out unreliable – but confidently asserted – answers improves our information ecosystem. My fear is that the problem in search of a solution is the human effort required to learn and communicate truth. Unfortunately, this is not, in fact, a problem.

    When we struggle to put the right words in the right order in an effort to articulate some splinter of truth, we exercise our human intelligence and creativity and become more capable of being responsible and redemptive agents. Such a struggle may not be pleasant. It may often be tedious. But without this struggle we cannot learn and grow, and to the extent that we are invited to outsource the effort of sense-making, our intellectual and creative abilities will atrophy and our relationships will suffer.


    Hannah Höch, Man and Machine, 1921, oil on canvas.

    Machines can “learn” from mistakes, but humans must struggle and suffer in order to grow: weight training breaks down muscle fibers, prompting them to hypertrophy in response; learning causes brain cells to break their DNA, and the repair process creates memory. Not all struggle is created equal, however, and often our struggles seem frustrating; Christians would say this frustration, this sweat by which we must earn our bread, stems from the fall. Despite the pointless tedium that characterizes much striving, we also experience the productive effort that accomplishes good work and, in the process, changes us. So, we might begin discerning which technologies to adopt – which struggles to bypass – by differentiating between productive struggle and frustrating struggle. Technologies that relieve frustrating struggle can make our efforts more productive and can increase our ability to do good work, whereas technologies that eliminate productive struggle diminish our capacities and prevent us from becoming the kind of people able to exercise our freedom responsibly and redemptively.

    The challenge, however, is that there’s not a bright line between these two kinds of struggle. Tedious struggle can feel frustrating yet, in fact, be essential and productive. Even those struggles that seem entirely frustrating can be occasions for growth. I don’t love weeding or washing dishes or picking rocks out of a field, but spending long days doing those tasks has shaped who I am as a person. I’m grateful for good hoes and automatic dishwashers and tractors with rock rakes. Even so, I recognize that using these tools shifts my relationship to the work at hand; there are real trade-offs when we forgo effort. Athletes and musicians in particular might testify to cases in which these trade-offs are clearly a Faustian bargain: shooting free throws is tedious; taking ground balls is tedious; practicing scales and arpeggios is tedious. Yet it is the countless hours of disciplined, effortful struggle that make possible the wondrous freedom, the seemingly effortless performance, of a skilled human body.

    Thinking and writing are among the many realms of human work where the line between frustrating and productive struggle remains blurry. Some features of a writer’s struggle seem mostly to be frustrations, such as losing a draft to a computer malfunction or not being able to find the right source. Cloud backups and electronic databases deserve our gratitude for ameliorating these problems. Other technologies, however, occupy a gray zone: If I access books on my computer, I get them instantly but skip the experience of browsing their neighboring books on a library shelf. If I use Zotero to keep track of my sources and properly cite them, I pay less attention to the context in which they appeared. If I, unlike Wendell Berry, draft essays on a computer, I miss the palpable sense of letters and words as I shape them with my pen. These technologies relieve tedium, but do they also elide forms of struggle that I need to undergo in order to become a better writer?

    If these digital technologies operate in the gray zone between frustrating and productive struggle, LLMs promise to eliminate those aspects of a writer’s struggle where effort is most productive. ChatGPT’s offer to summarize a book tempts me to forgo the slow work of wrestling with a difficult text, work that enables deep understanding rather than superficial familiarity. ChatGPT’s offer to write an email tempts me to skip the effort required to lovingly respond to another person. And ChatGPT’s offer to generate an argument relieves me of the responsibility for learning about, synthesizing, and imagining the truth of a particular issue. The bottom line is that whenever we rely on a technology to ameliorate the frustrations of struggle, we also change what we are capable of producing and performing.

    Hence there are always trade-offs when we offload effort to technologies, and the extent of such trade-offs remains opaque at the moment of decision: few besides the Amish foresaw the ways that adopting automobiles would reshape the fabric of our towns and communities; few recognized the widespread social, emotional, and political effects that would come from conducting social life virtually. It is surely impossible to know all the ways that relying on LLMs and other forms of AI will alter our ways of thinking, relating, and being. Some are worried that continued work on AI will lead to computers taking over the world and eradicating humanity. Even those with less pessimistic predictions warn of the likelihood of unpredictable and dire consequences. I’m somewhat skeptical of the more extreme dystopian predictions: as Jonathan Haidt and Eric Schmidt note in a recent essay, ChatGPT is not an evil genius because it is neither evil nor a genius. But they, as well as people like Geoffrey Hinton, are no Luddites, and they make a compelling case that the increased adoption of LLMs will have negative consequences for our public discourse. So although I don’t know enough about computers to know whether these scenarios are plausible, I know enough about humans, enough about myself, to know that relying on AI will erode the disciplines and virtues by which we unfurl the possibilities inherent in being human.


    Technologies that promise to remove the disciplined struggle by which we develop our freedom, our capacity to achieve the good, strike at the core of our humanity. As Berry writes in “Preserving Wildness,” “Humans differ most from other creatures in the extent to which they must be made what they are – that is, to the extent that they are artifacts of their culture.… And so it is more important than ever that we should have cultures capable of making us into humans – creatures capable of prudence, justice, fortitude, temperance, and the other virtues.” Such a culture cannot be built around technologically enabled avoidance of effortful sense-making. To the extent that we inhabit a culture where thought and writing have become effortless, we will also inhabit a culture incapable of responsible and redemptive relationships.

    To illustrate this claim, I’ll enumerate a few of the ways that relying on such technologies will atrophy and distort – rather than strengthen and improve – our sense-making faculties. The first is a Heideggerian observation that reliance on LLMs leads to a prompt-based enframing of the world. ChatGPT privileges those questions that can be answered by consensus. This is because LLMs are, by their nature, a portal through which we can query accepted opinions and receive straightforward answers. When we relate to the world through an LLM, we treat reality as a standing reserve from which we expect to frictionlessly receive answers to any question that comes to mind. We don’t have to study. We don’t have to endure frustration. We don’t have to weigh opposing perspectives. We don’t have to imagine alternative ways of seeking answers. We don’t even have to “torture” nature in a Baconian sense. We just type in a query and hit enter. Questions that cannot be answered in this fashion become less interesting and imaginable.

    This way of enframing reality forecloses a whole range of possibilities, making us less able to pursue those questions whose terms are fundamentally contested or simply inaccessible to discursive reasoning. In this latter category are those fundamental human questions that cannot be fully answered verbally: What is the good life? What is happiness? How should we respond to suffering? Language can help us live out responses to such questions, but it cannot satisfactorily settle them. Solvitur ambulando, it is solved by walking, is the only fitting response to some queries, yet no LLM can provide answers of this type. Even those realms of human knowledge that we may think straightforward and fact-based, realms we term science-based, turn out to be more contingent and culturally determined than they often appear. Relying on LLMs risks locking us into the conventional ways of thinking about and describing any given subject when what we may need is to learn from different perspectives in dialogue.

    Yet such dialogue becomes impossible when words are detached from any responsible author. Human speech establishes sets of triadic relationships among speaker, hearer, and object, and in truthful and responsible speech, these relationships are defined by fidelity. As the title of one of Berry’s essays has it, we ought to stand by our words. This obligation has long been violated by dishonest speakers and by the abstractions of corporate or bureaucratic pablum, but computer-generated speech erases the very possibility of standing by one’s words. No matter how well it may parody an individual’s idiom, a regional dialect, or a disciplinary argot, ChatGPT produces words from nowhere and no one, and there is therefore no person to stand behind the words that appear in pixels. There is no person with whom to dialogue about, or whom to hold accountable for, what is said. Interacting with LLMs trains us to expect pre-formed answers from the void rather than endure the productive effort of con-versation – literally, turning back and forth with another person in pursuit of truth. Hence LLMs enframe reality in a way that obscures the possibility of faithful and responsible speech, speech that seeks to tend the health of other persons and our common world.

    This brings me to the second way in which a reliance on LLMs will distort our sense-making faculties: not only do they obscure the possibility of responsible meaning-making, but they are also inherently parasitic on the productive effort of human persons to discern and create meaning. As such, they degrade our verbal commons, making it more difficult for all of us to speak meaningfully to one another. LLMs “read” coherent, human-authored text to learn the structures of meaningful syntactic and semantic relationships. Similar processes undergird the development of AI tools that can generate music and visual art. Some visual artists are seeking legal recourse for the unauthorized use of their work to train these AI systems, but regardless of whether their suit receives a favorable ruling, the point stands: LLMs work only by feeding off the effort of human sense-making. Their increased use will degrade our informational ecosystem, threatening human efforts to reliably convey the truth.

    On one hand, our current dominant modes of discourse have already been largely emptied of meaning; there’s not much life left for a parasite to suck out of corporate jargon, ad-speak, and political agitprop. ChatGPT’s success at mimicking human language is an indictment of the ways we already use words. LLMs are the technology that a culture awash in prepared and manipulative content deserves. It’s a small step from employees spinning out endless SEO (search engine optimization) content and social media influencers chasing eyeballs to LLM-written text and deep fakes. An Equity, Diversity, and Inclusion office recently sent out a ChatGPT-generated email after a shooting at another school. Such bureaucratic emails are already formulaic and essentially meaningless, and the fact that an LLM could generate a passable email exposes the pre-existing vapidity of this discourse. One writer I spoke with who works in content generation for a tech company told me her company has developed a workflow in which they insert a competitor’s text into an LLM and tell the system to generate content that will outperform the competitor’s page in a Google search. Corporate communication and online content production can provide real goods, but to the extent that such writing is already devoid of any real effort to serve readers, not much is lost here.

    What happens, however, as this process becomes more widespread and LLMs recursively perform such tasks on LLM-generated text? We’ll be caught in an endless loop of mirrored gibberish. Not only will this further homogenize the verbal ecosystem we inhabit, but it will also undermine confidence in anything we encounter. Trust in what we see and read online is already dangerously low; people already think everything is spin, and now we’ll be aware that any image we see or story we read may be the hallucination of some computer algorithm, no matter how authoritative it may appear. It’s ironic sampling all the way down. This erodes the trust on which a functional public discourse relies. C. S. Lewis warns that this trust is essential to the possibility of friendships, yet it’s eroded by the prevalence of fraudulent speech: “That demand for our confidence which a true friend makes of us is exactly the same that a confidence trickster would make. That refusal to trust, which is sensible in reply to a confidence trickster, is ungenerous and ignoble to a friend, and deeply damaging to our relation with him.” As we come to inhabit a world where we can trust the reality of nothing that we see or read, we will be less able to give our trust to anyone, less able to dialogue constructively and form friendships with other people.


    Part of the way that LLMs erode interpersonal trust is by further abstracting language from its origins in our embodied experience. Meaning comes to us as enfleshed creatures, and our words depend on and recall this embodied sense-making. On one level, this is true in the ways that words make the process of thought palpable to an attentive hearer or reader. Berry writes about this reality by way of the etymology of sentence: “When we reflect that ‘sentence’ means, literally, ‘a way of thinking’ (Latin: sententia) and that it comes from the Latin sentire, to feel, we realize that the concepts of sentence and sentence structure are not merely grammatical or merely academic – not negligible in any sense. A sentence is … a feelable thought, a thought that impresses its sense not just on our understanding, but on our hearing, our sense of rhythm and proportion. It is a pattern of felt sense.” On a deeper level, this embodied sense-making means that our language has been shaped by the sensory capabilities of a human body – our words carry the origins of embodied meaning-making. When we understand something (literally, stand under it), we say that we see its meaning or that we grasp it. Compassion or sympathy refer to a visceral response we have to another person’s suffering. Even a relatively abstract word such as truth comes from the word for tree: we articulate a concept meaning “truth” by analogy to our experience of a deeply rooted tree. Losing the connection between words and their embodied roots renders language meaningless. As the poet Richard Wilbur writes, “Ask us, prophet, how we shall call / Our natures forth when that live tongue is all / Dispelled.” There may not even “be lofty or long standing / When the bronze annals of the oak-tree close.” Language is thus a palimpsest of tacit, embodied sense-making, and computer hardware simply cannot mean in the way a human person can. In articulating the embodied, tacit dimensions of knowledge, Michael Polanyi writes, “we can know more than we can tell.” Precisely the reverse is true of any computer hardware.

    The widespread adoption of LLMs will ultimately erode the possibility of convivial meaning-making. Ivan Illich contrasts convivial technologies and social arrangements, which enlarge “the range of each person’s competence, control, and initiative,” with the industrial technocracy that “leads to a specialization of functions, institutionalization of values and centralization of power and turns people into the accessories of bureaucracies or machines.” And he warns that “society can be destroyed when further growth of mass production renders the milieu hostile, when it extinguishes the free use of the natural abilities of society’s members, when it isolates people from each other and locks them into a man-made shell, when it undermines the texture of community by promoting extreme social polarization and splintering specialization, or when cancerous acceleration enforces social change at a rate that rules out legal, cultural, and political precedents as formal guidelines to present behavior.” While much of the hype around ChatGPT lauds its productivity gains and egalitarian possibilities, the actual results are more likely to exacerbate inequalities. Those who control these tools will profit; those who are on the receiving end of them will be left to make their way in a degraded intellectual ecosystem.

    Take, for instance, the statements of a key leader who made the decision to release ChatGPT publicly last year. OpenAI CEO Sam Altman sounds as if he’s concerned, not merely for corporate profit, but for the common good. As one Wall Street Journal article reports, “he fears what could happen if AI is rolled out into society recklessly.” Yet he overruled the concerns of his own employees that the decision to publicly release ChatGPT would be reckless, and he insists that the way to get AI “right is to have people engage with it, explore these systems, study them, to learn how to make them safe.” In other words, he treats human society as a vast laboratory of caged guinea pigs, with no concern for how they might be harmed by his social experiment. As Alan Jacobs points out, Altman’s attitude isn’t altruistic but sociopathic. It’s not surprising, then, that his technology also displays sociopathic tendencies.

    Several people I’ve shared these concerns with have cited the possibilities for ChatGPT to help individuals for whom English is a second language or people with language disabilities. Or they’ve suggested it might serve as a personal tutor for kids who need extra help and don’t have a teacher who can give them the time they need. In this latter case, however, what is “solved” is not the needs of the student but the effort required to care for a student; if a student is patiently helped by an adult, the student is being told that she matters, that she is worth the time and attention of another person. If this student instead receives answers automatically generated by a computer, she is being told she’s not worth someone’s time. The medium is the message.

    LLMs will indeed benefit some particular individuals. But while some find ways to leverage these tools to maximize their individual productivity, their ways of enframing the world and parasitically feeding on human language will degrade our shared verbal commons: the profits will be privatized and the costs borne by all. LLMs will further pollute an informational ecosystem that already poses unequal temptations to offload thinking: those who have been formed and disciplined to think well may find ways to use these tools effectively, but students and others who haven’t yet experienced the deep rewards of grappling with complex ideas will be the ones most tempted to forgo the work of disciplining thought. And given the intensive computing costs required to train and run LLMs, these tools will centralize power and increase the gap between those who shape language and those who passively consume it. Social media didn’t democratize speech so much as it degraded our informational ecosystem; it democratized the need to be vigilant and discerning as we sift through vast quantities of debris in search of reliable institutions and information.

    In his brief essay “In Defense of Literacy,” Berry argues that verbal competence is a necessity for people awash in a sea of “prepared, public language,” language intended to sell or persuade or enrage. Berry admonishes that in this situation, people who neglect to cultivate verbal acuity “enfranchise their exploiters.” His concluding warning about the consequences of illiteracy – a lack of facility with words and an ignorance of the stories by which our ancestors have handed down wisdom regarding our human condition – rings even more true today: “Longer perspective will show that [literacy] alone can preserve in us the possibility of an accurate judgment of ourselves, and the possibilities of correction and renewal. Without it, we are adrift in the present, in the wreckage of yesterday, in the nightmare of tomorrow.”

    LLMs are a technology suited to a decadent culture, one that chases easy profits rather than tackles the real challenges we face. It’s easier to make money rearranging words according to various probabilities than it is to make a living improving the health of our topsoil, communities, and souls. When we sit in front of a computer typing prompts into ChatGPT and watching it effortlessly spit out sentence after sentence, we may experience the rush of power. This thing can do whatever I ask it to do! What is less apparent is the seductive power such tools exercise over us. Paul Kingsnorth dramatizes this lure in his dystopian novel Alexandria, about a remnant community of humans holding out against a massive AI. One of the agents for this AI entices these remaining persons to upload their consciousness onto the AI and escape the difficulties of existence: “As Alexandria became more accessible, everyone wanted in. If your life on Earth is going to be a hardscrabble in dying soil, or a struggle to survive in a lawless megacity slum, why continue it any longer than necessary?” Tasks such as writing well, thinking well, and living well are hard, particularly for fallen and fallible creatures. But we gain the competence to meet these challenges responsibly only by careful, effortful practice. No technological shortcut, no forbidden fruit, will alter this creatural reality.

    We cannot blithely adopt LLMs without becoming complicit in their Faustian bargain and without, as Berry warns, enfranchising our exploiters. Neither, however, can we pretend that we live in a world where these technologies do not exist. And I have no expectation that the development of these AIs will slow down or be conducted with real care for their externalized cultural costs. There’s simply too much easy money to be made selling shoddy, knock-off performances of “intelligence.” Berry writes that “it is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.” As has been clear for quite some time, even those of us who want to live as creatures will have to figure out how to do so in a world designed and manufactured by those who prefer to live as machines. LLMs create “no absolutely new situation”; they are simply the latest reminder – the most recent apocalypse – of the technopoly we have long been building.

    To live as creatures in a world built for machines, we will need to patiently and creatively make do. I get this phrase “making do” from the French Catholic writer Michel de Certeau, who describes the possibilities that people have in their everyday lives to find ways that creatively resist or subvert inhuman systems. Practicing the productive effort that develops our capacities as free persons, made in the image of God, will not be easy in the world in which we find ourselves. But it remains possible. Let me conclude with an image of such possibility.

    In a classic Wallace and Gromit film, Wallace purchases a pair of automatic trousers as a gift for his dog. He hooks Gromit’s leash to the machine and programs it to take Gromit for a walk. Gromit comes up with the only sensible response to such an absurdity: he ties his end of the leash to a toy dog and, while the mechanical trousers march around the park pulling the toy, Gromit plays on the playground. (The rest of the film explores the vulnerability such technologies have to being commandeered by nefarious actors.) As Alan Jacobs suggests, Gromit embodies the sane reaction to this situation: Let the faculty-bots grade the essays produced by the student-bots. Let the executive-bots read the memos written by the intern-bots. Those of us who still want to live and think as humans can step aside and create subaltern communities that value and reward and inculcate productive effort. We need such communities, functioning as ecological islands or micro-habitats within a degraded intellectual ecosystem, because we cannot foster human disciplines as individuals. Think of the way that good churches promote the spiritual disciplines that cultivate holiness. Or that sporting institutions fence out cheating and celebrate the focused physical effort that leads to athletic excellence. Or that effective schools teach and honor intellectual disciplines that lead to wisdom. Or that families train children to practice honesty and service and compassion so that they might be a blessing to their neighbors. These communities must challenge their members to pursue the answers to fundamental questions: What are our responsibilities as humans? What virtues and disciplines and practices must we cultivate in order to meet these responsibilities? By keeping these questions alive, and by modeling ways of forming humans capable of answering them, such communities can sustain a kind of living memory. They can allow their members to become not mere prompt engineers, but humans who embody answers to our ultimate questions in well-lived lives.

    Contributed By Jeffrey Bilbro

    Jeffrey Bilbro is the editor-in-chief at Front Porch Republic and the author of several books.

    4 Comments

    • Bram W

      As with language, written text, and the internet, I posit that this new tool does nothing more than refine the interaction of the individual with the corpus of all human knowledge.

    • Leshy

      Who needs AI? Why is it being inflicted on us all? The Industrial Revolution, version 5, is not something we want or asked for. The corporates are after it; but people are not.

    • JoeR

      Just because you can do something doesn’t mean you should do it. This feeds directly into sloth as we allow something to do what we are fully capable of doing. It also further alienates us from community. It can and will be used to manipulate people and continue the downward spiral of lower self-esteem through lack of achievement and the attitude of disbelieving all we hear and see.

    • Mike D

      Thought-provoking and timely article! At work, we are always asking ourselves what the root cause of the problem is in order to get to the right solution, so the title of this article drew me in. It also helps that Wendell Berry, one of my favourite authors, is referenced and quoted throughout the article. Thanks for reminding me of the costly trade-offs with adopting AI technology.