Christians, by and large, don’t like tech culture. To many, Silicon Valley seems irredeemably hostile to New Testament values. The young people working there, in this view, are hedonistic digital yuppies riding a fiscally conservative, socially liberal ideology to its dystopian endpoint: atomized individuals with no ties of loyalty except to over-powerful and under-regulated tech firms.

There are good reasons for this skepticism, but it’s missing something. Strange and surprising subcultures are thriving in the hothouse of tech country. Among them is a loose-knit community calling themselves “rationalists.” Based in California, this group is united by a heady mix of futuristic idealism and communitarianism.

As an outsider, I’m by turns fascinated and frightened by the rationalists and their worldview. They can be articulate critics of the tech world’s dominant ideology. They’re also social experimenters who serve as their own guinea pigs.

I first came across the rationalist community by clicking links on the blog of the woman I would one day marry, Leah Libresco. My wife thinks her conversion to Catholicism is partly attributable to discussions she was having on the rationalist internet. Leah loved the rationalists’ focus on seeking truth above all else and investigating ways to set aside bias in order to do that. Leah says, “I was particularly taken with an essay by the Bay Area thinker Eliezer Yudkowsky on the virtues of rationality, especially that of lightness – holding oneself so as to be ready to be moved by evidence. It helped me, as Christianity became more and more plausible, to be ready to give in – to let myself be moved by truth, and ultimately, by God.”


While I don’t know of many other rationalists who’ve made a similar journey, Leah’s experience has shown me that Christians need to take the rationalists’ ideals seriously. We can affirm their commitment to learning and living the truth; we can applaud their attempts to create community; and we can take a cue from their desire to do good on a global scale. We can also learn from their hunger for the sacred, a desire for meaning that drives them to engineer new secular sacraments and that offers a vivid glimpse into a different kind of religious seeking.

Origins

The rationalist community formed on the internet, first on the economics blog Overcoming Bias and then, especially, on its spinoff website, LessWrong.com. This site was cofounded by Eliezer Yudkowsky, a self-taught artificial intelligence (AI) researcher frightened by the possibility that these programs might one day become unfriendly to humankind, and frustrated that people in his field did not prioritize investigating safety measures to protect humanity from them. What if we created something much smarter and therefore more powerful than us and it didn’t share our values but instead sought an alien goal of its own – like repurposing all the matter in the universe into paper clips?

Yudkowsky felt he could best encourage AI safety research by creating a culture of rational thinking. LessWrong is an attempt at creating that culture, and it has attracted a crowd interested in AI safety. The community has also come to discuss and sometimes emulate Yudkowsky’s other interests: the Bayesian theory of statistics, polyamory, and a sci-fi future of cryonic freezing and mind-uploading immortality.

Rationalists take a somewhat paradoxical approach to keeping hold of their humanity in the face of technology. Apprehensive of the potential anti-human power of artificial intelligence, some in the community ponder the possibility that humanity might have to be changed radically to defeat death, perhaps shedding this mortal coil to become digital beings. By undergoing this change, we would preserve what they see as crucial to our humanity, such as our boundless curiosity and creativity. (Our physical selves are, for the vast majority in the community, not intrinsic to who we are.) One rationalist, Alex Altair, says he believes that not only could we someday live indefinitely, but we could alter our minds so as to always be learning new things and “trying on different personalities.”

Altair, a software developer, was first drawn into the community by Yudkowsky’s fan-fiction epic Harry Potter and the Methods of Rationality. He loves the rationalists’ “world-scale ambition” and lack of bias toward localness, either in time or space. “Why care only about Earth when there are so many planets out there?” He sees no reason not to take the whole world – indeed, the whole of time – as a playing field. After all, if humanity survives to colonize the universe, he posits, it is likely that the vast majority of people who will ever exist are in our future. Averting existential risk and helping those future people is a major goal, a value that he holds because, he says, “All humans are pretty similar, compared to other objects in the universe.”

Altair thinks that anyone who reasons with enough “hardiness of mind” will come to see AI as one of the most important risks we face as a species. Though he admits his apocalyptic worries can sound absurd, Altair thinks they are gaining steam among people who think seriously about these things, starting with Nick Bostrom (author of Superintelligence: Paths, Dangers, Strategies) and spreading to Bill Gates and Elon Musk.

Bay Area rationalists at a Secular Solstice celebration (image from secularsolstice.com/blog)

Rationalist Rituals

Despite their commitment to the idea that genuine community can exist on the internet, many rationalists have affirmed the importance of community in the flesh as well. Bay Area rationalists have even congregated in group houses. Another physical gathering place is the Center for Applied Rationality (CFAR), which advances the cause of combating existential AI risk by running intensive four-day workshops on rationality. The 2016 annual LessWrong survey of rationalists had about three thousand respondents, perhaps a ballpark number for the community’s size.

In building a shared culture, some rationalists draw on the example of religious communities. In a recent post on his blog Compass Rose, Ben Hoffman weighs the pros and cons of becoming a Quaker, given their track record of coming to what he considers correct positions (like abolitionism and pacifism) before the rest of society. In another post, he advocates the wisdom of Jewish Sabbath restrictions as a sanity check on the modern world: “If something like the Orthodox Sabbath seems impossibly hard, or if you try to keep it but end up breaking it every week – as my Reform Jewish family did – then you should consider that perhaps, despite the propaganda of the palliatives [e.g., fast food, Facebook], you are in a permanent state of emergency.” Contemporary society drowns us in noisy demands, he argues, producing spread-too-thin communities and lonely individuals.


Altair has attended and helped organize iterations of the rationalists’ Secular Solstice celebration. The event had its genesis in rationalists’ observations of other cultures’ various rituals and traditions. “It may be weird to engineer that, but we’re all engineers,” says Altair. The first Secular Solstice event was spearheaded by musician Raymond Arnold in 2013. Arnold wrote in an introduction to a book of Solstice songs: “I have some oddly specific, nuanced, and weird beliefs. And I had the hubris to arrange a night of carefully designed ritual for myself and my friends to celebrate them.” The Solstice’s central structure is based on light and darkness. Attendees bring and light “oil lamps, LEDs, plasma balls and imitation lightsabers” that are eventually extinguished so that participants can experience darkness together. Finally, everything is lit again, a symbol of a brighter future. It’s a powerful visual, one that feels like a processed and repurposed Easter vigil, though it was apparently conceived as a “reverse Hanukkah.”

Music is a central component of the event. One of Altair’s favorites is the curiously titled “The Word of God,” a deist encomium to scientific investigation with a refrain of variations on “Humans wrote the Bible, God wrote the rocks.” After a few Solstices, Arnold amped up the song’s secularism by altering the lyrics to “Humans wrote the book of earth, time wrote the rocks” and variations thereon – God and the Bible have disappeared even as objects of deist critique.

Apart from songs, the event features secular sermons and mindfulness exercises. Many of these focus on hopes and fears for the far-flung future, asking the attendees, “Are we being good ancestors?” It struck me that although Christians’ vision of the future is different, this would be a useful question to ask ourselves as well.

The Secular Solstice event spread to seven cities by 2016, and attendance has steadily increased, with people coming for the music, speeches, and communal solidarity on offer. Altair sees the celebration as hugely successful at capturing the trust-building and bonding functions of religious rituals.


Rationalist Religion?

There’s a parallel between the rationalists’ efforts to engineer ritual profundity and their way of describing human beings’ interior life. Rob Bensinger, research communications manager at Machine Intelligence Research Institute (MIRI), told me, “We can think of human minds as a kind of computer, or (if we reserve the word ‘computer’ for engineered artifacts) as a naturally occurring system that a computer could emulate.” Altair was confident that human minds could eventually be simulated perfectly: “The physical process that makes up the mind is all there is to the mind. If you simulate that in some other medium, you get the same thing.” David Souther, a CFAR alumnus and software engineer who considers himself on the periphery of the rationalist community, made the comparison more bluntly: “What makes us different from computers? Today, efficiency of processing.” He assured me that we’ll see computers as efficient as human minds in, at most, a few decades, and machine consciousness will probably follow that.

As with minds, so with holidays. If no phenomenon, not even human consciousness, is more or other than its underlying physical process, then simulating the relevant parts of the process in a new medium will also produce that phenomenon. Take candlelight, closeness, and song; run the simulation with rationalists; and – ta-da! – the output is culture. Altair was intrigued by the analogy that the rationalist community was a sort of simulation of a religious community. “Reductionism is one of our primary tools that we tend to reach for first,” he said.

This suggestive parallel, however, doesn’t mean rationalists are religious believers. To the contrary: LessWrong’s 2016 survey shows that believers in God were a tiny minority, with only 3.7 percent of respondents identifying as “committed theists” – a distant fourth after “atheist and not spiritual” (the winner by a mile at 48 percent), “agnostic” (8 percent), and “atheist but spiritual” (5.5 percent). Despite this, the problems the rationalists are trying to solve are essentially theological: What would a just, omnipotent entity be like? What eternal future should humans desire, and how can we get there? Thus the religious casts the rationalists take on, from Solstices to Sabbath-envy, shouldn’t be so surprising. The question from the AI research branch of the rationalists is “How can we responsibly make a god?” And the answer from the community-building branch is “First, make a church.”

Altair, Arnold, and others are responding to a lacuna in the present world not by shrugging but by trying to build something to fill it. After all, rationalists like Hoffman are quite right that mainstream contemporary culture is in many ways inhumane, even dehumanizing. Christians have even more resources to draw on to make this critique and to model our lives on. I wish more of us were as willing as the rationalists to look foolish in the eyes of the dominant culture in service of wholeness (and in our case, holiness).

To act more effectively on these good desires for wholeness, rationalists could use a more robust anthropology. The reductionism to which the rationalists so often resort poses challenges for a group that is seeking to preserve humanity against the threat of its potential extinction. What, in the minds of the rationalist community, are these humans whose well-being they are eager to promote? One thing that frustrated me in my conversation with Altair was the way he treated each person’s values as almost immutable, the way he resisted making normative claims about what we should value. Surely, rationality shouldn’t only help us achieve whatever we happen to want; shouldn’t it also help us shape our desires in accord with truth? According to Altair, this is a contentious issue among rationalists, though he himself has “cheated” by simply making his fundamental goal knowing true things. Though I tried, I could not get him to concede that some of those truths might be moral facts about human purpose.

Sexual morality, for example, is an area in which rationalists’ views clash starkly with what Christianity teaches as true. It’s almost as though rationalists view monogamous marriage as something like an outdated technology. Some Bay Area rationalists have paired off in unconventional ways such as handfasting, a novel commitment ceremony that binds the participating couple for a year and a day. In LessWrong’s 2016 survey, 13 percent of respondents said they preferred polyamorous relationships. In my view, here the rationalists are blind to grave dangers: they are making themselves – and, increasingly, their children – human test subjects in a social experiment that is bound to prove destructive.

The LessWrong rationalists have their critics. RationalWiki, another rationalist group, heaps the same scorn on Yudkowsky as it does on creationists and UFO conspiracy theorists, and essentially accuses him of leading a personality cult. But LessWrong participants are not blind to these dangers and write long sequences of blog posts on spotting and correcting bad aspects of cults. Rob Bensinger (MIRI) says he responds to this critique of the LessWrong community by asking the critic to be more specific: “I think ‘culty’ can be gesturing at a lot of different things, some of which are really important to be wary of in small ‘idea-oriented’ communities (the tendencies toward conformism and overconfidence), and others of which aren’t inherently concerning (such as weirdness).”


The Yudkowsky-as-cult-leader charge is undercut by the fact that plenty of rationalists are willing to critique him. When his utilitarianism leads him to abhorrent positions such as claiming it would be right to torture an innocent person if it saved a sufficiently high number of future people from minor annoyances, other rationalists in his community make counterarguments. More light-heartedly, a humor page on LessWrong collected Chuck Norris–style Yudkowsky facts like “There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.”

Another stick commonly used to beat the rationalists is the infamous episode of Roko’s Basilisk, a 2010 thought experiment. A LessWrong contributor posited that a future, friendly, omnipotent AI might be obligated to recreate and then torture a simulation of anyone who didn’t do enough to bring about its existence. As each day we aren’t ruled by a benevolent god-computer is a needless waste of human life, the future AI might need to institute these punishments in order to motivate us to contribute all we can to AI research. (Yes, this does involve all sorts of mental time travel, but that’s a detail; don’t forget rationalists like to think globally in time and space.) The thought experiment’s force depends on the utilitarianism common to the community: an action is right if it leads to the greatest happiness for the greatest number of people. And it was, for some of those in the community, remarkably upsetting.

Yudkowsky deleted the original post and banned discussion of it, citing the psychological harm it could cause readers who agonized over it. The whole incident is commonly used as evidence that the rationalists are crazy. But, at least in the 2016 LessWrong survey, only 10 percent of rationalists reported ever feeling any sort of anxiety about Roko’s Basilisk, and vanishingly few were still worrying about it. All in all, the episode mostly proves that some rationalists took seriously the stated eternal-life-or-eternal-death stakes of their beliefs. If hell does not exist, it might be necessary to invent it.

The Shape of Alief

In the end, I find I can only describe rationalists by using a term they introduced me to: alief, an attitude that lies below our articulated beliefs but still informs our actions. Rationalists may not believe a spiritual creed, but many seem to share a strong alief in reverence, communion, and liturgy – even in the power of their pseudo-sacraments to help humanity pull itself out of the darkness and reach eternal life among the stars. I believe their desires – for meaning, for solidarity, for immortality – are true, truer even than they themselves know, even though they believe these things must be created by man in the face of an uncaring universe, not received as graces from the Creator.

On this front, Christians can engage those involved in the rationalist project. These atheists don’t live in a “disenchanted” world, but in one where their actions and beliefs have meaning, even eschatological meaning. The shape of their secular hope, the desire for a future ruled by a benevolent intelligence, points towards the inescapable human longing for the kingdom of God. Of course, in their enthusiastic pursuit of truth, community, and techno-futurist salvation, they have invented some novel errors. Yet, these are people with whom we can and should be talking, though it might require learning their idiosyncratic language. And, fortunately, one of their virtues is an openness to talking to people radically at odds with them.


I told my rationalist interlocutors that, frankly, I feared the rationalist community could come to represent the social equivalent of the unfriendly AI that Yudkowsky is worried about. After all, what he fears is the rule of a superintelligent moral naïf. The rationalists are a new, smart, powerful entity, not particularly bound by conventional morality, interested in rebuilding the world in their image. How can we be sure they aren’t turning themselves and us into paper clips – maximizing an idiosyncratic goal at the expense of normal human values?

Altair said he thought many rationalists would take the comparison as a compliment, since it means I view them as very smart. But he also said he could understand someone fearing the scale of rationalist aims: “World-scale ambition comes with a world-scale potential for failure – not failure, destruction.” However, Altair asks critics of rationalism to talk to the rationalist community, so together we can test our beliefs and improve our mental map of the world. He says our common destiny should motivate cooperation: “Help us figure out how to make this one future that we share a good one.”