
    Book Tour: On Being a Good Ancestor

    Phil Christman reviews Survival of the Richest by Douglas Rushkoff, The Good Ancestor by Roman Krznaric, and What We Owe the Future by William MacAskill.

    By Phil Christman

    December 1, 2022

    Book Tour is a bimonthly column in which Phil Christman reviews new titles, each installment exploring a theme that traces hidden connections among books and writers.


    I once had a truly dreadful conversation. There is a G. K. Chesterton essay where he has a long encounter with one of those fin-de-siècle Satanist-artiste types who were so prevalent in the intellectual world of Chesterton’s youth, and it climaxes when the man says to young Chesterton, with a horribly bored mien, “What you call evil I call good.” Chesterton flees from the spot as from a site of true desecration. What I am about to describe was the only conversation I have ever had that affected me in a similar way.

    On the morning in question, I was running a table for a candidate for the board of regents at the university where I work. The candidate was, unlike most members of that august body, an actual university teacher, and I thought that perhaps his presence on said board would help to create a climate friendlier to those who do the work of the university. (Crazier things have happened.) I was making this case in the rushed, undignified, slogan-heavy fashion that such occasions demand – this is why I hate tabling – when I noticed a man, dressed at midday in flip-flops and a bathrobe (over what appeared to be a T-shirt and shorts), who lingered without really engaging.

    When I paused for breath, he said, suddenly brightening, “I’m a singularitarian. My priorities are somewhat different.” He went on to explain those priorities. What the university needed, as it turned out, was not better working conditions, higher salaries, more communitarian relations between town and gown, or justice rolling down like a river, but to divert all resources toward the creation of “strong” artificial intelligence devices. I believe his rationale had something to do with the inevitability of these devices’ arrival at some point in the future – they would, perhaps, be kinder to us later if we wasted no time creating them now – though for people fixated on strong AI, the exact rationale can change while the central object of veneration never moves. This can happen to any of us religious types. In any case, none of this was necessarily offensive to me in itself; I consider it silly, but I worship a God, born to a Palestinian Jewish peasant, who defeated death itself, so I appreciate the way life drives us toward convictions that are hard to explain to others.

    What bothered me, rather, was his final remark. “I want to make large numbers of people redundant,” he said, and gave me the ghastliest smile, a smile that combined the impish look of a small boy who has just emitted a terrible smell he is waiting for you to detect with the smug look of a debate-bro who has just played devil’s advocate for some proposition so vile (“The Confederacy had a point!”) that it, ironically, makes him undefeatable, in the way that Socrates can only partly refute Thrasymachus. Some ideas are so bad, founded on such cursed premises, that reason can’t debase itself enough to address them at all. You just mutter a prayer and move on.

    I dare say that most people, if asked, would agree that we have duties of some kind toward the future. I also think most people, if asked, would agree that our duties to the future are not those suggested by this man. His certainty about the shape of the future seems entirely unwarranted – will there ever be strong AI devices? And if there are, should that fact create in me a sense of obligation? Moreover, his “a-bird-in-the-bush-is-worth-the-livelihoods-of-all-the-birds-in-my-hand” blitheness toward actually existing intelligent lives is ghoulish. He seems like a character out of Lovecraft, trying to perform bad things now so that worse things may arrive later.

    However, if he has failed to solve the ethical problem of our duties to future people, we must do him this much justice: that’s a hard problem. Our responsibility to posterity, to those future people who are as yet merely notional, eludes exact formulation. We know it exists, but if we try to get too specific, we find ourselves assessing our duties toward phantoms, or worse, we find our sense of responsibility curdling into a desire to control. Our empire will last for a thousand years! Consider the White supremacist, who is, in one sense, a very long-term thinker: he is so worried about the far-future prospects (however defined) of not-yet-existent White children (whatever “White” will mean by the time these unfortunate children are born to these strange people who have such designs upon them) that he is willing to hurt real children now in their name. Next to him, my singularitarian doesn’t seem so bad.

    Photograph by Jr Korpa: abstract image of a line of fragmented silhouettes behind neon swirls.

    The question of how to properly formulate these duties seems to be on many minds at the moment. Several recent books take it up. Douglas Rushkoff’s Survival of the Richest is by some distance the wisest, funniest, and most morally acute of them; it examines the doomsday plans of the super-wealthy and the deep cultural biases that allow them to, say, purchase survival bunkers in New Zealand without dying of shame. They know exactly what they want to bequeath the future: their enormous selves.

    In a scenario that resembles the publication history of Michael Lewis’s Liar’s Poker (1989) – Lewis wrote a book about the fraudulence and waste of the work that he did as a stock trader during the go-go 1980s, and was immediately showered with letters from aspiring yuppies who wanted to know how to get a job on Wall Street – in 2018 Rushkoff wrote a funny article about being invited to some big tech conference, where a man asked him, “How do I maintain authority over my security force after the event?” ("The Event" is the phrase some people use to describe a supposed oncoming collapse, ecological or otherwise, that will require the wealthy to find sumptuous and well-guarded hiding spots.) “For them,” Rushkoff reflected in the article, “the future of technology is really about just one thing: escape.” After posting this article, he was deluged with further requests to speak on the same sorts of topics, from the same sorts of folks.

    He has now written a full-length treatment of that desire for escape, its roots in Baconian rationality and in the scientism of the modern age and in the will to desecrate that he finds at the heart of capitalism. He gives a potted history of the internet, that place built by scientism and capitalism together. What we owe the future, argues Rushkoff, is a return to human-scaled lifeways. He invokes the Seven Generations principle found in the Great Law of the Haudenosaunee Confederacy (and also, in truncated form, on the label of certain cleaning products on your grocery-store shelves): Make decisions that give equal weight to the people of the seventh generation from now.

    The Seven Generations principle shows up as well in Roman Krznaric’s The Good Ancestor (2020). Krznaric believes that because of the pathologically short-term thinking that capitalism fosters, we have “colonized the future.” (Colonization: Now there’s a metaphor that hasn’t gotten stretched past the point of usefulness.) The children of the year 2090 don’t exist yet, and yet we’re already cooking the planet for them, without their input. (It’s funny how a society that uses children as the rationale for every political project – from building new libraries to harassing librarians – is so bad at actually protecting them. “Childhood,” like so many other abstract concepts, can become an idol, and then demand the very resources and attention needed by the actual beings for whom it is supposed to stand in.)

    Krznaric’s book is well-intentioned and TED-talky, a mostly unobjectionable bit of intellectual smooth jazz that, like a lot of writing on environmental issues, says “We are ruining the earth” when it needs to say “They (the owners of capital) are creating an economic structure wherein we (those who work for a living) can’t eat unless we help them wreck the earth.” This failure is puzzling, since he is otherwise willing enough to take controversial political stands (he praises Extinction Rebellion, a group of climate-change activists whose tactics seem designed to reinforce the stereotype of the environmentalist who wants to ruin your day and take away your livelihood).

    He is also oddly impressed by projects that, well, colonize the future, such as the Clock of the Long Now, which is a giant clock that some very impressive people – Brian Eno, the guy who runs Wired, several entrepreneurs – are trying to build inside a mountain. Once built, it will remind us all . . . that time exists? I’m not sure what it will remind us of. Do our great-great-great-descendants really need a giant art project like this, one that absolutely reeks of that clever-yet-stupid quality that marks it as a product of this one very specific era of conceptual art? I was already wondering what the humans of the future will make of so much contemporary art – all the shark carcasses and messy beds and shiny animal statues – when that stuff finally loses its value as a form of liquidity for hedge-fund managers. At least Jeff Koons’s sculptures could form part of a decent playground if we keep them from rusting. But a clock? It’s such an unexciting gift to leave one’s descendants, a monumental, millennia-long version of your aunt giving you cologne instead of a toy for your fifth birthday. Are we sure that they would not prefer a nice mound or pyramid, or two vast and trunkless legs of stone? Or just a mountain without a clock in it?

    Then there’s William MacAskill, author of What We Owe the Future. MacAskill is one of the founders of Effective Altruism, a philosophical tendency that combines hardcore utilitarianism with – at least in his case – a very real if extremely abstracted sort of kindheartedness, plus a deep belief that the factors that must determine our moral calculations include all sorts of future events that sound like unwritten Philip K. Dick novels. MacAskill is convinced, for example, that humans will soon develop Artificial General Intelligence, and that this fact does, indeed, obligate us. We need to be sure that our AGIs don’t accidentally or intentionally kill us, for example. We need to optimize our knowledge of what is good, so that bad values are not “locked in” to the very programming and functioning of these machines. They will have hegemony, and we need to make sure it’s a good hegemony. It’s not clear how to do this, because, as even MacAskill will admit, we cannot know for sure that our current beliefs about ethics and politics are the right ones. (MacAskill at one point wistfully describes a future period, the “long reflection,” in which, through pluralistic social experimentation in many small communities, we slowly optimize our moral intelligence. There’s something sweet about it, like when an undergraduate tells you he’s going to go off alone for two weeks and figure out his life.) I am not, myself, really worried about the real-world impact of actual artificial intelligence. I don’t think the so-called Singularity is coming. We will make stupid machines that we tell ourselves are smarter than we are, and we will shrink ourselves to fit them, and this will have awful consequences. But that’s already happening.

    In the circles I hang out in, Christians like to bag on the Effective Altruists. They are accused of approaching moral problems as though these problems were mere games, taking place in a gray world where context, tradition, or particularity exert little to no gravitational pull upon our duties to each other. I think this criticism is substantially true, but that their very narrowness may sometimes allow them to see certain particular points clearly. Another criticism is that since the word “effective” can mean any number of things, the various factors selected by Effective Altruists to measure it will simply reflect the exact priorities of those people – rich people – who can afford to hire econometricians, or at least to study econometrics. The movement’s record on this issue is not entirely discouraging, though. For a long time their big move was to tell people to buy lots and lots of mosquito nets for people in those parts of Africa where malaria still rages. That may sound simply sensible, but when we consider the record of big-picture, world-managing, utilitarian-influenced philanthropic movements that come from the UK’s educated class, it seems a significant moral advance that suggests a real sincerity of purpose. Usually such people’s first thought re: Africa is some controlling eugenics nonsense about suppressing birth rates. (When the late Prince Philip infamously told a newspaper reporter that he’d like to be reincarnated as a virus that solves overpopulation, he was showing his school colours perfectly.) It’s also an example where their narrowness leads them to see one thing clearly. I would not have thought to check that I could make the world better by buying a lot of people mosquito nets, but once it’s pointed out, the reasoning does seem sound.

    There are other things one can say in partial defense of the EA movement. For one thing, MacAskill apparently gives away most of his considerable income, and has talked a lot of well-heeled young people into signing tithing pledges. Social-science research suggests that a lot of us Christians give not the Bible-recommended ten percent, but one to two percent, to charity. Advantage: MacAskill. For another thing, it is apparently not uncommon in EA circles to donate your extra kidney, which is something I have only thought about doing, and then postponed deciding upon to some unknown future date. That’s another point for MacAskill and his followers. Further, MacAskill himself writes with a winning earnestness. He really does want to figure out what the right thing to do is, and do it. Is this a common enough trait that we can take it for granted? Many of his arguments do strike me as absurd, but it seems unfair to blame analytic philosophers for following out premises to the point where their implications become silly or strange; someone has to do it, because occasionally that is where truth is found. Analytic philosophers give themselves Galaxy Brain Syndrome as the Curies gave themselves radiation poisoning – so we don’t all have to. And MacAskill is capable of changing his mind when he’s wrong. One infamous EA argument holds that it makes more sense to enter a remunerative profession and then give a lot of money away than to seek meaningful work that makes the world better but pays less. This argument used to lead MacAskill to tell young people to work for oil companies – ethically and ecologically disastrous advice that he has stopped giving. (Now he tells them to become quants.)

    MacAskill is notably less worried about climate change than some of us. After crunching a lot of data and running through many scenarios, he thinks it probably won’t end civilization. Viral pandemics and weapons of mass destruction are what really keep him up at night. I found it perversely comforting to worry about these childhood terrors again after so many years of climate anxiety. For him, carbon emissions are only likely to end civilization if they act in tandem with another Event, such as a nuclear war. If we are almost wiped out by such a war, and we’ve also burned all the coal, what will our descendants (after millennia of misery) do a second Industrial Revolution with? We had better keep that stuff in the ground for now, he reasons – lurching to the right conclusion after a ten-mile detour.

    MacAskill’s book also functions as a popular introduction to the field of population ethics. This field is rich in paradoxical results. For example, MacAskill embraces the so-called “repugnant conclusion,” as the famous ethicist (and MacAskill’s grad-school advisor) Derek Parfit called it. Basically, the idea is that if we accept a seemingly intuitive assumption – that it’s good for the universe to have as much happiness within it as possible – then a sufficiently large society in which most people are only barely happy trumps a smaller society where everyone is very happy. MacAskill bites this bullet, and argues that we should do everything we can to bring such a barely acceptable but hugely populous world to pass. We owe it to the future.
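
    To see why the arithmetic comes out that way, here is a deliberately crude illustration (the numbers are mine, not Parfit’s or MacAskill’s): if the value of a world is simply its population multiplied by its average level of well-being, then

    $$ \underbrace{10^{13} \times 0.1}_{\text{vast, barely happy world}} \;=\; 10^{12} \;>\; 10^{11} \;=\; \underbrace{10^{9} \times 100}_{\text{small, very happy world}}, $$

    and the vast, barely happy world comes out ahead.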

    You may have some questions at this point. For one thing, you may wonder whether, even if you adopt MacAskill’s goal as your own, it actually affects your life. “Help bring about a world of trillions of people, living all across the galaxy, assisted by AI that is as moral as we can make it”: This is a project to which only the privileged can contribute, and only some of them. How people who lack either the means or the talent to become scientists can help, and whether their lives can be meaningful if they do not, is unclear. Critics treat this as a sign that EA is inherently morally corrupt, but it may just mean that MacAskill knows his audience. If you have the ear of relatively privileged people, as he does, you should urge them to make themselves useful. I would rather the rich listen to MacAskill than to Ayn Rand, as they so often find themselves inclined to do, or Anton LaVey, who rephrased Ayn Rand’s message in a mock-religious register, and who was likewise a favorite among some of the early Silicon Valley types. As I’ve said, MacAskill also believes in tithing your income (or more than that), so he’s still doing more good than a lot of the people the wealthy tend to turn to as gurus. (It’s a repugnant conclusion, but I stand by it.)

    More fundamentally, you may wonder whether happiness scales like this, whether it is additive, or whether misery is. You may also wonder why, if even MacAskill concedes that we don’t know the basic answers to such questions as What is good and What is a good society – that we could use a few centuries of long reflection to find those answers – he proceeds as though he already has them in hand.

    Interestingly, MacAskill agrees with Douglas Rushkoff in the exact place where I would not have expected it. Rushkoff is a degrowther; he thinks that even replacing fossil fuels with solar and wind will lead to increased emissions (due to the carbon burned in the making of components for these alternative energy sources) and environmental destruction. (I do not have the technical expertise to check his math here.) According to Rushkoff, we must simply pull back on our economic activity; we’ve gotten all the good out of economic growth that we’re going to get. MacAskill doesn’t concede that we’re at this point, but he does think it will come sooner or later. He foresees a final steady-state economy. He also worries that we’ve maxed out on scientific development and discovery: there hasn’t been another decade like the 1910s in a long while. (It’s nice to know STEM people worry about this, that it’s not just us writers who look back to the dizzyingly fast timescale of twentieth-century Modernism.) If we throw enough people at the problem, maybe this myriad of mediocre scientists can still outdo that Golden Age one or two more times, just as a whole lot of just-slightly-happy-to-be-alive people create more units of happiness than does a small group of the elite-elect.

    The problem with MacAskill’s “longtermism,” as it’s called – or my ultimate problem with it – is that it is programmatically irony-free. In this way, too, it is extremely of its era. So much of culture at the moment is either protectively swathed in nihilistic irony or arm-twistingly sincere, and the two tendencies frequently goad each other into grotesque self-caricature. MacAskill, simply by assuming that our projects will mostly reflect our choices, values, and intentions, rules out the tendency of historical events to mean the opposite of what their authors intended. Certainly a lot of the best things I’ve done have been by accident. Of course we have responsibilities to future people. We discharge them by acting as thoughtfully and kindly as we can, and making whatever good things it falls to our lot to make. But there’s a difference between taking responsibility for our actions and treating the future as our problem to solve. The Haudenosaunee certainly don’t think it is, and neither do any other truly religious people. Many Christians have become so Americanized, so integrated into a structure of world-destroying capitalism, that they could afford to heed Rushkoff and Krznaric on some points, and even, on fewer points, MacAskill. We need to think about being good ancestors. But the emphasis needs to be more on good, which we partly control, than on ancestor, which we don’t.

    Contributed by Phil Christman

    Phil Christman teaches first-year writing at the University of Michigan and is the editor of the Michigan Review of Prisoner Creative Writing.
