I’m writing this as a sort of reaction to The Dial of Progress by Zvi Mowshowitz, already sensing it might be a tad too long and slightly off topic to work as a regular comment.
I.
I would call myself a “soft accelerationist” on AI, and this has probably been the case since before I finally got around to reading the Sequences, which is the point where, for most of us, there are persuasive reasons to start taking the worst-case scenarios for AGI seriously. But I was raised too thoroughly on a diet of science fiction, and boy do I want that future, and it seems so obvious to me that AI is the most likely of a very few possible paths to get there.
But still, that’s kind of a milquetoast take and not exactly an ironclad argument against the idea that the failure mode is universal annihilation, so I hadn’t previously been able to put into words why, knowing the baseline arguments for deceleration, I can’t bring myself to fully embrace that position. Recently, however, I found a pretty good framing via an old Bryan Caplan post that gels well with me:
2. Opportunity cost. Superman doesn’t just have the power to destroy the world; he also has the power to save it. If there’s a 1.1% chance that Superman will one day save the world if Batman lets him live, that amply justifies living with a 1% risk that he’ll one day destroy the world. And given the hazards of the DC Universe, the world is clearly safer with Superman than without him.
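To spell out the arithmetic Caplan is gesturing at (a toy expected-value sketch; treating “saves the world” and “destroys the world” as symmetric stakes of magnitude V is my simplifying assumption, not something Caplan states):

$$E[\text{let Superman live}] \approx 0.011\,V - 0.010\,V = 0.001\,V > 0$$

The point survives haggling over the exact probabilities, so long as the chance of salvation edges out the chance of doom.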
I’m probably selfish enough to be okay with the gamble, but I also embrace a sort of Hansonian willingness to accept that the future of humanity has the option of not resembling humanity today. If the AI we make eats the universe, there’s still some sense in which that’s us.
II.
I’m going to insert a very quick intro-to-AI-kill-everyone-ism aside here, because my dad might read this and I already stumbled a little trying to explain it before:
It’s worth bearing in mind that the AI extinction scenario isn’t Terminator. There is no resistance, there is no “oops, well, let’s not do that again”. The classic Yudkowskian scenario is that an AI learns to bootstrap itself in secret, solves nanotechnology in a few minutes, pays some rando a billion dollars to order some chemicals off Amazon and mix them, and then a few days later everyone on the planet falls over dead in the same second. Climate change or meteors aren’t likely to actually kill everyone; an AI might be motivated to exhaustively finish the job. It’s not that the AI hates or fears humans; it’s that an AI that hasn’t been properly prepared to preserve humanity on humanity’s terms can be given virtually any goal and be at risk of deciding our atoms are best used elsewhere.
III.
I remember reading the descriptions of Automated Wealth and Extreme Consumption from Endless Space and being excited by the picture of a society liberated by background economic automation, one whose pursuit of happiness could be much more literal.
With a large enough portfolio of investments and sufficiently advanced systems, it is possible to guarantee a positive return. Based on computing systems powered largely by Dust, investments are likely to be net positive at least until the heat death of the universe. Beyond that point, contract re-negotiation is likely.
Given a fixed and guaranteed flow of positive income, elimination of waste, and Dust-enhanced systems to monitor manufacturing and logistics, society's most difficult challenge becomes consumption. Hoarding wealth is inefficient, yet only so many luxuries can be consumed. Decisions, decisions...
The closest I’ll ever get to religion is the idea that AI is the path to that heaven. We’re already sort of beginning to grapple with the idea of “what do we do with our economy when it’s automated and spiraling out of positive-growth-control and us meatbags aren’t strictly a necessary part of it?” Are we going to have the political will to recognize what’s happening and resist the urge to concentrate wealth instead of UBI’ing it around?
There’s another scenario in this arena that we can play around with: Robin Hanson’s Age of Em posits a similar type of post-meatbag society. As described by Scott Alexander:
So, what is the Age of Em?
According to Hanson, AI is really hard and won’t be invented in time to shape the posthuman future. But sometime a century or so from now, scanning technology, neuroscience, and computer hardware will advance enough to allow emulated humans, or “ems”. Take somebody’s brain, scan it on a microscopic level, and use this information to simulate it neuron-by-neuron on a computer. A good enough simulation will map inputs to outputs in exactly the same way as the brain itself, effectively uploading the person to a computer. Uploaded humans will be much the same as biological humans. Given suitable sense-organs, effectuators, virtual avatars, or even robot bodies, they can think, talk, work, play, love, and build in much the same way as their “parent”. But ems have three very important differences from biological humans.
First, they have no natural body. They will never need food or water; they will never get sick or die. They can live entirely in virtual worlds in which any luxuries they want – luxurious penthouses, gluttonous feasts, Ferraris – can be conjured out of nothing. They will have some limited ability to transcend space, talking to other ems’ virtual presences in much the same way two people in different countries can talk on the Internet.
Second, they can run at different speeds. While a normal human brain is stuck running at the speed that physics allow, a computer simulating a brain can simulate it faster or slower depending on preference and hardware availability. With enough parallel hardware, an em could experience a subjective century in an objective week. Alternatively, if an em wanted to save hardware it could process all its mental operations v e r y s l o w l y and experience only a subjective week every objective century.
Third, just like other computer data, ems can be copied, cut, and pasted. One uploaded copy of Robin Hanson, plus enough free hardware, can become a thousand uploaded copies of Robin Hanson, each living in their own virtual world and doing different things. The copies could even converse with each other, check each other’s work, duel to the death, or – yes – have sex with each other. And if having a thousand Robin Hansons proves too much, a quick ctrl-x and you can delete any redundant ems to free up hard disk space for Civilization 6.
Scott goes on to describe the em economy as identifiably dystopian: extreme competition, a select few ems copy/pasted ad nauseam for most tasks, ems ruthlessly manipulated to optimize for productivity, doing unimaginable amounts of work for subsistence wages. But also:
The real winners of this ultra-fast-growing economy? Ordinary humans. While humans will be way too slow and stupid to do anything useful, they will tend to have non-subsistence amounts of money saved up from their previous human lives, and also be running at speeds thousands of times slower than most of the economy. When the economy doubles every day, so can your bank account. Ordinary humans will become rarer, less relevant, but fantastically rich – a sort of doddering Neanderthal aristocracy spending sums on a cheeseburger that could support thousands of ems in luxury for entire lifetimes.
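To put a rough number on “so can your bank account” (a toy calculation with made-up figures, not Hanson’s): if the em economy doubles every objective day and a retired human’s savings are invested at the market rate, then after a single objective month

$$W(30) = 2^{30}\,W_0 \approx 1.07 \times 10^9\,W_0,$$

roughly a billionfold increase, which is where the “doddering Neanderthal aristocracy” comes from.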
We can maybe imagine a similar scenario by tweaking the premise. For example, it’s perhaps a little clearer now that we might arrive at AI before brain scanning technology. What if the dystopian brunt were borne by AI, a synthetic economy underwriting automated wealth to enable a high-consumption biological society? We could finally stop arguing about capitalism versus communism by achieving the extreme utopia of both simultaneously.
Age of AI.
Sometimes I “worry” not that I’m living in a simulation, but that I’m an individual in a far future world whose entertainment is living as me in a simulation.
IV.
Suppose there was indeed a Dial of Progress, and they gave me access to it.
What would I do?
On any practical margin, I would crank that sucker as high as it can go. There’s a setting that would be too high even for me, but I don’t expect the dial to offer it.
What about AI? Wouldn’t that get us all killed?
Well, maybe. That is a very real risk.
I’d still consider the upsides too big to ignore. Being able to have an overall sane, prosperous society, where people would have the slack to experiment and think, and not be at each other’s throats, with an expanding pie and envisioning a positive future, would put us in a much better place. That includes making much better decisions on AI. People would feel less like they have no choice, either personally or as part of a civilization, less like they couldn’t speak up if something wasn’t right.
People need something to protect, to hope for and fight for, if we want them to sacrifice in the name of the future. Right now, too many don’t have that.
I too would crank the dial. I think the benefits are worth the risks, though I’ll grant that I’m willing to be more flexible in how I interpret the risks. As mentioned previously: I can stand the idea that even an AI that eats the universe is in some sense us, on a long gradient of “us” that runs from humanity continuing as it is now, through ems, all the way to that AI. I wouldn’t put myself at that far end of the gradient, but I wonder if maybe some amount of AI-not-kill-everyone-ism is this idea in some form or in an indirect sense: some combination of the benefits being extremely high and the risks, even of extinction, not being entirely unacceptable as long as something lives on.