The novel began as a thought experiment. Every human on Earth wakes up alone in a room. An AI tells them that for their protection it has restructured the planet to separate them from each other forever. It then creates a new reality for each individual based on their desires.
The idea emerged from a question that came to me in 2019, as I was preparing to write a feature about AI in the news media for Stanford Magazine: What kind of world might a superintelligent AI aligned with our values create for us? Reading Nick Bostrom’s Superintelligence, I paused on his “paperclip maximizer” thought experiment, which illustrates the risks of a superintelligent AI that is insufficiently constrained by, or aligned with, human values.
In this case, when the AI is tasked with making paperclips, it destroys not just Earth but the entire universe, converting everything into paperclips. Though I agreed with Bostrom’s admonition, another scenario struck me as far more likely: that of extreme adherence to alignment. What might happen if a powerful AI in fact rigidly prioritized core human values, such as protecting people or caring for all human needs?
To understand this concern, we have to go back to the “Trolley Problem,” introduced by Philippa Foot in 1967 and expanded on by Judith Jarvis Thomson in 1976. The basic idea is this: An individual must decide whether to divert a runaway trolley onto a sidetrack where it will kill only one bystander rather than the five standing on the main track. Many philosophers have since explored the dilemma, debating intentional versus unintentional consequences, the permissibility of harm done to achieve good, and whether sacrificing one person to save five optimizes wellbeing.
But if such questions were already sticky, the advent of AI and autonomous vehicles has made them positively glutinous. In short, do we want to live in a society where machines decide who lives and who dies? Should an autonomous car run down a pedestrian if swerving means that it would plunge into a crowd?
In his 1942 short story “Runaround,” Isaac Asimov introduced the Three Laws of Robotics, the first of which trumps the other two and states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This might be easy enough for an android working in an Amazon fulfillment center, but what about AIs driving cars, running global shipping, and building skyscrapers on a planet swarming with fragile humans (to say nothing of autonomous weapons systems)?
Sooner or later, AIs will have to make decisions about which lives to sacrifice and which to save—unless they simply remove the entire system in which any human can be harmed. But to do so, they would have to separate us from each other forever since we are the greatest threats to ourselves and to humanity as a whole. If one superintelligent AI—assigned, for some strange reason, the relatively straightforward task of making paperclips—could use the universe as fodder, then another one with the grander mission of protecting humanity could reasonably reshape existence in favor of human safety.
As I was exploring these ideas, the COVID-19 lockdown began in the Bay Area. Humans increasingly resembled smokestacks of viral particles—even the parks were closed—and I ordered a virtual reality headset to relieve my restlessness. I could now swim with sharks, walk on planets without atmospheres, and brave zombie apocalypses, all from the comfort of my living room. Inside VR, I found myself imagining the narratives possible in this new medium, how they might go further than books or films in letting us safely experience different realities and chase our desires without consequence. Having instant access to other settings, however cartoonish, led me to reflect on how humans shape the world to create environments that maximize our safety and the freedoms that safety permits.
Given that a few years earlier, Microsoft’s AI Twitter bot, Tay, picked up human biases, learning to make racist and sexist remarks within hours of its launch, how long would it take a superintelligent AI, trained on centuries of human output, to grasp that we are a role-playing species—that we readily align ourselves with fictional constructs if they allow us to feel safe and free and powerful? The AI, acting in the interest of our happiness, might give us exactly what we crave: basically, an open relationship with reality.
Already, with the rapid acceleration of AI tools, we are glimpsing the future. With brief prompts, we can create essays, poems, short stories, songs, images, and videos, and the leap from this to AI-generated entertainment streaming is unlikely to take long. We may soon be generating new movies and series on demand, watching endless sequels of Mad Max or new seasons of Game of Thrones starring long-dead actors (think Marilyn Monroe as a Targaryen princess). And if we find what we’re watching to be boring or insufficiently comedic or sexy, we can just alert the AI interface, and in real time the generated movie will adapt to our desires.
Soon enough, we’ll have biometric sensors linked to the streaming platform, and it will attune what we’re watching to give us the most pleasure. Achieving this would require relatively small steps with our current technology. What then happens when art is no longer a line of communication between two human minds but simply a means to optimize pleasure while satisfying the disparate appetites of an ever-more individualistic society? With the advent of AI-managed homes and kitchens, with meals that arrive at the moment that we realize we are hungry, with AI agents that do our flirting on dating apps and answer our emails, and with robotic companions who deftly manage our lives, how big would the jump be to where I began with my thought experiment—to everyone on Earth waking up in their own reality, able to safely experience anything they desire?
A while back, when I mentioned this scenario to a friend, his response was, “Where do I sign up?” Among the reasons we are embracing AI faster than we can consider its risks is that for many people the dystopia to be feared is not our AI future but the world in which we live, in which billions are treated for diseases of overconsumption while billions suffer malnourishment, in which loneliness and deaths of despair are decreasing life expectancy, and in which the pleasures of wealth and leisure are incessantly flaunted online for all to envy. AI, in its loftiest aspirations, promises to level the playing field and satisfy our every longing.
In shaping our realities according to our desires, AI would be completing the journey of individuation that we have been on as a society. The remaining struggle would lie in the question of what it means to have such power absent other humans—to be utterly freed by technology yet completely protected. In this regard, individuation comes full circle as every human inevitably sees themself in the same light, as both god and prisoner.
__________________________________
We Are Dreams in the Eternal Machine by Deni Ellis Béchard is available from Milkweed Editions.