We’re Already at Risk of Ceding Our Humanity to AI

Machines
It’s 2019. I’m in a bar in Providence, Rhode Island, chatting with graduate students and researchers with PhDs. One of them, who holds a PhD in Latin American literature, observes approvingly that programmers are now teaching robots how to write poetry. This strikes me as a total waste of time: why would anyone bother making robots who can write poetry, except to advance the field of robotics? His answer: it is worth building robots who can write poetry “so that people won’t have to.”


What would we do instead, and why would anyone want to read robot poetry, anyway? He replies that we would do nothing at all; “we could watch movies.” The rest of the group argues with him (and mostly disagrees) for half the evening, but my colleague sticks to his guns: it would be handy to have robots writing poetry for people.

In that moment we were at odds about the essence of humanity. I was baffled and disturbed by his position. It was one thing to design robots to clean up nuclear waste or to perform boring, repetitive tasks so that—at least in theory—people could do more intellectually or creatively rewarding work, including (I imagine) building robots and writing poetry. But to get robots to write poetry “so that we don’t have to” seemed a toe dip in a new pool of dangerous waters—waters that might dissolve what “human” means entirely.

Did poets want robots or so-called AI (artificial intelligence) “saving” them the effort of writing poetry? Forms of creative expression like writing poetry, painting, making music, or dancing are things people do because they want to be building something with their own minds and bodies. There is something inside them they want to say, with ink, paint, notes, or twirls. Creative people are driven to experience the often difficult, dangerous, tear-storm-filled, and, basically, epic journey of figuring out their message and how to say it, growing with the effort, and becoming someone a little different by the end. For poets like this, “stopping writing” (as my colleague put it) isn’t going to cut it.


The arts are activities that people engage in for fun. Whether they write murder mysteries set at the office or play the accordion in an amateur band, they make things to be in the world and to make sense of it, to feel themselves growing, to connect with other humans, to chase that elusive flow state of being in the zone, making something, however weird or imperfect it may turn out to be. Would they stop because they didn’t “need” to (whatever that means) anymore? I doubt it. They are already doing something they don’t “need” to do.


Then there are people who read poetry (or look at paintings or enjoy music or novels). Would I want to live in a world in which machines wrote all the poetry? No. When I read a poem or listen to a jazz trio, I’m connecting directly with people. And there’s enough poetry and music out there in the world already; no machines are needed to keep me from running out of entertainment. When I follow the work of a contemporary writer, artist, or musician, it is to experience what a human is making in the world right now—not what a predictive text algorithm might plagiarize out of text in its database. To aim to replace future artworks with AI-generated knockoffs is to misunderstand people who make art as well as those who want to spend time with art.

But at the heart of my colleague’s provocative position was a utopian ideal: a future in which technology was advanced enough to “do everything,” even write poetry, so that no one needed to work. Yet this position wasn’t convincing either. His utopia sounded more than a little dull, and nobody wants to be bored out of their minds.

To not do anything requiring physical or mental effort, ever, sounded like the dystopian future of Pixar’s WALL-E (short for Waste Allocation Load Lifter: Earth-Class), where humans have become lumpen starship-dwellers who do, basically, nothing. They spend their waking hours watching screens, sipping drinks through straws, permanently living at the movies, their muscles atrophied from sitting for too long in lounge chairs. These people have become passive consumers and nothing else: technology does almost everything for them.

To be sure, it would be great to live in a world where people didn’t have to do or make anything in order not to starve to death. But the reason people are starving today isn’t because there aren’t enough machines to do work. In fact, the ballooning of tech companies in the twenty-first century has coincided with a meteoric rise in income inequality to levels last seen in the United States with the Gilded Age robber barons of the nineteenth century.

It’s social and political, not technological, change that’s needed to save millions from long hours of poorly paid, precarious, or backbreaking work in service of corporations posting gigantic profits. And writing poetry isn’t something its practitioners do to make money (except for the luckiest few); they do it because they want to make things and to do things. By misunderstanding the full palette of the human, advocates of “so that people won’t have to” tech goals also misunderstand what a utopian or simply better future for all might look like.


Back in 2019 it already seemed urgent to write something about human relationships with machines: robots, neural networks, and artificial intelligence. To me the extraordinary and undertapped creative potential of flesh-and-blood people is one of the reasons why we matter. Creativity defines humanity. Every one of us is unique and distinctive no matter how many extraterrestrial life-forms may exist. The idea that machines could or should take over making art or writing diminishes the work of teachers, of human-authored books about how to make art and to write, and the fact that people can learn stuff.

Yet even a scholar of literature (my Providence colleague) could envision a life of idleness consuming algorithm-generated content as progress. What did that say about the value people placed on appreciating things made by humans rather than by machines? Were human experience and creativity going to become redundant, and, if so, who would we even be? By deciding what robots are for, we are defining what humans are.

Fast-forward to 2024. Stories about generative AI are everywhere. Human labor is increasingly framed as an obstacle to corporate profits. Authors, artists, and their guilds are suing companies like OpenAI that invoke the “fair use” doctrine in copyright law—a doctrine devised to enable human beings to quote one another briefly, giving due credit to the author they’re quoting, without the trouble and expense of paying for permission to reproduce other people’s work—to justify feeding copyrighted works into their LLMs (large language models).

The goal of comprehensive automation dismisses the uniqueness of each of us. It monstrifies us by making our essence something that is no longer seen as typical or even as normal. Suddenly, humans are framed as the problem—as redundant. Tech companies are framing those aspects of humanity that robots can mimic as the only forms of action or content with value in the future. But if everything that makes us irreducible to algorithms, everything that is too complex to turn into numbers on spreadsheets, is dismissed and ignored—humanity will be over, dehumanized by society, even if there are still humans on earth. Insidiously dehumanizing narratives warp how people define the human, framing us as inconveniences to the march toward maximum profits for a select few.

Dehumanization is a form of monstrification. It’s not about saying someone is a different species, although dehumanizing language can include slurs like that. The kind of economic dehumanization I’m talking about here is where someone deems that people are nothing more than the products of their labor and where they don’t get to have needs any more than a toaster gets to have needs. It’s the sort of narrative that claims that workers don’t need anything beyond what it takes to drag them into the office and, maybe, to not get hauled off the street for being homeless in a place where what little they are paid barely allows them to afford shelter.


The world of creators is just one arena in which machines are reconfiguring definitions of the human, as we’ll see. The rhetoric from Silicon Valley is of “saving humanity.” But far from saving anyone, creating and “training” LLMs has become another way to exploit people. These companies are hiring, for a pittance, remote gig workers in the Global South as content moderators to “train” racial bias and harmful language out of LLMs, exposing workers to horrific racist and sexist content in the process. In 2023 moderators based in Kenya called for a government investigation into the trauma caused by moderating OpenAI’s ChatGPT content.

To be sure, there are global problems that are so urgent and severe that we should explore every avenue, including AI, to solve them. Key among them is the climate emergency. And in medicine there are things that AI (with proper guardrails) can do to extend and enhance the work of people and to save or improve lives. But how we think about and legislate corporations that develop AI and claim to own our data will determine the future of the idea of “human” and what “human rights” will mean.

Robots
Long before technology invaded our virtual worlds, machines had been clunking around in the physical world. Today wealthier homes are littered with innovations, from Roombas (robot vacuum cleaners) to Siri (Apple’s digital assistant). These products blur the boundaries between natural and artificial and between human and machine. Robots are as much a part of how societies imagine the future as they are already ordinary in the present. “Robot” conjures up disembodied metal arms in factories, apocalyptic machines like Doctor Who’s Daleks (blenders on wheels determined to “Exterminate! Exterminate!” people), or perhaps the alarming robot dogs designed by companies like Boston Dynamics. People instinctively fear a robot in action. In fact, current robots are delicate, easily fooled, and even more easily disabled with low-tech hacks like a bucket of water.

What have machines got to do with monsters? In the history of monsters, machines are everywhere. Machines that simulate or augment human bodies span an unsettling continuum from technology to humanity: robots, androids (robots designed to resemble humans, as far as possible), gynoids (androids designed to resemble women), cyborgs, Roombas, holograms. For today’s nonroboticist, “robotic” carries a sense of clumsy hunks of clanking metal.

But some machines in science fiction are almost indistinguishable from human beings. People fantasize about building machines that are monstrous: that reduce the gap between humanity and machinery. Some sci-fi “machines” contain human body parts or began as human beings (like some sci-fi cyborgs). They challenge the categories of human and machine by being both and neither: monsters. Other contraptions have little more autonomy than a basic toaster (although even some toasters now have “smart” capabilities).


We can tell humans from robots because robots are built, not born. Still, just as definitions of “human” and “animal” affect each other, so do definitions of “human” and “machine” and concepts like “AI.” After all, how can something that doesn’t have a mind and isn’t alive be intelligent? For roboticists a robot fulfills four criteria: it has a physical form; it can sense the world; it can analyze sensory data and make evaluations; and it can act on its findings. “Robot” is a general term encompassing everything from sensor-controlled vacuum cleaners to androids.

The word “robot” in English dates back to the 1920s, to a play by the Czech writer Karel Čapek, called R.U.R., or Rossum’s Universal Robots. Robota is the Czech word for forced labor or drudgery. The play, translated into multiple European languages within two years of its publication, features artificial beings created to work in factories. Novels, comics, and short stories soon established “robot” in English to mean a fully artificial, mechanical being devoid of flesh and blood, moving, sensing, thinking, and doing and yet somehow also lacking a mind—robotic?

While a robot is an independently moving decision-making machine, it doesn’t have to have a convincing body, voice, or set of movements. The most basic robots lack the typical human body-part complement of two arms, two legs, a head, and a torso. Such beings might include Roombas, R2-D2 from Star Wars, and perhaps the Tin Man from The Wizard of Oz, who has the standard number of limbs for a human but is clearly a clunking hulk of, well, tin.


An android (or gynoid) is a closer mimic of human form, voice, movement, and behavior, perhaps even covered in imitation skin. So while Star Wars’ R2-D2 is a rudimentary robot (it looks and acts like a beeping vacuum cleaner), the human-sized, human-shaped, human-(albeit British)-voiced C-3PO is an android. Star Trek: The Next Generation’s Lieutenant Commander Data is also an android. Arnold Schwarzenegger’s character in The Terminator is an android, and Blade Runner features androids who are virtually indistinguishable from flesh-and-blood people. These androids are distinctly different from one another, and perhaps some would call them robots rather than androids on their personal scale from clanking lump of tin to human being.

Whatever they’re called, autonomous(ish) machines, real and fictional, exist on a continuum with humans in terms of appearance, movements, talents, and mannerisms. One step closer to us in appearance than androids are cyborgs. This is where things start feeling really creepy. Cyborgs are organic beings with added robot parts. Like some androids, they can be indistinguishable from a fully organic human. Battlestar Galactica is one of the many science fiction entertainments awash with beings that can be called cyborgs.

This is not a hard-and-fast terminology. Some machines may not fit my ersatz taxonomy, or yours. C-3PO in Star Wars is physically the archetypal robot: a tin man with jerky (albeit charming) movements. Yet in speech and interactions he is indistinguishable from a pompous, highly educated, nervous human. With dialogue and mannerisms that anticipate Niles Crane of the comedy franchise Frasier a couple of decades later, C-3PO is surely an android rather than a mere robot. In Star Wars both C-3PO and R2-D2 are called “droids.” Each of us will classify them a little differently. What interests me is what happens next: What does thinking about androids—or cyborgs or robots—do to how people think about humans and monsters?

To Frankenstein and Beyond
Ancient writings about robots reveal how people have tried to make sense of what exactly life is and whether or not it can be built rather than born. Automata in ancient literature are entities that are neither conjured into existence by gods nor created using magic. In Byzantium and the Islamicate world in the Middle Ages, there was a long and continuous tradition of mechanical rather than magical models to explain the world and the human body.

There were also mechanical devices that we might call robots today: what historian of science Elly Truitt defines as “self-moving or self-sustaining manufactured objects.” There were robots in the medieval Mongol Empire. In western Europe automata first appeared as gifts from farther afield, beginning with a ninth-century automaton from Baghdad. The word “automaton” appeared in the sixteenth century, and its first recorded use is in scholar and novelist François Rabelais’ Gargantua and Pantagruel, which contains “little automated machines” that “move by themselves.”

How does someone decide whether something is magic—an instance of breaking nature’s rules using a supernatural power—or just a science they don’t possess? Speculative writing since classical antiquity has grappled with this question. In preindustrial Europe thinkers and practitioners sometimes disagreed over whether mechanical devices that appeared to go against nature were products of sorcery and diabolism or merely pleasurable mechanical marvels such as ingenious devices or engines. That fear of reaching beyond human ability and breaking a social or spiritual contract reverberates through writings past and present.

Fiction and films have cautioned that imbuing the nonliving with the power to act can go very wrong. In Mary Shelley’s classic monster novel, Frankenstein, the scientist Victor Frankenstein devises a being out of body parts from corpses, imbuing it with life and sentience using electricity. The result, predictably, is horrific and tragic. In the early 1980s, the movies The Terminator and Blade Runner did for androids what Shelley did for reanimated corpses: provide a cautionary tale about how much could go wrong.

The Terminator movie, the first of a long franchise, is set in the Los Angeles of 1984, into which a terminator android from a postapocalyptic future was sent back in time. Skynet, an AI “Global Digital Defense Network,” had decided that humans were the problem and had begun a nuclear war. In Blade Runner, inspired by Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, viewers follow a bounty hunter across a Los Angeles of 2019 (then far in the future) as he tracks fugitive synthetic beings he has orders to kill or “retire.” These scenarios share a concern for the safety of humanity in the face of a rival species on earth. (To be sure, humanity already has plenty of rival species on earth, most of them microscopic. But still.)

Cut to 2023 and the real world. Malfunctioning robotaxis in San Francisco and a writers’ and actors’ strike in Hollywood are just two of the AI stories making headlines in California. Screenwriters faced the threat of job losses as LLMs, trained on human writing without authors’ consent, enabled predictive text to generate plausible script drafts. After months of strikes and negotiations, the final contract contained a provision enabling the manipulation of an actor’s digital likeness, opening the door to synthetic performances by a few big-name actors and fewer jobs for other actors and production crews.

An AI-generated doppelganger of Tom Hanks, created without his consent, has already appeared on the internet. Hanks is one of several celebrities whose voice or likeness has been duplicated. The concerns of Hollywood’s writers and actors were part of a larger set of grievances about how studio and streaming-platform executives have chosen to distribute profits from the labor of workers. As I type these words in 2024, the news industry is hemorrhaging jobs at newspapers and magazines, as search engines like Google have decided not to distinguish between real content and AI-generated content on Google News; advertising revenue is driven by clicks.

Our data is being packaged and sold, and groups who have historically faced the greatest exploitation are being exploited the most, in a phenomenon that Ulises A. Mejias and Nick Couldry have called a “data grab.” Safiya U. Noble has shown how “algorithms of oppression” compound the legacies of sexism and slavery by building human prejudices into data and algorithms. Cathy O’Neil has characterized algorithmic models—proprietary black boxes that receive no oversight—as “weapons of math destruction.” The present and the future as they are currently unfolding seem as bad as, if different from, the apocalyptic scenarios of science fiction. The category of the human is under threat from stories about how machines are better. The question we need to be asking is, better for whom?

__________________________________

Excerpted from Humans: A Monstrous History by Surekha Davies, courtesy of the University of California Press. Copyright © 2025. Featured image: Roland Molnár, used under CC BY-SA 2.0.

Surekha Davies


