Silicon Valley is debating whether AI weapons should be allowed to decide to kill

In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous — meaning an AI algorithm would make the final decision to kill someone. “Congress doesn’t want that,” the defense tech founder told TechCrunch. “No one wants that.” 

But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons — or at least a heavy skepticism of arguments against them. The U.S.’s adversaries “use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” 

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn’t mean robots should be programmed to kill people on their own, just that he was concerned about “bad people using bad AI.”

In the past, Silicon Valley has erred on the side of caution. Take it from Luckey’s cofounder, Trae Stephens. “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.” 

The Anduril spokesperson denied any dissonance between Luckey’s and Stephens’ perspectives, saying that Stephens didn’t mean a human should always make the call, just that someone should be accountable.

To be fair, the stance of the U.S. government itself is similarly ambiguous. The U.S. military does not currently purchase fully autonomous weapons, but it neither bans companies from making them nor explicitly bars them from selling such weapons to foreign countries. Last year, the U.S. released updated guidelines for AI safety in the military that have been endorsed by many U.S. allies and require top military officials to approve any new autonomous weapon; yet the guidelines are voluntary (Anduril said it is committed to following them), and U.S. officials have repeatedly said it’s “not the right time” to consider any binding ban on autonomous weapons.

Last month, Palantir cofounder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that the question is being framed as a yes-or-no at all. He instead presented a hypothetical in which China has embraced AI weapons while the U.S. has to “press the button every time it fires,” and he encouraged policymakers to embrace a more flexible approach to how much AI goes into weapons.

“You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I’m a staffer who’s never played this game before,” he said. “I could destroy us in the battle.” 

When TechCrunch asked Lonsdale for further comment, he emphasized that defense tech companies shouldn’t be the ones setting the agenda on lethal AI. “The key context to what I was saying is that our companies don’t make the policy, and don’t want to make the policy: it’s the job of elected officials to make the policy,” he said. “But they do need to educate themselves on the nuance to do a good job.”

He also reiterated a willingness to consider more autonomy in weapons. “It’s not a binary as you suggest — ‘fully autonomous or not’ isn’t the correct policy question. There’s a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do,” he said. “Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what’s necessary to win with American lives on the line.”

Activists and human rights groups have long tried and failed to establish international bans on autonomous lethal weapons — bans that the U.S. has resisted signing. But the war in Ukraine may have turned the tide against activists, providing both a trove of combat data and a battlefield for defense tech founders to test on. Currently, companies integrate AI into weapons systems, although they still require a human to make the final decision to kill. 

Meanwhile, Ukrainian officials have pushed for more automation in weapons, hoping it’ll give them a leg-up over Russia. “We need maximum automation,” said Mykhailo Fedorov, Ukraine’s minister of digital transformation, in an interview with The New York Times. “These technologies are fundamental to our victory.”

For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.’s hand. At a UN debate on AI arms last year, a Russian diplomat was notably coy. “We understand that for many delegations the priority is human control,” he said. “For the Russian Federation, the priorities are somewhat different.”

At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to “teach the Navy, teach the DoD, teach Congress” about the potential of AI to “hopefully get us ahead of China.” 

Lonsdale’s and Luckey’s affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent more than $4 million on lobbying this year, according to OpenSecrets.


