Silicon Valley is debating whether AI weapons should be allowed to decide to kill

In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous — meaning an AI algorithm would make the final decision to kill someone. “Congress doesn’t want that,” the defense tech founder told TechCrunch. “No one wants that.” 

But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons — or at least a heavy skepticism of arguments against them. The U.S.’s adversaries “use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” 

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn’t mean that robots should be programmed to kill people on their own, just that he was concerned about “bad people using bad AI.”

In the past, Silicon Valley has erred on the side of caution. Take it from Luckey’s cofounder, Trae Stephens. “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.” 

The Anduril spokesperson denied any dissonance between Luckey’s and Stephens’ perspectives, and said that Stephens didn’t mean that a human should always make the call, but just that someone must be accountable. 

To be fair, the stance of the U.S. government itself is similarly ambiguous. The U.S. military currently does not purchase fully autonomous weapons, but it neither bans companies from making them nor explicitly bans them from selling such weapons to foreign countries. Last year, the U.S. released updated guidelines for AI safety in the military that many U.S. allies have endorsed and that require top military officials to approve any new autonomous weapon. Yet the guidelines are voluntary (Anduril said it is committed to following them), and U.S. officials have repeatedly said it’s “not the right time” to consider any binding ban on autonomous weapons. 

Last month, Palantir cofounder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no question at all. He instead presented a hypothetical in which China has embraced AI weapons, but the U.S. has to “press the button every time it fires.” He encouraged policymakers to embrace a more flexible approach to how much AI goes into weapons. 

“You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I’m a staffer who’s never played this game before,” he said. “I could destroy us in the battle.” 

When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn’t be the ones setting the agenda on lethal AI. “The key context to what I was saying is that our companies don’t make the policy, and don’t want to make the policy: it’s the job of elected officials to make the policy,” he said. “But they do need to educate themselves on the nuance to do a good job.” 

He also reiterated a willingness to consider more autonomy in weapons. “It’s not a binary as you suggest — ‘fully autonomous or not’ isn’t the correct policy question. There’s a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do,” he said. “Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what’s necessary to win with American lives on the line.”

Activists and human rights groups have long tried and failed to establish international bans on autonomous lethal weapons — bans that the U.S. has resisted signing. But the war in Ukraine may have turned the tide against activists, providing both a trove of combat data and a battlefield for defense tech founders to test on. Currently, companies integrate AI into weapons systems, although they still require a human to make the final decision to kill. 

Meanwhile, Ukrainian officials have pushed for more automation in weapons, hoping it’ll give them a leg up over Russia. “We need maximum automation,” said Mykhailo Fedorov, Ukraine’s minister of digital transformation, in an interview with The New York Times. “These technologies are fundamental to our victory.”

For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.’s hand. At a UN debate on AI arms last year, a Russian diplomat was notably coy. “We understand that for many delegations the priority is human control,” he said. “For the Russian Federation, the priorities are somewhat different.”

At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to “teach the Navy, teach the DoD, teach Congress” about the potential of AI to “hopefully get us ahead of China.” 

Lonsdale’s and Luckey’s affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets. 

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
