Hundreds of AI luminaries sign letter calling for anti-deepfake legislation

Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While this is unlikely to spur real legislation (despite the House’s new task force), it does act as a bellwether for how experts lean on this controversial issue.

The letter, signed by over 500 people in and adjacent to the AI field at time of publishing, declares that “Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes.”

The signatories call for the full criminalization of deepfake child sexual abuse material (CSAM, also known as child pornography), regardless of whether the figures depicted are real or fictional. They also want criminal penalties in any case where someone creates or spreads harmful deepfakes, and they ask developers to prevent harmful deepfakes from being made with their products in the first place, with penalties if those preventative measures prove inadequate.

Among the more prominent signatories of the letter are:

  • Jaron Lanier
  • Frances Haugen
  • Stuart Russell
  • Andrew Yang
  • Marietje Schaake
  • Steven Pinker
  • Gary Marcus
  • Oren Etzioni
  • Genevieve Smith
  • Yoshua Bengio
  • Dan Hendrycks
  • Tim Wu

Also present are hundreds of academics from across the globe and many disciplines. In case you’re curious, one person from OpenAI signed, a couple from Google DeepMind, and, at press time, none from Anthropic, Amazon, Apple, or Microsoft (except Lanier, whose position there is non-standard). Interestingly, the signatories are sorted in the letter by “Notability.”

This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU’s willingness to deliberate and follow through that prompted these researchers, creators, and executives to speak out.

Or perhaps it is the slow march of KOSA towards acceptance — and its lack of protections for this type of abuse.

Or perhaps it is the threat, already realized as we have seen, of AI-generated scam calls that could sway the election or bilk naive folks out of their money.

Or perhaps it is yesterday’s task force being announced with no particular agenda other than maybe writing a report about what some AI-based threats might be and how they might be legislatively restricted.

As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying “maybe we should, you know, do something?!”

Whether anyone will take notice of this letter is anyone’s guess — hardly anyone paid attention to the infamous one calling for everyone to “pause” AI development, though this letter is a bit more practical. If legislators decide to take on the issue — an unlikely event in an election year with a sharply divided Congress — they will have this list to draw from in taking the temperature of AI’s worldwide academic and development community.

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
