Meta’s new AI deepfake playbook: More labels, fewer takedowns

Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board. Starting next month, the company said, it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

The move could lead to the social networking giant labelling more pieces of content that have the potential to be misleading — important in a year of many elections taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question has “industry standard AI image indicators,” or where the uploader has disclosed it’s AI-generated content.

AI-generated content that falls outside those bounds will, presumably, go unlabelled.

The policy change is also likely to lead to more AI-generated content and manipulated media remaining on Meta’s platforms, since it’s shifting to favor an approach focused on “providing transparency and additional context,” as the “better way to address this content” (rather than removing manipulated media, given associated risks to free speech).

So, for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook appears to be: more labels, fewer takedowns.

Meta said it will stop removing content solely on the basis of its current manipulated video policy in July, adding in a blog post published Friday that: “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”

The change of approach may be intended to respond to rising legal demands on Meta around content moderation and systemic risk, such as the European Union’s Digital Services Act. Since last August, the EU law has applied to Meta’s two main social networks, requiring the company to walk a fine line between purging illegal content, mitigating systemic risks and protecting free speech. The bloc is also applying extra pressure on platforms ahead of elections to the European Parliament this June, including urging tech giants to watermark deepfakes where technically feasible.

The upcoming US presidential election in November is also likely on Meta’s mind.

Oversight Board criticism

The Oversight Board, which Meta funds but permits to run at arm’s length, reviews a tiny percentage of the company’s content moderation decisions but can also make policy recommendations. Meta is not bound to accept the Board’s suggestions, but in this instance it has agreed to amend its approach.

In a blog post published Friday, Monika Bickert, Meta’s VP of content policy, said the company is amending its policies on AI-generated content and manipulated media based on the Board’s feedback. “We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” she wrote.

Back in February, the Oversight Board urged Meta to rethink its approach to AI-generated content after taking on the case of a doctored video of President Biden which had been edited to imply a sexual motive to a platonic kiss he gave his granddaughter.

While the Board agreed with Meta’s decision to leave the specific content up, it attacked the company’s policy on manipulated media as “incoherent” — pointing out, for example, that it only applies to video created through AI, letting other fake content (such as more basically doctored video or audio) off the hook.

Meta appears to have taken the critical feedback on board.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Bickert wrote. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

“The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a ‘less restrictive’ approach to manipulated media like labels with context.”

Earlier this year, Meta announced it was working with others in the industry on developing common technical standards for identifying AI content, including video and audio. It’s leaning on that effort to expand labelling of synthetic media now.

“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Bickert, noting the company already applies ‘Imagined with AI’ labels to photorealistic images created using its own Meta AI feature.
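The “industry-shared signals” Bickert refers to are provenance markers such as C2PA Content Credentials manifests and the IPTC `DigitalSourceType` metadata that several image generators now embed. As a minimal sketch of what marker-based detection could look like — this is an assumption for illustration, not Meta’s actual pipeline, which parses the metadata structures properly rather than scanning raw bytes:

```python
# Minimal sketch (assumed for illustration, not Meta's real detector):
# scan an image file's raw bytes for well-known AI-provenance markers.
# Production systems would parse the C2PA manifest or XMP/IPTC metadata
# properly and verify signatures rather than substring-match.

AI_MARKERS = [
    b"c2pa",                      # C2PA / Content Credentials manifest label
    b"trainedAlgorithmicMedia",   # IPTC DigitalSourceType value for generative AI
]


def has_ai_indicator(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the file bytes."""
    return any(marker in image_bytes for marker in AI_MARKERS)
```

A file carrying either marker would qualify for the “Made with AI” label under the first prong of Meta’s stated criteria; content stripped of metadata would only be caught via the second prong, self-disclosure by the uploader.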

The expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling”, per Bickert.

“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

Meta said it won’t remove manipulated content — whether AI-based or otherwise doctored — unless it violates other policies (such as voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.

Meta’s blog post highlights a network of nearly 100 independent fact-checkers which it says it’s engaged with to help identify risks related to manipulated content.

These external entities will continue to review false and misleading AI-generated content, per Meta. When they rate content as “False or Altered” Meta said it will respond by applying algorithm changes that reduce the content’s reach — meaning stuff will appear lower in Feeds so fewer people see it, in addition to Meta slapping an overlay label with additional information for those eyeballs that do land on it.
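The demote-and-label behaviour described above can be sketched as feed-ranking logic. All names and the penalty value here are hypothetical, invented for illustration — Meta has not published its ranking internals:

```python
# Hypothetical sketch of demote-and-label handling for fact-checked posts.
# The Post fields, penalty multiplier, and function names are all invented
# for illustration; they are not Meta's actual ranking system.
from dataclasses import dataclass
from typing import List, Optional

FACT_CHECK_PENALTY = 0.1  # assumed demotion multiplier for rated content


@dataclass
class Post:
    post_id: str
    base_score: float
    fact_check_rating: Optional[str] = None  # e.g. "False or Altered"


def rank_feed(posts: List[Post]) -> List[Post]:
    """Sort posts by score, demoting those rated 'False or Altered'."""
    def score(post: Post) -> float:
        if post.fact_check_rating == "False or Altered":
            return post.base_score * FACT_CHECK_PENALTY
        return post.base_score
    return sorted(posts, key=score, reverse=True)


def needs_overlay(post: Post) -> bool:
    """Rated posts still shown in feed get an informational overlay label."""
    return post.fact_check_rating == "False or Altered"
```

The key design point matching Meta’s description: rated content is demoted and labelled rather than removed, so it stays retrievable but reaches fewer people.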

These third-party fact-checkers look set to face an increasing workload as synthetic content proliferates, driven by the boom in generative AI tools — especially since more of that content will remain on Meta’s platforms as a result of this policy shift.


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
