UK’s internet watchdog toughens approach to deepfake porn

Ofcom, the U.K.’s internet safety regulator, has published new draft guidance as it continues to implement the Online Safety Act (OSA). The latest set of recommendations aims to help in-scope firms meet their legal obligations to protect women and girls from online threats such as harassment and bullying, misogyny, and intimate image abuse.

The government has said that protecting women and girls is a priority for its implementation of the OSA. Certain forms of (predominantly) misogynist abuse — such as sharing intimate images without consent or using AI tools to create deepfake porn that targets individuals — are explicitly set out in the law as enforcement priorities.

The online safety regulation, which was approved by the U.K. parliament back in September 2023, has faced criticism that it’s not up to the task of reforming platform giants, despite containing substantial penalties for non-compliance — up to 10% of global annual turnover.

Child safety campaigners have also expressed frustration over how long it’s taking to implement the law, as well as doubting whether it will have the desired effect.

In an interview with the BBC in January, even the technology minister Peter Kyle — who inherited the legislation from the previous government — called it “very uneven” and “unsatisfactory.” But the government is sticking with the approach. Part of the discontent around the OSA can be traced back to the long lead time ministers allowed for implementing the regime, which requires parliament to approve Ofcom compliance guidance.

However, enforcement is expected to start to kick in soon in relation to core requirements on tackling illegal content and child protection. Other aspects of OSA compliance will take longer to implement. And Ofcom concedes this latest package of practice recommendations won’t become fully enforceable until 2027 or later.

Approaching the enforcement start line

“The first duties of the Online Safety Act are coming into force next month,” Ofcom’s Jessica Smith, who led development of the female safety-focused guidance, told TechCrunch in an interview. “So we will be enforcing against some of the core duties of the Online Safety Act ahead of this guidance [itself becoming enforceable].”

The new draft guidance on keeping women and girls safe online is intended to supplement earlier broader Ofcom guidance on illegal content — which also, for example, provides recommendations for protecting minors from seeing adult content online.

In December, the regulator published its finalized guidance on how platforms and services should shrink risks related to illegal content, an area where child protection is a clear priority.

It has also previously produced a Children’s Safety Code, which recommends online services dial up age checks and content filtering to ensure kids are not exposed to inappropriate content such as pornography. And as it’s worked toward implementing the online safety regime, it’s also developed recommendations for age assurance technologies for adult content websites, with the aim of pushing porn sites to take effective steps to prevent minors from accessing age-inappropriate content.

The latest set of guidance was developed with input from victims, survivors, women’s advocacy groups, and safety experts, per Ofcom. It covers four major areas where the regulator says women and girls are disproportionately affected by online harm: online misogyny; pile-ons and online harassment; online domestic abuse; and intimate image abuse.

Safety by design

Ofcom’s top-line recommendation urges in-scope services and platforms to take a “safety by design” approach. Smith told us the regulator wants to encourage tech firms to “take a step back” and “think about their user experience in the round.” While she acknowledged some services have put in place some measures that are helpful in shrinking online risks in this area, she argued there’s still a lack of holistic thinking when it comes to prioritizing the safety of women and girls.

“What we’re really asking for is just a sort of step change in how the design processes work,” she told us, saying the goal is to ensure that safety considerations are baked into product design.

She highlighted the rise of image-generating AI services, which she noted have driven “massive” growth in deepfake intimate image abuse, as an example of where technologists could have taken proactive measures to limit the risk of their tools being weaponized against women and girls, yet did not.

“We think that there are sensible things that services could do at the design phase which would help to address the risk of some of those harms,” she suggested.

Examples of “good” industry practice Ofcom highlights in the guidance include online services taking actions such as:

  • Removing geolocation by default (to shrink privacy/stalking risks);
  • Conducting ‘abusability’ testing to identify how a service could be weaponized/misused;
  • Taking steps to boost account security;
  • Designing in user prompts that are intended to make posters think twice before posting abusive content;
  • And offering accessible tools that let users report issues.

As is the case with all of Ofcom’s OSA guidance, not every measure will be relevant for every type or size of service, since the law applies to online services large and small, cutting across arenas from social media to online dating, gaming, forums, and messaging apps, to name a few. So a big part of the work for in-scope companies will be understanding what compliance means in the context of their product.

When asked if Ofcom had identified any services currently meeting the guidance’s standards, Smith suggested they had not. “There’s still a lot of work to do across the industry,” she said.

She also tacitly acknowledged that there may be growing challenges given some of the retrograde steps taken vis-à-vis trust and safety by some major industry players. For example, since taking over Twitter and rebranding the social network as X, Elon Musk has gutted its trust and safety headcount — in favor of pursuing what he has framed as a maximalist approach to free speech.

In recent months, Meta, which owns Facebook and Instagram, appears to have taken similar steps, saying it’s ending third-party fact-checking contracts in favor of deploying an X-style “community notes” system of crowdsourced labeling on content disputes, for example.

Transparency

Smith suggested that Ofcom’s response to such high-level shifts, where operators’ actions could risk dialing up, rather than damping down, online harms, will focus on using transparency and information-gathering powers it wields under the OSA to illustrate impacts and drive user awareness.

So, in short, the tactic here looks set to be ‘name and shame’ — at least in the first instance.

“Once we finalize the guidance, we will produce a [market] report … about who is using the guidance, who is following what steps, what kind of outcomes they’re achieving for their users who are women and girls, and really shine a light on what protections are in place on different platforms so that users can make informed choices about where they spend their time online,” she told us.

Smith suggested that companies wanting to avoid the risk of being publicly shamed for poor performance on women’s safety will be able to turn to Ofcom’s guidance for “practical steps” on how to improve the situation for their users, and address the risk of reputational harm too.

“Platforms that are operating in the UK will have to comply with the UK law,” she added in the context of the discussion on major platforms de-emphasizing trust and safety. “So that means complying with the illegal harms duties and the protection of children duties under the Online Safety Act.”

“I think this is where our transparency powers also come in — if the industry is changing direction and harms are increasing, this is where we will be able to shine a light and share relevant information with UK users, with media, with parliamentarians.”

Tech to tackle deepfake porn

One type of online harm where Ofcom is explicitly beefing up its recommendations, even before it has actively started OSA enforcement, is intimate image abuse: the latest draft guidance recommends the use of hash matching to detect and remove such abusive imagery, whereas earlier Ofcom recommendations did not go that far.

“We’ve included additional steps in this guidance that go beyond what we’ve already set out in our codes,” Smith noted, confirming Ofcom plans to update its earlier codes to incorporate this change “in the near future.”

“So this is a way of saying to platforms that you can get ahead of that enforceable requirement by following the steps that are set down in this guidance,” she added.

Ofcom recommended the use of hash matching technology to counter intimate image abuse due to a substantial increase in this risk, per Smith — especially in relation to AI-generated deepfake image abuse.

“There was more deepfake intimate image abuse reported in 2023 than in all previous years combined,” she noted, adding that Ofcom has also gathered more evidence on the effectiveness of hash matching to tackle this harm.
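The hash matching Smith refers to works by comparing a fingerprint of each uploaded image against a database of fingerprints of previously reported abusive images, so known material can be blocked without the service storing the images themselves. A minimal sketch of the idea follows; the function and hash set are hypothetical, and real deployments (such as StopNCII or PhotoDNA) use perceptual hashes that survive resizing and re-encoding, rather than the byte-exact SHA-256 used here for illustration:

```python
import hashlib

# Hypothetical store of fingerprints of previously reported images.
# Only hashes are retained, not the images themselves.
known_abuse_hashes = {
    hashlib.sha256(b"previously-reported-image-bytes").hexdigest(),
}

def matches_known_abuse(image_bytes: bytes) -> bool:
    """Return True if this exact image has been reported before.

    Note: SHA-256 only catches byte-identical copies; production systems
    use perceptual hashing so altered re-uploads still match.
    """
    return hashlib.sha256(image_bytes).hexdigest() in known_abuse_hashes
```

In practice, a re-uploaded copy of a reported image matches the stored fingerprint and can be removed automatically, while unrelated images pass through unflagged.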

The draft guidance as a whole will now undergo consultation — with Ofcom inviting feedback until May 23, 2025 — after which it will produce final guidance by the end of this year.

A full 18 months after that, Ofcom will then produce its first report reviewing industry practice in this area.

“We’re getting into 2027 before we’re producing our first report on who’s doing what [to protect women and girls online] — but there’s nothing to stop platforms acting now,” she added.

Responding to criticism that Ofcom is taking too long to implement the OSA, she said it’s right that the regulator consults on compliance measures. But with the first measures taking effect next month, she noted that Ofcom anticipates a shift in the conversation surrounding the issue, too.

“[T]hat will really start to change the conversation with platforms, in particular,” she predicted, adding that it will also be in a position to start demonstrating progress on moving the needle when it comes to reducing online harms.

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
