UK open to social media ban for kids as gov’t kicks off feasibility study

The U.K. government is not ruling out further beefing up existing online safety rules by adding an Australian-style ban on social media for under-16s, technology secretary Peter Kyle has said.

Back in the summer, the government warned it might toughen laws for tech platforms in the wake of riots perceived to have been fuelled by online disinformation following a knife attack that killed three young girls.

Since then it’s emerged that some of the people prosecuted for rioting were minors — amping up concerns about social media’s influence on impressionable, developing minds.

Speaking to BBC Radio 4’s Today program on Wednesday, Kyle was asked whether the government would ban social media for under-16s. He responded by saying “everything is on the table with me.”

Kyle was being interviewed as the Department for Science, Innovation and Technology (DSIT) set out its priorities for enforcement of the Online Safety Act (OSA), which parliament passed last year.

The OSA targets a grab-bag of online harms, from cyberbullying and hate speech to intimate image abuse, scam ads and animal cruelty, with U.K. legislators saying they want to make the country the safest place in the world to go online. The strongest driver, though, has been child safeguarding, with lawmakers responding to concerns that kids are accessing harmful and inappropriate content.

DSIT’s Statement of Strategic Priorities continues this theme, by putting child safety at the top of the list.

Strategic Priorities for online safety

Here are DSIT’s five priorities for the OSA in full:

1. Safety by design: Embed safety by design to deliver safe online experiences for all users but especially children, tackle violence against women and girls, and work towards ensuring that there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.

2. Transparency and accountability: Ensure industry transparency and accountability from platforms to deliver online safety outcomes, promoting increased trust and expanding the evidence-base to provide safer experiences for users.

3. Agile regulation: Deliver an agile approach to regulation, ensuring the framework is robust in monitoring and tackling emerging harms — such as AI generated content.

4. Inclusivity and resilience: Create an inclusive, informed and vibrant digital world which is resilient to potential harms, including disinformation.

5. Technology and innovation: Foster the innovation of online safety technologies to improve the safety of users and drive growth.

The mention of “illegal disinformation” is interesting, since the last government removed clauses from the bill that had focused on this area, citing freedom of speech concerns. But in the wake of the summer riots the government said it would review OSA powers and could seek to strengthen them in light of social media use during the disorder.

In a ministerial foreword accompanying Wednesday’s statement, Kyle also wrote:

“A particular area of focus for the government is the vast amount of misinformation and disinformation that can be encountered by users online. Platforms should have robust policies and tools in place to minimise this content where it relates to their duties under the Act. Countering misinformation and disinformation is challenging for services, given the need to preserve legitimate debate and free speech online. However, the growing presence of disinformation poses a unique threat to our democratic processes and to societal cohesion in the UK and must be robustly countered. Services should also remain live to emerging information threats, with the flexibility to quickly and robustly respond, and minimise the damaging effects on users, particularly vulnerable groups.”

DSIT’s intervention will steer how Ofcom enforces the law by requiring it to report back on the government’s priorities.

For over a year, Ofcom, the regulator tasked with overseeing Internet platforms’ and services’ compliance with the OSA, has been preparing to implement the law by consulting on and producing detailed guidance in areas such as age verification technology.

Enforcement of the regime is finally expected to start from next spring — when Ofcom will actively take up powers that could lead to fines of up to 10% of global annual turnover for tech firms that fail to meet the law’s duty of care.

“What I want to do is look at the evidence,” Kyle also said on kids and social media, pointing to the simultaneous launch of a “feasibility study” which he said would “look at the areas where evidence is lacking.”

Per DSIT, this study will “explore the effects of smartphone and social media use on children, to help bolster research and strengthen the evidence needed to build a safer online world.”

“There are assumptions about the impact [social media] has on children and young people, but there is no firm, peer reviewed evidence,” Kyle also told the BBC, suggesting that any U.K. ban on kids’ use of social media must be evidence-led.

During the interview with the BBC’s Emma Barnett, Kyle was also pressed on what the government has done to tackle gaps that he had previously suggested the online safety law contained. He responded by flagging a change it’s enacted that requires platforms to be more proactive about tackling intimate image abuse.

Tackling intimate image abuse

In September DSIT announced that it is making sharing intimate images without consent a “priority offence” under the OSA — requiring social media and other in-scope platforms and services to clamp down on the abusive practice or face the risk of big fines.

“The move effectively bumped up the severity of the intimate image abuse sharing offence within the Online Safety Act, so platforms have to be proactive in removing the content and prevent it from appearing in the first place,” DSIT spokesman Glen Mcalpine confirmed.

In further remarks to the BBC, Kyle said the change has meant social media companies must use algorithms to prevent intimate images from being uploaded in the first place.

“They had to proactively demonstrate to our regulator Ofcom that the algorithms would prevent that material going on in the first place. And if an image did appear online they needed to be taken down as fast as reasonably could be expected after being alerted,” he said, warning of “heavy fines” for non-compliance.

“It’s one area where you can see that harm is being prevented, rather than actually getting out into society and then us dealing with it afterwards — which is what was happening before,” he added. “Now, thousands and thousands of women are now protected — prevented from having the degradation, the humiliation, and sometimes being pushed towards suicidal thoughts because of that one power that I enacted.”
