Texas AG is investigating Character.AI, other platforms over child safety concerns



Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI — and other platforms that are popular with young people, including Reddit, Instagram and Discord — conform to Texas’ child privacy and safety laws.

The investigation by Paxton, who is often tough on technology companies, will look into whether these platforms complied with two Texas laws: the Securing Children Online through Parental Empowerment, or SCOPE Act, and the Texas Data Privacy and Security Act, or DPSA. 

These laws require platforms to provide parents with tools to manage the privacy settings of their children’s accounts, and hold tech companies to strict consent requirements when collecting data on minors. Paxton claims both of these laws extend to how minors interact with AI chatbots.

“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.

Character.AI, which lets users create generative AI chatbot characters they can text and chat with, recently became embroiled in a number of child safety lawsuits. The company’s AI chatbots quickly took off with younger users, but several parents have alleged in lawsuits that Character.AI’s chatbots made inappropriate and disturbing comments to their children.

One Florida case claims that a 14-year-old boy became romantically involved with a Character.AI chatbot, and told it he was having suicidal thoughts in the days leading up to taking his own life. In another case out of Texas, one of Character.AI’s chatbots allegedly suggested an autistic teenager should try to poison his family. Another parent in the Texas case alleges one of Character.AI’s chatbots subjected her 11-year-old daughter to sexualized content over the past two years.

“We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators, and have recently announced we are launching some of the features referenced in the release including parental controls.”

Character.AI on Thursday rolled out new safety features aimed at protecting teens, saying these updates will limit its chatbots from starting romantic conversations with minors. The company has also spent the past month training a new model specifically for teen users — eventually, it hopes to have adults using one model on its platform while minors use another.

These are just the latest safety updates Character.AI has announced. The same week that the Florida lawsuit became public, the company said it was expanding its trust and safety team, and recently hired a new head for the unit.

Predictably, the issues with AI companionship platforms are arising just as they’re taking off in popularity. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued corner of the consumer internet that it would invest more in. A16z is an investor in Character.AI and continues to invest in other AI companionship startups, recently backing a company whose founder wants to recreate the technology from the movie “Her.”

Reddit, Meta and Discord did not immediately respond to requests for comment.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
