Texas AG is investigating Character.AI, other platforms over child safety concerns



Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI — and other platforms that are popular with young people, including Reddit, Instagram and Discord — conform to Texas’ child privacy and safety laws.

The investigation by Paxton, who is often tough on technology companies, will look into whether these platforms complied with two Texas laws: the Securing Children Online through Parental Empowerment, or SCOPE Act, and the Texas Data Privacy and Security Act, or DPSA. 

These laws require platforms to provide parents with tools to manage the privacy settings of their children’s accounts, and hold tech companies to strict consent requirements when collecting data on minors. Paxton claims both of these laws extend to how minors interact with AI chatbots.

“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.

Character.AI, which lets you set up generative AI chatbot characters that you can text and chat with, recently became embroiled in a number of child safety lawsuits. The company’s AI chatbots quickly took off with younger users, but several parents have alleged in lawsuits that Character.AI’s chatbots made inappropriate and disturbing comments to their children.

One Florida case claims that a 14-year-old boy became romantically involved with a Character.AI chatbot and told it he was having suicidal thoughts in the days leading up to his suicide. In another case out of Texas, one of Character.AI’s chatbots allegedly suggested an autistic teenager should try to poison his family. Another parent in the Texas case alleges that one of Character.AI’s chatbots subjected her 11-year-old daughter to sexualized content over the previous two years.

“We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators, and have recently announced we are launching some of the features referenced in the release including parental controls.”

Character.AI on Thursday rolled out new safety features aimed at protecting teens, saying these updates will limit its chatbots from starting romantic conversations with minors. In the last month, the company has also started training a separate model specifically for teen users; one day, it hopes to have adults using one model on its platform while minors use another.

These are just the latest safety updates Character.AI has announced. The same week that the Florida lawsuit became public, the company said it was expanding its trust and safety team, and recently hired a new head for the unit.

Predictably, the issues with AI companionship platforms are arising just as they’re taking off in popularity. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued corner of the consumer internet that it would invest more in. A16z is an investor in Character.AI and continues to invest in other AI companionship startups, recently backing a company whose founder wants to recreate the technology from the movie “Her.”

Reddit, Meta and Discord did not immediately respond to requests for comment.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
