OpenAI pledges to give U.S. AI Safety Institute early access to its next model

OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.

The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.’s AI safety body struck in June — appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team’s safety research in favor of launching new products, ultimately leading to the resignation of the team’s two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI’s compute for its work, but ultimately never received this.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May.

The moves did little to placate some observers, however — particularly after OpenAI staffed the safety commission with all company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another organization.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI “[is] dedicated to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Not for nothing, Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the “safe and secure development and deployment of AI” throughout the U.S.’ critical infrastructure. And OpenAI has dramatically increased its expenditures on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden’s October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security and watermarking synthetic content.


Lisa Holden
