OpenAI pledges to give U.S. AI Safety Institute early access to its next model

OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.

The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.’s AI safety body struck in June — appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team’s safety research in favor of launching new products, ultimately leading to the resignation of the team’s two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI’s compute for its work, but ultimately never received this.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May.

The moves did little to placate some observers, however — particularly after OpenAI staffed the safety commission with all company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another organization.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI “[is] dedicated to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Not for nothing, Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the “safe and secure development and deployment of AI” throughout U.S. critical infrastructure. And OpenAI has dramatically increased its expenditures on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden’s October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.