OpenAI pledges to give U.S. AI Safety Institute early access to its next model

OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.

The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.’s AI safety body struck in June — appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team’s safety research in favor of launching new products, ultimately leading to the resignation of the team’s two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI’s compute for its work, but ultimately never received this.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May.

The moves did little to placate some observers, however, particularly after OpenAI staffed the safety commission entirely with company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another org.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI “[is] dedicated to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Notably, Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the “safe and secure development and deployment of AI” throughout the United States’ critical infrastructure. And OpenAI has dramatically increased its federal lobbying spend this year: $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden’s October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
