OpenAI presents its preferred version of AI regulation in a new ‘blueprint’

OpenAI on Monday published what it’s calling an “economic blueprint” for AI: a living document that lays out policies the company thinks it can build on with the U.S. government and its allies.

The blueprint, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, asserts that the U.S. must act to attract billions of dollars in funding for the chips, data, energy, and talent necessary to “win on AI.”

“Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the U.S. government can pave the road for its AI industry to continue the country’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development. The federal government has largely left AI regulation to the states, a situation OpenAI describes in the blueprint as untenable.

In 2024 alone, state lawmakers introduced almost 700 AI-related bills, some of which conflict with others. Texas’ Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has also criticized federal laws already on the books, such as the CHIPS Act, which aimed to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s top chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “[has not] been as effective as any of us hoped,” and that he thinks there’s “a real opportunity” for the Trump administration “to do something much better as a follow-on.”

“The thing I really deeply agree with [Trump] on is, it is wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the U.S. to lead AI. And the U.S. really needs to lead AI.”

To fuel the data centers necessary to develop and run AI, OpenAI’s blueprint recommends “dramatically” increased federal spending on power and data transmission, and meaningful buildout of “new energy sources,” like solar and wind farms and nuclear. OpenAI — along with its AI rivals — has previously thrown its support behind nuclear power projects, arguing that they’re needed to meet the electricity demands of next-generation server farms.

Tech giants Meta and AWS have run into snags with their nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.

In the nearer term, OpenAI’s blueprint proposes that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that enable the sharing of models with allies while “limit[ing]” their export to “adversary nations.” In addition, the blueprint encourages the government to share certain national security-related information, like briefings on threats to the AI industry, with vendors, and to help vendors secure resources to evaluate their models for risks.

“The federal government’s approach to frontier model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on U.S. technology, not technology funded by the Chinese Communist Party.”

OpenAI already counts a few U.S. government departments as partners, and — should its blueprint gain currency among policymakers — stands to add more. The company has deals with the Pentagon for cybersecurity work and other, related projects, and it has teamed up with defense startup Anduril to supply its AI tech to systems the U.S. military uses to counter drone attacks.

In its blueprint, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the U.S. private sector. But the company stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” the blueprint reads.

The Biden administration took a similar tack with its AI Executive Order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, which has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification — and the AISI — at risk of being undone.

OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.

OpenAI, along with many other AI companies, trains models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.

“[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” the blueprint reads. “If the U.S. and like-minded nations don’t address this imbalance through sensible measures that help advance AI for the long-term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. [The government should ensure] that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas.”

It remains to be seen which parts of OpenAI’s blueprint, if any, influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race for a unifying U.S. AI policy.

In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including ex-Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly the chief economist at the Commerce Department under President Joe Biden.

As it makes hires and expands its global affairs division, OpenAI has become more vocal about which AI laws and rules it prefers, for instance throwing its weight behind Senate bills that would establish a federal rule-making body for AI and provide federal scholarships for AI R&D. The company has also opposed bills, in particular California’s SB 1047, arguing that the measure would stifle AI innovation and push out talent.


