U.S. laws regulating AI prove elusive, but there may be hope



Can the U.S. meaningfully regulate AI? It’s not at all clear yet. Policymakers have made progress in recent months, but they’ve also suffered setbacks, illustrating how difficult it is to place legal guardrails on the technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require companies to disclose details about their AI training.

But the U.S. still lacks a federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to encounter major roadblocks.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed wide-ranging safety and transparency requirements on companies developing AI. Another California bill, targeting the distributors of AI deepfakes on social media, was stayed this fall pending the outcome of a lawsuit.

There’s reason for optimism, however, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal bills might not have been written with AI in mind, but still apply to AI — like anti-discrimination and consumer protection legislation.

“We often hear about the U.S. being this kind of ‘Wild West’ in comparison to what happens in the EU,” Newman said, “but I think that is overstated, and the reality is more nuanced than that.”

To Newman’s point, the Federal Trade Commission has forced companies that surreptitiously harvested data to delete their AI models, and is investigating whether the sales of AI startups to big tech companies violate antitrust regulation. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal, and has floated a rule requiring that AI-generated content in political advertising be disclosed.

President Joe Biden has also attempted to get certain AI rules on the books. Roughly a year ago, Biden signed the AI Executive Order, which props up the voluntary reporting and benchmarking practices many AI companies were already choosing to implement.

One consequence of the executive order was the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. Operating within the National Institute of Standards and Technology, the AISI has research partnerships with major AI labs like OpenAI and Anthropic.

Yet, the AISI could be wound down with a simple repeal of Biden’s executive order. In October, a coalition of over 60 organizations called on Congress to enact legislation codifying the AISI before year’s end.

“I think that all of us, as Americans, share an interest in making sure that we mitigate the potential downsides of technology,” AISI director Elizabeth Kelly, who also participated in the panel, said.

So is there hope for comprehensive AI regulation in the States? The failure of SB 1047, which Newman described as a “light touch” bill with input from industry, isn’t exactly encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile technologists like Meta’s chief AI scientist, Yann LeCun.

This being the case, Wiener, another Disrupt panelist, said he wouldn’t have drafted the bill any differently — and he’s confident broad AI regulation will eventually prevail.

“I think it set the stage for future efforts,” he said. “Hopefully, we can do something that can bring more folks together, because the reality all of the large labs have already acknowledged is that the risks [of AI] are real and we want to test for them.”

Indeed, Anthropic last week warned of AI catastrophe if governments don’t implement regulation in the next 18 months.

Opponents of regulation have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “totally clueless” and “not qualified” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz released a statement railing against AI regulations that might affect their financial interests.

Newman posits, though, that pressure to unify the growing state-by-state patchwork of AI rules will ultimately yield a stronger legislative solution. Absent consensus on a model of regulation, state policymakers have introduced close to 700 pieces of AI legislation this year alone.

“My sense is that companies don’t want an environment of a patchwork regulatory system where every state is different,” she said, “and I think there will be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
