U.S. laws regulating AI prove elusive, but there may be hope



Can the U.S. meaningfully regulate AI? It’s not at all clear yet. Policymakers have made progress in recent months, but they’ve also suffered setbacks that illustrate how difficult it is to place legal guardrails on the technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require companies to disclose details about their AI training.

But the U.S. still lacks a federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to encounter major roadblocks.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed wide-ranging safety and transparency requirements on companies developing AI. Another California bill targeting the distributors of AI deepfakes on social media was stayed this fall pending the outcome of a lawsuit.

There’s reason for optimism, however, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal bills might not have been written with AI in mind, but still apply to AI — like anti-discrimination and consumer protection legislation.

“We often hear about the U.S. being this kind of ‘Wild West’ in comparison to what happens in the EU,” Newman said, “but I think that is overstated, and the reality is more nuanced than that.”

To Newman’s point, the Federal Trade Commission has forced companies that surreptitiously harvested data to delete their AI models, and it is investigating whether sales of AI startups to big tech companies violate antitrust regulations. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal and has floated a rule requiring that AI-generated content in political advertising be disclosed.

President Joe Biden has also attempted to get certain AI rules on the books. Roughly a year ago, Biden signed the AI Executive Order, which props up the voluntary reporting and benchmarking practices many AI companies were already choosing to implement.

One consequence of the executive order was the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. Operating within the National Institute of Standards and Technology, the AISI has research partnerships with major AI labs like OpenAI and Anthropic.

Yet the AISI could be wound down with a simple repeal of Biden’s executive order. In October, a coalition of over 60 organizations called on Congress to enact legislation codifying the AISI before year’s end.

“I think that all of us, as Americans, share an interest in making sure that we mitigate the potential downsides of technology,” AISI director Elizabeth Kelly, who also participated in the panel, said.

So is there hope for comprehensive AI regulation in the States? The failure of SB 1047, which Newman described as a “light touch” bill with input from industry, isn’t exactly encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile technologists like Meta’s chief AI scientist, Yann LeCun.

Even so, Wiener, another Disrupt panelist, said he wouldn’t have drafted the bill any differently, and he’s confident that broad AI regulation will eventually prevail.

“I think it set the stage for future efforts,” he said. “Hopefully, we can do something that can bring more folks together, because the reality all of the large labs have already acknowledged is that the risks [of AI] are real and we want to test for them.”

Indeed, Anthropic last week warned of AI catastrophe if governments don’t implement regulation in the next 18 months.

Opponents have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “totally clueless” and “not qualified” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz released a statement rallying against AI regulations that might affect their financial interests.

Newman posits, though, that pressure to unify the growing state-by-state patchwork of AI rules will ultimately yield a stronger legislative solution. Absent consensus on a model of regulation, state policymakers have introduced close to 700 pieces of AI legislation this year alone.

“My sense is that companies don’t want an environment of a patchwork regulatory system where every state is different,” she said, “and I think there will be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”


