NIST releases a tool for testing AI model risk

The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, companies and the broader public, has re-released a testbed designed to measure how malicious attacks — particularly attacks that “poison” AI model training data — might degrade the performance of an AI system.
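Dioptra itself is a full web-based testbed, but the failure mode it measures, training-data poisoning, can be illustrated with a toy sketch. The example below is not Dioptra code and every name in it is illustrative: it trains a 1-nearest-neighbor classifier on two synthetic clusters, randomly flips a fraction of the training labels the way an attacker with write access to the dataset might, and compares test accuracy before and after.

```python
import random

random.seed(0)

def make_data(n=200):
    """Two well-separated 2-D clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    X, y = [], []
    for i in range(n):
        label = i % 2
        c = 0.0 if label == 0 else 4.0
        X.append((c + random.gauss(0, 1), c + random.gauss(0, 1)))
        y.append(label)
    return X, y

def nn_predict(X_train, y_train, p):
    """1-nearest-neighbor prediction: the label of the closest training point."""
    best = min(range(len(X_train)),
               key=lambda i: (p[0] - X_train[i][0]) ** 2 + (p[1] - X_train[i][1]) ** 2)
    return y_train[best]

def accuracy(X_train, y_train, X_test, y_test):
    hits = sum(nn_predict(X_train, y_train, p) == t for p, t in zip(X_test, y_test))
    return hits / len(y_test)

X_train, y_train = make_data()
X_test, y_test = make_data()

clean_acc = accuracy(X_train, y_train, X_test, y_test)

# Poisoning attack: flip 40% of the training labels at random,
# leaving the feature vectors untouched.
y_poisoned = [1 - t if random.random() < 0.4 else t for t in y_train]
poisoned_acc = accuracy(X_train, y_poisoned, X_test, y_test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

With clusters this far apart, the clean model is near-perfect, while random label flips drag accuracy down toward the flip rate. Quantifying exactly this kind of degradation, on real models and real attack suites rather than a toy classifier, is what NIST says Dioptra is for.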

Called Dioptra (after the classical astronomical and surveying instrument), the modular, open source web-based tool, first released in 2022, seeks to help companies training AI models — and the people using these models — assess, analyze and track AI risks. Dioptra can be used to benchmark and research models, NIST says, as well as to provide a common platform for exposing models to simulated threats in a “red-teaming” environment.

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra,” NIST wrote in a press release. “The open source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance.”

A screenshot of Dioptra’s interface.
Image Credits: NIST

Dioptra debuted alongside documents from NIST and NIST’s recently created AI Safety Institute that lay out ways to mitigate some of the dangers of AI, like how it can be abused to generate nonconsensual pornography. It follows the launch of the U.K. AI Safety Institute’s Inspect, a toolset similarly aimed at assessing the capabilities of models and overall model safety. The U.S. and U.K. have an ongoing partnership to jointly develop advanced AI model testing, announced at the U.K.’s AI Safety Summit in Bletchley Park in November of last year. 

Dioptra is also the product of President Joe Biden’s executive order (EO) on AI, which mandates (among other things) that NIST help with AI system testing. The EO, relatedly, also establishes standards for AI safety and security, including requirements for companies developing models (e.g. Apple) to notify the federal government and share results of all safety tests before they’re deployed to the public.

As we’ve written about before, AI benchmarks are hard — not least because the most sophisticated AI models today are black boxes whose infrastructure, training data and other key details are kept under wraps by the companies creating them. A report out this month from the Ada Lovelace Institute, a U.K.-based nonprofit research institute that studies AI, found that evaluations alone aren’t sufficient to determine the real-world safety of an AI model, in part because current policies allow AI vendors to selectively choose which evaluations to conduct.

NIST doesn’t assert that Dioptra can completely de-risk models. But the agency does propose that Dioptra can shed light on which sorts of attacks might make an AI system perform less effectively, and quantify that impact on performance.

In a major limitation, however, Dioptra only works out-of-the-box on models that can be downloaded and used locally, like Meta’s expanding Llama family. Models gated behind an API, such as OpenAI’s GPT-4o, are a no-go — at least for the time being.


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
