‘Model collapse’: Scientists warn against letting AI eat its own tail

When you see the mythical Ouroboros, it’s perfectly logical to think, “Well, that won’t last.” Swallowing your own tail is a potent symbol, but difficult in practice. The same may be true of AI, which, according to a new study, may be at risk of “model collapse” after a few rounds of being trained on data it generated itself.

In a paper published in Nature, British and Canadian researchers led by Ilia Shumailov at Oxford show that today’s machine learning models are fundamentally vulnerable to a syndrome they call “model collapse.” As they write in the paper’s introduction:

We discover that indiscriminately learning from data produced by other models causes “model collapse” — a degenerative process whereby, over time, models forget the true underlying data distribution …

How does this happen, and why? The process is actually quite easy to understand.

AI models are pattern-matching systems at heart: They learn patterns in their training data, then match prompts to those patterns, filling in the most likely next dots on the line. Whether you ask, “What’s a good snickerdoodle recipe?” or “List the U.S. presidents in order of age at inauguration,” the model is basically just returning the most likely continuation of that series of words. (It’s different for image generators, but similar in many ways.)
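That idea of “returning the most likely continuation” can be seen in miniature with a toy bigram model — a deliberately simplistic sketch, not how production LLMs actually work (they use neural networks over vastly larger contexts), but the pattern-matching principle is the same:

```python
from collections import Counter, defaultdict

# A tiny hypothetical corpus; real models train on trillions of words.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug").split()

# Count which word follows each word: the simplest possible pattern-matcher.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most common continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it follows "the" more often than any other word here
```

Note what the model does *not* do: it never returns the unusual continuation. Given “the,” it always picks “cat,” because that is the most frequent pattern in its data.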

But the thing is, models gravitate toward the most common output. A model won’t give you a controversial snickerdoodle recipe; it will give you the most popular, ordinary one. And if you ask an image generator for a picture of a dog, it won’t give you a rare breed it saw only two pictures of in its training data; you’ll probably get a golden retriever or a Lab.

Now, combine these two things with the fact that the web is being overrun by AI-generated content and that new AI models are likely to be ingesting and training on that content. That means they’re going to see a lot of goldens!

And once they’ve trained on this proliferation of goldens (or middle-of-the-road blogspam, or fake faces, or generated songs), that is their new ground truth. They will think that 90% of dogs really are goldens, and therefore when asked to generate a dog, they will raise the proportion of goldens even higher — until they basically have lost track of what dogs are at all.
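That feedback loop can be sketched as a toy simulation — an illustrative assumption on my part, not the paper’s actual experimental setup. Each “generation” of model is trained only on samples drawn from the previous one, and because sampling tends to drop whatever is rare, the uncommon breeds gradually vanish from the data:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical "true" distribution of dog breeds in the original training data.
true_data = (["golden"] * 60 + ["lab"] * 25
             + ["beagle"] * 10 + ["rare_breed"] * 5)

def train_and_generate(data, n_samples=100):
    """'Train' by memorizing the empirical distribution, then 'generate'
    by sampling from it. Sampling with replacement tends to lose rare
    categories, and once a breed is gone it can never come back."""
    return random.choices(data, k=n_samples)

data = true_data
for generation in range(10):
    data = train_and_generate(data)

# After several generations, the distribution has typically narrowed:
# rare breeds disappear and the common ones take up even more of the data.
print(Counter(data))
```

The key property is that the loss is one-way: a generation that never sees “rare_breed” can never generate it, so diversity only ratchets downward.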

This wonderful illustration from Nature’s accompanying commentary article shows the process visually:

Image Credits: Nature

A similar thing happens with language models and others that, essentially, favor the most common data in their training set for answers — which, to be clear, is usually the right thing to do. It’s not really a problem until it meets up with the ocean of chum that is the public web right now.

Basically, if the models continue eating each other’s data, perhaps without even knowing it, they’ll progressively get weirder and dumber until they collapse. The researchers provide numerous examples and mitigation methods, but they go so far as to call model collapse “inevitable,” at least in theory.
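The same compounding degeneration shows up even with the simplest possible “model” — here, a Gaussian fit to its own samples, a minimal sketch of my own rather than the researchers’ setup. Each refit introduces a small statistical error, and over generations those errors accumulate, typically shrinking the distribution’s spread and losing the tails first:

```python
import random
import statistics

random.seed(1)

# The "true" data distribution: a standard normal.
mu, sigma = 0.0, 1.0

history = [sigma]
for generation in range(30):
    # Each generation trains only on data sampled from the previous model...
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    # ...and its "model" is just the fitted mean and standard deviation.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

# The fitted spread drifts away from the true value of 1.0; on average it
# shrinks, so the tails (the "rare breeds") are the first thing to go.
print(f"fitted sigma after 30 generations: {sigma:.3f}")
```

Any single run is noisy, but the drift is downward on average — which is why the researchers describe the process as degenerative rather than merely lossy.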

Though it may not play out exactly as their experiments suggest, the possibility should worry anyone in the AI space. Diversity and depth of training data are increasingly considered the single most important factor in the quality of a model. If you run out of data, but generating more risks model collapse, does that fundamentally limit today’s AI? If it does begin to happen, how will we know? And is there anything we can do to forestall or mitigate the problem?

The answer to the last question at least is probably yes, although that should not alleviate our concerns.

Qualitative and quantitative benchmarks of data sourcing and variety would help, but we’re far from standardizing those. Watermarks of AI-generated data would help other AIs avoid it, but so far no one has found a suitable way to mark imagery that way (well … I did).

In fact, companies may be disincentivized from sharing this kind of information, and instead hoard all the hyper-valuable original and human-generated data they can, retaining what Shumailov et al. call their “first mover advantage.”

[Model collapse] must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.

… [I]t may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.

Add it to the pile of potentially catastrophic challenges for AI models — and arguments against today’s methods producing tomorrow’s superintelligence.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
