Why DeepSeek’s new AI model thinks it’s ChatGPT


Earlier this week, DeepSeek, a well-funded Chinese AI lab, released an “open” AI model that beats many rivals on popular benchmarks. The model, DeepSeek V3, is large but efficient, handling text-based tasks like coding and writing essays with ease.

It also seems to think it’s ChatGPT.

Posts on X — and TechCrunch’s own tests — show that DeepSeek V3 identifies itself as ChatGPT, OpenAI’s AI-powered chatbot platform. Asked to elaborate, DeepSeek V3 insists it is a version of OpenAI’s GPT-4 model released in 2023.

The delusions run deep. If you ask DeepSeek V3 a question about DeepSeek’s API, it’ll give you instructions on how to use OpenAI’s API. DeepSeek V3 even tells some of the same jokes as GPT-4 — down to the punchlines.

So what’s going on?

Models like ChatGPT and DeepSeek V3 are statistical systems. Trained on billions of examples, they learn patterns in those examples to make predictions — like how “to whom” in an email typically precedes “it may concern.”
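That pattern-completion idea can be illustrated with a toy bigram model — a deliberately simplified sketch, nothing like a real large language model: count which word follows which in the training text, then predict the most frequent continuation.

```python
from collections import Counter, defaultdict

# Toy training corpus: the kind of email boilerplate a model
# sees many times over.
corpus = (
    "to whom it may concern . "
    "to whom it may concern . "
    "to whom this letter finds . "
).split()

# Count how often each word follows each preceding word (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("whom"))  # "it" — the most frequent follower of "whom"
```

A real model predicts over tens of thousands of tokens with billions of parameters, but the core mechanic is the same: frequent patterns in the training data become the model's default outputs — which is also why memorized text can resurface verbatim.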

DeepSeek hasn’t revealed much about the source of DeepSeek V3’s training data. But there’s no shortage of public datasets containing text generated by GPT-4 via ChatGPT. If DeepSeek V3 was trained on these, the model might’ve memorized some of GPT-4’s outputs and is now regurgitating them verbatim.

“Obviously, the model is seeing raw responses from ChatGPT at some point, but it’s not clear where that is,” Mike Cook, a research fellow at King’s College London specializing in AI, told TechCrunch. “It could be ‘accidental’ … but unfortunately, we have seen instances of people directly training their models on the outputs of other models to try and piggyback off their knowledge.”

Cook noted that the practice of training models on outputs from rival AI systems can be “very bad” for model quality, because it can lead to hallucinations and misleading answers like the above. “Like taking a photocopy of a photocopy, we lose more and more information and connection to reality,” Cook said.

It might also be against those systems’ terms of service.

OpenAI’s terms prohibit users of its products, including ChatGPT customers, from using outputs to develop models that compete with OpenAI’s own.

OpenAI and DeepSeek didn’t immediately respond to requests for comment. However, OpenAI CEO Sam Altman posted what appeared to be a dig at DeepSeek and other competitors on X Friday.

“It is (relatively) easy to copy something that you know works,” Altman wrote. “It is extremely hard to do something new, risky, and difficult when you don’t know if it will work.”

Granted, DeepSeek V3 is far from the first model to misidentify itself. Google’s Gemini and others sometimes claim to be competing models. Prompted in Mandarin, for example, Gemini says that it’s Wenxin Yiyan, the chatbot from Chinese company Baidu.

And that’s because the web, which is where AI companies source the bulk of their training data, is becoming littered with AI slop. Content farms are using AI to create clickbait. Bots are flooding Reddit and X. By one estimate, 90% of the web could be AI-generated by 2026.

This “contamination,” if you will, has made it quite difficult to thoroughly filter AI outputs from training datasets.
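To see why, consider a crude, hypothetical filter based on phrase matching (the phrase list and function names here are invented for illustration): it catches only text that announces itself as AI-generated, and misses everything else.

```python
# A deliberately naive contamination filter. It flags documents
# containing telltale self-identifying phrases — which is exactly
# the weakness: AI-generated text that doesn't announce itself
# sails straight through.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i am chatgpt",
    "i was trained by openai",
]

def looks_ai_generated(doc: str) -> bool:
    lowered = doc.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

docs = [
    "As an AI language model, I cannot browse the web.",
    "The quarterly report shows revenue grew 12%.",
]
kept = [d for d in docs if not looks_ai_generated(d)]
```

Real data-cleaning pipelines use classifiers, deduplication, and provenance signals rather than phrase lists, but even those can't reliably separate fluent AI output from fluent human writing at web scale.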

It’s certainly possible that DeepSeek trained DeepSeek V3 directly on ChatGPT-generated text. Google was once accused of doing the same, after all.

Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, said the cost savings from “distilling” an existing model’s knowledge can be attractive to developers, regardless of the risks.
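For intuition: distillation typically trains a "student" model to mimic a "teacher" model's output distribution rather than learning from raw data alone. A minimal sketch of the quantity such a loss minimizes, with invented numbers:

```python
import math

# Toy distillation illustration. The teacher's next-token probability
# distribution is the training target; the student is nudged to match it.
# All numbers here are made up for illustration.
teacher_probs = [0.70, 0.20, 0.10]   # teacher's distribution over 3 tokens
student_probs = [0.60, 0.25, 0.15]   # student's current distribution

# KL divergence from student to teacher — a standard distillation loss
# term. It is zero only when the two distributions match exactly.
kl = sum(t * math.log(t / s) for t, s in zip(teacher_probs, student_probs))
```

The appeal Khlaaf describes is economic: matching a strong teacher's distributions is far cheaper than gathering and curating the data needed to train that behavior from scratch.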

“Even with internet data now brimming with AI outputs, other models that would accidentally train on ChatGPT or GPT-4 outputs would not necessarily demonstrate outputs reminiscent of OpenAI customized messages,” Khlaaf said. “If it is the case that DeepSeek carried out distillation partially using OpenAI models, it would not be surprising.”

More likely, however, is that a lot of ChatGPT/GPT-4 data made its way into the DeepSeek V3 training set. That means the model can’t be trusted to self-identify, for one. But what is more concerning is the possibility that DeepSeek V3, by uncritically absorbing and iterating on GPT-4’s outputs, could exacerbate some of the model’s biases and flaws.



Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
