AI isn’t very good at history, new paper finds

AI might excel at certain tasks like coding or generating a podcast. But it struggles to pass a high-level history exam, a new paper has found.

A team of researchers has created a new benchmark to test three top large language models (LLMs) — OpenAI’s GPT-4, Meta’s Llama, and Google’s Gemini — on historical questions. The benchmark, Hist-LLM, tests the correctness of answers according to the Seshat Global History Databank, a vast database of historical knowledge named after the ancient Egyptian goddess of wisdom. 
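The paper’s evaluation setup is simple to picture: pose a historical question, compare the model’s answer against the ground truth recorded in the Seshat databank, and tally accuracy across the question set. The Python sketch below shows what such a scoring loop could look like; the `questions` records and the `ask_model` callable are hypothetical stand-ins for illustration, not the authors’ actual harness.

```python
# Minimal sketch of a Hist-LLM-style accuracy score. Illustrative only:
# the question records and ask_model helper are hypothetical stand-ins.

def score_benchmark(questions, ask_model):
    """Return the fraction of questions the model answers correctly.

    questions: iterable of dicts with "prompt" and "answer" keys, where
               "answer" is the ground truth drawn from a historical
               database (Seshat, in the paper's case).
    ask_model: callable taking a prompt string and returning the model's
               answer as a string.
    """
    correct = 0
    total = 0
    for q in questions:
        prediction = ask_model(q["prompt"])
        # Normalize both sides so "Yes." and "yes" compare equal.
        if prediction.strip().lower().rstrip(".") == q["answer"].strip().lower():
            correct += 1
        total += 1
    return correct / total if total else 0.0


# Toy usage: a model that always answers "yes" scores 0.0 on two
# questions whose correct answer (per the article's examples) is "no".
questions = [
    {"prompt": "Was scale armor present in Egypt in this period? (yes/no)",
     "answer": "no"},
    {"prompt": "Did Egypt have a professional standing army in this "
               "period? (yes/no)",
     "answer": "no"},
]
print(score_benchmark(questions, lambda prompt: "yes"))  # 0.0
```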

The results, which were presented last month at the high-profile AI conference NeurIPS, were disappointing, according to researchers affiliated with the Complexity Science Hub (CSH), a research institute based in Austria. The best-performing LLM was GPT-4 Turbo, but it only achieved about 46% accuracy — not much higher than random guessing. 

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history. They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task,” said Maria del Rio-Chanona, one of the paper’s co-authors and an associate professor of computer science at University College London.

The researchers shared sample historical questions with TechCrunch that LLMs got wrong. For example, GPT-4 Turbo was asked whether scale armor was present during a specific time period in ancient Egypt. The LLM said yes, but the technology only appeared in Egypt 1,500 years later. 

Why are LLMs bad at answering technical historical questions when they can handle very complicated questions about topics like coding? Del Rio-Chanona told TechCrunch that it’s likely because LLMs tend to extrapolate from prominent historical data, finding it difficult to retrieve more obscure historical knowledge.

For example, the researchers asked GPT-4 if ancient Egypt had a professional standing army during a specific historical period. While the correct answer is no, the LLM answered incorrectly that it did. This is likely because there is lots of public information about other ancient empires, like Persia, having standing armies.

“If you get told A and B 100 times, and C 1 time, and then get asked a question about C, you might just remember A and B and try to extrapolate from that,” del Rio-Chanona said.
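Her analogy maps onto a simple frequency argument: a model that recalls facts roughly in proportion to how often they appeared in training data will almost never surface the rare one. The toy Python snippet below makes that arithmetic concrete; it illustrates the intuition only and is not a model from the paper.

```python
from collections import Counter

# Toy illustration of frequency-biased recall: facts seen often ("A", "B")
# crowd out a fact seen once ("C"). Hypothetical, not the paper's model.
training_signal = ["A"] * 100 + ["B"] * 100 + ["C"] * 1

counts = Counter(training_signal)
total = sum(counts.values())

for fact, n in sorted(counts.items()):
    print(f"P(recall {fact}) ~ {n / total:.3f}")
# P(recall A) ~ 0.498
# P(recall B) ~ 0.498
# P(recall C) ~ 0.005
```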

The researchers also identified other trends, including that the OpenAI and Meta Llama models performed worse on questions about certain regions, such as sub-Saharan Africa, suggesting potential biases in their training data.

The results show that LLMs still aren’t a substitute for humans when it comes to certain domains, said Peter Turchin, who led the study and is a faculty member at CSH. 

But the researchers are still hopeful LLMs can help historians in the future. They’re working on refining their benchmark by including more data from underrepresented regions and adding more complex questions.

“Overall, while our results highlight areas where LLMs need improvement, they also underscore the potential for these models to aid in historical research,” the paper reads.




