Should AI Bots Do Science?

Cong Lu has long been fascinated by how to use technology to make his job as a research scientist more efficient. But his latest project takes the idea to an extreme.

Lu, who is a postdoctoral research and teaching fellow at the University of British Columbia, is part of a team building an “AI Scientist” with the ambitious goal of creating an AI-powered system that can autonomously do every step of the scientific method.

“The AI Scientist automates the entire research lifecycle, from generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript,” says a write-up on the project’s website. The AI system even attempts a “peer review” of the research paper, which essentially brings in another chatbot to check the work of the first.
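
The write-up doesn’t spell out how those stages connect in code, but the lifecycle it describes amounts to a staged pipeline with a second model reviewing the first’s output. Here is a minimal Python sketch of that flow; every function is a stub and every name is hypothetical, not the project’s actual structure:

```python
# Hypothetical sketch of the lifecycle described above; each stage is a
# stub standing in for what is, in practice, an LLM call or code execution.
def generate_idea(topic: str) -> str:
    return f"a novel idea about {topic}"      # in practice: an LLM proposes an idea

def write_experiment_code(idea: str) -> str:
    return "print('running experiment')"      # in practice: LLM-written code

def execute_experiments(code: str) -> dict:
    return {"accuracy": 0.0}                  # in practice: run the code, collect metrics

def write_manuscript(idea: str, results: dict) -> str:
    return f"Paper: {idea}; results: {results}"

def peer_review(paper: str) -> str:
    return "review score: n/a"                # a second model critiques the first's draft

def run_ai_scientist(topic: str) -> tuple[str, str]:
    idea = generate_idea(topic)
    results = execute_experiments(write_experiment_code(idea))
    paper = write_manuscript(idea, results)
    return paper, peer_review(paper)

print(run_ai_scientist("sample-efficient RL"))
```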

An initial version of this AI Scientist has already been released — anyone can download the code for free. And plenty of people have. It did the coding equivalent of going viral, with more than 7,500 people starring the project on the code-hosting platform GitHub.

To Lu, the goal is to accelerate scientific discovery by letting every scientist effectively add Ph.D.-level assistants to quickly push boundaries, and to “democratize” science by making it easier to conduct research.

“If we scale up this system, it could be one of the ways that we truly scale scientific discovery to thousands of underfunded areas,” he says. “A lot of times the bottleneck is on good personnel and years of training. What if we could deploy hundreds of scientists on your pet problems and have a go at it?”

But he admits there are plenty of challenges to the approach — such as preventing the AI systems from “hallucinating,” as generative AI in general is prone to do.

And if it works, the project raises a host of existential questions about what role human researchers — the workforce that powers much of higher education — would play in the future.

The project comes at a moment when other scientists are raising concerns about the role of AI in research.

A paper out this month, for instance, found that AI chatbots are already being used to create fabricated research papers that are showing up in Google Scholar, often on contentious topics like climate research.

And as tech firms continue to release more-powerful chatbots to the public — like the new version of ChatGPT put out by OpenAI this month — prominent AI experts are raising fresh concerns that AI systems could leap over guardrails in ways that threaten global safety. After all, the flip side of “democratizing research” could be a greater risk of weaponizing science.

It turns out the bigger question may be whether the latest AI technology is even capable of making novel scientific breakthroughs by automating the scientific process, or whether there is something uniquely human about the endeavor.

Checking for Errors

The field of machine learning — the only field the AI Scientist tool is designed for so far — may be uniquely suited for automation.

For one thing, it is highly structured. And even when humans do the research, all of the work happens on a computer.

“For anything that requires a wet lab or hands-on stuff, we’ve still got to wait for our robotic assistants to show up,” Lu says.

But the researcher says that pharmaceutical companies have already done significant work to automate the process of drug discovery, and he believes AI could take those measures further.

One practical challenge for the AI Scientist project has been avoiding AI hallucinations. Lu says that because large language models generate each successive character or “token” based on probabilities derived from their training data, such systems can introduce errors when copying data. The AI Scientist might enter 7.1, for instance, when the correct number in a dataset was 9.2, he says.

To prevent that, his team is using a non-AI system when moving some data and having the system “rigorously check through all of the numbers” to detect any errors and correct them. A second version of the system, which the team expects to release later this year, will be more accurate than the current one at handling data, he says.
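
Lu doesn’t detail the implementation, but the underlying idea — move numbers with ordinary code rather than letting the model retype them, then programmatically re-check the generated text against the source data — can be sketched in a few lines. Everything below is illustrative: the function names and the “name = value” reporting format are assumptions, not the AI Scientist’s actual code:

```python
import math
import re

def verify_reported_numbers(manuscript: str,
                            source_metrics: dict[str, float],
                            tol: float = 1e-9) -> list[str]:
    """Flag any metric whose value in the generated text drifts from the source data."""
    problems = []
    for name, true_value in source_metrics.items():
        # Look for patterns like "accuracy = 9.2" in the draft text.
        match = re.search(rf"{re.escape(name)}\s*=\s*([-+]?\d*\.?\d+)", manuscript)
        if match is None:
            problems.append(f"{name}: not reported")
        elif not math.isclose(float(match.group(1)), true_value, abs_tol=tol):
            problems.append(f"{name}: reported {match.group(1)}, expected {true_value}")
    return problems

# The metrics themselves are copied with plain code, never retyped by the model.
metrics = {"accuracy": 9.2}
draft = "Our method achieved accuracy = 7.1 on the benchmark."  # an LLM transcription slip
print(verify_reported_numbers(draft, metrics))
# -> ['accuracy: reported 7.1, expected 9.2']
```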

Even in the current version, the project’s website boasts that the AI Scientist can carry out research far more cheaply than human Ph.D.s can, estimating that a research paper can be created — from idea generation to writing and peer review — for about $15 in computing costs.

Does Lu worry that the system will put researchers like himself out of work?

“With the current capabilities of AI systems, I don’t think so,” says Lu. “I think right now it’s mainly an extremely powerful research assistant that can help you take the first steps and early explorations on all the ideas that you never had time for, or even help you brainstorm and investigate a few ideas on a new topic for you.”

Down the road, if the tool improves, Lu admits it could raise tougher questions about the role of human researchers. In that scenario, though, research would hardly be the only thing transformed by advanced AI tools. For now, he sees the system as what he calls a “force multiplier.”

“It’s just like how code assistants now let anyone very simply code up a mobile game app or a new website,” he says.

The project’s leaders have put guardrails on the kinds of projects the system can attempt, to prevent it from becoming an AI mad scientist.

“We don’t really want loads of new viruses or lots of different ways to make bombs,” he says.

And they’ve limited the AI Scientist to running a maximum of two or three hours at a time, he says, “so we have control of it,” noting that there’s only so much “havoc it could wreak in that time.”
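
Lu doesn’t say how the cap is enforced, but a hard wall-clock limit is simple to impose from the outside. Here is one minimal sketch using Python’s standard subprocess timeout; the entry-point command shown is illustrative, not the project’s documented interface:

```python
import subprocess

# Hypothetical sketch of a hard runtime cap; the actual AI Scientist
# entry point and flags may differ from the command shown here.
MAX_SECONDS = 3 * 60 * 60  # the "two or three hours" Lu describes

try:
    subprocess.run(
        ["python", "launch_scientist.py", "--experiment", "demo"],  # illustrative command
        timeout=MAX_SECONDS,  # the child process is killed once the budget is spent
    )
except subprocess.TimeoutExpired:
    print("Run exceeded its time budget and was terminated.")
```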

Multiplying Bad Science?

As the use of AI tools spreads rapidly, some scientists worry that they could be used to actually hinder scientific progress by flooding the web with fabricated papers.

When researcher Jutta Haider, a professor of librarianship, information, education and IT at the Swedish School of Library and Information Science, went looking on Google Scholar for papers with AI-fabricated results, she was surprised at how many she found.

“They were really badly produced ones,” she explains, noting that the papers were clearly not written by a human. “Just simple proofreading should have eliminated those.”

She says she expects there are many more AI-fabricated papers that her team did not detect. “It’s the tip of the iceberg,” she says. And as AI grows more sophisticated, it will become increasingly difficult to tell whether something was written by a human or by an AI.

One problem, she says, is that it is easy to get a paper listed in Google Scholar. If you are not a researcher yourself, it can be difficult to tell reputable journals and articles from those created by bad actors trying to spread misinformation, or by authors padding their CVs with fabricated work in the hope that no one checks where it is published.

“Because of the publish-or-perish paradigm that rules academia, you can’t make a career without publishing a lot,” Haider says. “But some of the papers are really bad, so nobody will probably make a career with those ones that we found.”

She and her colleagues are calling on Google to do more to scan for AI-fabricated articles and other junk science. “What I really recommend Google Scholar do is hire a team of librarians to figure out how to change it,” she adds. “It isn’t transparent. We don’t know how it populates the index.”

EdSurge reached out to Google officials but got no response.

Lu, of the AI Scientist project, says that junk science papers have been a problem for a while, and he shares the concern that AI could make the phenomenon more pervasive. “We recommend whenever you run the AI Scientist system, that anything that is AI-generated should be watermarked so it is verifiably AI-generated and it cannot be passed off as a real submission,” he says.
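
Production-grade LLM watermarking schemes operate statistically at the token level, but the “verifiably AI-generated” property Lu describes can be illustrated with a much simpler signed provenance tag. This sketch is purely illustrative, not how the AI Scientist implements the recommendation:

```python
import hashlib
import hmac

# Hypothetical sketch of verifiable provenance for generated manuscripts.
# Real LLM watermarks are statistical and harder to strip; this simple tag
# just makes the document's origin checkable by anyone holding the key.
SECRET_KEY = b"lab-provenance-key"  # placeholder, not a real deployment secret

def tag_as_ai_generated(manuscript: str) -> str:
    sig = hmac.new(SECRET_KEY, manuscript.encode(), hashlib.sha256).hexdigest()
    return f"{manuscript}\n\n[AI-GENERATED | signature: {sig}]"

def verify_tag(tagged: str) -> bool:
    body, _, footer = tagged.rpartition("\n\n[AI-GENERATED | signature: ")
    if not footer.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(footer[:-1], expected)

paper = tag_as_ai_generated("Abstract: We study ...")
print(verify_tag(paper))  # True
```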

And he hopes that AI can actually be used to help scan existing research — whether written by humans or bots — to ferret out problematic work.

But Is It Science?

While Lu says the AI Scientist has already produced some useful results, it remains unclear whether the approach can lead to novel scientific breakthroughs.

“AI bots are really good thieves in many ways,” he says. “They can copy anyone’s art style. But could they invent a new art style that hasn’t been seen before? It’s hard to say.”

He says there is a debate in the scientific community about whether major discoveries come from a pastiche of ideas over time or involve unique acts of human creativity and genius.

“For instance, were Einstein’s ideas new, or were those ideas in the air at the time?” he wonders. “Often the right idea has been staring us in the face the whole time.”

The consequences of the AI Scientist will hinge on that philosophical question.

Haider, the Swedish scholar, isn’t worried about AI ever usurping her job.

“There’s no point for AI to be doing science,” she says. “Science comes from a human need to understand — an existential need to want to understand — the world.”

“Maybe there will be something that mimics science,” she concludes, “but it’s not science.”


