New AI Tools Are Promoted As Study Aids for Students. Are They Doing More Harm Than Good?

Once upon a time, educators worried about the dangers of CliffsNotes — study guides that rendered great works of literature as a series of bullet points that many students used as a replacement for actually doing the reading.

Today, that sure seems quaint.

Suddenly, new consumer AI tools have hit the market that can take any piece of text, audio or video and provide that same kind of simplified summary. And those summaries aren’t just quippy bullet points. These days students can have tools like Google’s NotebookLM turn their lecture notes into a podcast, where sunny-sounding AI bots banter and riff on key points. Most of the tools are free, and do their work in seconds with the click of a button.

Naturally, all this is causing concern among some educators, who see students off-loading the hard work of synthesizing information to AI at a pace never before possible.

But the overall picture is more complicated, especially as these tools become more mainstream and their use starts to become standard in business and other contexts beyond the classroom.

And the tools serve as a particular lifeline for neurodivergent students, who suddenly have access to services that can help them get organized and support their reading comprehension, teaching experts say.

“There’s no universal answer,” says Alexis Peirce Caudell, a lecturer in informatics at Indiana University at Bloomington who recently did an assignment where many students shared their experience and concerns about AI tools. “Students in biology are going to be using it in one way, chemistry students are going to be using it in another. My students are all using it in different ways.”

It’s not as simple as assuming that students are all cheaters, the instructor stresses.

“Some students were concerned about pressure to engage with tools — if all of their peers were doing it that they should be doing it even if they felt it was getting in the way of their authentically learning,” she says. They are asking themselves questions like, “Is this helping me get through this specific assignment or this specific test because I’m trying to navigate five classes and applications for internships” — but at the cost of learning?

It all poses new challenges for schools and colleges as they attempt to set boundaries and policies for AI use in their classrooms.

Need for ‘Friction’

It seems like just about every week — or even every day — tech companies announce new features that students are adopting in their studies.

Just last week, for instance, Apple released Apple Intelligence features for iPhones, and one of the features can recraft any piece of text to different tones, such as casual or professional. And last month ChatGPT-maker OpenAI released a feature called Canvas that includes slider bars for users to instantly change the reading level of a text.

Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi, says he is worried that students are lured by the time-saving promises of these tools and may not realize that using them can mean skipping the actual work it takes to internalize and remember the material.

“From a teaching, learning standpoint, that’s pretty concerning to me,” he says. “Because we want our students to struggle a little bit, to have a little bit of friction, because that’s important for their learning.”

And he says new features are making it harder for teachers to encourage students to use AI in helpful ways — like teaching them how to craft prompts to change the writing level of something: “It removes that last level of desirable difficulty when they can just button mash and get a final draft and get feedback on the final draft, too.”

Even professors and colleges that have adopted AI policies may need to rethink them in light of these new types of capabilities.

As two professors put it in a recent op-ed, “Your AI Policy Is Already Obsolete.”

“A student who reads an article you uploaded, but who cannot remember a key point, uses the AI assistant to summarize or remind them where they read something. Has this person used AI when there was a ban in the class?” ask the authors, Zach Justus, director of faculty development at California State University, Chico, and Nik Janos, a professor of sociology there. They note that popular tools like Adobe Acrobat now have “AI assistant” features that can summarize documents with the push of a button. “Even when we are evaluating our colleagues in tenure and promotion files,” the professors write, “do you need to promise not to hit the button when you are plowing through hundreds of pages of student evaluations of teaching?”

Instead of drafting and redrafting AI policies, the professors argue that educators should work out broad frameworks for what is acceptable help from chatbots.

But Watkins calls on the makers of AI tools to do more to mitigate the misuse of their systems in academic settings, or as he put it when EdSurge talked with him, “to make sure that this tool that is being used so prominently by students [is] actually effective for their learning and not just as a tool to offload it.”

Uneven Accuracy

These new AI tools raise a host of new challenges beyond those at play when printed CliffsNotes were the study tool du jour.

One is that AI summarizing tools don’t always provide accurate information, due to a phenomenon of large language models known as “hallucinations,” when chatbots guess at facts but present them to users as sure things.

When Bonni Stachowiak first tried the podcast feature on Google’s NotebookLM, for instance, she said she was blown away by how lifelike the robot voices sounded and how well they seemed to summarize the documents she fed it. Stachowiak is the host of the long-running podcast, Teaching in Higher Ed, and dean of teaching and learning at Vanguard University of Southern California, and she regularly experiments with new AI tools in her teaching.

But as she tried the tool more, and put in documents on complex subjects that she knew well, she noticed occasional errors or misunderstandings. “It just flattens it — it misses all of this nuance,” she says. “It sounds so intimate because it’s a voice and audio is such an intimate medium. But as soon as it was something that you knew a lot about it’s going to fall flat.”

Even so, she says she has found the podcasting feature of NotebookLM useful in helping her understand and communicate bureaucratic issues at her university — such as turning part of the faculty handbook into a podcast summary. When she checked it with colleagues who knew the policies well, she says they felt it did a “perfectly good job.” “It is very good at making two-dimensional bureaucracy more approachable,” she says.

Peirce Caudell, of Indiana University, says her students have raised ethical issues with using AI tools as well.

“Some say they’re really concerned about the environmental costs of generative AI and the usage,” she says, noting that ChatGPT and other AI models require large amounts of computing power and electricity.

Others, she adds, worry about how much data users end up giving AI companies, especially when students use free versions of the tools.

“We’re not having that conversation,” she says. “We’re not having conversations about what does it mean to actively resist the use of generative AI?”

Even so, the instructor is seeing positive impacts for students, such as when they use a tool to help make flashcards to study.

And she heard about a student with ADHD who had always found reading a large text “overwhelming,” but was using ChatGPT “to get over the hurdle of that initial engagement with the reading and then they were checking their understanding with the use of ChatGPT.”

And Stachowiak says she has heard of other AI tools that students with intellectual disabilities are using, such as one that helps users break down large tasks into smaller, more manageable sub-tasks.

“This is not cheating,” she stresses. “It’s breaking things down and estimating how long something is going to take. That is not something that comes naturally for a lot of people.”


