If artificial intelligence doesn’t help solve disparities, advocates worry, it will worsen them.
Hazard Lights
AI has been used in education since at least the 1970s. But the recent barrage of technology has coincided with a more intense spotlight on disparities in student outcomes, fueled by the pandemic and social movements such as the protests over the killing of George Floyd. AI has fed hopes of narrowing those disparities, thanks to its promise to personalize learning and to boost efficiency and sustainability for an overworked teaching force.
In late 2022, the White House released a “Blueprint for an AI Bill of Rights,” hoping that it would strengthen privacy rights. And last year, the U.S. Department of Education, along with the nonprofit Digital Promise, weighed in with recommendations for making sure this technology can be used “responsibly” in education to increase equity and support overburdened teachers.
If you ask some researchers, though, it’s not enough.
There are fears that AI will inadvertently magnify biases, whether by relying on algorithms trained on biased data or by automating assessments that sort students into different learning paths while ignoring their individual experiences.
Now, some early data suggests that AI could indeed widen disparities. For instance: Lake’s organization, the Center on Reinventing Public Education (CRPE), a national research and policy center associated with Arizona State University’s Mary Lou Fulton Teachers College, released a report this spring examining K-12 teachers’ use of virtual learning platforms, adaptive learning systems and chatbots. The report, a collaboration with the RAND Corporation, found that educators in suburban schools already report having more experience with, and training for, AI than those in urban or rural schools.
The report also found that teachers in schools where more than half of students are Black, Hispanic, Asian, Pacific Islander or Native American had more experience using the tools — but less training — than teachers who work in majority-white schools.
If suburban students, who are on average wealthier than their urban or rural peers, are receiving more preparation for the complexities of an AI-influenced world, that opens up big existential questions, Lake says.
Big Promises — or Problems
So how can advocates push AI to deliver on its promise of serving all students?
It’s all about strategy right now, Lake says: making smart investments and setting smart policy.
Another CRPE report, “Wicked Opportunities,” calls for more work to engage states on effective testing and implementation of AI in their schools, and for the federal government to put more detailed guardrails and guidance in place. It also calls for more investment in research and development. From the report’s perspective, the worst outcome would be to leave districts to fend for themselves when it comes to AI.
Part of the reason urban districts are less prepared for AI, observers speculate, may be the complexity and sheer number of issues they face. Superintendents in urban districts say they are overwhelmed, Lake says. While they may be excited by the opportunities of AI, she explains, these leaders are busy handling immediate problems: pandemic recovery, the end of federal relief funding, enrollment declines and potential school closures, student mental health crises and absenteeism. What they want is evidence about which tools actually work, along with help navigating edtech products and training their teachers, she adds.
But other observers question whether AI is truly the answer to schools’ structural problems.
Introducing more AI into classrooms, at least in the short term, means teaching more students through screens and virtual learning, argues Rina Bliss, an associate professor of sociology at Rutgers University. Many students already get too much screen and online time at home, she says, and it degrades their mental health and their ability to work through assignments. Educators, she argues, should be cautious about adding more of either.
Bliss also points to a “print advantage,” a bump in how much students learn from print materials compared with screens, which has to do with factors like engagement with the text and how readily a student’s eyes can lock onto and stay focused on the material. In her view, digital texts, especially when connected to the internet, are “pots of distractions,” and increasing screen-based instruction can actually disadvantage students.
Ultimately, she argues, an approach to instruction that overrelies on AI could reinforce inequality. It’s possible that these tools are setting up a tiered system, in which affluent students attend schools that emphasize hands-on learning while other schools increasingly depend on screens and virtual learning. AI shouldn’t replace real-world learning, particularly in under-resourced schools, she says, and she worries that excessive reliance on the technology could create an “underclass of students” offered artificial stopgaps for big problems like school understaffing and underfunding. It wouldn’t be responsible to lean on AI as the quick fix for all our economic shortages in schooling, Bliss argues.
So how should educators approach AI? Perhaps the correct posture is cautious hope and deliberate planning.
Nobody knows precisely how AI will impact education yet, argues Lake, of CRPE. It is not a panacea, but in her estimation there’s a real opportunity to use it to close learning gaps. So it’s important to craft plans to deliver on the potential: “A lot of people freeze when it comes to AI, and if they can instead think about what they want for their kids, their schools, and whether AI can help, that seems like a productive path to me, and a much more manageable one,” Lake says.
There’s nothing wrong with being hopeful, she adds.