By Jenna Somers
ChatGPT is here to stay, and educators need to adapt to their students using it—at least, that’s what news headlines have suggested for almost a year following ChatGPT’s unveiling. Much of the coverage has given voice to worries about the possibility that AI will hinder learning by doing students’ work for them. But the AI revolution has just begun, and some experts are seizing on AI’s positive potential to augment teaching and learning.
A growing number of those experts are faculty at Vanderbilt Peabody College of education and human development, including Scott Crossley, professor of special education; Bethany Rittle-Johnson, professor of psychology and human development; and Kelley Durkin, research assistant professor of teaching and learning. They have teamed up to launch data science challenges that will leverage the power of AI to advance K-12 education in writing and math. Supported by several private foundations, they will lead two challenges focused on improving student writing and one focused on modeling students’ math misconceptions. In the writing challenges, teams will compete to integrate AI models into automatic writing evaluation systems that provide better feedback to students; the math challenge aims to give teachers and students early feedback on probable misconceptions.
“The larger goal of these challenges is to better understand and model learner production, skill development, and knowledge development, so that we can then use the models in automatic essay and math evaluation systems that would not only score student work but give students guidance on improving their work,” Crossley said. “We know that students learn best through deliberate and spaced practice, and these automated evaluation systems would give students that type of practice, with the added benefit of a built-in tutor that helps them refine and correct their work.”
Regarding student writing specifically, Crossley said, “These automated essay evaluation systems will give students feedback for revising essays. Feedback may include areas they may want to pay attention to as they revise, or how to improve a thesis statement by linking it to evidence provided in the following paragraphs. It’s a way of giving students deliberate practice and discourse-level feedback on their writing in the absence of a teacher, so that when they submit their essay, the teacher can pay more attention to argumentation, rhetorical style, tone—all those elements of writing that humans understand much better than machines.”
Cascading Challenges
In the first of the funded challenges, teams will compete to refactor winning models from the Feedback Prize—a series of competitions to develop open-source algorithms to help improve student writing. Teams will leverage additional data to further refine those winning models, which will annotate student essays automatically to provide guidance on text organization, including feedback on the presence or absence of discourse elements like theses, claims, and evidence, as well as on the quality of those elements.
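To make that kind of organizational feedback concrete, here is a minimal Python sketch. It assumes an upstream model (such as a refined Feedback Prize winner) has already segmented a draft and labeled each segment; the label set, class names, and messages are illustrative assumptions, not part of any competition codebase.

```python
from dataclasses import dataclass

# Hypothetical label set, loosely based on the discourse elements named in
# the article; the real annotation scheme is richer than this sketch.
REQUIRED_ELEMENTS = {"thesis", "claim", "evidence"}

@dataclass
class Segment:
    text: str
    label: str  # discourse label assigned by an upstream annotation model

def organization_feedback(segments: list[Segment]) -> list[str]:
    """List required discourse elements the annotator did not find."""
    present = {s.label for s in segments}
    missing = sorted(REQUIRED_ELEMENTS - present)
    return [f"Missing discourse element: {m}" for m in missing]

# A draft whose segments were labeled by the upstream model.
draft = [
    Segment("Schools should start later in the morning.", "thesis"),
    Segment("Teenagers need more sleep to learn well.", "claim"),
]
for note in organization_feedback(draft):
    print(note)  # -> Missing discourse element: evidence
```

A deployed system would also judge the quality of each element, not just its presence, which is where the refined models come in.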
In the second challenge, industry organizations will integrate the models from the first challenge into digital learning platforms. These systems will be assessed on student and teacher engagement with the tool, as well as on the speed and accuracy of their feedback. Organizations will compete for $300,000 to integrate the writing tools.
On the mathematics front, teams will create algorithms that detect mathematical misconceptions in student responses to open-ended math questions. “Misconceptions are concepts that do not match the accepted view, and that form as students attempt to integrate existing knowledge with new information but do so in incorrect ways,” Rittle-Johnson said. “For instance, a student might incorrectly apply knowledge about whole numbers to decimals, thinking that .25 is greater than .5 because 25 is greater than 5. If this type of misconception is not detected and addressed early in a student’s education, it could lead to further misconceptions and harm long-term education outcomes.”
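The decimal example in the quote lends itself to a simple rule-based check. The sketch below, written in plain Python with hypothetical function names, flags a comparison answer that is both incorrect and consistent with reading decimal digits as whole numbers; the challenge itself targets the much harder problem of detecting such patterns in open-ended written responses.

```python
def flags_whole_number_rule(a: str, b: str, student_choice: str) -> bool:
    """
    Heuristic check for the misconception in Rittle-Johnson's example:
    treating decimal digits as whole numbers, e.g. judging .25 > .5
    because 25 > 5. Returns True when the student's incorrect choice
    matches what the whole-number rule would predict.
    """
    def digits_as_whole(s: str) -> int:
        # ".25" -> 25, ".5" -> 5: read the digits as a whole number.
        return int(s.replace(".", "") or "0")

    correct = a if float(a) > float(b) else b
    predicted_by_rule = a if digits_as_whole(a) > digits_as_whole(b) else b
    return student_choice != correct and student_choice == predicted_by_rule

# The article's example: a student claims .25 is greater than .5.
print(flags_whole_number_rule(".25", ".5", ".25"))  # True -> flag for feedback
```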
Rittle-Johnson, Durkin, and Rebecca Adler, a doctoral student in psychological sciences, are leading efforts to code the math responses for the potential misconceptions that underlie students’ errors. The data science challenge will then invite teams to develop algorithms that automatically detect misconceptions in the responses. These algorithms could then be used in digital learning platforms: as students practice math problems, the AI would give feedback on their responses, including on potential misconceptions, so a student could rework the problem to correct their answer and revise their understanding to eliminate the misconception. Likewise, teachers would know each student’s misconceptions individually and whether a misconception was shared across a whole cohort, which would support both individualized and targeted instruction. Misconceptions that are not addressed directly do not disappear; they re-emerge later, even after students are taught the correct concepts and procedures.
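As a rough illustration of that cohort-level view, the following sketch aggregates hypothetical per-student detections and surfaces misconceptions shared by a sizable fraction of a class. The tag names, data, and threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical output of a misconception detector: for each student,
# the misconception tags flagged in their recent work.
detections = {
    "student_01": ["whole_number_rule"],
    "student_02": ["whole_number_rule", "longer_is_larger"],
    "student_03": [],
}

def cohort_report(detections: dict[str, list[str]],
                  threshold: float = 0.5) -> dict[str, float]:
    """Return misconceptions held by at least `threshold` of the cohort."""
    n = len(detections)
    counts = Counter(tag for tags in detections.values() for tag in set(tags))
    return {tag: c / n for tag, c in counts.items() if c / n >= threshold}

print(cohort_report(detections))  # {'whole_number_rule': 0.666...}
```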
A Record of Success
Using written responses to open-ended math questions from the National Assessment of Educational Progress (NAEP), Crossley led a Peabody College team that recently won the top prize in a mathematics automated scoring challenge, a contest very similar to the one Rittle-Johnson is leading. The algorithm developed by Crossley’s team scored students’ responses to open-ended questions accurately and fairly, suggesting that, if implemented, it could reduce scoring costs and provide insights into students’ responses.
Crossley has collaborated extensively with the data science community to improve education outcomes. He is a member of the executive committee of the National AI Institute for Adult Learning and Online Education (AI-ALOE), a consortium of researchers and experts from multiple universities dedicated to improving online education for lifelong learning and workforce development. The Georgia Institute of Technology leads AI-ALOE with funding from the National Science Foundation (NSF). Earlier this year, Crossley received a five-year, $1.2 million grant from NSF for his work associated with AI-ALOE’s efforts to upskill and reskill workers.
“We are developing a suite of AI technologies that can be integrated into corporate training, military training, and technical college classrooms to support people who are learning vocational skills,” Crossley said. “So, we’re interested in integrating AI into learning platforms to help transition and skill the next generation of workers to quicker credentials. It all comes back to the same basic idea of using AI to help move education forward and to amplify opportunities.”