By Jenna Somers
In April, the Office of Equity, Diversity, and Inclusion at Vanderbilt Peabody College of education and human development hosted the panel discussion “Ethical and Equity Considerations in the Age of AI” as part of the Peabody Dean’s Diversity Lecture series. The session explored societal, ethical, and moral questions regarding AI’s growing role in higher education, with special consideration of social justice, bias, equity, and discrimination concerns.
The panelists were Charreau Bell, senior data scientist at the Vanderbilt Data Science Institute and director of the undergraduate data science minor; Duane Watson, Frank W. Mayborn Chair in Cognitive Science, professor of psychology and human development, and associate provost for faculty development; and Alyssa Wise, professor of technology and education and director of LIVE, the Learning Innovation Incubator at Vanderbilt. Ashmeet Oberoi, associate professor of the practice of human and organizational development, moderated the panel.
Throughout the conversation, the role of human intellect in responsible AI use emerged as an essential theme. Because generative AI is trained on a huge body of text on the internet and designed to detect and repeat patterns of language use, it runs the risk of perpetuating societal biases and stereotypes. To mitigate these effects, the panelists emphasized the need to be intentional, critical, and evaluative when using AI, whether users are experts designing and training models at top-tier companies or college students completing an AI-based class assignment.
“There is a lot of work to do around AI literacy, and we can think about this in two parts,” Wise said. “One, how do you get chatbots to produce good answers? Two, how do you evaluate to know those answers are good? The truth is they can easily produce an answer that sounds good, but to get a better answer requires iterations of prompting, and so students need to learn to engage in dialogue with generative AI, critiquing its results and figuring out how to ask questions. Another point that’s interesting for inclusion is to use it for perspective-taking. AI tools, used well, can help students converse with different perspectives that they might not otherwise encounter.”
AI and Education Equity
The panelists explored how AI could be used to either promote or hinder equity among students who come from diverse backgrounds. Some students arrive with less background knowledge or weaker skills in certain subjects than their peers; for them, generative AI tools such as ChatGPT can serve as a level-setting aid that helps them catch up. However, whether and how schools implement AI can also exacerbate inequities.
“When generative AI came on the scene, there was a rush to ban it for fear of cheating, but it’s not going anywhere and it’s an incredibly powerful productivity tool, so I worry that some kids will have access to learning how to use these tools effectively and other kids won’t. That’s something that educators need to think about carefully as they balance learning goals—including learning how to use AI—with issues of equity,” Watson said.
Furthermore, the quality of generative AI responses can vary substantially, even when models are fed identical prompts. While the panelists encourage students to experiment with many forms of AI to improve their AI literacy, they acknowledge that, depending on their own and their school’s resources, some students may have access to more sophisticated generative AI models while others have access only to free versions.
“The difference between some of these models, the open-source and free models, and the ones you might pay for isn’t like the difference between cutting with a knife and a sharper knife; it’s the difference between cutting with a knife and a spoon, and that’s something that’s really important for educators to think about as well,” Wise said.
Transparency and Accountability
The panelists noted that transparency is a complex issue because scholars need to further research the subject, generative AI companies need to consider what data is informative and feasible to release, and policymakers around the world need to become AI literate to pass regulations. That said, Bell offered some practical suggestions.
“One thing tech companies can do is offer a richer understanding of the data that their models train on at each step in the training process. They can also provide documentation and examples of the core values applied when human preferences are used to guide model development and fine-tune responses. Some companies present guiding principles on model safety and guardrails, and I think that is a great starting point from which to grow,” Bell said.
“We also need researchers from diverse academic backgrounds involved to help develop new metrics to reflect how these models behave in the context of our general society,” Bell added.
Since generative AI can only parrot the data it is trained on, the panelists emphasized a need to diversify the AI workforce to help mitigate biases that manifest in the development of models. If the workforce consists mostly of people from the same background, that limits the scope of viewpoints contributing to the design process.
On questions of ethics and equity in the age of AI, the discussion underscored the importance of human intellect and responsibility at all levels, from tech companies to governments, and from coders to educators to students.
Resources for the Vanderbilt community
For support with AI, the panelists encouraged the Vanderbilt community to use these resources:
- AI Fridays with the Data Science Institute offers drop-in consultations, AI deep dive discussions, and demos.
- The Office of Faculty Development has an AI working group that explores innovative ways in which faculty and some non-faculty members use AI.
- LIVE hosts the Learning Innovations Series, which includes weekly presentations, interactions, and discussions with faculty, students, and visiting speakers. LIVE Sparks is a new category in the series, offering interactive sessions designed to stimulate new thinking and action.