While some schools are banning ChatGPT, Vanderbilt is embracing generative AI technology to unlock opportunities for research and learning.
By Michael Blanding
In her class on the politics of the French Revolution, Holly Tucker starts off by assigning a historical character to each student—Marquis de Lafayette, Jean-Jacques Rousseau, Louis XVI—and giving them a multi-page role sheet detailing their alter ego’s political affiliation, life philosophy and strategic goals. Monarchist or revolutionary firebrand, this will be the student’s persona for the next six weeks, as Tucker has the class role-play scenarios, seek out alliances and vanquish their enemies in a simulation of the crucial historical event.
She next encourages them to do something many teachers wouldn’t dream of—talk to ChatGPT. “One of their first assignments is to upload their role sheet and then ask questions about it,” says Tucker, Mellon Foundation Chair in the Humanities and professor of French, of the interaction with the generative artificial intelligence app that has exploded onto the internet, generating equal parts excitement and fear. “I have them say, ‘Can you help me find three characters I might want to correspond with, and help me brainstorm approaches to frame my letters to them?’”
From then on, artificial intelligence becomes a vital part of the course. Students use it to analyze vast amounts of data, to communicate with each other and to make strategic moves. In all, Tucker says, students write an average of 18,000 messages to each other, as AI helps keep track of the interactions and even offers students insights into potential outcomes of decisions they are considering—teaching students about the French Revolution in a uniquely immersive way.
“It’s a watershed moment for the humanities. We now have the ability to communicate with a computer using natural language,” Tucker says. “We owe it to ourselves to find ways to use generative AI to glean insights from our texts faster and ask new questions about the human endeavor.”
Tucker’s enthusiastic embrace of ChatGPT in her class is just one example of Vanderbilt faculty’s groundbreaking work in harnessing the growing power of artificial intelligence. At a time when some schools are banning ChatGPT in classes—concerned that students will lean too heavily on AI or use it to write papers for them—Vanderbilt has become a leader in integrating AI into academic work. Faculty are using AI in their research, as well as introducing it to students, teaching them how to use it as a tool and not a crutch.
WATCH: AI Unearths Untold Stories
Historians Jane Landers and Daniel Genkins leverage artificial intelligence and computer science techniques to scan through thousands of historical documents to form the Slave Societies Digital Archive, the world’s largest collection of historical records of Africans in the Atlantic World.
FUTURE OF LEARNING
“Everybody’s focused on artificial intelligence replacing humans,” says Jules White, professor of computer science and associate dean for strategic learning programs. “But what we want to focus on is augmented intelligence, where it’s all about amplifying human creativity and problem-solving. It’s like an exoskeleton for the mind—you help people create more interesting and expressive things than they could have done before, and at a larger scale.”
White leads the Future of Learning and Generative AI Initiative, a new interdisciplinary program to connect and advise faculty on how to make the most of the prodigious computing power generative AI can offer. A CNN article this summer said Vanderbilt was “among the early leaders taking a strong stance in support of generative AI,” specifically citing an 18-hour online course White created on the e-learning platform Coursera that teaches the fundamentals of “prompt engineering”—how to best fashion a prompt to ChatGPT and other AI platforms to return the most helpful response. The course has been taken by 240,000 people and counting.
In their simplest form, White says, generative AI models are trained to predict the next word in a sentence. Give a model “Mary had a little,” and it should predict “lamb.” The magic happens when the model is trained on vast amounts of data from the World Wide Web. “It turns out this has surprising ramifications when you do this at a large enough scale, as it learns patterns in our language and can do all of these computations.”
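The next-word objective White describes can be sketched in a few lines of Python. This toy bigram counter is an illustration only, vastly simpler than a real large language model, and the training “corpus” here is just the nursery rhyme itself:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# each word in a tiny corpus, then predict the most frequent follower.
# Real LLMs pursue the same objective with billions of parameters
# trained on web-scale text.
corpus = ("mary had a little lamb its fleece was white as snow "
          "and everywhere that mary went the lamb was sure to go")

followers = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("little"))  # → lamb
```

The difference in scale is the whole story: with enough data, the same “guess the next word” training produces models that appear to reason, translate and summarize.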
Most of the problems with ChatGPT and other generative AI platforms, White says, result from people not knowing how to use them properly. “People see a text box, and they think they need to use it like Google, which is exactly the wrong way to use it,” he explains. “That doesn’t give you augmented intelligence, that gives you minus intelligence.”
Rather than using it to search for information, he says, generative AI is most effective when you treat it as if you were texting an incredibly smart and capable assistant. One example he uses in his class is to ask it to create a meal plan “that’s a fusion of food from Ethiopia and Uzbekistan, is keto-friendly and has ingredients I can get from the average grocery store.” No page on Google could give you that, and yet ChatGPT can create a complete multi-week menu in seconds. Another trick, he adds, is to give it a role, for example, telling it to “act like a speech pathologist” before presenting it with a problem where a child mixes up words and asking it to diagnose possible causes. “Knowing the pattern of interaction allows it to tap into these emergent computational capabilities that no other system on the planet can perform.”
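White’s “give it a role” trick can be sketched as a simple prompt template. The message shape below follows the common chat-API convention of system and user messages; the function name and wording are illustrative, not any particular vendor’s API, and the actual call to a model is omitted:

```python
# Sketch of role prompting: prepend a persona before the real question.
# The system/user message structure mirrors what most chat-style AI
# APIs accept; no specific service is assumed here.
def role_prompt(role, task):
    """Build a chat-message list that assigns the model a persona first."""
    return [
        {"role": "system", "content": f"Act as {role}."},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    "a speech pathologist",
    "A 4-year-old often swaps similar-sounding words. "
    "What are possible causes, and what should we observe next?",
)
print(messages[0]["content"])  # → Act as a speech pathologist.
```

Setting the role first narrows the model’s frame of reference, which is why the same question tends to get a far more expert-sounding answer.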
APPLYING AI TO VAST AMOUNTS OF DATA
Another use for the software is to analyze and summarize vast amounts of complex information. In the past, Doug Schmidt, Cornelius Vanderbilt Professor of Computer Science, has worked in government procurement for defense systems—for example, building a next-generation air traffic management or missile defense system. “It involves an enormous number of people over a very long time and at great expense,” he says. Comparing information in multiple reports by different people and then trying to figure out if they meet government regulations can be a major headache. “As luck would have it, large language models are very good at that,” says Schmidt, who is researching how generative AI can dramatically save time and money by comparing and synthesizing such reports.
On another level, Schmidt applies the same tools in his computer science classes, using a Chrome plugin called Glasp to transcribe videos of his lectures and feed the transcript into ChatGPT to summarize the main points. Schmidt then asks it to generate several quiz questions based on the videos. “And boom, within seconds, I have fresh up-to-date questions on what I talked about in class,” he says. Of course, Schmidt reviews the questions for accuracy—something easy to do since he generated the material himself—but the tool frees him from tasks he finds tedious so he can focus on those he enjoys, such as writing lectures and code.
As for the fears that students will cheat on tests and papers by using ChatGPT, he addresses that head-on by changing the way he designs problems. Instead of asking students to write very specific code they could easily generate using AI, he presents more open-ended questions that could be solved in a variety of ways, requiring students to use their own creativity. Of course, those tests are harder to grade, but Schmidt has come up with a solution to that, too, training ChatGPT to automatically find elements that should or shouldn’t be in the code. “Instead of hiring an army of graders and asking them to follow some rubric, I’ve found a way to automatically do something that used to require tremendous time and effort on the part of me and my TAs.”
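The rubric idea behind Schmidt’s automated grading can be sketched without an LLM at all: express what should and shouldn’t appear in a submission as patterns, then check for them. This is a simplified stand-in, since Schmidt’s actual setup has ChatGPT do the judging, and the function name and patterns below are hypothetical:

```python
import re

# Hedged sketch of rubric-style checking in the spirit Schmidt describes:
# flag elements that should or shouldn't be in the code. A real LLM-based
# grader replaces these regex checks with natural-language criteria.
def check_submission(code, required, forbidden):
    """Return (missing required patterns, present forbidden patterns)."""
    missing = [p for p in required if not re.search(p, code)]
    present = [p for p in forbidden if re.search(p, code)]
    return missing, present

student_code = "def area(r):\n    return 3.14159 * r ** 2\n"
missing, present = check_submission(
    student_code,
    required=[r"def area", r"return"],
    forbidden=[r"\beval\("],  # e.g., disallow eval()
)
print(missing, present)  # → [] []
```

The advantage of the LLM version over regexes is exactly what Schmidt highlights: open-ended solutions can satisfy a criterion in many different ways that no fixed pattern would catch.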
Efforts by some to ban ChatGPT and other AI software, Schmidt says, are misguided. “They think this is a flash in the pan and are actively discouraging people from using it,” he says. “Our hypothesis is that in the very near future people who know how to use this stuff well are going to run rings around the people who don’t. They won’t be able to get anywhere near the level of productivity for the amount of effort expended.”
BUILDING SKEPTICAL AND INFORMED AI USERS
His colleague Jesse Spencer-Smith, chief data scientist and interim director of the Data Science Institute and professor of the practice of computer science, has gone a step further in embracing generative AI for coding. As someone who has taught artificial intelligence for 20 years, Spencer-Smith recently changed the name of his advanced coding class to AI-Assisted Coding. “Rather than trying to detect whether you used ChatGPT to solve a programming problem, we’ve turned it around, to say, ‘Use ChatGPT and get very efficient and know how to guide it,’” he says.
He encourages the same approach in the humanities, citing a clever exercise by a faculty member in the English department who specifically told students to use ChatGPT to write an essay—and then turn around and grade it, so they could see its shortcomings where it used poor phrasing or got information wrong, as well as how it can be used to help organize ideas. “It turns people into skeptical and informed users and also gets them to the point where they can understand what good writing is,” Spencer-Smith says.
On the other hand, generative AI can be an excellent tool for studying or brainstorming. The DSI and Center for Teaching have collaborated on a platform that acts as a personalized tutor, where students can upload a book chapter or paper they need to review, and the AI can generate questions to quiz them on the material. The entire transcript can then be uploaded so the professor can see what the student understands and where they need help. “I used to have help sessions where half the class would be in every week,” Spencer-Smith says. “Now, I have very few people because they are all using ChatGPT to explain concepts in a way customized to their background.”
DSI has also collaborated with faculty to integrate AI into their teaching and research. Recently, for example, it has been working with Karan Jani, professor of physics and astronomy, on an AI model that identifies gravitational waves, which can signal the presence of black holes. “The idea is to train a model that could be used to solve not just one problem, but a host of problems, and then be known as the Vanderbilt foundational model for gravity waves,” Spencer-Smith says.
Beyond using AI for research, Spencer-Smith also directs the Data Science for Social Good program, a 10-week program in which graduate students receive a stipend to apply AI to practical problems. One recent project with the Vanderbilt Kennedy Center Treatment and Research Institute for Autism Spectrum Disorders (TRIAD) helped develop a workplace app for people with autism, providing a virtual coach that can answer their questions and help them navigate challenging situations. Another project worked with Professor Emerita of Psychology and Human Development Georgene Troseth and Professor Amy Booth, who spent years developing an app that prompts parents and other caregivers to engage more deeply with their children while reading books together. The AI Storybook project has finally realized their vision, using generative AI to suggest questions in real time for any children’s book.
When and how to best deploy AI in classrooms is something best left to individual instructors rather than mandated by university-wide policy, says Doug Fisher, associate professor of computer science and computer engineering. “While the technology is changing so much, it makes sense for us to allow individuals to investigate different options and then come together to compare notes,” says Fisher, who previously oversaw programs in AI for the National Science Foundation and co-taught a class at Vanderbilt on AI ethics.
Some professors may decide that AI is not appropriate in entry-level classes, where students are better off developing their own skills before seeking computer-assisted aid. At the same time, Fisher notes, AI’s propensity for bias—it is trained on a vast corpus of data from an imperfect internet—might give instructors pause before using it around sensitive topics such as race and gender studies. While students might feel more comfortable discussing those ideas with an impersonal machine, “there could be problematic exchanges humans are better equipped to handle.”
However the technology is incorporated into the classroom or the lab, the adventure with AI—particularly generative AI—is just beginning. It may take months or years before students and faculty fully understand where it can be used most effectively and how it can best augment learning and discovery. In the meantime, AI is already changing how students learn. Tucker says that students in her French Revolution class are required to turn in 20 pages by the end of the course—but they are so engaged that the average student turns in 25. “I’d never used ChatGPT before,” says Remi Bristol, one of Tucker’s students. “Now I definitely see myself using it in the future, for readings in classes that are confusing for me or to help prepare for a job interview.”
A recent survey of executives by IBM predicted that up to 40 percent of the workforce will have to reskill to manage AI in the next three years. It’s clear from their experiences in Tucker’s class and others that Vanderbilt students will be ready for that challenge.
“The whole purpose of this curriculum is to get students to understand how complex history is and how, during these watershed moments, history can turn on a dime,” Tucker says. “We don’t need to tell them that—they’re living it.”