“Academic researchers must work urgently to map how artificial intelligence, open-source intelligence and online influence campaigns converge to serve hostile state objectives.”
The call to action comes from Vanderbilt University researchers Brett V. Benson and Brett J. Goldstein, who recently wrote a guest essay in The New York Times highlighting the growing threat of AI-driven propaganda and the essential role researchers play in addressing it.

Their essay, “The Era of A.I. Propaganda Has Arrived, and America Must Act,” draws on a large cache of documents recently uncovered by the Vanderbilt Institute of National Security. It reflects faculty-led work addressing major international security concerns.
“Supporting research at the intersection of national security and AI expands our understanding of the evolving threats facing democratic systems and global security. We’re committed to bold, high-impact research that meets challenges head-on,” said Liz Zechmeister, Vanderbilt’s interim chief research officer and senior associate provost for research and development.
In the trove of nearly 400 pages, being released in stages by the university, Goldstein, research professor of engineering science and management, and Benson, associate professor of political science, uncovered evidence that GoLaxy, a company with ties to the Chinese government, apparently deployed sophisticated, AI-driven propaganda campaigns in Hong Kong and Taiwan to shape public opinion and suppress dissent.
The discovery significantly changes how experts understand propaganda.

“Before, we knew propaganda could be effective and that foreign governments were pushing it. However, its reach was thought to be constrained by costs, scale and the human labor needed to sustain it,” Benson said. “The GoLaxy discovery showed those limits no longer apply.”
While the findings point to a leap in the potential scale of influence operations, they also reveal new levels of granularity, Goldstein said. “It’s not just about the total number of people. It’s about the ability to tailor messaging down to the individual. That hasn’t been done before.”

Benson and Goldstein also found the company has built data profiles on thousands of U.S. political figures, including congressional leaders. This collection should serve as a warning, Goldstein said. “Identifying and understanding the implications is something that’s going to potentially change the world and how we think about national security strategy.”
Goldstein said the GoLaxy document research was made possible by the interdisciplinary approach Vanderbilt encourages.
“It’s a great example of what Chancellor Diermeier calls ‘radical collaboration,’ done right,” he said. “Brett Benson is a political economist, and I’m a computer scientist. Together, we’re bringing our distinct expertise to decode these documents—proof that Vanderbilt is breaking down silos to tackle critical issues as true partners.”
Benson agreed, noting that the Institute of National Security played a pivotal role in enabling the work. “The Institute brought together experts from different backgrounds whose professional paths might never have crossed, and whose complementary strengths have made this and future research possible.”
Advanced AI tools can be weaponized to shape public opinion strategically, persuasively, at massive scale and without detection. Addressing the threat AI propaganda poses will require urgent collaboration across academia, government and the private sector, the researchers said.
The GoLaxy case reinforces the importance of research environments like Vanderbilt, where interdisciplinary collaboration, strong partnerships, mission-driven inquiry and commitment to real-world impact make it possible to confront fast-evolving threats.
“Universities are well positioned to lead this work … while remaining independent from commercial and political agendas,” Benson said. “[It’s] a neutrality that builds the trust needed to inform government, industry and the public.”
Collaboration and trust in academia may be democracy’s best defense against AI-driven propaganda.

