
In December, the National Academies of Sciences, Engineering, and Medicine released the consensus study report, “Understanding and Addressing Misinformation about Science.” The report documents two years of research by a multi-disciplinary expert committee to “characterize the nature, scope, and impacts of misinformation about science, and provide guidance on interventions, policies, and future research.”
Lisa Fazio, associate professor of psychology and human development at Vanderbilt Peabody College, was a committee member. In the following interview, she offers guidance based on the committee’s report and her research about how people learn true and false information and how to correct errors in knowledge.

What are the most important or impactful lessons from this report for policymakers, science communicators and the public?
Misinformation about science is a complicated issue. Defining what is and is not science misinformation is not easy because science is a process and not a set of facts. What we consider misinformation at one point might not be misinformation later, because our scientific understanding changes over time.
In the report, we demonstrate that we know a lot about how science misinformation spreads, both intentionally and unintentionally, and some things that we can do to try to decrease that spread and its impacts. It’s important when we’re talking about these issues that we talk about specifics and what the evidence shows, rather than this kind of broad boogeyman of misinformation.
Where is misinformation most widely spread? How and why?
We know the most about science misinformation that is spread on social media. And that’s because until a couple of years ago, scientists had pretty good access to Twitter data and could follow what was happening. We know less now that API [application programming interface] access has been restricted on Twitter (now X), Facebook, and other platforms. But the real sources are multi-faceted.
People can hear science misinformation on podcasts. They hear it on TV shows. They hear it from their friends and family. They see it when browsing the internet. They see it on social media. There are a bunch of sources and places that people can hear it, and it’s hard to track where exactly the misinformation is being spread and how.
What are some of the specific platform features or dynamics of social media that contribute to the spread of misinformation?
On social media it’s easy to take things out of context and to misleadingly place something in a new context. So, you’ve got a picture of a park with trash strewn all around it, and someone posts that as, “Look at what happened after this environmental protest. They left all this trash!” when in actuality it’s a photo after a concert or something like that; or people will take video game footage and claim that it’s actually war footage. Those types of things are very easy to do on social media and tend to be one of the major ways we see misinformation spread.
On platforms like TikTok, it’s very hard to see who’s posting the information and why you should believe them or not. There’s no quick, easy way to tell whether the person is talking about their domain of expertise or about something they know nothing about. There, we lack the types of signals and cues that might be useful in discerning reliable information from unreliable information.
How can individuals, communities, and institutions proactively mitigate misinformation about science?
In the report, we talk about four places that we can possibly intervene on the system of science misinformation: supply, demand, distribution, and uptake.
One place is trying to reduce the supply of misinformation. That could be trying to change the ratio of true and false information by promoting good science journalism, increasing access to accurate information, or de-platforming or restricting false information—so, one example would be YouTube de-monetizing videos that have science misinformation in them.
Another place to intervene is demand. People are looking for accurate information. When they can’t find it, they often turn to misinformation. So, it’s important to try to provide information to people who need it by filling information voids, making sure that people have easy access to good science and health information, and reducing the kinds of societal conditions that would make them turn toward disinformation campaigns or pieces of science misinformation.
Then we also can intervene on the spread [distribution] of misinformation. Once it’s out there in the world, how far does it go? How many people see it? That’s a place where individuals can play a role. If you see or hear something on social media or from another source, think about how you know it’s true before you pass it along to others. A lot of us, especially when we’re dealing with health information, have a view of, “Well, just in case it’s true, I would want people to know about this danger,” but that can actually lead to the spread of false information.
And then, finally, uptake. How do you prevent people from believing the misinformation once they see it? Part of that can be digital media literacy and media literacy more generally, figuring out what’s a good source, what’s a bad source. How do I know if what I’m reading is true? And this also includes placing warning labels on things that are likely misinformation, providing fact checks and other sorts of information so that people can tell whether what they’re reading is true or false.
Some people argue that fact checking is a form of censorship. How do you regard fact checking?
One thing to keep in mind is that fact checking is more speech. It is not censorship. Sometimes platforms decide to take down or reduce the spread of posts that have been fact checked, but the fact check itself is just providing more information. It’s additional free speech. It’s the journalist or organization providing additional context and information about what’s been posted.
Is it better for social media companies to fact check a post but not reduce its spread, thereby allowing more people to encounter the factual information; or, is it better to algorithmically reduce the spread of the original post?
One of the issues you find is that fact checking takes time, so it doesn’t tend to appear until most people have already seen the misinformation. I think it’s up to individual social media companies and platforms to decide what they want their rules to be around that. Most of them—what they would do—is reduce the spread of a post that had been fact checked—so not amplify it using their algorithms—but it would still exist on someone’s page. Although if you had a reputation for consistently posting things that are judged to be false, then you might see repercussions.
How do you think about the use of community notes by X and Meta vs. professional fact checkers?
I think professional fact checkers are a necessity, and community notes can be a nice supplement. There are benefits of community notes. We know that crowdsourced fact checking can work in some situations, but it can’t hold up the system on its own. You need fact checks that people can rely on. You need people who’ve been trained to do some aspects of fact checking. In fact, Americans agree. There have been a few surveys recently of what Americans think about fact checking and fact checkers, and the majority want fact checkers. They think that they’re useful. And if they see a role for community notes, they see it as kind of an assistant to the professional fact checkers.
The summary of the report states that, “science and medicine are among the most trusted institutions today,” but the report also indicates that trust has declined since 2020. What are the causes of this decline?
I don’t know if we know exactly what’s happening with the decline. One part is that scientists have become politicized, and political leaders have been attacking scientists. That’s clearly part of what’s going on. Also, during the COVID-19 pandemic, you had a situation where scientists were learning things on the fly, and their guidance was changing as they learned more. When you have a population that thinks of science as a set of facts, and then these facts are changing, that will reduce your trust in science.
In addition to teaching digital media literacy, another thing that’s useful to teach in school is this understanding that science is the process and not the outcome, and that our scientific understanding is constantly changing and being updated.
Should scientists and medical professionals work proactively to assume more public communications roles?
I don’t think that every scientist needs to be doing science communication, but I would like to see more of them doing it. It is an effective way to message to the public. Scientists are still fairly well trusted in the U.S. One of the things we’re up against is that a lot of the disinformers have a lot of money and resources behind them and the scientific community doesn’t. Many of the organizations that are fighting this misinformation are kind of small and scrappy. Providing more resources to organizations and scientists who are doing this well would be very helpful.
In the report we discuss that local community health organizations need to be aware of and proactively thinking about misinformation that might appear. We know now that with any natural disaster in the country, you’ll have a lot of science misinformation pop up afterwards, and it’s easier to fight back against that if you’ve thought about it a little bit beforehand. What are the types of narratives that will likely appear after this type of disaster? And what can we do to kind of nip them in the bud and get out in front of them, rather than always being on the back foot and trying to reply too late?
Fake scientific papers now proliferate in science literature. How should the scientific community address this problem within their own fields?
We talk about the problem of junk science in the report—journals that don’t actually do peer review, articles that seem like they come from scientific journals but actually don’t. One of the consequences of junk science is that you’ll see disinformers citing what looks like legitimate scientific research. But when you actually dig into it, you find that it’s from journals that don’t have high scientific standards and don’t follow traditional peer review. A problem right now is that Google Scholar does index some of these fake journals. One of the things that we recommend in the report is to start thinking about whether a consortium of scientific expert groups could come together to identify reliable sources of information and promote that information.
If governments become uninterested in funding and promoting high-quality science, how can the donor community and the scientific community fill the financial void to continue promoting high-quality science?
There is definitely a role for philanthropy in those situations, especially when there might be specific areas of science that the government isn’t interested in funding. But we know that one of the reasons the U.S. has been so successful over the past several decades is because of our robust science community and our robust system of federal funding. It would be a real shame to lose that.
It’s also important to point out that, in the report, we discuss that the government should not be deciding what is and is not science misinformation. That is not their role. That should be the role of scientists, scientific organizations, journalists, institutions like that. The government itself shouldn’t be in the position of deciding this is good speech or bad speech.
How have other countries successfully addressed misinformation?
The intervention needs to be tailored to the country and the societal factors at play there. We’ve seen some Northern European countries have a great deal of success in teaching digital media literacy and having that as a part of the school curriculum.
There are also countries with much less political polarization, less distrust of the media. In the U.S., we only have two parties. People tend to be fairly distrustful of members of the other party, and that can add an extra layer of complication when you’re trying to deal with misinformation.
I’ll say Europe has done more [than the U.S.] in terms of trying to put regulations on social media companies and trying to track both what they’re doing and any impacts that’s having on the spread of false information.
How should schools and parents educate their kids on identifying and reducing the spread of misinformation?
Education definitely plays a role. It’s not a solution in and of itself, but these are useful skills that everyone should have and should be taught in schools. Figuring out things like, “What’s the source of this information, and why should I trust what they’re saying? Do they have the expertise? Do they have a hidden agenda?” Things like that.
One thing we found that’s really successful is to teach lateral reading, which is that, if you come across a website or a source that you’re not sure about, rather than digging into the website and trying to find out more about the organization on that website, open a second tab and start looking at what other people are saying about them. That’s how you might find out that, for example, this website is a PR campaign from an oil company, or this website is from a politically aligned group, or this news source is actually satire.
To what extent does misinformation originate from strategic disinformation campaigns?
We know that one prominent source of science misinformation is organized disinformation campaigns. Sometimes these are done to profit—so, promoting alternative medicines, supplements, or people’s own therapies. They’re also often promoted by industries and companies who want to disinform the public about the impacts of their industry. We saw science misinformation from tobacco companies. We see it from oil and gas producers. We see it from a lot of industries that are trying to protect themselves and promote information that goes against the scientific consensus. Another place we see it is in greenwashing and increasing fears about GMOs in order to be able to sell GMO-free products. You’ll see GMO-free labels on products for which there aren’t any GMO alternatives. It’s a mix of disinformation campaigns for direct profit and to protect the industry.
And then also, some people inadvertently share or start rumor campaigns when they don’t have accurate information or enough information. Not all science misinformation traces back to an intentional disinformation campaign, but a lot of it does.
How should people and organizations mitigate the spread of misinformation generated by AI? Can AI be used as a fact checking tool to reduce the spread of misinformation?
Some people have been working on AI-assisted fact checking for a long time. I think there are some systems that show some promise. I will say ChatGPT doesn’t work particularly well as a fact checker. There was a recent study showing that it expresses doubt about some true information and gets some false information wrong. But AI systems can be useful for scanning for things that fact checkers should investigate further and helping speed up that pipeline.
In terms of AI-generated misinformation, so far we see some of it, but not a ton, partially because misinformation isn’t all that resource-intensive to create on your own. Like I said, these cheap fakes of taking videos out of context or writing a social media post are low lift, so AI could speed up their creation, but not by much. Maybe in some cases we will see more AI-generated images being a basis for misinformation, but so far it hasn’t been a big factor. I worry more about AI images being used as a reason to distrust actual photos and videos. As a society, we don’t want to be in the position where people distrust actual news footage because they can claim that it’s “just AI.”