The first proposed ethics updates in more than 40 years for researchers who use human beings in scientific experiments are flawed, said a Vanderbilt professor in an article in The New England Journal of Medicine.
The proposal is now under review by the U.S. Department of Health & Human Services.
The updates will be an improvement, said Laura Stark, assistant professor of medicine, health and society at Vanderbilt University, in an essay published Sept. 15 by The New England Journal of Medicine and co-written with Jeremy Greene, professor of medicine and the history of medicine at Johns Hopkins School of Medicine.
But Stark says the new procedures have flaws, too. She described some of these in the article, which offers a short history of the issue dating back to the early 20th century.
“Unfortunately, the proposed revisions include no requirement that policymakers systematically update the regulations in the future,” Stark said in the essay. “We may not be able to predict the new forms that medical research will take, but we can build a regulatory structure flexible enough to accommodate inevitable change — without waiting another 40 years.”
Many others in the scientific community have voiced objections to the proposed updates, with more than 2,000 comments submitted to the U.S. Department of Health & Human Services. The most common objections regard the use of biospecimens, such as urine, blood, tissue, cells, DNA, RNA and protein from humans.
There is no deadline for the decision, and the Office of Management and Budget would have to approve any proposal before it would go into effect.
The Office for Human Research Protections (OHRP), a division of the U.S. Department of Health & Human Services, has been working on the updates since 2011. The current review system was set forth in the 1974 National Research Act. Under the act, researchers in the United States are required to get permission from their institution and their subjects for research involving humans. This differs from most of the rest of the world, where the federal government, or one of its agencies, generally oversees the review process.
“The history of ethics review suggests that resistance to centralized review may stem from concern about who holds liability as well as from the desire to protect the distinct needs of specific populations,” Stark said in the essay.
The current system requires universities and other research institutions in the United States to form Institutional Review Boards (IRBs), to which researchers submit their plans for experiments involving human beings. The IRBs decide whether the research is ethical and can go forward. Many researchers have complained that IRBs are too conservative, believing that institutions are motivated by a desire to avoid lawsuits.
These older rules bear the mark of an ethics-review system designed to manage the first control group of human subjects at the U.S. National Institutes of Health, Stark said. A control group participates in an experiment or study but doesn’t receive treatment from the researchers. Instead, it serves as a benchmark against which the treated subjects are measured.
The origins of IRBs in managing control groups at the NIH explain the patchwork of rules in place in the present day — and suggest how they might change.
Under the new rules, social scientists who do their work solely with interviews wouldn’t need any approval at all, and consent forms for those who agree to participate in experiments would be simplified.
Also, researchers would be able to ask people for blanket consent to use biospecimens and other human samples, instead of the current system in which they must know in advance what they want to do, or seek fresh approval for each new experiment. Other changes include a web-based tool that would let researchers quickly find out whether they must seek IRB approval for their research.
Collaborations with colleagues at other institutions would be simplified under the new system. Currently, each institution’s IRB must approve research when two or more institutions work together. Under the new system, most multisite cooperative studies would require just one IRB.
“Under this centralized system, either one research site would take on review responsibilities for all sites involved or research teams would agree to use an IRB unaffiliated with any of the sites,” Stark writes in the essay.
Some objections to the changes are likely.
Many patient activists may take umbrage at blanket approval for the use of biospecimens. Some researchers and human subjects with limited access to the internet could be shut out because much of the new system will be housed on the web, and clinical-trials researchers might find cumbersome a requirement to post their consent forms online.
Stark, a researcher with Vanderbilt’s Center for Medicine, Health and Society, published a book in 2012 examining how Institutional Review Boards work from the inside and how they got that way. The book is Behind Closed Doors: IRBs and the Making of Ethical Research. It is based on her yearlong observations of decision-making inside three IRBs, and on her historical research using previously unknown internal NIH records as a Stetten fellow at the Office of NIH History.
Stark is now completing a book on the first control group at the NIH. It shows where the rules came from and how the origins of IRBs at the NIH explain the system in place today.
The Center for Medicine, Health and Society at Vanderbilt is an innovative, multidisciplinary center that studies the social and societal dimensions of health and illness. Its scholarship, teaching and wide-ranging collaborative projects explore medicine and science in a wide array of cultural contexts, while at the same time fostering productive dialogue across disciplinary boundaries.