Charles Binkley and Subramaniam Vincent
A three-step process and a framework of questions to make ethical reporting decisions, with recent convalescent plasma reporting as an example.
If you are a journalist reporting on COVID-19, you may be running into scientific studies, reports, and findings on a whole host of topics: vaccines, treatments, epidemiology, and more. Since March this year, new findings have emerged every week, from studies on how the virus spreads to antiviral treatments to vaccine trials. The recent reporting on the convalescent plasma trials is a good example. How do you assess the value of the scientific evidence so that your story does not overstate or understate the importance of the findings or their relevance?
To start off, let’s look at how doctors keep pace with and assess levels of evidence in the medical literature. There are 30,000 medical and scientific journals in the world, publishing over two million articles annually. The goal of medical research is to improve the health of society, in two main ways: by extending life (or preventing early death) and by improving quality of life (or decreasing human suffering). But not every report that gets published is relevant to actually treating patients. While some of the studies that appear in medical journals change how clinicians treat diseases, most do not.
But COVID-19 has recently thrust upon society the raw data from several important medical studies. What is more, there are likely many other related studies coming in the future. In the past, these results weren’t usually of much interest to the general population. Doctors would read and evaluate the studies, then decide if the results should be translated into clinical practice. This is called evidence-based medicine.
Because of the huge societal benefit that current COVID-19 trials may hold, however, results are now being reported to the public at the same time that doctors are given the data to review. Since not everyone has the training or experience to decide what should be implemented and what should be ignored, it is important to understand the different kinds of trials and the weight they hold in guiding medical decisions.
What matters most in a medical study is establishing that the effect doctors observe is genuinely due to the intervention they initiated and not to some other factor. Some studies are better at ensuring this cause/effect relationship than others. Applying this to journalistic work, here is a basic framework for evaluating medical evidence, with each level assessed by how solid the cause/effect relationship is.
LOW: Studies with the lowest-quality data are not very likely to ever be translated into clinical practice. The lowest-quality study is based on expert opinion or a limited number of cases. It basically says, “I am an expert in this area and I did this thing to some patients and this is what happened.” These are not very convincing studies. Imagine a study in which convalescent plasma was given to three not very sick patients with COVID-19 and they all survived. Or they all died. Or the patients were all very sick, and they all recovered. Or they all died. It is hard to know what that means.
HIGH: Studies with the highest-quality data are often a major driver of medical decisions. They are double-blinded, randomized, controlled trials (RCTs), often referred to as the gold standard of studies. Patients are randomly assigned to receive either the therapy being studied or the currently accepted best treatment. If there is no current treatment for the disease being studied, the therapy is compared to a placebo such as sugar or salt water. The study is double-blinded, meaning that neither the patient, the physician, nor the researcher knows who is getting what until after the study ends. A good RCT minimizes the variables so that doctors can be sure that the effect they observe is due to the intervention they are studying.
MIDDLE: Findings in this category require consistent observations across multiple studies, each conducted independently, before they are given serious consideration for clinical implementation.
When reviewing a study, journalists need to determine which bucket it fits into. The advantage of this approach to evaluating medical science findings is that it helps journalists build news judgment around a vocabulary shared with doctors. That means scientists will recognize the questions journalists ask, and in turn their responses can be interpreted by reporters in the same shared context.
Walkthrough of the convalescent plasma example
This played out with the convalescent plasma trial for COVID-19. Not only was the benefit of convalescent plasma for patients with COVID-19 difficult to determine from the trial, the trial’s true importance may have been overstated by government officials. This can leave the public wondering what to, and what not to, believe from the government.
Patients with COVID-19 were transfused with plasma from patients who had COVID-19 previously and recovered. There were a lot of variables: when in their illness patients were transfused, how sick the patients were when they were transfused, and how many antibodies the transfused plasma contained. The study neither compared patients who received convalescent plasma to patients who did not receive it, nor did it decide in advance to randomly assign similarly sick patients to either receive plasma or not.
The study enrolled over 35,000 patients, which is great. Say that they had randomized half of those patients to receive plasma from people who had recovered from COVID-19, and half to receive plasma from patients who had never had COVID-19. Also, let’s say that all the patients were equally sick and the level of antibodies from the COVID-19 patients was the same in every dose administered. Finally, and essential to prevent bias, neither the patients, nor researchers, nor clinicians, knew which patient was getting which kind of plasma. The results of that kind of trial would carry a lot of weight in making decisions about whether or not convalescent plasma was beneficial.
The middle bucket needs careful parsing
All other studies are somewhere between the lowest and the highest levels. Like the gold standard RCT, these studies will initiate a treatment and then look for an effect. They differ in two important ways: randomization and bias. In these other studies, it is not decided in advance which patients will get which treatment. Rather, the patients are treated, usually based on the discretion of their physician, and an outcome is observed. If there is a “control” or comparison group, it will be a group of similar patients, selected after the fact, who did not receive the treatment being studied. Again, this is usually because their physician did not order or recommend the treatment. Finally, since decisions about giving the treatments were made by individual physicians, no one can know all the facts that went into that decision.
Clearly, there is more of a risk that these middleweight investigations, mostly case-control and cohort studies, do not establish a true cause-and-effect relationship. However, the more of these studies that reach similar findings, the more convincing those findings become. They will never be as good as an RCT, but the more well-planned case-control and cohort studies that demonstrate the same result, the more likely there is a cause-and-effect relationship, and the more likely the intervention is to be accepted in medical practice.
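The three buckets above amount to a rough decision rule based on a study’s design. As a toy illustration only (the attribute names here are ours, not a formal taxonomy, and real appraisal involves far more nuance), the logic can be sketched as:

```python
def evidence_level(randomized: bool, double_blinded: bool, has_control_group: bool) -> str:
    """Rough evidence-level bucket following the LOW/MIDDLE/HIGH framework.

    Hypothetical helper for illustration; not a substitute for expert appraisal.
    """
    # HIGH: double-blinded, randomized, controlled trial -- the gold standard
    if randomized and double_blinded and has_control_group:
        return "HIGH"
    # MIDDLE: case-control or cohort studies -- a comparison group exists,
    # but patients were not randomly assigned in advance
    if has_control_group:
        return "MIDDLE"
    # LOW: expert opinion or a small case series with no comparison group
    return "LOW"


# Example: the convalescent plasma study had no pre-planned comparison group
print(evidence_level(randomized=False, double_blinded=False, has_control_group=False))
```

The point of the sketch is the ordering of the checks: randomization and blinding only earn the HIGH label when a comparison group is also present, and a comparison group alone, chosen after the fact, only gets a study to the MIDDLE.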
Here are some questions journalists can ask to frame their inquiry and next steps:
1. What is the level of medical evidence in this study? (Data). Does it fall into the LOW, MIDDLE, or HIGH category? Ascertain this with the scientists who published it.
If LOW, ask these questions of the scientists who issued the findings:
- Why is the study significant now? How does it move the needle?
- What questions does it leave unanswered, according to the scientists themselves?
- What are the scientists planning to do next?
If HIGH, ask these questions:
- What impact will this study have on direct patient outcomes? (survival, quality of life, cost savings)
- Is this a new and revolutionary finding or one that is accepted and just needed to be validated?
- Will this study change the current practice and how?
If it’s in the MIDDLE (i.e., it’s neither LOW nor HIGH), ask these questions:
- Do your findings support or challenge the findings of other investigators exploring the same issue?
- What further study is needed in order to translate your findings into clinical practice?
- How would you summarize your findings and their importance to the lay reader? Would you caution how the lay reader interprets the findings?
2. Next, corroborate your answers with what other medical professionals say. These would be peers of the scientists who issued the findings you are considering reporting on. This is a good way to let medical professionals weigh in on the level of evidence in a specific way that helps public education.
3. Use these answers to assess the findings’ newsworthiness in the public interest. If this study has been overstated in some other media outlet, the answers will help you decide whether you must do a story in a “debunking frame.” Overall, decide whether there’s even a need to report a LOW or MIDDLE evidence-quality study. How does it help? If you do report, explaining the three levels of evidence in medical literature and where this study fits would be helpful to the reader for its educative value. This is also transparency about your method of evaluation in action.
4. If the study falls into HIGH as corroborated by other medical professionals, you have a story of significance.