What Everyone Gets Wrong About Evidence-Based Medicine

The term “evidence-based” gets thrown around a lot these days. You can find it everywhere (even on this website), from marketers using it to promote products to clinicians using it to describe how they practice. But there are several problems with the use (and overuse) of this term.

The first problem is that most people are not using the term evidence-based medicine correctly. True evidence-based medicine includes three components:

  1. The best available research

  2. The clinical expertise of the provider

  3. The values and desires of the patient

If you look closely at the way most people use the term, they are only referring to the first component, the best available research, and not the other two. That approach is probably more accurately described as research-based or science-based medicine.

There is also the problem of inconsistent definitions. If you search for “evidence-based medicine,” you get pages of results with a lot of variation in how the term is defined, which makes it hard to pin down exactly what it means.

In reading much of the criticism of both evidence-based medicine (EBM) and a purely science- or research-based approach, however, the biggest problem with any of these approaches lies in blind trust in the research. Those who ride or die by what is or is not found on PubMed are the problem.

Evidence-Based Is Not Blind Trust in Research

Did you know that around 50% of the treatments used in general medical practice are not proven to work, and about 5% are considered pretty harmful but are still being used? [1]

Of the roughly 2,500 treatments evaluated by the British Medical Journal, the ratings break down like this: [1]

  1. 15% were rated as beneficial

  2. 22% were rated as likely to be beneficial

  3. 7% were partially beneficial and partially harmful

  4. 5% were unlikely to be beneficial

  5. 4% were likely to be ineffective or harmful

This leaves the remaining 47% (100% minus the 53% accounted for above) rated as of “unknown” effectiveness. I think it’s also worth noting that I was unable to access the original source of this information; what I have cited here is a secondary source that lists the original.

The article that discusses these statistics was written in response to criticism of the world of complementary and alternative medicine, with critics stating that there is no research to validate many of these alternative practices. The problem with that criticism is that it holds the world of alternative medicine to a higher standard than the conventional medical world meets itself.

Nowhere is this more obvious than when you start making very basic dietary recommendations to patients rather than suggesting they go straight for a medication to treat their symptoms. Many of these approaches (such as Paleo, primal, or even low-carb) are criticized because there is no randomized controlled trial examining their efficacy (the RCT being the gold standard study in the world of medicine). RCTs (especially in the realm of diet and nutrition) have their own set of problems, which I’m not going to get into here, and are not free from bias. While they may represent strong data in a lot of cases, blind trust should never be placed in a research study when caring for patients.

I think that much of this blind trust in RCTs has also caused the medical community to discredit the power of observation. This is where the “clinical expertise of the provider” piece of evidence-based medicine comes in. As a clinical provider, yes, you need to be aware of the best, most current evidence, but you also need to rely on what you have seen in your own practice, since not all possible outcomes can be accounted for in these studies. The skill of observation is the very thing that those who rely solely on research criticize, yet it is also what most of modern medicine rests on (there are no rigorous studies backing up many of these practices and treatments, simply the observation that something does or doesn’t work).

There are also many cases where an RCT simply won’t work. Take the example of fluid resuscitation in patients with penetrating torso injuries. One study looked at whether immediate or delayed fluid resuscitation led to better outcomes. It used a prospective cohort design (an observational study). There was some degree of randomization in which patients got immediate vs. delayed fluid resuscitation, but there’s no way it could have been designed as an RCT, because the control group would have had to receive no IV fluids at all, and that’s not ethical. This doesn’t mean the results of the study aren’t valid. In fact, the researchers determined that delayed IV fluid resuscitation had better outcomes than immediate resuscitation, which is not what I would have expected. [2]

So in this case, because the finding (that delayed fluid resuscitation was superior to immediate) came from observation rather than an RCT, does that mean we should keep doing what we did before (immediate fluid resuscitation)? Absolutely not! We are supposed to use the best available evidence, and in this case the best available evidence is not an RCT, because an RCT would have been unethical.

I recently heard a podcast in which three clinicians discussed the treatment of a particular disease. They were comparing what the research recommended with what they had actually observed to work, and for all three of them, the two were not the same. So when it comes to medical treatment, would you rather have a clinician who blindly follows the research, or one who acknowledges the research but also takes into account what they have seen work (or not work)?

Other Problems with Research

Removing clinician judgment from the practice of evidence-based medicine compounds the problem of research bias. Yes, we all have biases that are hard to take out of the equation, but the even bigger problem is that much research is biased in ways that change the outcomes. Whether it’s how the hypothesis is formed, who is paying for the study, or how the results are reported, it’s hard to find studies that don’t have some level of bias making them less reliable.

It has been suggested that most published research findings are false. [3] The smaller the study, the bigger the potential financial benefit from the findings, and the hotter the topic, the less likely the findings are to actually be true; the source of the funding matters as well. And large, otherwise reputable institutions are not exempt from these problems. In late 2018, a Harvard physician was found to have falsified data in his cardiac research studies. [4]

The Importance of Values in Evidence-Based Guidelines

Patient values and preferences are one of the key components of evidence-based medicine, and yet in my experience this piece is lacking in medicine in general. Clinicians and practitioners often tell patients what to do rather than asking questions and helping them understand the information they need to make the decision that is best for them.

Even if a treatment is cost-effective and known to be clinically effective, it still may not be what the patient chooses, and that’s okay so long as they understand the risks and possible outcomes of their decision. A good example is a cancer patient choosing palliative care instead of undergoing chemotherapy. While chemotherapy has been demonstrated to be effective against many types of cancer, it also has many side effects. It is up to the patient to decide whether they want to risk those side effects for the chance at a longer life or go another route. No amount of evidence can make that decision for them.

Many guidelines and standards of care are also derived from research that focuses on a single disease, but the majority of people don’t have just one disease. Sure, multiple guidelines can be applied, but this results in a growing list of medications and other interventions, which can be dangerous and is not truly based on the best available data (patients with multiple morbidities are almost always excluded from clinical trials).

Many preventive interventions are based on what might be most beneficial for large populations, yet they have very little benefit for any one individual. Recommending a low-fat diet to reduce cholesterol and heart disease is a great example: for the individual, a lifetime of low-fat eating might prolong their life by about three months, but in public health terms, making that blanket recommendation to everyone is how you save one or two lives across the population. The problem is that it’s impossible to know how many people are being harmed by those recommendations in order to save those one or two lives.

No Clinical Practice, No Evidence-Based Recommendations

One final problem I have with the world of so-called evidence-based medicine is the huge number of internet “influencers” and practitioners who don’t actually see patients but who make recommendations to people because there are studies showing X, Y, or Z to be true. While I absolutely appreciate their interpretation of the available evidence (in most cases), because keeping up with research is hard to do, if they aren’t seeing patients, there can be no truly evidence-based recommendations. Why? Because there is no way to take into account the observations of the clinician or the values of the patient.

I’ll go back to the example I mentioned earlier of the three clinicians discussing what the research says about treating a specific condition (I can’t recall the podcast or the condition). Had they blindly followed the evidence, they might have done more harm than good. Instead, they acknowledged the research but, in the end, did what they had observed to work for their patients.

Medicine is complex, and it’s unfortunate that the misuse of this term even needs to be discussed; it only complicates matters and confuses people.

TL;DR: Consider the source of your information. Getting health information from someone with fancy initials after their name who doesn’t see patients probably isn’t going to get you the best information. Neither is getting health recommendations from someone who relies only on the research. True evidence-based medicine takes into account the best available research, the expertise of the provider, and the values of the patient. If you’re not getting all three, it’s not evidence-based.