It is more than 20 years since the evidence based medicine working group announced a “new paradigm” for teaching and practising clinical medicine.1 Tradition, anecdote, and theoretical reasoning from basic sciences would be replaced by evidence from high quality randomised controlled trials and observational studies, in combination with clinical expertise and the needs and wishes of patients.

Evidence based medicine quickly became an energetic intellectual community committed to making clinical practice more scientific and empirically grounded and thereby achieving safer, more consistent, and more cost effective care.2 Achievements included establishing the Cochrane Collaboration to collate and summarise evidence from clinical trials;3 setting methodological and publication standards for primary and secondary research;4 building national and international infrastructures for developing and updating clinical practice guidelines;5 developing resources and courses for teaching critical appraisal;6 and building the knowledge base for implementation and knowledge translation.7

From the outset, critics were concerned that the emphasis on experimental evidence could devalue basic sciences and the tacit knowledge that accumulates with clinical experience; they also questioned whether findings from average results in clinical studies could inform decisions about real patients, who seldom fit the textbook description of disease and differ from those included in research trials.8

Two decades of enthusiasm and funding have produced numerous successes for evidence based medicine. An early example was the British Thoracic Society’s 1990 asthma guidelines, developed through consensus but based on a combination of randomised trials and observational studies.9 Subsequently, the use of personal care plans and step wise prescription of inhaled steroids for asthma increased,10 and morbidity and mortality fell.11

Distortion of the evidence based brand

The first problem is that the evidence based “quality mark” has been misappropriated and distorted by vested interests. In particular, the drug and medical devices industries increasingly set the research agenda. They define what counts as disease (for example, female sexual arousal disorder, treatable with sildenafil,16 and male baldness, treatable with finasteride17) and predisease “risk states” (such as low bone density, treatable with alendronate).18 They also decide which tests and treatments will be compared in empirical studies and choose (often surrogate) outcome measures for establishing “efficacy.”19

Furthermore, by overpowering trials to ensure that small differences will be statistically significant, setting inclusion criteria to select those most likely to respond to treatment, manipulating the dose of both intervention and control drugs, using surrogate endpoints, and selectively publishing positive studies, industry may manage to publish its outputs as “unbiased” studies in leading peer reviewed journals.20

Use of these kinds of tactic in studies of psychiatric drugs sponsored by their respective manufacturers enabled them to show that drug A outperformed drug B, which outperformed drug C, which in turn outperformed drug A.21 One review of industry sponsored trials of antidepressants showed that 37 of 38 with positive findings, but only 14 of 36 with negative findings, were published.22

Too much evidence

The second aspect of evidence based medicine’s crisis (and yet, ironically, also a measure of its success) is the sheer volume of evidence available. In particular, the number of clinical guidelines is now both unmanageable and unfathomable. One 2005 audit of a 24 hour medical take in an acute hospital, for example, included 18 patients with 44 diagnoses and identified 3679 pages of national guidelines (an estimated 122 hours of reading) relevant to their immediate care.27

Marginal gains and a shift from disease to risk

Evidence based medicine is, increasingly, a science of marginal gains—since the low hanging fruit (interventions that promise big improvements) for many conditions were picked long ago. After the early big gains of highly active antiretroviral therapy for HIV28 and triple therapy for Helicobacter pylori positive peptic ulcer,29 contemporary research questions focus on the marginal gains of whether these drug combinations should be given in series or in parallel and how to increase the proportion of patients who take their complex medication regimen as directed.30 31

As the examples above show, evidence based medicine has drifted in recent years from investigating and managing established disease to detecting and intervening in non-diseases. Risk assessment using “evidence based” scores and algorithms (for heart disease, diabetes, cancer, and osteoporosis, for example) now occurs on an industrial scale, with scant attention to the opportunity costs or unintended human and financial consequences.26

Overemphasis on following algorithmic rules

Well intentioned efforts to automate use of evidence through computerised decision support systems, structured templates, and point of care prompts can crowd out the local, individualised, and patient initiated elements of the clinical consultation.8 For example, when a clinician is following a template driven diabetes check-up, serious non-diabetes related symptoms that the patient mentions in passing may not be documented or acted on.32 Inexperienced clinicians may (partly through fear of litigation) engage mechanically and defensively with decision support technologies, stifling the development of a more nuanced clinical expertise that embraces accumulated practical experience, tolerance of uncertainty, and the ability to apply practical and ethical judgment in a unique case.33

Poor fit for multimorbidity

Finally, as the population ages and the prevalence of chronic degenerative diseases increases, the patient with a single condition that maps unproblematically to a single evidence based guideline is becoming a rarity. Even when primary studies were designed to include participants with multiple conditions, applying their findings to patients with particular comorbidities remains problematic. Multimorbidity (a single condition only in name) affects every person differently and seems to defy efforts to produce or apply objective scores, metrics, interventions, or guidelines.37 Increasingly, the evidence based management of one disease or risk state may cause or exacerbate another—most commonly through the perils of polypharmacy in the older patient.38

Return to real evidence based medicine

Individualised for the patient

Real evidence based medicine has the care of individual patients as its top priority, asking, “what is the best course of action for this patient, in these circumstances, at this point in their illness or condition?”39

Importantly, real shared decision making is not the same as taking the patient through a series of if-then decision options. Rather, it involves finding out what matters to the patient—what is at stake for them—and making judicious use of professional knowledge and status (to what extent, and in what ways, does this person want to be “empowered”?) and introducing research evidence in a way that informs a dialogue about what best to do, how, and why.

Judgment not rules

Real evidence based medicine is not bound by rules. The Dreyfus brothers have described five levels of learning, beginning with the novice who learns the basic rules and applies them mechanically with no attention to context.42 The next two stages involve increasing depth of knowledge and sensitivity to context when applying rules. In the fourth and fifth stages, rule following gives way to expert judgments, characterised by rapid, intuitive reasoning informed by imagination, common sense, and judiciously selected research evidence and other rules.

In clinical diagnosis, for example, the novice clinician works methodically and slowly through a long and standardised history, exhaustive physical examination, and (often numerous) diagnostic tests.43 The expert, in contrast, makes a rapid initial differential diagnosis through intuition, then uses a more selective history, examination, and set of tests to rule in or rule out particular possibilities. To equate “quality” in clinical care with strict adherence to guidelines or protocols, however robust these rules may be, is to overlook the evidence on the more sophisticated process of advanced expertise.

Training must be reoriented from rule following

Critical appraisal skills—including basic numeracy, electronic database searching, and the ability systematically to ask questions of a research study—are prerequisites for competence in evidence based medicine.6 But clinicians need to be able to apply them to real case examples.51

Too often, teaching resources use schematic, fictionalised vignettes in which the sick patient is reduced to narrative “factoids” that can populate a decision tree or a score sheet in an objective structured clinical examination. Rather than focus on these tidy textbook cases, once they have learnt some basic rules and gained some experience, students should be encouraged to try intuitive reasoning in the clinic and at the bedside, and then use formal evidence based methods to check, explain, and communicate diagnoses and decisions.43

(…)

Reference:

BMJ. 2014 Jun 13;348:g3725.