
Evidence Based Medicine: Fact or Fantasy?

by William R. Ware, PhD

INTRODUCTION

International Health News attempts to present results of medical research thought to be of interest to individuals who wish to retain their health or solve some medical problem. Thus the question in the title is an important one. The books listed in the July-August IHN provide an excellent reading list and a source of documentation for some aspects of this topic.

Imagine a medical student asking in class: Professor, how do we know that vaccines like the one for mumps are really almost 100% effective? Professor: Because the companies that manufacture them are required to carry out extensive tests of efficacy and safety and to submit the results, in the US to the FDA. This process is an important aspect of evidence-based medicine.

News Item, June 27, 2012 (Courthouse News Service): A primary care organization based in Alabama has filed a lawsuit against Merck after gaining access to a complaint filed in 2010 by two ex-Merck virologists who exposed what they alleged to be fraudulent testing designed to make the mumps vaccine appear to be 95% effective when it was far from that; they claimed this had been going on for a decade with the full knowledge of upper management. The implications with regard to both disease control and expenditures by government and private health care systems on less than satisfactory (in fact allegedly fraudulent) products need no further elaboration. Hundreds of millions of dollars are involved. Readers will recall that this was the company involved in the Vioxx scandal.

News Item, June 22, 2012 (Wall Street Journal): The two virologists also filed a whistleblower lawsuit in the U.S. District Court for the Eastern District of Pennsylvania in 2010, which was ignored by the government but has now been unsealed and apparently will proceed. The lawsuit seeks three times the damages incurred by the US plus the maximum amount allowed under federal whistleblower laws. The unsealing of this lawsuit prompted the Alabama lawsuit described above. Note that these are simply allegations that have not been proven.

No reasonable, educated person would argue with the proposition that the practice of medicine should be based as much as possible on solid scientific evidence of efficacy and on a benefit-to-risk balance in which both are accurately defined and determined, while at the same time taking into account the broad, individual characteristics of the patient, integrated into the physician-patient interaction and its outcome. At present, a significant fraction of all interventions and practices in modern medicine do not meet these evidence criteria, simply because the necessary studies have never been conducted, cannot be conducted, or are deemed unnecessary. Nevertheless, medicine is increasingly visualized as becoming a hard science, and the phrases "scientifically demonstrated" and "scientifically proven" are not uncommon in discussions of therapeutic issues.

While it has been around and evolving for decades, Evidence Based Medicine (EBM) is now the "in" thing: it has its own journals, is central to medical education, and might be regarded by some as having acquired dogmatic or pseudo-religious overtones. Those devoted to promoting and enforcing EBM look with disdain on those who practice integrative or complementary medicine, reject most so-called natural therapies, view with great suspicion any therapy based on case series, case histories or other anecdotal evidence, exhibit attitudes that discourage "thinking outside the box", and regard the new era as obviously a huge advance over the days when the "art" and "clinical judgement" of medicine were based on years of personal experience combined with that of other practitioners and knowledge of published case histories. Historically this was an important aspect of the practice of medicine.

THE ISSUE OF EVIDENCE

Dr. Prathap Tharyan asks in the title of a recent paper, "Evidence based medicine: can the evidence be trusted?"1 EBM by definition requires evidence. Evidence must be correct, not false or defective; it must reflect reality; and, as applied to the individual patient, it must be applicable and appropriate. The evidence component of EBM is obtained through research, laboratory and clinical studies, and product quality control, and the extent to which EBM can justify its current exalted position in medicine depends on the scientific and ethical standards involved. All this is rather obvious. But there are serious problems with the evidence.1 In fact, this is where the wheels come off the wagon, as will be discussed in this review.

Interestingly, in recent discussions in the literature on the pros and cons of EBM, validity, quality, and questions concerning fraud and dishonesty are rarely discussed or even viewed as issues. This is surprising. If one considers that the major activities of evidence based medicine in the end involve the writing of prescriptions, the administration of vaccines and the installation of devices, then the credibility and ethics of the companies involved are indeed an issue, and as the news items introduced above indicate, we can look more and more to the law courts and their public-domain documents to reveal some disturbing answers to this critical question. It should be unsettling to everyone, from patients to those promoting dogmatic adherence to the precepts of EBM, that the companies supplying the main therapeutic agents are so frequently in the news in the context of congressional hearings concerning wrongdoing, lawsuits, class action suits, huge fines for criminal activity and huge civil settlements.


GOOD STUDIES, BAD STUDIES

EBM arranges the various study types which constitute clinical evidence into hierarchies, of which there are a number of versions.2 Generally, at the top sits the randomized controlled trial (RCT) and meta-analyses of such trials. Near the bottom are so-called observational studies, such as cohort follow-up studies and case-control studies, and at the very bottom are case studies and expert opinion.2 Studies involve observing events, measuring biomarkers, disease surrogates or other variables considered important, evaluating the success of a treatment by various measurement techniques, and observing adverse effects. However, it must be emphasized that a significant aspect of the science involved is the science of statistics, and in many instances "scientifically proven" translates into "statistically demonstrated as significant" according to arbitrary but generally agreed-upon standards. In fact, some would say that a heavy reliance on statistics distinguishes the soft from the hard sciences. This is not a criticism but a recognition of the fundamental nature of studies involving variable human populations, or more generally, of attempting to answer questions when large numbers of variables with wide ranges are involved, some of which can only be approximately measured. The science of EBM is more closely related to handicapping in horse racing than to establishing the crystal structure of a protein, measuring the NMR spectrum of some organic molecule, or determining the properties of some new subatomic particle.

The rise of the RCT to be king of the mountain can in part be ascribed to the frequent phenomenon of negative RCTs contradicting observational studies, in some cases observational studies which have had a large impact on clinical practice.3 Thus the growing attitude that one cannot trust observational studies, even large ones. Critics of observational studies can list a number of what they view as sensational failures, and hormone replacement therapy is one of the most commonly cited examples. In fact, the real reasons for this disagreement, and for other cases of disagreement, are still being debated, and the criticism is an oversimplification. Furthermore, observational studies are probably the best way to find adverse drug effects, the only way to study toxins to which no one would willingly be exposed in an RCT, and the only practical approach to many questions regarding health, such as nutrition and lifestyle. But observational studies are merely hypothesis generating; they do not prove a causal relationship. Nevertheless, they can be part of a body of convincing circumstantial evidence regarding causality which in some cases cannot be easily dismissed. Observational studies thus have a place in EBM, but one that is controversial and debated.

Clinical trial results are of great financial significance to the companies involved and most trials are sponsored, executed and interpreted by companies. Thus there is a real potential for conflict of interest, bias, the temptation to rig studies, cheat, lie, conceal data adverse to the company interests, alter results to obtain an indication of benefit, play games with the statistics, and manipulate the regulators. These practices indeed occur and have generated a considerable popular and academic literature partly based on hard evidence from the public records of numerous court proceedings, leading to convictions and billions in criminal fines and civil settlements.

Suspicions are also raised when studies supported by industry and those with private or government support which address the same issue are compared. Industry supported studies almost always achieve positive results, but when the sponsor has no financial interest, the percentage of positive studies drops dramatically. Since the vast majority of clinical trials are sponsored by the industry, this gives one pause to reflect on the integrity of the main pillar holding up EBM.

Clinical studies are frequently farmed out to commercial study providers who then run them, in some instances with little oversight from the sponsor and in countries with few regulations concerning the ethical conduct of medical research. If one ignores ghost writing, academic involvement in clinical trials has dropped dramatically in recent years. In some studies recruited participants are hired guinea pigs drawn from a narrow social stratum not representative of the population visualized as end users of the drug or treatment.4 Companies are able to control the data collection so the individual investigators may never see the whole body of results. This is a problem with the multicenter studies that EBM likes so much. The hiring of ghost writers for important clinical papers presenting results in major journals is common practice. The ghost writer of course works for the sponsors. Academics not directly involved with the design or conduct of the study put their names on the journal articles presenting the results to provide credibility.

Documents obtained under court order and entered into the public domain have become a gold mine for individuals interested in documenting the inner workings of the pharmaceutical industry in the context of both clinical studies and marketing, and remarkably, these researchers manage to get their results published in major journals. Any student of this subject will soon conclude that the answer to the question posed by Professor Tharyan is no: the evidence cannot be implicitly trusted, even though many studies are undoubtedly honest and unbiased. It is not clear that the present system can overcome these problems, although considerable effort is being expended to curb some of the practices which seriously undermine the credibility of EBM.

SPIN THE RESULTS

Another issue concerns the standards for acceptable efficacy. RCTs frequently count events in the treated and control groups, from which the absolute benefit or harm can be calculated. If 3% in the treated group and 4% in the control group experienced an endpoint event (such as a gastric bleed, stroke or heart attack, depending on what was being studied), then the absolute risk reduction was 1 percentage point, the number needed to treat (NNT) over the period of the trial to prevent one event was 100, and 99 were treated with no benefit, in some cases at considerable cost and perhaps harm. But the relative risk reduction, going from 4% to 3%, was 25%, a much more impressive result!
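The arithmetic is simple enough to sketch. The following Python fragment uses the hypothetical 4% vs. 3% event rates from the paragraph above (the function and variable names are mine, purely for illustration) to show how the same trial yields both the modest absolute numbers and the impressive-sounding relative one:

    def trial_summary(control_rate, treated_rate):
        """Absolute risk reduction, relative risk reduction and NNT."""
        arr = control_rate - treated_rate   # absolute risk reduction
        rrr = arr / control_rate            # relative risk reduction
        nnt = 1 / arr                       # number needed to treat
        return arr, rrr, nnt

    # The 4% control and 3% treated event rates from the text above.
    arr, rrr, nnt = trial_summary(0.04, 0.03)
    print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
    # ARR = 1.0%, RRR = 25%, NNT = 100: the same trial, described two ways.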

Studies also compute odds ratios or hazard ratios, which likewise yield relative risk reductions. An odds ratio of 0.75 represents a risk reduction of 25%. These odds or hazard ratios can be subjected to extensive statistical manipulation to correct for confounding, and even more sophisticated analysis can be done to correct for other aspects of the trial. For these measures of trial results, focus shifts to the so-called confidence interval (CI) as a measure of statistical significance. The universally used 95% CI gives the range of values for the ratio consistent with the data at the 95% confidence level; for the result in question to be statistically significant, the CI must not contain 1.00, the value corresponding to no effect. However, one sees studies which call attention to what is viewed as an important result when the odds ratio is 1.01, a 1% relative risk increase, presented as significant because the confidence interval is 1.0005 to 1.015. This is charitably called a small or modest effect size, but in fact it is probably meaningless and merely reflects the low standards of the journal and referees involved. Conservative clinicians like very large effect sizes, e.g. odds ratios of 0.2 or smaller, or 2.0 or greater, before they show much interest. This reflects the concern that for ratios near 1.00 the probability of unrecognized confounding is high, even if the result is statistically significant. Furthermore, statistical significance does not automatically mean clinical significance, a point that seems to have been forgotten by some journal editors, referees and the media.
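As a minimal sketch of how such an interval is conventionally obtained (here with Woolf's log method; the 2x2 counts are invented for illustration and come from no actual trial):

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and 95% CI via Woolf's log method.
        a/b = events/non-events (treated); c/d = events/non-events (control)."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Invented counts: 30/970 events in the treated arm, 40/960 in the control arm.
    or_, lo, hi = odds_ratio_ci(30, 970, 40, 960)
    print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    # OR = 0.74, 95% CI 0.46-1.20: despite an impressive-looking 26% relative
    # risk reduction, the interval contains 1.00, so the result is not significant.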

Trial results look best when presented in relative terms, and this is the almost universal practice. A 40% risk reduction is much more impressive than needing to treat 100 patients to produce one beneficial result. The patient is impressed with 40%, has no way of knowing what is really going on, and cannot calculate the absolute change from the relative change without more data. Relative benefits are emphasized in most guidelines. The NNT may be downplayed, never mentioned to the patient, or even unknown to the physician. But these are population studies, and the NNT represents how many in the population studied must be treated to prevent one adverse event or produce one beneficial event. The question then arises: what is an acceptable number? There is no consensus. It is a judgement call and in fact arbitrary, especially when the harms are poorly identified, if at all. How much importance should be assigned to probabilities derived from large trial populations when the issue is whether or not to treat an individual patient? Population based studies in fact involve a wide range of patient characteristics and in some cases a rigged population.1

Critics claim that harmful events are downplayed, that studies are too short to reveal them or may be rigged to produce low numbers of such events, and that post-approval reporting after marketing is underway is negligible; thus risk/benefit analysis is complex or frequently impossible. Many published studies make it difficult if not impossible even to derive the NNT or the number needed to harm (NNH) from the tabulated data.

RCTs involve groups of participants that may not be representative of the populations needing treatment for a disorder. After all, in some cases participants are recruited by paying physicians rather large sums per subject to get them into a study. In other cases, subjects are recruited via tabloid-class newspaper advertising and paid to participate (the guinea pigs). When studies are farmed out, the commercial firms take over everything, including recruiting, and frequently operate in small countries with minimal supervision and strong incentives to please the sponsoring company. For many studies one can also question the required table in the published report comparing the characteristics of the placebo and treated groups. Some study designs have a pre-trial period in which subjects deemed unsatisfactory in terms of the desired outcome are disqualified, an obvious source of potential bias impacting the application of the results to the intended end-user population of the drug or procedure in question.

Modern statistical software allows those doing the statistical analysis to effortlessly try a large number of approaches, variations and assumptions and select the results that will appear in the final publication. Most readers will be unable to judge whether the most appropriate statistical approach was used or whether there was bias in the statistical manipulation.
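The consequence is easy to demonstrate. The toy simulation below (entirely hypothetical; it models no actual trial) generates data with no real treatment effect and then mimics analytic flexibility by testing many arbitrary subgroups; "significant" findings appear far more often than the nominal 5%:

    import math
    import random
    import statistics

    def approx_p(x, y):
        """Two-sided p-value from a crude two-sample z-test (normal tails)."""
        se = math.sqrt(statistics.variance(x)/len(x) + statistics.variance(y)/len(y))
        z = abs(statistics.mean(x) - statistics.mean(y)) / se
        return math.erfc(z / math.sqrt(2))

    random.seed(1)
    hits = 0
    for _ in range(200):                      # 200 simulated null trials
        treated = [random.gauss(0, 1) for _ in range(200)]
        control = [random.gauss(0, 1) for _ in range(200)]
        # "analysis flexibility": test 20 arbitrary subgroups per trial
        hits += any(approx_p(random.sample(treated, 50),
                             random.sample(control, 50)) < 0.05
                    for _ in range(20))
    print(f"{hits/200:.0%} of null trials show a 'significant' subgroup")
    # Typically far above the nominal 5% false-positive rate.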

THE META-ANALYSIS - A PLATINUM STANDARD?

If the RCT is the gold standard strived for in EBM, then the meta-analysis (adjusted or weighted pooling of study results) of RCTs has been called the platinum standard.5 Meta-analyses are held in high regard and have a profound impact on guidelines and on views of the merits of a therapy. But the meta-analysis is not a simple exercise. The results have utility only if the studies selected involve quite similar, i.e. homogeneous, populations. Prior to amalgamating the data, the reviewers must select the studies and then assign weights to them according to a set of guidelines. These weights can, for some analyses, determine whether the results favour treatment or placebo, or treatment A vs. treatment B. There is an unsettling objectivity problem in this process, recently demonstrated when a group of raters from the same department were given 165 trials to rate according to the Cochrane Collaboration guidelines for bias assessment.6 Not only was the inter-rater agreement poor, the assignments significantly impacted the results of the subsequent meta-analysis.

In addition, reviewers cannot arbitrarily exclude studies. If studies with negative outcomes have been suppressed or concealed so that it is impossible to consider them, the whole analysis is invalidated. This was the case with meta-analyses of the efficacy of antidepressants. When they were repeated after the FDA revealed and provided negative study results that qualified for inclusion but had been suppressed, the beneficial effect disappeared completely except for very severe depression, a result that caused quite a stir in the halls of psychiatry.7,8 The point is that meta-analyses are not simply a statistical tool for improving or achieving acceptable significance by combining studies; they represent an exercise that offers its own opportunities for bias and lack of objectivity, which undermines their position as the platinum standard of EBM. Anyone can do the calculations by simply purchasing one of a number of commercial computer programs, but then the challenge begins. The above considerations become more critical when a large number of studies of variable size, most of which fail to reach statistical significance, yield through meta-analysis a small effect size (e.g. an odds ratio of, say, 0.9 for a protective benefit). Is it clinically significant?
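To see mechanically how suppression distorts a pooled estimate, here is a minimal fixed-effect (inverse-variance) meta-analysis sketch; all effect sizes and standard errors are invented for illustration and stand in for log odds ratios:

    import math

    def pooled(studies):
        """Fixed-effect inverse-variance pooling of (effect, SE) pairs,
        on the log odds ratio scale."""
        weights = [1 / se**2 for _, se in studies]
        est = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
        return est, math.sqrt(1 / sum(weights))

    # Invented studies: three favourable published trials, two suppressed
    # null/negative trials (log odds ratio, standard error).
    published   = [(-0.30, 0.10), (-0.25, 0.12), (-0.35, 0.15)]
    unpublished = [(0.05, 0.08), (0.12, 0.09)]

    for label, studies in [("published only", published),
                           ("all studies   ", published + unpublished)]:
        est, se = pooled(studies)
        print(f"{label}: OR = {math.exp(est):.2f} "
              f"(95% CI {math.exp(est - 1.96*se):.2f}-{math.exp(est + 1.96*se):.2f})")
    # With the suppressed studies restored, the pooled CI crosses 1.00
    # and the apparent benefit disappears.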

BMJ EXPOSES HUGE PROBLEMS IN CLINICAL STUDIES

Last year the British Medical Journal put out a call for papers concerning the extent, causes and consequences of unpublished evidence from clinical trials (not infrequently trials with negative or null results or showing too many adverse side effects). On January 3 and 4 of this year the results were published online. Lehman and Loder, in their editorial,9 review the highlights of this cluster of papers, prefacing their remarks with the comment that it may come as a shock to clinicians that the evidence from clinical trials they depend on for guidance is not necessarily relevant, reliable or properly disseminated. In fact, a large proportion of evidence from human trials is unreported, and much of what is reported is reported inadequately. One study incorporated unpublished data into existing meta-analyses of nine drugs approved by the FDA between 2001 and 2002. The reanalysis produced identical efficacy results in only 7% of cases, and the remainder were equally split between showing greater or lesser benefit.10 Lehman and Loder comment that most of the interventions currently in use and recommended in guidelines are based on trials carried out before mandatory pre-trial registration, and they describe the difficulties investigators report in acquiring a complete set of data, where searching for and obtaining data from unpublished trials can take several years.

One paper examined the impact of the requirement that, as of 2005, prior trial registration be a condition of later publication in many journals, and of the additional requirement that, for publicly funded studies in the US, a summary report be published within 30 months of study completion. Ross et al11 found that, for publicly funded studies between 2005 and 2008, more than half of completed trials failed to report within the required time. Another study found that compliance with a 2007 regulation that shortened the reporting time for a summary of completed studies to 12 months was a dismal 22%.12

The editorialists also comment on the interesting phenomenon that using the search term "randomized controlled trial" misses a large number of papers indexed by Medline (PubMed), which adds to the difficulties of searching for trials when doing systematic reviews and meta-analyses. Their overall conclusion:

"What is clear from the linked studies (this BMJ set) is that past failures to ensure proper regulation and registration of clinical trials, and a current culture of haphazard publication and incomplete data disclosure, make proper analysis of the harms and benefits of common interventions almost impossible for systematic reviewers. Our patients will have to live with the consequences of these failures for many years to come. The evidence we publish shows that the current situation is a disservice to research participants, patients, health systems, and the whole endeavour of clinical medicine."

Not a good report card but consistent with a considerable body of earlier critical literature and highly relevant to the issue of the trust one can place in the evidence used in EBM.

TWO CONCRETE EXAMPLES

Two examples should make everyone think twice about the evidence aspect of EBM. One is the Vioxx saga involving Merck; the other is the infamous Study 329 from GlaxoSmithKline concerning the pediatric application of the drug Paxil. Both have involved lawsuits, and both in retrospect put patients given these drugs at increased risk of serious side effects that were, on the basis of court records and other evidence, suppressed by the manufacturers. Interestingly enough, the media played an important role in bringing these scandals to the attention not only of the public but of Congress and various state attorneys general. Eric Topol's book13 contains a first-hand description of the Vioxx story, David Healy's book14 deals with the Paxil matter, and a critical analysis of the Paxil case has also appeared in the medical literature.15 The Paxil story provides a good example of ghost writing: its key study report, published in a high-impact journal, was in fact written by a ghost writer employed by a private company engaged in preparing articles for publication.14

In Pharmageddon (Chapter 4) David Healy discusses a serious issue that was involved in both of these scandals: the practice of restricting access to trial data, even for individuals involved in the study and authoring reports.14 His discussion includes the impact of commercial research organizations, off-shore studies and multicenter studies, all of which provide the opportunity to restrict knowledge of the full data set for a study such that aspects unfavourable to the sponsor are buried within the veil of company secrecy. He also describes the practice of coding as non-compliant subjects who in fact suffered serious side effects that should have been reported as such, providing a rich source of bias.

THE BENEFIT TO RISK ISSUE

The proper implementation of EBM must obviously consider both beneficial results and harmful events; otherwise it is a farce. History teaches that adverse events are suppressed or underestimated, that studies are rigged to give low adverse event rates, and that there are even statistical problems with the analysis of the probability of adverse events that differ from those for beneficial results. Vioxx has become the textbook example of marketing with full but suppressed knowledge of serious side effects. Such practices have resulted in a number of pharmaceutical companies becoming heavily fined felons and settling civil suits for huge amounts. There is also the fundamental problem that clinical studies of efficacy are too short and too small to uncover side effects that in the long run may prove serious enough to strongly discourage the use of an intervention, but this is unknown at the time of regulatory approval and of the immediate, intense marketing, including expensive prime-time TV ads, especially on the major networks.


GUIDELINES

An essential aspect of EBM involves the official guidelines from professional societies and governments. They provide a perceived solution to the many problems associated with critical appraisal, keeping up with the literature, and knowing what the "experts" in a field think. Adherence to guidelines is being used to judge physician performance in group or hospital staff settings and then to pressure adherence to EBM and discourage the use of old fashioned judgement, intuition and treatment based on experience. Some physicians no doubt find this highly offensive. It is also an issue in reimbursement and insurance coverage of treatments. In the limiting scenario, accountants and MBAs will take over control of how medicine is practiced. Insurance companies already play a significant role in exercising treatment control, based in part on guidelines, expert opinion and the dictates of EBM.

Electronic records allow the matching of patient presentation and test results with treatment received and allow easy analysis of medical staff performance, even providing a list of nonconformists. Hospital discharge prescriptions can be easily compared with guidelines, with the same result. This whole approach is based on the results of studies, some of which, considering past experience, are certain to be wrong, and on guidelines, some of which are strongly influenced if not more or less written by the pharmaceutical industry, and some of which do not even represent the unanimous views of the writing panels but reflect medicine by democratic vote. This is also the way regulatory drug approval decisions are made, if one takes a charitable view. One can imagine the endpoint: a patient sits at a computer terminal and answers a series of questions concerning the reason for the office visit, and a nurse or secretary inputs data such as blood pressure, weight and lab results. The computer then examines the patient database, decides on the most probable diagnosis, and the printer spits out the analysis and a prescription. The physician on duty, like the supervisor of a set of trading desks on a bank's bond trading floor, checks this over and may be satisfied without a classical office visit. This is not far removed from what goes on in the 10 minute office visit if guidelines rule the practice. It also illustrates the concern of many physicians that full enforcement of the EBM culture will destroy personal medicine as they know and practice it.

Success in treating to targets is easy to monitor from electronic records. But making this a physician or clinic performance parameter ignores the controversial nature of some high-profile targets. For example, a recent editorial in a major journal, directed at the developers of the new cholesterol treatment guidelines (ATP-IV), provided three reasons to abandon LDL targets:16

  • There is no scientific basis to support treating to LDL targets.
  • The safety of treating to LDL targets has never been proven.
  • Tailored treatment is a simpler, safer, more effective and a more evidence-based approach.

The goal of this challenge was to encourage change that would reduce both under- and overtreatment and promote appropriate treatment with statins. Blood pressure targets provide another example,17,18 and along with LDL they represent common targets in wide clinical use today. Raging controversy regarding screening for breast and prostate cancer has been in the headlines for over a year.

Guidelines, viewed as a centerpiece of EBM, are in reality only partly based on RCTs and associated meta-analyses. Other evidence is of necessity given weight. The writing groups must also rely on their own expert clinical experience and may be influenced by their clinical backgrounds and a variety of biases. Disagreement may be resolved by a process which results in guidelines becoming in part a democratic exercise. The end results then appear in journals of the highest impact and become like a catechism with profound influence, held up as how to practice EBM.

Thus standards of practice have their limitations, not only because they can be controversial yet accepted, but because they embody a one-size-fits-all approach. This approach works very well when you take your BMW to the dealer with a problem. Guidelines and protocols will probably be followed rigorously. Highly trained technicians analyse the problem by plugging the car into a computer. The problem is identified, will probably correspond to the "patient's" symptoms and presentation, and will be fixed and the car then tested. But cars are simple compared to humans.

IMPLICATIONS

Thus, as the designers and followers of EBM set out to view and use the evidence, they are confronted with studies that may be flawed, rigged, biased, poorly designed, or presenting an incomplete picture of the totality of the data, and they then consider with great respect meta-analyses of multiple studies which have their own potential for significant if not fatal flaws. Many of those who believe they must rely on the evidence of EBM are not skilled in or familiar with biostatistics, and studies have suggested that many are unable to knowledgeably analyze the tables of results in the papers that contain the evidence, even if they overcome the subscription firewall to the full text. Such are the problems when dealing with the complex problems of human illness in an atmosphere of severe financial conflicts of interest and mind-boggling amounts of money at play, while providers grapple with continuing to fund health care in the face of a huge demographic problem and a culture that pays only lip service to true, effective prevention.

The concept of EBM is an oversimplification of complex therapeutic and screening problems and is scientific only in the context of statistics, and as Eric Topol points out in his book The Creative Destruction of Medicine, EBM must evolve to become personalized medicine, not population medicine.13 Yet EBM permeates and distorts guidelines, which are key tools in everyday practice. The scientific basis is mostly the science of statistical evaluation of data gathered in population studies, and all that results is a set of probabilities; hence the above analogy to handicapping horse races. Viewed critically, EBM, although undeniably a significant step in the right direction, appears to be a fantasy, at best simply a work in progress with a lot remaining to be accomplished. Yet it provides a pedestal on which critics of alternative and integrative medicine can stand and forcefully present their condemnation in the name of "science".

Some of the reasons why studies cannot be trusted are:

  • The dose selected is too small.
  • If a natural product is involved, the dose in the treated arm is not significantly greater than the intake in the placebo group.
  • The placebo is easily detected, compromising blinding.
  • Pre-trial exclusions rig the study in favour of the treatment.
  • Selection of the study population prior to randomization biases the study in favour of benefit.
  • The population studied does not correspond to the intended or expected users of the treatment or drug.
  • The drug or treatment works only on a single phenotype, but the non-responders dilute the results to insignificance, and a potentially important benefit is missed.
  • The study population is too small.
  • Errors occur in assessing compliance.
  • The drop-out rate is too high.
  • The placebo group is contaminated by individuals taking the treatment or drug or undergoing a screening procedure.
  • The endpoint is a surrogate and thus cannot be rigorously extrapolated to clinical benefit, which was not measured.
  • Critical parameters are omitted from the data used to show the treatment and control groups are essentially equivalent.
  • Multicenter studies differ center by center on one or more critical aspects of the study.
  • Study data are selectively included in the statistical analysis, especially when multicenter studies are involved and each center sees only its own data.
  • Statistical analysis may be done repeatedly, using a variety of different approaches and assumptions, until the desired result is obtained.
  • Several studies are conducted, but only the one with a favourable result is revealed, the negative studies being suppressed.

J.P. Ioannidis presented four reasons why most published research findings are false.19

  • The smaller the studies, the less likely the findings are true.
  • The smaller the effect, the less likely the findings are true.
  • The greater the financial and other interests involved, the less likely the findings are true.
  • The hotter the scientific area, the less likely the research results are true.

Finally, while it is widely accepted that the gold standard for EBM is the RCT, this in itself has the potential to limit and retard successful therapies, perhaps even for decades or forever. Bradford Hill, the architect of the RCT, remarked on the dangers of giving excessive prominence to this type of trial:

"Any belief that the controlled trial is the only way would mean not that the pendulum has swung too far but that it had come right off the hook".20

There are many critical questions in medicine that cannot be answered by RCTs: the approach is impossible to implement, unethical or too expensive. The RCTs needed for drug approval cost mind-boggling amounts of money and drive the industry to farm out such studies to third world countries and use human guinea pigs. All of this is a severe impediment to progress toward a truly reliable knowledge base. Critics of alternative medicine should consider the danger of rejecting treatments with clear benefit unattainable by conventional approaches simply because of the absence of RCTs. It will be clear that EBM has "gone over the cliff" when RCTs are required for hospitals to continue using anti-venom to treat snake bite.

While EBM discourages or even penalizes thinking "outside the box", it encourages the development and testing of patentable drugs suggested by the beneficial action of some natural product and then slightly modified to make them eligible for patent protection, when the natural products might work better, something that would never be determined. In general, only private or government funds are available to finance RCTs of natural products, and both funds and interest appear severely limited. Most nutritional supplements will never undergo RCTs and may even be outlawed because of the absence of "scientific evidence", an outcome that is more probable than most realize. In the US, companies cannot even simply list citations to peer-reviewed studies from top universities suggesting benefit of their product without triggering FDA disapproval, which can even take the form of a swat-team raid with crushing confiscation of property and records. Demonstrating with an RCT that a substance or intervention is harmful is generally impossible to implement and unethical. Thus it seems clear that rather than deify EBM, a rationalization of evidence standards and medical and regulatory practice is needed in order to prevent unintended consequences. Post-approval marketing surveillance has been demonstrated to have fundamental problems which may never be solved, although efforts are being made.

Advocates of EBM can shrug off criticism by simply pointing out that no system is perfect and this is the best we have. This is debatable, since the alternative is a mixture of evidence taken with a grain of salt and old fashioned clinical experience, intuition and judgement, together with the recognition that the patient is a single individual, not necessarily sufficiently like the average study participant to make study results and guidelines relevant.

The bottom line: how can an "operating system" such as EBM, with its rules and protocols, prove satisfactory when it is based on "scientific" data from studies of which a significant number will later be shown to be false or dishonest, with side effects understated, perhaps intentionally?8,19


REFERENCES

  1. Tharyan P. Evidence-based medicine: can the evidence be trusted? Indian J Med Ethics 2011 October;8(4):201-7.
  2. Rawlins M. De Testimonio: on the evidence for decisions about the use of therapeutic interventions. Clin Med 2008 December;8(6):579-88.
  3. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA 2005 July 13;294(2):218-28.
  4. Elliott C. White coat black hat. Adventures on the dark side of medicine. Boston: Beacon Press; 2010.
  5. Stegenga J. Is meta-analysis the platinum standard of evidence? Stud Hist Philos Biol Biomed Sci 2011 December;42(4):497-507.
  6. Hartling L, Ospina M, Liang Y et al. Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ 2009;339:b4012.
  7. Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT. Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 2008 February;5(2):e45.
  8. Ioannidis JP. Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? Philos Ethics Humanit Med 2008;3:14.
  9. Lehman R, Loder E. Missing clinical trial data. BMJ 2012 January 3;344.
  10. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ 2012 January 3;344.
  11. Ross J, Tse T, Zarin D, Xu H, Zhou L. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ 2012 January 3;344.
  12. Prayle A, Hurley M, Smyth A. Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study. BMJ 2012 January 3;344.
  13. Topol E. The creative destruction of medicine. How the digital revolution will create better health care. New York: Basic Books (Perseus); 2012.
  14. Healy D. Pharmageddon. Berkeley and Los Angeles: University of California Press; 2012.
  15. Jureidini J, McHenry L, Mansfield P. Clinical trials and drug promotion: selective reporting of study 329. Int J Risk Saf Med 2008;20:73-81.
  16. Hayward RA, Krumholz HM. Three Reasons to Abandon Low-Density Lipoprotein Targets. Circ Cardiovasc Qual Outcomes 2012 January 1;5(1):2-5.
  17. Arguedas JA, Perez MI, Wright JM. Treatment blood pressure targets for hypertension. Cochrane Database Syst Rev 2009;(3):CD004349.
  18. Law MR, Morris JK, Wald NJ. Use of blood pressure lowering drugs in the prevention of cardiovascular disease: meta-analysis of 147 randomised trials in the context of expectations from prospective epidemiological studies. BMJ 2009;338:b1665.
  19. Ioannidis JP. Why most published research findings are false. PLoS Med 2005 August;2(8):e124.
  20. Hill AB. Reflections on the controlled trial. Ann Rheum Dis 1966 March;25(2):107-13.

This article was first published in the September 2012 issue (#230) of International Health News
