Standards of Evidence
Governments and healthcare policy makers around the world have decided that quality of care within their systems should improve, that it should be evidence based, and that it is in the public's interest to ensure that this happens (Barlow, 2004; Institute of Medicine, 2001). Various organizations around the world have attempted to meet this mandate by tackling the thorny questions of what constitutes best evidence and how much of it we need before we designate a clinical process as "evidence based." The Institute of Medicine (2001), following the lead of Sackett (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996), proffered that evidence based practice (EBP) "is the integration of best research evidence with clinical expertise and patient values" (Institute of Medicine, 2001, p. 147). One important component of EBP is the empirical support for specific interventions applied to particular disorders or problems. Here the focus is on the efficacy and effectiveness (clinical utility) of specific interventions that have earned the designation "empirically supported treatment" (EST). Early on, the Division of Clinical Psychology (Division 12) of the American Psychological Association took a stab at developing criteria by which to judge a treatment as empirically supported, noting very specifically the number of experimental studies that would be required. At around the same time, the Agency for Healthcare Policy and Research of the United States Public Health Service (now the Agency for Healthcare Research and Quality) developed a set of criteria that ranked evidence hierarchically, from the most convincing (typically well-done randomized clinical trials) down to the consensus of expert opinion, which, while still contributory, is substantially less convincing than well-done experimental studies.
Also in the mid-1990s, an American Psychological Association joint task force created a template for developing clinical practice guidelines (APA, 1995; since updated, APA, 2002), which likewise concluded that these guidelines should be constructed on a hierarchy of evidence, to be evaluated by a competent committee of clinicians and clinical scientists.
Since that time, this hierarchical approach to evaluating evidence regarding interventions has been widely adopted around the world by government healthcare agencies, such as the National Health Service (NHS) in the UK, which commissioned the National Institute for Clinical Excellence (NICE) expressly for the purpose of developing the evidence base to guide healthcare in the NHS. At the same time, a major joint initiative of the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration (SAMHSA) in the United States focused on promoting, implementing, and evaluating evidence based mental health practices within state mental health systems (NIH, 2004). These governmental and additional professional agencies have adopted a hierarchical approach to evaluating evidence, with data from well-done randomized clinical trials as the gold standard, but not excluding non-randomized but large studies conducted directly in clinical practice settings, and consensus among experts on the empirical grounding and totality of the evidence supporting a given intervention. The American Psychological Association (2005) has also published an up-to-date statement on evidence based practice detailing this hierarchy of evidence, as well as attention to the context in which the treatment is applied and the differential levels of clinical expertise necessary to apply it. It is important to note that both the APA (2005) statement and other important statements (e.g., Salkovskis, 2002) recognize that EBP goes beyond ESTs to include scientifically based approaches to disorders or problems for which ESTs may not yet exist. Here, extrapolation from existing evidence on interventions and ongoing careful measurement of progress become important.
But the explosion of evidence on ESTs in recent years, and the sometimes premature or inappropriate dissemination of interventions that are not empirically supported, require a forum for evaluating and publishing interventions that meet the highest standards of our field.
David H. Barlow, PhD, ABPP, Editor-in-Chief