What we know now about bridging the gap between research and practice

About two decades ago, psychologists who develop and study psychotherapy interventions began to recognize that publications on the efficacy of new psychotherapies were not sufficient to change practice. Shortly thereafter, research emerged that indicated that manuals and workshops alone were also not sufficient to change practice (see Herschell et al., 2010 for a summary). We are now at a point where we recognize that some form of consultation or follow-up support is needed to achieve levels of skill that are similar to those of therapists in the original clinical trials. But this still isn’t the whole picture. As we have started to investigate reasons why evidence-based psychosocial treatments (EBPTs) aren’t widely used in everyday practice, a new area of research, implementation science, has emerged. Drawing from other fields, such as organizational science, marketing, system dynamics, and education, researchers have developed and begun to test more comprehensive theories of implementation.

Dozens of implementation frameworks have been developed in recent years, although relatively few focus specifically on evidence-based psychosocial treatments (cf. Aarons, Hurlburt, & Horwitz, 2011). These frameworks share some common elements. First, they all emphasize the importance of the context into which new treatments are introduced. The outer context refers to legislation, policy, mandates, and incentives that can impact decisions to use evidence-based treatments. Examples of the outer context influencing EBPT use include policies in the U.S. VA Healthcare System, the U.K., and many state and local mental health systems that require consumer access to EBPTs. The inner context comprises the organizational culture, climate, local leadership, resources, turnover, workload, and other factors within organizations that influence implementation. Some research has shown that new interventions are sustained much longer in organizations whose cultures are less resistant and emphasize proficiency, and that turnover, which reduces EBPT delivery through attrition of trained staff, is higher in organizations with poorer organizational climates (Glisson et al., 2008). Characteristics of the innovation are also important considerations: is the treatment complex and difficult to learn? Is it compatible with existing practices? Can it be adapted to improve the fit? Finally, characteristics of individuals must be considered. Are clinicians open to using EBPTs, or reluctant to learn them? Are they willing to learn and use them if required to do so? Are clients receptive to EBPTs? Are they able to attend sessions weekly, and willing and able to engage in the interventions that comprise the EBPT? Additional research in each of these areas is summarized in a recent review article (Stirman et al., 2015).

When we think about it, it isn’t surprising that so many factors are at play. How many of us have tried to make even small changes in our day-to-day lives (more exercise, cooking at home more, finishing our paperwork more quickly) without success, even when we are completely convinced of the benefits of these changes? Competing demands, contextual barriers, and a variety of other challenges can add up quickly. What would it take for us to make a fundamental change to the way we deliver therapy? Just as we need to understand potential barriers to our own efforts, or to our clients’ efforts to make changes, we need to understand all of the contingencies and the cognitive, affective, and practical circumstances that can influence whether or not therapists deliver EBPTs at the level of skill and fidelity necessary to improve clinical outcomes. Not only are we asking clinicians to do something that may be fundamentally different from the way they were trained and the way they conceptualize psychotherapy, we are also asking them to do it while juggling high caseloads, paperwork, and local policies. Their colleagues and administrators may not be supportive of these changes. Even if clinicians are enthusiastic about training and their supervisors are supportive in principle, productivity demands, reimbursement policies, and a host of other factors can hinder efforts to start or continue using EBPTs.

All of this is not to say that the barriers are insurmountable. Evidence is emerging that, with intensive training and consultation, the vast majority of participants in EBPT training programs can be trained successfully, regardless of their theoretical orientation, background, or organizational context. Researchers are now investigating a variety of strategies to support initial and long-term implementation of EBPTs. Promising approaches include efforts to increase demand for EBPTs, ongoing supportive fidelity monitoring, incentives, training organizational leadership to support EBPT implementation, interventions to improve organizational climate and culture, and task shifting and “train the trainer” models to address capacity issues (Stirman, Gutner, Langdon, & Graham, 2015).

When training clinicians or working with organizations to implement EBPTs, it is important to form a meaningful partnership with the professionals who will be implementing the EBPT. This can take time, but even under a mandate for implementation, it is truly necessary. Rather than diving right into training, preliminary meetings and discussions to hear from key stakeholders are an essential step. By genuinely respecting stakeholders’ experiences and perspectives and working to understand all of the factors that influence their decisions about training and ongoing use of EBPTs, it will be easier to develop a successful, comprehensive plan to promote not just initial training success, but long-term practice change (Park et al., 2016; Stirman et al., 2010). After such conversations and assessments, it might be determined that training in a different EBPT than originally planned is necessary to meet a more pressing need. Collaborative planning will also influence the content, timing, or mode of training, as well as plans for longer-term support. Ongoing program evaluation (e.g., attendance at training, skill development, clients’ symptom change, and the proportion of eligible clients who are offered or receive the EBPT) should also inform refinements to the plan and allow for course corrections. A number of large- and small-scale implementation efforts have used these strategies with success (Stirman et al., 2015). The field has come a long way from the days of “if we publish, they will come” and “train and hope.” While there is still a gap between research and practice, we are starting to learn how to narrow it.

Discussion Questions

  1. A thought experiment: If you learned that a new treatment, very different from how you normally practice, was highly effective, what would be the barriers and facilitators to your learning it and using it in your everyday practice?
  2. How do policies and organizational factors influence the use of EBPTs in routine care practice settings?
  3. What strategies are necessary to promote the ongoing, skilled use of EBPTs in everyday practice?
  4. If you were partnering with a local clinic or organization to provide training in an EBPT, what factors would you need to attend to, and how would they influence your plans for supporting the implementation of that EBPT?
  5. How might these factors translate into private practice settings to influence clinicians’ use of EBPTs after they attended a workshop or training?

Author Biography

Shannon Wiltsey Stirman is a psychologist at the National Center for PTSD and an Assistant Professor at the Stanford University School of Medicine. She has been involved in training and implementation programs in public mental health settings, and her research focuses on the implementation and sustainability of EBPTs in such settings.