Outcome monitoring and feedback: A transtheoretical, transdiagnostic evidence-based practice

This blog piece by Dr. James Boswell discusses a recent article by his research team in Psychotherapy Research titled “Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions.”

In their seminal paper, Howard, Moras, Brill, Martinovich, and Lutz (1996) proposed using standardized session-to-session measures of patient progress to evaluate and improve treatment outcomes through data-driven feedback. In doing so, they launched a new area of research labeled patient-focused research, which asks: Is this treatment, however constructed, delivered by this particular clinician, helpful to this patient at this point in time? Despite convincing evidence that routine outcome monitoring and feedback enhance outcomes and reduce the risk of deterioration, regardless of the type of treatment or problem area (Shimokawa, Lambert, & Smart, 2010), many clinicians and practice settings do not routinely collect standardized progress and outcome information from patients (de Jong, van Sluis, Nugter, Heiser, & Spinhoven, 2012; Hatfield & Ogles, 2007). In our article (Boswell, Kraus, Miller, & Lambert, 2015), we review the primary challenges to implementing routine outcome monitoring in clinical practice and offer potential solutions for enhancing the adoption, implementation, and sustainability of this practice, based on our own experience and the existing literature.

Routine outcome monitoring is an effective tool for helping clinicians identify whether a given patient is at risk for a negative outcome or on track to experience benefit. Research to date has demonstrated that routine outcome monitoring and feedback significantly reduce deterioration and dropout rates in routine psychological treatment (Lambert, 2010). Predictive analytics indicating that a given patient is at risk for deterioration or non-response can promote clinical responsiveness (also see Boswell, Constantino, Kraus, Bugatti, & Oswald, 2015). An individual clinician may respond to this feedback by (a) offering more sessions or increasing the frequency of sessions, (b) conducting more targeted assessment (e.g., of suicidality, motivation to change, social support), (c) altering the micro-level or macro-level treatment plan, (d) referring for a medication consultation, (e) calling a family meeting, and/or (f) seeking additional consultation and supervision. In the absence of feedback and timely responsiveness, however, treatment may continue to follow the same problematic course.
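To make the logic of such feedback concrete, here is a minimal sketch of how a trajectory-based alert might work: a patient’s observed session scores are compared, session by session, against an expected recovery curve, and a “not on track” signal is raised when a score falls outside a tolerance band. The trajectory values, margin, and function names below are invented for illustration only; real systems (e.g., Lambert’s OQ-Analyst) derive their expected curves and alert boundaries empirically from large clinical samples.

```python
# Illustrative sketch of trajectory-based feedback. All numbers here are
# hypothetical; actual feedback systems estimate expected trajectories
# and failure boundaries from large aggregated clinical datasets.

from typing import List

# Hypothetical expected scores by session for a given intake severity
# (higher score = greater distress).
EXPECTED_TRAJECTORY: List[float] = [72.0, 68.0, 65.0, 62.0, 59.0, 57.0, 55.0, 53.0]

# Hypothetical tolerance band: scores this far above the expected value
# signal risk of deterioration or non-response.
ALERT_MARGIN = 8.0

def feedback_signal(observed: List[float]) -> str:
    """Return a simple on-track / not-on-track signal for the sessions
    observed so far."""
    for session, score in enumerate(observed):
        expected = EXPECTED_TRAJECTORY[min(session, len(EXPECTED_TRAJECTORY) - 1)]
        if score > expected + ALERT_MARGIN:
            return (f"NOT ON TRACK at session {session + 1}: "
                    f"observed {score}, expected ~{expected}")
    return "On track"

if __name__ == "__main__":
    # A patient whose distress is not declining as expected
    # is flagged by the third session.
    print(feedback_signal([74.0, 73.0, 75.0, 74.0]))
```

Even a toy rule like this illustrates the core idea: the alert does not tell the clinician what to do, but it prompts the kinds of responses listed above before a problematic course becomes entrenched.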

Many of the barriers to adopting and implementing routine outcome monitoring are common to other evidence-based practices (McHugh & Barlow, 2012). Practical barriers include, but are not limited to: (a) financial and time burden, (b) lack of infrastructure, and (c) staff turnover. For example, a health care system might choose to invest considerable resources in training its treatment staff in an identified evidence-based treatment for a particular disorder. Training a clinician to implement a new treatment with adequate, sustained fidelity requires significant time and resources (both internal and external), yet many clinics, particularly those specializing in substance abuse treatment, experience extremely high staff turnover rates (Eby, Burk, & Maher, 2010). In the eyes of many, the costs of implementation may soon outweigh the potential benefits. In the area of routine outcome monitoring, some initial training is required for clinicians to learn how to administer assessment tools, interpret feedback, and integrate this information into their treatment plans. In the absence of an infrastructure that promotes sustainability (e.g., an easily accessible and efficient training module for new staff; administrators and supervisors who value outcome monitoring and provide continuity within a system or agency), the implementation of routine outcome monitoring, much like that of evidence-based treatment packages, might simply “fade away” over time in a given setting.

In addition to practical barriers, we also address philosophical barriers to routine outcome monitoring and feedback. In my view, cultural and philosophical issues are ultimately more relevant to adoption and sustainability than practical barriers. Without minimizing the critical importance of practical concerns (e.g., financial costs, technology), even after such barriers are removed (e.g., a no-cost system, automated assessment and feedback reports, adequate training in their use), important individual differences remain in who is likely to seek out and respond constructively to feedback (de Jong et al., 2012; Kluger & DeNisi, 1996). Therefore, eschewing the practical for a moment, we believe that clinicians should adopt an open attitude toward data-driven decision making early in their training. Consistent with Stricker and Trierweiler’s (1995) concept of the “local clinical scientist,” routine outcome monitoring moves the “lab” into routine practice settings and emphasizes the integration of routine assessment and data-driven feedback to inform the treatment of this patient, as well as groups of patients in a clinician’s practice. Rather than viewing outcome data and feedback as potentially threatening or irrelevant information, training programs need to foster a culture that values actuarially informed treatment and a scientific attitude toward one’s cases. To do otherwise arguably ignores the complexity of behavior change and psychotherapy.

Please continue the discussion by responding to one of the following discussion questions or by generating a question or comment of your own. Comment here or on our Facebook page.

Discussion Questions:

  • What types of standardized measures yield the most valid and clinically useful information? Is there value in going beyond symptom and functioning measures?
  • Should there be financial incentives (e.g., from insurers) for incorporating routine outcome monitoring in one’s practice? What are the pros and cons of “pay for reporting” and “pay for performance” initiatives?

About the Author:

James F. Boswell, PhD, is an Assistant Professor in the Department of Psychology at the University at Albany, State University of New York. He earned his PhD in clinical psychology from the Pennsylvania State University, completed his clinical residency at the Warren Alpert Medical School of Brown University, and completed postdoctoral training at the Center for Anxiety and Related Disorders at Boston University. His research program focuses on identifying important participant factors, technical factors, and relational processes that influence the process and outcome of psychological interventions; identifying effective training and implementation strategies; developing Practice Research Networks (PRNs); and integrating outcome monitoring and feedback systems into routine clinical practice.


References

Boswell, J. F., Constantino, M. J., Kraus, D. R., Bugatti, M., & Oswald, J. (2015). The expanding relevance of routinely collected outcome data for mental health care decision making. Administration and Policy in Mental Health and Mental Health Services Research. doi:10.1007/s10488-015-0649-6

Boswell, J. F., Kraus, D. R., Miller, S. D., & Lambert, M. J. (2015). Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychotherapy Research, 25, 6-19. doi:10.1080/10503307.2013.817696

de Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P. (2012). Understanding the differential impact of outcome monitoring: Therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research, 22, 464-474. doi:10.1080/10503307.2012.673023

Eby, L. T., Burk, H., & Maher, C. P. (2010). How serious of a problem is staff turnover in substance abuse treatment? A longitudinal study of actual turnover. Journal of Substance Abuse Treatment, 39, 264-271. doi:10.1016/j.jsat.2010.06.009

Hatfield, D. R., & Ogles, B. M. (2007). Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research, 34, 283-291. doi:10.1007/s10488-006-0110-y

Howard, K. I., Moras, K., Brill, P. L., Martinovich, Z., & Lutz, W. (1996). Evaluation of psychotherapy: Efficacy, effectiveness, and patient progress. American Psychologist, 51, 1059-1064. doi:10.1037/0003-066X.51.10.1059

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254-284. doi:10.1037/0033-2909.119.2.254

Lambert, M. J. (2010). Prevention of treatment failure: The use of measuring, monitoring, and feedback in clinical practice. Washington, DC: American Psychological Association.

McHugh, R. K., & Barlow, D. H. (2012). Dissemination and implementation of evidence-based psychological interventions. New York, NY: Oxford University Press.

Shimokawa, K., Lambert, M. J., & Smart, D. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298-311. doi:10.1037/a0019247

Stricker, G., & Trierweiler, S. J. (1995). The local clinical scientist: A bridge between science and practice. American Psychologist, 50, 995-1002.