Making the Most of Clinical Science

By Michael W. Otto, Ph.D.

I could not be more pleased to be starting my role as President of the Society of Clinical Psychology (SCP). As has been the focus of Presidents before me, I am particularly concerned with how SCP delivers state-of-the-art information on, and training in, treatment strategies for clinicians, while also offering resources for clinical researchers working to expand this knowledge base. These activities are at the core of the Mission Statement for the Society: to encourage and support the integration of psychological science and practice in education, research, application, advocacy and public policy, attending to the importance of diversity. So what is the next step for a society that has long focused on the creation and dissemination of treatment outcome findings, and how can this be done more usefully? Allow me to offer my perspective.

I have a simple frame for how I think about the contribution of clinical science to my own practice and supervision: research reports offer me clinical experience. In the time it takes me to read a research report, I gain the experience of treating scores of patients, getting to know how treatment turned out if I treated some of them one way, and how it turned out if I treated others another way. It would take me years of direct clinical effort to gain this kind of experience on my own.

I do believe I make great use of outcome feedback in my clinical practice. When something I do appears to pay off for a patient, I tend to do it again. This sort of learning is useful. It is also problematic, because ultimately it means that my successes are my ruts. I don’t get to learn what would have happened if I had tried an alternative intervention, and over time, the default is for my therapeutic ruts to get deeper. Outside consultation/supervision can be a helpful alternative: I can borrow from the clinical experience of another, compare that experience with my own, and decide whether I want to try interventions outside my rut. But if I fall into peer supervision with people who think a lot like me, then I may not have the chance for a truly fresh perspective.

For me, clinical research offers that truly fresh perspective. I get to know the broad brushstrokes on efficacy garnered from studies with some patients like mine (and some patients who are really not like mine, despite sharing the same diagnosis). I get to know what tends to work despite these individual differences. This is not to say that research trials are without their biases and limitations. Training differences, allegiance effects, and rater bias are all potential problems for even a well-conducted clinical trial. Nonetheless, even with these challenges, a clinical trial still offers the least-biased information available to the field, because its systematic recruitment, fixed trial length, independent evaluation, and structured interventions go beyond the outcome perspectives afforded in clinical practice. In short, clinical trials are like democracy as viewed by Winston Churchill: “Democracy is the worst form of government except all those other forms that have been tried.”

In addition to providing relatively unbiased perspectives on what can work with a well-defined cohort of patients, clinical trials provide an important perspective on timelines of response. This is one of the overlooked values of clinical trials: showing me how fast and how well patients can respond, so that I have a benchmark for how I am doing in my practice. These response rates at 6 weeks, 12 weeks, and so on provide a crucial source of error detection, allowing me to estimate whether my patient might have gotten better faster with a different approach than the one I offered. And believe me, faster makes a difference for patients: every week of less suffering for the patient, the patient’s family, and the patient’s role functioning is a really good week. In short, clinical trials give me a benchmark for knowing whether my own results with patients approach those of at least one well-defined alternative. Clinical trials provide me with the standard to beat.

Clinical trials also provide me with information on a set of prototypic interventions that can offer benefit. Why prototypic? Let’s consider the process of treatment development and validation (e.g., Rounsaville, Carroll, & Onken, 2001). If a clinical investigator starts with a vision of the type of interventions (often reflecting a principle or principles of change) that could help a given condition, then the first step in the validation process is to complete pilot work and turn these principle-based interventions into a particular protocol for intervention. And even though investigators may set out to show the value of specific principles of treatment, they have to, as part of progressing to a tightly controlled study, operationalize these principles into a very specific protocol that all the study clinicians can follow. The resulting protocol then gets prime attention in the research report and published treatment manual; it is, after all, the embodiment of what worked in the trial. Yet if we over-attend to that particular protocol of treatment, we slide into the complaints against manualized treatment (e.g., too stifling of innovation) that have been well documented by surveys (Borntrager, Chorpita, Higa-McMillan, & Weisz, 2009; Stewart, Stirman, & Chambless, 2012).

As compared to protocols, principles of treatment provide broader guidance, and presumably can be enacted through any number of specific interventions. In the early decades of the growth of empirically supported treatments (reflected by the empirically supported treatment list spearheaded and maintained by this Society), there were not enough studies to allow a strong perspective beyond a validated protocol to the principles underlying it. However, because of the ongoing expansion of this list, the diversity of validated protocols now allows us to use this treatment-outcome information differently. Rather than having a collection of interventions that work, with a corresponding pile of treatment manuals, we have enough treatments that we can home in on the component interventions and principles that underlie, and are sometimes shared by, specific protocols. With enough trees, the forest is emerging. We are starting to have enough protocol validations from controlled studies that we can return to where the researchers were before each treatment trial started: attending to underlying principles of treatment.

During the next year, a number of important changes will be taking place in the way SCP (via its website: https://www.div12.org/) provides information on empirically supported treatments. First, information on efficacious treatments will be organized around both case prototypes and dominant symptoms, allowing for a more efficient search for the sorts of interventions that may be relevant for any given patient. Second, relevant treatment information will be linked with training resources, allowing for a more efficient translation of trial findings into clinical actions. Third, there will be greater emphasis on the component interventions (rather than just protocols) that are most associated with treatment results, aiding the translation between fixed protocols of treatment and component interventions for clinical change.

For this last innovation, SCP will rely on a new Presidential Task Force that will provide information on the range of conceptually similar component interventions that share the same targets, and will emphasize the similarities in efficacy (where they exist) for interventions that share those targets but rely on diverse procedures. This information will be organized into a topographical map, with each intervention type plotted according to its effect size (as indexed by vertical elevation), the number of studies on which the effect size estimate is based (as indexed by horizontal area), and the intervention’s therapist/client burden (as indexed by degree of shading). Moreover, the geographical placement of intervention types on the map will reflect their relative conceptual proximity, making it possible to gauge the general conceptual “direction” in which promising intervention effects can be found. I am excited by the potential of these maps, and, as soon as a number of them are produced, you will see the product of this effort from the new Presidential Task Force, led by Mark Powers, and the ongoing efforts of the Science and Practice Committee, led by Rachel Hershenberg and Susan Raffa.
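For readers who want to picture how these three indices might combine in a single figure, here is a minimal sketch in Python (using matplotlib). This is only my rough illustration of the scheme, not the Task Force’s product: every intervention name, coordinate, effect size, study count, and burden rating below is an invented placeholder.

```python
# A rough, hypothetical sketch of the proposed intervention map.
# All names and values are invented placeholders for illustration only.
import matplotlib.pyplot as plt

# (name, x, y, effect_size, n_studies, burden); x/y stand in for
# conceptual proximity, and burden runs from 0 (low) to 1 (high).
interventions = [
    ("Intervention A", 0.25, 0.75, 0.90, 24, 0.7),
    ("Intervention B", 0.40, 0.60, 0.75, 18, 0.5),
    ("Intervention C", 0.70, 0.30, 0.80, 12, 0.3),
    ("Intervention D", 0.80, 0.65, 0.45,  9, 0.2),
]

fig, ax = plt.subplots(figsize=(6, 5))
for name, x, y, es, n, burden in interventions:
    # Horizontal area tracks the number of supporting studies;
    # shading tracks therapist/client burden (darker = more burden).
    ax.scatter(x, y, s=n * 40, facecolor=str(1.0 - burden),
               edgecolor="black")
    # Effect size ("vertical elevation") is annotated as a label here,
    # since this flat 2-D figure stands in for the map's third dimension.
    ax.annotate(f"{name} (d = {es:.2f})", (x, y),
                textcoords="offset points", xytext=(0, 14), ha="center")

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("Conceptual proximity (dimension 1)")
ax.set_ylabel("Conceptual proximity (dimension 2)")
ax.set_title("Hypothetical map of component interventions")
plt.show()
```

Again, the actual maps may differ substantially in both data and presentation; the sketch is only meant to convey how effect size, evidence base, burden, and conceptual proximity could be read off a single figure.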

These efforts will be part of a broader emphasis on improving the usefulness of clinical trial information and attending to clinical decision-making. As part of a new column (to debut in the next issue of The Clinical Psychologist), I will be interviewing clinicians about how they know what they know when making intervention decisions for the patients in their practice. These discussions will also include each clinician’s wish list for the type of clinical research that would most inform his or her practice. My hope is to better address the research-practice divide by ensuring two-way communication about ways to enhance outcomes in clinical practice. Commensurate with these efforts, SCP will also be re-launching a clinical discussion listserv with the core mission of facilitating ongoing discussions about treatment options relevant to the choice points clinicians face in their practices. So stay tuned to the website, this column, and other announcements: SCP will continue to innovate in ways that serve its membership and the field more generally.

References

Borntrager, C. F., Chorpita, B. F., Higa-McMillan, C., & Weisz, J. R. (2009). Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services, 60, 677–681.

Rounsaville, B. J., Carroll, K. M., & Onken, L. S. (2001). A stage model of behavioral therapies research: Getting started and moving on from Stage I. Clinical Psychology: Science and Practice, 8, 133–142.

Stewart, R. E., Stirman, S. W., & Chambless, D. L. (2012). A qualitative investigation of practicing psychologists’ attitudes toward research-informed practice: Implications for dissemination strategies. Professional Psychology: Research and Practice, 43(2), 100–109.