Although the field has embraced an evidence-based approach to the practice of Clinical Psychology, it remains important to enhance Clinical Psychology doctoral training programs’ support of this emphasis. Given the value of grounding doctoral training in Clinical Psychology on empirical evidence, our goal is to develop cross-cutting principles and resources that can guide Clinical Psychology doctoral programs of all forms in the incorporation of evidence-based models of training.2 Although a number of other groups are developing training guidelines for various forms of specialty training within Clinical Psychology (e.g., training for clinical scientists, training for cognitive-behavioral therapists, training for behavioral health practitioners), each of these guidelines rests on the assumption that doctoral students will receive foundational training in core areas (e.g., psychopathology, evidence-based assessment, evidence-based treatment) and will receive clinical supervision in the development of core clinical skills (e.g., case formulation, differential diagnosis, treatment/intervention conceptualization, ethics, sociocultural competence). This document delineates specific principles to guide the core doctoral training and foundational clinical supervisory experiences for Clinical Psychology programs, irrespective of the theoretical orientation of the faculty or the training mission of the program. In designing these principles, we are focusing on Clinical Psychology, which is appropriate given our individual and collective training experiences. It is possible that some or all of the principles outlined in this document can apply to training in broader health services psychology.
In working to develop these key principles, the committee combed the recent literature on evidence-based practice and training models across disciplines that reflect this approach (Bauer, 2007; Collins, Leffingwell, & Belar, 2007; Gray, 2004b; Guyatt & Rennie, 2002; Hunsley & Mash, 2007; Spring, 2007; Straus, Glasziou, Richardson, & Haynes, 2011; Thorn, 2007; Youngstrom, 2012). We also considered the policy statement on evidence-based practice issued by the American Psychological Association (APA) that was developed in 2005 (Appendix 1). This policy emphasizes the roles of research, clinical expertise, and patient values in the conduct of evidence-based practice. As noted in the policy, the definition used by APA is specific about how to operationalize research evidence but understandably ambiguous with respect to the nature and operationalization of clinical expertise and patient values. In particular, information about how and when clinical expertise and patient values should be integrated with research is lacking in this policy. The ambiguity is not a deficit of the APA policy so much as an accurate reflection of the state of knowledge in the field. Thus, our guidelines necessarily go beyond the APA statement in providing recommendations for teaching evidence-based practice in clinical psychology training programs.
Moreover, in drafting these principles, our task force recognizes that multiple epistemologies can be useful in undergirding training programs. Within the principles presented here, emphasis is placed on logical positivism (also known as logical empiricism) given the current advances within the field. The members of our task force believe that the phrase “evidence-based” should refer to empirically-grounded information (including but not restricted to EST), and should subsume both quantitative and qualitative sources of data. Although clinical expertise and patient values are regarded by our task force as essential elements in training students to conduct assessment, case formulation, treatment, and larger-scale clinical interventions, these perspectives at present are not grounded in scientific data. As the field continues to evolve and there is greater empirical knowledge about clinical expertise and patient values, including models for how to integrate these three literatures, these components could be integrated more fully into systematic training efforts.3
In order to facilitate the use of these principles, we provide numerous resources that programs may use flexibly to incorporate each principle into their existing curricula (Appendix 2).
We believe that four key principles should guide doctoral training in Clinical Psychology. These principles are:
1a. Training should emphasize grounding in the empirical literature on psychopathology, assessment, and treatment. Within the treatment research literature, training should emphasize three facets: (1) client and therapist characteristics, (2) process variables, and (3) outcome. For each of the facets, students should be exposed to findings that cut across different theoretical orientations as well as those that are unique to particular approaches, and exposed to findings that demonstrate the impact of sociocultural, demographic, and other contextual factors on clinical practice. In addition, both positive and negative effects associated with each of these three facets of research should be covered in the provision of readings and coursework. Training of students should focus not only on what to do in order to facilitate client improvement but also on how to prevent harmful effects (Castonguay et al., 2010), in line with ethical principles in clinical psychology.
1. Client and therapist characteristics refer to factors that are independent of specific treatment approach. Some of these variables, such as a client’s high level of perfectionism, have been found to have an impact on the process and outcome of different forms of therapy. Other pre-treatment characteristics, such as reactance level (the tendency of a person to oppose being controlled by others), might be considered as markers for matching individual clients with particular forms of therapy or therapeutic styles (Norcross, 2011). Sociocultural and demographic client factors, both observable and non-observable, may also have an impact on the process and outcome of therapy (Pachankis & Goldfried, 2004; Sue & Zane, 2009). These include but are not limited to the client’s age, gender, country of origin, socioeconomic status, ethnicity, religion, language, sexual orientation, and sexual identity. Beyond cultural knowledge and culturally adapted treatment strategies and processes (Bernal, Jimenez-Chafey, & Domenech Rodriguez, 2009; Bernal & Domenech Rodriguez, 2012; Hays & Iwamasa, 2006; Hwang, 2006; Lau, 2005), therapists should also be trained to consider within-group heterogeneity among diverse clients, and to refrain from presupposing a client’s sociocultural values even if seemingly appropriate (Sue & Zane, 2009). Students should also be informed about research on therapist effects, as well as the evidence pointing to therapist characteristics that may facilitate (e.g., emotional well-being) or interfere with (e.g., hostility toward the self) the process and/or outcome of therapy.
2. Process variables refer to factors that take place during treatment and that can predict or explain therapeutic change. Participant characteristics such as the therapist’s empathy and the working alliance have been positively related to change in several theoretical approaches to therapy (Norcross, 2011). Similarly, a number of therapeutic events (e.g., increased client awareness) have been found to be helpful by both clients and therapists. The process literature also indicates that helping skills (e.g., reflection of feelings) are useful in training students to conduct therapy (Hill, Stahl, & Roffman, 2007). In addition, process research has identified variables that are predictive of outcome in particular forms of therapy, such as the use of homework in cognitive behavioral therapy (CBT), accurate interpretations in psychodynamic therapy, and the deepening of emotion in humanistic therapy (see Castonguay, 2013). Reframing interventions (e.g., changing a client’s view of the presenting problem from an individual to an interpersonal perspective) have also been linked to the process of change in systemic approaches (see Boswell et al., 2010). Moreover, students should be informed that core constructs of a specific type of treatment may predict change in other orientations. For example, the fostering of experiencing (emphasized in humanistic therapy) and the focus on the past (central to psychodynamic treatment) have both been associated with outcome in CBT (Castonguay, 2013).
3. In terms of treatment outcome findings, students should be informed about two types of research: (1) studies investigating the efficacy and effectiveness of specific treatments, and (2) research on the impact of specific types of interventions or parameters that are independent of a particular treatment (“common factors”).
As a result of numerous empirical investigations of specific treatments and systematic application of evidence-based criteria to outcome studies, a number of empirically-supported treatments (ESTs) have been identified (see Chambless & Ollendick, 2001). These evidence-based criteria (e.g., random assignment to treatment conditions, adequate statistical power to detect meaningful differences between treatment and comparison conditions, independent evaluation of outcomes by raters unaware of treatment condition; Chambless & Hollon, 1998; Silverman & Hinshaw, 2008) evaluate aspects of study design and research methodology that increase confidence in conclusions that are drawn about the efficacy of an intervention. When available for a condition or problem, ESTs should be taught early and used preferentially. The preferential use of treatment approaches supported by empirical evidence rests on the assumption that effective treatment should be implemented as early as possible in order to interrupt continuation or progression of the problem (i.e., to reduce harm and suffering and to enhance well-being). ESTs are available for numerous treatment modalities, including group, couples, and family therapy formats. Given that ESTs currently have the best available evidence, the patient has the greatest chance of clinical improvement and potential recovery with these methods.
In relation to principle 1a, some researchers have conducted trials to examine whether clients from different racial/ethnic groups respond differentially to empirically supported treatments, while others have focused on making cultural adaptations to ESTs (Bernal, Jimenez-Chafey, & Domenech Rodriguez, 2009; Bernal & Domenech Rodriguez, 2012; Hays & Iwamasa, 2006; Hwang, 2006; Lau, 2005). The limited existing studies suggest that treatments are effective when applied to ethnically diverse populations, especially when they are adapted to meet the needs of the specific group (Aguilera et al., 2010; Chen et al., 2007; Comas-Díaz, 1981; Fujisawa et al., 2010; Interian et al., 2008; Matsunaga et al., 2010; Molassiotis et al., 2002; Nakano et al., 2008; Ono et al., 2011; Rossello et al., 1999, 2008; Wong et al., 2002; Wong, 2008; Zhang et al., 2002). However, the need for such cultural adaptations is an empirical question and may not apply in some cases or be available as yet in others. A key aspect of training would be to teach students to consult the literature to determine when adaptations are needed for particular subgroups rather than relying on assumptions about the need to modify ESTs. For example, contrary to popular belief, findings from the child treatment literature suggest that ethnic minority status does not moderate treatment effects (Huey & Polo, 2008). Designs that compare culturally adapted and standard ESTs are still needed for many different sociocultural groups to more fully evaluate the need for specific treatment adaptations. Although the field still awaits controlled trials for many diverse groups (e.g., LGB-affirmative approaches), in the meantime students should be trained to draw on the extant empirical work (e.g., combining LGB-specific empirical findings with existing ESTs for the general population) in order to improve our treatment of these groups (e.g., to address LGB-specific presenting issues) until results from such studies are forthcoming (Pachankis, 2009).
Fortunately, ESTs are not restricted to one theoretical orientation. As an example, the literature on treatment of depression (one of the most frequent presenting problems) notes that cognitive, behavioral, interpersonal, psychodynamic, and experiential treatments have been identified as empirically supported (Follette & Greenberg, 2006). Students should also be exposed to forms of therapy that have not yet been identified as ESTs but have shown positive, albeit preliminary, results, such as different types of integrative therapies for depression and generalized anxiety disorder (see Castonguay, 2013), acceptance and commitment therapy for depression (see Follette & Greenberg, 2006), and psychodynamic therapy for panic disorder (Milrod et al., 2007).
Training should include teaching doctoral students how to access information regarding evidence-based treatments from trusted sources as well as how to proceed in the absence of established treatments (Straus, et al., 2011). Training should also include information about potentially harmful treatments (e.g., group treatment for conduct disorder, Critical Incident Stress Debriefing for post-trauma survivors, repressed memory therapy; Lilienfeld, 2007). Similarly, preference should be given to assessment methods that have demonstrated validity (Bossuyt, et al., 2003), recognizing that continued use of assessment methods that lack demonstrated validity always adds cost and may result in less valid clinical decisions or even harm (Kraemer, 1992). An important consideration is whether the assessment method has been validated for the particular sociocultural group or in the language in which it is being used.
In addition to receiving training in treatment models that have received empirical support, students should learn about patient-focused research, which is designed not to measure the impact of a treatment but to assess and improve the progress of individual clients (or the pattern of change of specific groups of clients). For example, this facet of the outcome literature includes studies demonstrating the effect of therapy “dose” on outcome, examination of phases of therapeutic improvement, and empirical consideration of the beneficial impact of therapist feedback (especially to reduce harmful effects). Patient-focused research is part of a larger body of empirical studies that have been conducted in naturalistic settings and with the active participation of clinicians. Called practice-oriented research (Castonguay, Barkham, Lutz, & McAleavey, 2013), this literature also includes studies on therapist effects as well as a wide array of investigations conducted in practice research networks. Practice-oriented research should be presented to students as on an equal footing with, and complementary to, studies conducted in controlled environments, with both types of research viewed as necessary in building a robust knowledge base and improving clinical practice (Barkham & Margison, 2007; Barkham, Stiles, Lambert, & Mellor-Clark, 2010).
This training model changes the traditional role of the supervisor. In addition to the conventional aspects of supervision, the proposed model entails more guidance about searching the literature, critically appraising findings in terms of validity and relevance to the specific client/patient, including taking into account the client’s sociocultural and demographic context, and applying the findings to the case at hand (Straus, et al., 2011). There is also substantial value in having supervisors model the skills of search and application and “thinking aloud” about the process with supervisees. We also advocate adopting a patient-centered approach to learning that has been well-developed in Evidence-Based Medicine (EBM; Straus, Glasziou, Richardson, & Haynes, 2011; Hoge, Tondora, & Stuart, 2003).
1b. Students should, in their training, have enough knowledge of rigorous methods of both quantitative (e.g., randomized controlled trials, single-subject designs, process-outcome studies) and qualitative analyses (e.g., task analysis, consensual qualitative research, comprehensive process analyses) in clinical psychology in order to become expert consumers of the research literature. Students should have enough foundational knowledge in research design and analysis to be able to evaluate the quality of the published research. Learning standard methods for critically evaluating designs and publications (e.g., the Consolidated Standards of Reporting Trials (CONSORT), the Standards for Reporting of Diagnostic Accuracy (STARD), the Meta-Analysis Reporting Standards (MARS), the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), and the journal article reporting standards (JARS); American Psychological Association, 2010; APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Bossuyt, et al., 2003; Moher, Liberati, Tetzlaff, & Altman, 2009; Moher, Schulz, & Altman, 2001) can facilitate consistent and rapid evaluation. Training in research methods should also include training in fundamentals of clinical research ethics, epidemiology, measurement, statistical analysis, and skills for comprehending systematic reviews and meta-analyses. Readings should include exposure to informatics and database searching skills. The inclusion of reading on clinical significance is particularly important in research training in Clinical Psychology (e.g., Kazdin, 1999). Model research training will include specification of the full diversity of methodologies that have been used to study psychopathology, assessment, prevention, and treatment [including efficacy, effectiveness, and practice-oriented studies] with articulation of the pros and cons of each design with respect to internal and external validity.
Clinical research ethics are a critical aspect of this training to ensure the protection of human subjects, and that the well-being of research participants is not compromised in any way to enhance research design (e.g., through use of a no-treatment or waitlist control condition for conditions that require immediate treatment) (Hoagwood & Cavaleri, 2010). Like all other healthcare fields, Clinical Psychology most values research that is ethical, replicable, generalizable, and, where possible, able to establish cause and effect. These values should be reflected in clinical research training for evidence-based practice.
1c. Students should be taught to collect data throughout treatment to evaluate the effects of treatment on the individual patient (Powsner & Tufte, 1994), to make data-based decisions about modifying or terminating treatment taking into account patient response (Lambert, Hansen, & Finch, 2001), and to consider not only symptom presentation but level of impairment and quality of life when making these decisions (Frisch, 1998). A salient component of data collection is assessment of the patient’s preferences for treatment, as these will guide treatment planning and influence individual responses to specific interventions (Kraemer, 1992; Straus, et al., 2011; Swift, Domenech Rodriguez, & Bernal, 2011). Ideally, students would also have exposure to program evaluation, as a way of thinking about patterns and outcomes at the clinic or system level, in addition to individual patient-level outcomes (Castonguay, et al., 2013).
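One widely taught approach to making data-based decisions about an individual patient’s response is the Reliable Change Index (Jacobson & Truax, 1991), which asks whether an observed change in scores exceeds what measurement error alone would produce. A minimal sketch follows; the scores, standard deviation, and reliability below are hypothetical illustrations, not values drawn from any cited study.

```python
import math

def reliable_change_index(pre, post, sd_norm, reliability):
    """Jacobson & Truax (1991) Reliable Change Index.

    pre, post     -- scores on the same measure at two time points
    sd_norm       -- standard deviation of the measure in a reference sample
    reliability   -- test-retest reliability of the measure
    """
    # Standard error of measurement, then the SE of the difference score
    se_measurement = sd_norm * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se_measurement ** 2)
    return (post - pre) / s_diff

# Hypothetical depression-scale scores: 28 at intake, 14 at mid-treatment,
# with an assumed normative SD of 10 and reliability of .80
rci = reliable_change_index(pre=28, post=14, sd_norm=10, reliability=0.80)

# |RCI| > 1.96 indicates change unlikely to be measurement error alone
reliable_improvement = rci <= -1.96
```

In routine monitoring, an RCI crossing the 1.96 threshold in the improving direction supports continuing the current plan, whereas no reliable change (or reliable deterioration) would prompt the kind of treatment modification discussed above.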
1d. Students should be taught how to link assessment and treatment. Assessment is clinically relevant when it addresses one of the 3 P’s of prediction, prescription, and process (Youngstrom, 2008). Included in recommended instruction is training on assessment issues and psychometrics, teaching students to consider the reliability and validity of measures, focusing training on the best validated measures for each clinical purpose and for the given population, and helping students to integrate findings from different sources of assessment, particularly when obtained findings appear discrepant (De Los Reyes & Kazdin, 2005). Assessment and treatment ideally are tightly integrated, with assessment guiding clinical decisions about the next action in treatment. Decisions about where to focus training should start with the most common presenting problems, then look to the commonly used measures addressing these domains (Camara, Nathan, & Puente, 1998), and critically evaluate whether new data suggest that an alternate measure might have greater validity for a particular purpose or group (defined by demographic, cultural, or clinical factors). Recent advances in technology and in the research base make it possible now to integrate methods that focus on individual probabilities of key clinical variables. Training should aim for sufficient “numeracy” (Gigerenzer & Goldstein, 1996; Gigerenzer & Hoffrage, 1995) and competence with concepts and interpretation so that doctoral students can use these methods appropriately in providing care, even if not all programs emphasize the statistical underpinnings of these approaches.
For example, it would be helpful for graduates to understand how a Receiver Operating Characteristic (ROC) analysis measures a test’s performance at separating one diagnosis or condition from others (Grove & Meehl, 1996; Grove, Zald, Lebow, Snitz, & Nelson, 2000; McFall & Treat, 1999; Swets, Dawes, & Monahan, 2000), and be able to gauge whether the study design was valid for examining diagnostic performance (Bossuyt, et al., 2003). It is possible that research-oriented programs would be more likely to teach signal detection theory, ROC, logistic regression, and other methods as part of their research design or statistical analysis curricula, so that students learn how to generate the results as well as evaluate them. All programs should offer grounding in basic definitions of psychopathology, basic research on human development, interpersonal and social processes, as well as various dimensions of individual functioning (physiology, cognition, affect, and behavior).
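The ROC concept above can be conveyed without heavy statistical machinery: the area under the ROC curve (AUC) equals the probability that a randomly chosen case scores higher on the test than a randomly chosen non-case. The sketch below illustrates this equivalence with hypothetical screening-scale scores (the numbers are invented for teaching purposes, not drawn from the cited studies).

```python
def auc_from_scores(case_scores, control_scores):
    """Area under the ROC curve via its probabilistic interpretation:
    the proportion of case/control pairs in which the case scores
    higher than the control (ties count as half)."""
    pairs = 0
    favorable = 0.0
    for case in case_scores:
        for control in control_scores:
            pairs += 1
            if case > control:
                favorable += 1.0
            elif case == control:
                favorable += 0.5
    return favorable / pairs

# Hypothetical screening-scale scores for diagnosed vs. non-diagnosed groups
cases = [22, 25, 17, 30, 19]
controls = [12, 15, 18, 9, 14]

# 1.0 = perfect separation of the two groups; 0.5 = chance-level discrimination
auc = auc_from_scores(cases, controls)
```

Framing AUC as a simple probability gives students an interpretable index of diagnostic separation before (or instead of) formal signal detection coursework.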
2a. Students should be trained to understand and appreciate how heuristics and biases, particularly confirmation bias, will limit the accuracy of their judgments (Arkes, 1991; Garb, 1998). This information can assist the student in learning how to recognize when their decisions are guided by biases, how to correct this, and how to use valid psychological measures to continually evaluate/double-check their clinical impressions (Croskerry, 2002, 2003; Meehl, 1954).
2b. Students should be trained and supervised in the application of scientific thinking to practice, in particular hypothesis testing, data collection, and Bayesian decision-making within a clinical context (Dixon, et al., 2009; Lueger, 2002; Straus, et al., 2011). As mentioned above, they should learn how to monitor progress to make sure that treatment is helping and not having unintended consequences (Jacobson & Truax, 1991; Lambert & Brown, 1996; Lambert, et al., 2001; Powsner & Tufte, 1994). They should also be trained to be keenly aware of any ethical concerns that might impact their clinical or clinical research activities (Hoagwood & Cavaleri, 2010).
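The Bayesian decision-making emphasized in this principle is often taught in EBM via likelihood ratios: posterior odds equal prior odds multiplied by the test’s likelihood ratio. A minimal sketch follows; the base rate and likelihood ratio below are hypothetical teaching values, not estimates from any cited study.

```python
def update_probability(prior_prob, likelihood_ratio):
    """Revise a diagnostic probability given a test result, working in
    odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical example: a 10% base rate for the condition in this clinic,
# and a positive screen whose assumed positive likelihood ratio is 6
posterior = update_probability(prior_prob=0.10, likelihood_ratio=6.0)
```

Working through such examples makes concrete why the same positive screen carries very different diagnostic weight in low versus high base-rate settings, a point directly relevant to the discussion of heuristics and biases above.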
2c. Students should be trained in how to proceed clinically in the absence of highly relevant scientific knowledge (the “rigor versus relevance” dilemma, where clinical practice involves some individual cases that will not be well represented in rigorous research) (Schon, 1983). Students should learn how to use guiding principles and generalizations from evidence to shape practice, rather than switching to unstructured, unreflective improvisation in the absence of strong evidence (Castonguay & Beutler, 2006; Norcross, Hogan, & Koocher, 2008; Stricker & Gold, 1996). For example, exposure is a technique with empirical support for a wide range of anxiety and fear conditions. Even if a client’s anxiety problems do not meet diagnostic criteria for a specific anxiety disorder with a corresponding empirically-supported treatment, critical thinking about the scientific literature may lead a therapist to identify exposure as a research-informed treatment option (Woody & Ollendick, 2006). As another example, basic and psychotherapy research converge in suggesting that when treating clients with depressive symptomatology, students should be trained to use interventions that foster the awareness, acceptance, and regulation of emotion. Specifically, while psychopathology research has demonstrated the positive impact of emotional disclosure (and the negative impact of suppression of emotion), psychotherapy studies have found positive relationships between emotional deepening and outcome not only in experiential therapy (which emphasizes this principle of change) but, as mentioned above, in cognitive therapy for depression (see LeMoult, Castonguay, Joormann, & McAleavey, in press). Interventions that focus on emotional regulation would seem appropriate even if the client does not meet diagnostic criteria for major depression.
Although existing evidence-based interventions have been identified according to diagnostic categories, treatment need not be organized around a diagnosis, but should have a formulation that guides choice of strategies and which can be measured to show progress (Crits-Christoph, 1998; Luborsky, 1984; Sanderson & McGinn, 1997; Stricker & Gold, 1996). These suggestions not only reflect the importance of using critical thinking in the conduct of psychotherapy but are also consistent with the recommendation that clinicians be guided by empirically-based principles of change (e.g., Follette & Greenberg, 2006; Woody & Ollendick, 2006; Norcross, et al., 2008; Spring, 2007) to help them adapt (and/or enhance the impact of) empirically-supported treatments for individual clients (Stricker & Gold, 1996).
2d. Students should learn to be critical consumers and producers of the research literature, recognizing common sources of bias in assessment (Bossuyt, et al., 2003; Campbell & Fiske, 1959; Jaeschke, Guyatt, & Sackett, 1994; Meehl, 1954) and treatment studies (Chambless & Ollendick, 2000; Moher, et al., 2001; Silverman, 1998), and both the strengths and limitations of different study designs for yielding knowledge that generalizes to clinically relevant and diverse populations. Students should be trained to be attuned to how ethical issues may operate in clinical research studies (e.g., selection of control groups, inclusion/exclusion criteria, informed consent; Hoagwood & Cavaleri, 2010). In addition, students should learn how biases and heuristics, including confirmation bias, can affect their perspective in research contexts, constrain the nature of hypotheses they consider, and affect their interpretation of both previous research and their own study data.
3a. Courses and other learning experiences should be organized not just around content but toward teaching students how to learn, and how to continue to update their knowledge and skills throughout their careers to reflect continued progress in the field. In particular, students should develop an understanding of how to acquire and organize information, learn to keep abreast of new knowledge regarding evidence-based practice, and learn how to incorporate this new knowledge into their clinical practice. This skill is essential given the accelerating pace of information creation and dissemination.
3b. Training should teach patient-centered approaches to framing questions, searching for, evaluating, and applying the evidence in real time using the evidence-based practice model, as elaborated by Sackett and proponents of EBM (Hoge, et al., 2003; Howard, Allen-Meares, & Ruffolo, 2007; Straus, et al., 2011).
3c. Students should be familiar with go-to websites, high-quality journals, and other trusted sources of information regarding EBP (Spring, 2007). Crucial skills for being effective consumers of Web-based information include critical appraisal of conflicts of interest, appraisal of the systematic review criteria and research designs a website uses to identify effective practices, and strategies for resolving disputes between competing claims about a tool or technique. The skills recommended in Principles 1b and 2d will be particularly useful for students when evaluating the quality of information provided by various websites and journals. In addition, supervisors and trainers should stay abreast of additional and new websites and journals that provide high-quality information and encourage their students to utilize them as part of ongoing and life-long learning.
3d. Students should have practice and support for applying critical appraisal skills to un-vetted sources of information, such as Google, Wikipedia, and social media sites.
4a. Inclusion of opportunities for experiential learning will facilitate the integration needed for students to gain skills with evidence-based practice (Boswell & Castonguay, 2007; McGinn, Jervis, Wisnivesky, Keitz & Wyer, 2008). Examples of high quality experiential learning opportunities include (1) combining didactic lectures with adjunctive small group interactive learning and individual supervision; (2) reviewing case vignettes (Dubicka, Carlson, Vail, & Harrington, 2008; Jenkins, Youngstrom, Washburn, & Youngstrom, 2011), watching videotapes of faculty members conducting assessment or therapy, working through the process of navigating ethical dilemmas (e.g., need to report potential child abuse, disclose information about potentially harmful behavior to the parent of an adolescent), and watching faculty model the process of conducting research searches and critically evaluating the findings; (3) training in health-information technology systems and active utilization of database resources for research and practice (Meats, Brassey, Heneghan, & Glasziou, 2007), learning how to incorporate pre-appraised information, the development of critically appraised topics (CATs), or portfolios combining assessment or therapy materials with summaries of key strengths and limitations (Gilbert, Burls, & Glasziou, 2008); and (4) encouraging active learning via student presentations, debates, written papers, journal club, ethical case vignettes, and collaborative work with peers on these projects, especially when emphasizing clinical relevance (Straus, et al., 2011). There are a variety of developed models for these sorts of educational activities in other health care disciplines that could readily be adapted for use in psychology training, including curated collections of critical reviews of published articles distilling the key features and clinical relevance (e.g., ACP Journal Club; see also Hoge et al., 2003, and Gray, 2004, for mental health examples).
4b. Clinical supervision in evidence-based practice should be performed by supervisors who are well-versed in evidence-based practice. Clinical psychology doctoral programs often share responsibility for clinical supervision, with supervision for clinical practica frequently provided by local practitioners rather than core clinical faculty. Involvement of practicing clinicians may facilitate exposure to a variety of clinical populations and techniques. Regardless of whether supervision is provided by core faculty members or practitioners, it is important to ensure that students receive clinical supervision from supervisors who are knowledgeable about and experienced in the application of evidence-based practices. Smaller programs with fewer faculty may have more challenges offering supervision by multiple faculty members familiar with various methods (Pagoto, et al., 2007). A range of different training support methods, including online videos and continuing education programs, are developing as ways of augmenting local expertise and resources.
In addition, clinical supervision should include watching videotapes of the student’s therapy sessions, both during their initial sessions and as they work on mastering specific intervention techniques, in order to observe student therapists as they assess patient preferences, introduce therapy options with awareness of diversity issues, consider ethical issues relevant to clinical practice, and integrate relationship skills alongside specific empirically-supported interventions.
In considering how best to incorporate these principles into doctoral training, it is important to recognize that these guidelines can and should be integrated into a program’s existing coursework, practica, and milestone requirements. The task force has explicitly avoided providing specific reading lists, recommending particular course sequences, or putting forward curricular suggestions that might appear prescriptive. Currently, doctoral programs in clinical psychology vary widely with regard to theoretical orientation, training model, and specialization, yet all share common elements of training (e.g., specific coursework on psychopathology, diagnosis, case conceptualization, therapy, and ethics; foundational courses in biological factors, cognitive and affective issues, developmental and social influences, and individual differences). We have constructed the principles of training in evidence-based practice so that they can be integrated within the larger Gestalt of a specific clinical training program.
In sum, training doctoral students to conduct evidence-based practice requires coursework, clinical supervision, and research experiences that allow students to use an evidence-based approach to learning and to integrate aspects of evidence-based knowledge throughout all stages of training. An essential component is learning the requisite skills to search for new evidence, to evaluate it critically, and to decide when to update or upgrade skills and content to provide optimal care for the individual case. Evidence-based practice bridges and integrates science and practice via the process of continually checking for clinically relevant evidence that improves the care provided to the individual.
1. To determine authorship of this document, the chair of the committee was listed first; remaining authors are listed in alphabetical order. The opinions expressed in this document do not reflect those of any one individual but rather were reached through discussion among the task force.
2. Evidence-based training is the conscientious and explicit use of the best current evidence in the design of training programs. The evidence consists of outcomes from well-designed research studies in the fields of psychology and evidence-based practice. The terms evidence-based training and evidence-based practice do not indicate one specific instructional approach or technique and do not favor one theoretical orientation over another. Although often mistaken for the same thing, evidence-based practice and training differ from manualized, empirically supported treatments. Rather, evidence-based training is a general training approach designed to ensure that doctoral students become proficient in the skills, knowledge, and behavior necessary for the study and practice of clinical psychology (Hoge et al., 2003; Spring, 2007).
3. Feedback and recommendations related to a previous draft of this document were also sought and received from representatives of various APA divisions and other professional organizations to ensure the relevance of these principles to different types of Clinical Psychology training programs/models.
American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63, 839-851. doi: 10.1037/0003-066X.63.9.839
Barkham, M., & Margison, F. (2007). Practice-based evidence as a complement to evidence-based practice: From dichotomy to chiasmus. In C. Freeman & M. Power (Eds.), Handbook of evidence-based psychotherapies: A guide for research and practice. Chichester: Wiley. pp. 443–476.
Barkham, M., Stiles, W. B., Lambert, M. J. & Mellor-Clark, J. (2010). Building a rigorous and relevant knowledge-base for the psychological therapies. In M. Barkham, G. E. Hardy, & J. Mellor-Clark (Eds.), Developing and delivering practice-based evidence: A guide for the psychological therapies. (pp. 21–61). Chichester: Wiley.
Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., . . . de Vet, H. C. W. (2003). Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. British Medical Journal, 326, 41-44. doi: 10.1136/bmj.326.7379.41
Boswell, J.F., & Castonguay, L.G. (2007). Psychotherapy training: Suggestions for core ingredients and future research. Psychotherapy: Theory, Research, Practice, and Training, 44, 378-383.
Boswell, J.F., Sharpless, B.A., Greenberg, L.S., Heatherington, L., Huppert, J.D., Barber, J.P., Goldfried, M.R., & Castonguay, L.G. (2010). Schools of psychotherapy and the beginnings of a scientific approach. In D.H. Barlow (Ed.), Oxford handbook of clinical psychology (pp. 98-127). New York: Oxford University Press.
Camara, W., Nathan, J., & Puente, A. (1998). Psychological test usage in professional psychology: Report of the APA practice and science directorates (pp. 51). Washington, DC: American Psychological Association.
Castonguay, L.G. (2013). Psychotherapy outcome: A problem worth re-revisiting 50 years later. Psychotherapy, 50, 52-67.
Castonguay, L.G., Barkham, M., Lutz, W., & McAleavey, A.A. (2013). Practice-oriented research: Approaches and application. In M.J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (6th ed., pp. 85-133). New York: Wiley.
Castonguay, L.G., & Beutler, L.E. (Eds.). (2006). Principles of therapeutic change that work. New York: Oxford University Press.
Castonguay, L.G., Boswell, J.F., Constantino, M.J., Goldfried, M.R., & Hill, C.E. (2010). Training implications of harmful effects of psychological treatments. American Psychologist, 65, 34-49.
Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18.
Chambless, D.L., & Ollendick, T.H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.
Crits-Christoph, P. (1998). Training in empirically validated treatments: The Division 12 APA Task Force recommendations. In K.S. Dobson & K.D. Craig (Eds.), Empirically supported therapies: Best practice in professional psychology (pp. 3-25). Thousand Oaks, CA: Sage Publications.
De Los Reyes, A., & Kazdin, A. E. (2005). Informant discrepancies in the assessment of child psychopathology: A critical review, theoretical framework, and recommendations for further study. Psychological Bulletin, 131, 483-509. doi: 10.1037/0033-2909.131.4.483
Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Lik, N. M., Garcia, Y., & Rosales, R. (2009). Creating single-subject design graphs in Microsoft Excel 2007. Journal of Applied Behavior Analysis, 42, 277-293. doi: 10.1901/jaba.2009.42-277
Dubicka, B., Carlson, G. A., Vail, A., & Harrington, R. (2008). Prepubertal mania: diagnostic differences between US and UK clinicians. European Child & Adolescent Psychiatry, 17, 153-161. doi: 10.1007/s00787-007-0649-5
Evans, S. W., & Youngstrom, E. A. (2006). Evidence Based Assessment of Attention-Deficit Hyperactivity Disorder: Measuring Outcomes. Journal of the American Academy of Child and Adolescent Psychiatry, 45, 1132-1137.
Follette, W.C., & Greenberg, L.E. (2006). Technique factors in treating dysphoric disorders. In L.G. Castonguay & L.E. Beutler (Eds.), Principles of therapeutic change that work (pp. 83-109). New York: Oxford University Press.
Frazier, T. W., & Youngstrom, E. A. (2006). Evidence-Based Assessment of Attention-Deficit/Hyperactivity Disorder: Using Multiple Sources of Information. Journal of the American Academy of Child & Adolescent Psychiatry, 45, 614-620. doi: 10.1097/01.chi.0000196597.09103.25
Greenberg, L.S., & Pinsoff, W.M. (Eds.). (1986). The psychotherapeutic process: A research handbook. New York: Guilford Press.
Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law, 2, 293-323.
Hill, C.E., Stahl, J., & Roffman, M. (2007). Training novice psychotherapists: Helping skills and beyond. Psychotherapy: Theory, Research, Practice, and Training, 44, 364-370.
Hoagwood K.E. & Cavaleri, M.A. (2010). Ethical issues in child and adolescent psychosocial treatment research. In J.R. Weisz & A.E. Kazdin (Eds.) Evidence-based psychotherapies for children and adolescents, Second Edition. New York: Guilford.
Howard, M. O., Allen-Meares, P., & Ruffolo, M. C. (2007). Teaching evidence-based practice: Strategic and pedagogical recommendations for schools of social work. Research on Social Work Practice, 17, 561-568.
Hunsley, J., & Mash, E. J. (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 29-51. doi: 10.1146/annurev.clinpsy.3.022806.091419
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12-19. doi: 10.1037/0022-006X.59.1.12
Jaeschke, R., Guyatt, G. H., & Sackett, D. L. (1994). Users' guides to the medical literature: III. How to use an article about a diagnostic test: A. Are the results of the study valid? Journal of the American Medical Association, 271, 389-391.
Jenkins, M. M., Youngstrom, E. A., Washburn, J. J., & Youngstrom, J. K. (2011). Evidence-based strategies improve assessment of pediatric bipolar disorder by community practitioners. Professional Psychology: Research and Practice, 42, 121-129. doi: 10.1037/a0022506
Lambert, M. J., Hansen, N. B., & Finch, A. E. (2001). Patient-focused research: using patient outcome data to enhance treatment effects. Journal of Consulting & Clinical Psychology, 69, 159-172. doi: 10.1037/0022-006X.69.2.159
LeMoult, J., Castonguay, L.G., Joormann, J., & McAleavey, A.A. (in press). Depression: Basic research and clinical implications. In L.G. Castonguay & T.G. Oltmanns (Eds.), Psychopathology: From science to clinical practice. New York: Guilford Press.
McFall, R. M., & Treat, T. A. (1999). Quantifying the information value of clinical assessment with signal detection theory. Annual Review of Psychology, 50, 215-241. doi: 10.1146/annurev.psych.50.1.215
Meats, E., Brassey, J., Heneghan, C., & Glasziou, P. (2007). Using the Turning Research Into Practice (TRIP) database: How do clinicians really search? Journal of the Medical Library Association : JMLA, 95, 156-163. doi: 10.3163/1536-5050.95.2.156
Milrod, B., Leon, A.C., Busch, F., Rudden, M., Schwalberg, M., Clarkin, J., et al. (2007). A randomized controlled clinical trial of psychoanalytic psychotherapy for panic disorder. American Journal of Psychiatry, 164(2), 265-272.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ, 339, b2535. doi: 10.1136/bmj.b2535
Norcross, J.C. (Ed.). (2011). Psychotherapy relationships that work. New York: Oxford University Press.
Pagoto, S. L., Spring, B., Coups, E. J., Mulvaney, S., Coutu, M. F., & Ozakinci, G. (2007). Barriers and facilitators of evidence-based practice perceived by behavioral science health professionals. Journal of Clinical Psychology, 63, 695-705. doi: 10.1002/jclp.20376
Rice, L., & Greenberg, L.S. (Eds.). (1984). Patterns of change: Intensive analysis of psychotherapy process. New York: Guilford Press.
Sanderson, W.C., & McGinn, L.K. (1997). Psychological treatments of anxiety disorder patients with comorbidity. In S. Wetzler & W.C. Sanderson (Eds.), Treatment strategies of patients with comorbidity. New York: Wiley.
Silverman, W.K., & Hinshaw, S.P. (2008). The second special issue on evidence-based psychosocial treatments for children and adolescents: A 10-year update. Journal of Clinical Child & Adolescent Psychology, 37, 1-7.
Stiles, W.B. (1988). Psychotherapy process-outcome research can be misleading. Psychotherapy, 25, 27-35.
Swift, J.K., Callahan, J.L., & Vollmer, B.M. (2011). Preferences. In J.C. Norcross (Ed.), Psychotherapy relationships that work (2nd ed., pp. 301-315). New York: Oxford University Press.
Whiting, P., Rutjes, A. W., Reitsma, J. B., Bossuyt, P. M., & Kleijnen, J. (2003). The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Medical Research Methodology, 3, 25.
Woody, S.R., & Ollendick, T.H. (2006). Technique factors in treating anxiety disorders. In L.G. Castonguay & L.E. Beutler (Eds.), Principles of therapeutic change that work (pp. 167-186). New York: Oxford University Press.
Youngstrom, E. A. (2008). Evidence-based strategies for the assessment of developmental psychopathology: Measuring prediction, prescription, and process. In D. J. Miklowitz, W. E. Craighead & L. Craighead (Eds.), Developmental psychopathology (pp. 34-77). New York: Wiley.
Youngstrom, E. A. (2012). Future directions in psychological assessment: Combining Evidence-Based Medicine innovations with psychology’s historical strengths to enhance utility. Journal of Clinical Child & Adolescent Psychology. doi: 10.1080/15374416.2012.736358
Youngstrom, E. A., Freeman, A. J., & Jenkins, M. M. (2009). The assessment of children and adolescents with bipolar disorder. Child and Adolescent Psychiatric Clinics of North America, 18, 353-390. doi: 10.1016/j.chc.2008.12.002
The following statement was approved as policy of the American Psychological Association (APA) by the APA Council of Representatives during its August, 2005 meeting.
Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences. This definition of EBPP closely parallels the definition of evidence-based practice adopted by the Institute of Medicine (2001, p. 147) as adapted from Sackett and colleagues (2000): “Evidence-based practice is the integration of best research evidence with clinical expertise and patient values.” The purpose of EBPP is to promote effective psychological practice and enhance public health by applying empirically supported principles of psychological assessment, case formulation, therapeutic relationship, and intervention.
Best research evidence refers to scientific results related to intervention strategies, assessment, clinical problems, and patient populations in laboratory and field settings as well as to clinically relevant results of basic research in psychology and related fields. A sizeable body of evidence drawn from a variety of research designs and methodologies attests to the effectiveness of psychological practices. Generally, evidence derived from clinically relevant research on psychological practices should be based on systematic reviews, reasonable effect sizes, statistical and clinical significance, and a body of supporting evidence. The validity of conclusions from research on interventions is based on a general progression from clinical observation through systematic reviews of randomized clinical trials, while also recognizing gaps and limitations in the existing literature and its applicability to the specific case at hand (APA, 2002). Health policy and practice are also informed by research using a variety of methods in such areas as public health, epidemiology, human development, social relations, and neuroscience. Researchers and practitioners should join together to ensure that the research available on psychological practice is both clinically relevant and internally valid. It is important not to assume that interventions that have not yet been studied in controlled trials are ineffective. However, widely used psychological practices as well as innovations developed in the field or laboratory should be rigorously evaluated and barriers to conducting this research should be identified and addressed.
Psychologists’ clinical expertise encompasses a number of competencies that promote positive therapeutic outcomes. These competencies include a) conducting assessments and developing diagnostic judgments, systematic case formulations, and treatment plans; b) making clinical decisions, implementing treatments, and monitoring patient progress; c) possessing and using interpersonal expertise, including the formation of therapeutic alliances; d) continuing to self-reflect and acquire professional skills; e) evaluating and using research evidence in both basic and applied psychological science; f) understanding the influence of individual, cultural, and contextual differences on treatment; g) seeking available resources (e.g., consultation, adjunctive or alternative services) as needed; and h) having a cogent rationale for clinical strategies. Expertise develops from clinical and scientific training, theoretical understanding, experience, self-reflection, knowledge of current research, and continuing education and training.
Clinical expertise is used to integrate the best research evidence with clinical data (e.g., information about the patient obtained over the course of treatment) in the context of the patient’s characteristics and preferences to deliver services that have a high probability of achieving the goals of treatment. Integral to clinical expertise is an awareness of the limits of one’s knowledge and skills and attention to the heuristics and biases—both cognitive and affective—that can affect clinical judgment. Moreover, psychologists understand how their own characteristics, values, and context interact with those of the patient.
Psychological services are most effective when responsive to the patient’s specific problems, strengths, personality, sociocultural context, and preferences. Many patient characteristics, such as functional status, readiness to change, and level of social support, are known to be related to therapeutic outcomes. Other important patient characteristics to consider in forming and maintaining a treatment relationship and in implementing specific interventions include a) variations in presenting problems or disorders, etiology, concurrent symptoms or syndromes, and behavior; b) chronological age, developmental status, developmental history, and life stage; c) sociocultural and familial factors (e.g., gender, gender identity, ethnicity, race, social class, religion, disability status, family structure, and sexual orientation); d) environmental context (e.g., institutional racism, health care disparities) and stressors (e.g., unemployment, major life events); and e) personal preferences, values, and preferences related to treatment (e.g., goals, beliefs, worldviews, and treatment expectations). Some effective treatments involve interventions directed toward others in the patient’s environment, such as parents, teachers, and caregivers. A central goal of EBPP is to maximize patient choice among effective alternative interventions.
Clinical decisions should be made in collaboration with the patient, based on the best clinically relevant evidence, and with consideration for the probable costs, benefits, and available resources and options.3 It is the treating psychologist who makes the ultimate judgment regarding a particular intervention or treatment plan. The involvement of an active, informed patient is generally crucial to the success of psychological services. Treatment decisions should never be made by untrained persons unfamiliar with the specifics of the case. The treating psychologist determines the applicability of research conclusions to a particular patient. Individual patients may require decisions and interventions not directly addressed by the available research. The application of research evidence to a given patient always involves probabilistic inferences. Therefore, ongoing monitoring of patient progress and adjustment of treatment as needed are essential to EBPP. APA encourages the development of health care policies that reflect this view of evidence-based psychological practice.
American Psychological Association. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57, 1052-1059.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence based medicine: How to practice and teach EBM (2nd ed.). London: Churchill Livingstone.
1. An expanded discussion of the issues raised in this policy statement including the rationale and references supporting it may be found in the Report of the Presidential Task Force on Evidence-Based Practice available online at http://www.apa.org/practice/ebpreport.pdf.
2. To be consistent with discussions of evidence-based practice in other areas of health care, we use the term patient to refer to the child, adolescent, adult, older adult, couple, family, group, organization, community, or other populations receiving psychological services. However, we recognize that in many situations there are important and valid reasons for using such terms as client, consumer or person in place of patient to describe the recipients of services.
3. For some patients (e.g., children and youth), the referral, choice of therapist and treatment, and decision to end treatment are most often made by others (e.g., parents) rather than by the individual who is the target of treatment. This means that the integration of evidence and practice in such cases is likely to involve information sharing and decision making in concert with others.
Principle 1a: We propose a training model that combines the traditional content of graduate curricula with concepts, skills, and practices that are tightly connected with clinical work. The EBM model is distilled in a book, Evidence-Based Medicine: How to Practice and Teach EBM, which is updated regularly and has supporting electronic materials on websites and a bundled CD (Straus et al., 2011). The “Users’ Guides to the Medical Literature” anthologizes a series of papers originally published in the Journal of the American Medical Association, illustrating the principles and their application to individual patients (Guyatt & Rennie, 2002). Guyatt and Rennie also go into much more detail about the research methods and statistical tools used, making this a good reference for instructors as well as producers of research. There is a volume that concentrates on psychiatry and mental health applications; it does not cover all of the skills and ideas included in the other volumes, but it is helpful in concentrating on mental health examples (Gray, 2004a). As well, the Appendix contains a number of readings that can serve to enhance graduate courses on EBP.
Principle 1b: Overall, faculty should integrate training in research and practice. Key recurring questions include “Are the results of this study valid?”, “Does the information apply to my client?”, and “Will it improve their care?” (Straus et al., 2011).
1. Hands-on exercises during training in both clinical intervention and research methodology (such as searching the literature based on a particular clinical question and critiquing published treatment studies) will help students to integrate what they are learning from related coursework. There are tools available that can expedite the teaching and learning of this material. For example, there are scripts for “Critically Appraised Topics” and software supporting their use on computers and smartphones. Similarly, there are checklists devised to help rapidly appraise articles (e.g., QUADAS and JARS; American Psychological Association, 2010; Whiting, Rutjes, Reitsma, Bossuyt, & Kleijnen, 2003). In addition, standards are available to critically assess research and reviews (e.g., CONSORT, STARD, and MARS).
2. Students can benefit from exposure to recent methods for conducting research within the context of clinical practice, as an exemplar of how research and practice can and should be merged within this environment (see Castonguay et al., 2013).
3. Group supervision models also can promote the modeling and internalization of skills and critical thinking processes. There are a variety of exercises developed in multiple professional disciplines that provide examples that mesh well with psychological training (Hoge et al., 2003; Straus et al., 2011).
Principle 1c: 1. Present practice-oriented research approaches and findings (e.g., phases and patterns of change, outcome feedback, therapist effects, and helpful events during therapy sessions) to guide students in integrating data collection into clinical care (Castonguay et al., 2013). Also relevant to integrate and/or conduct in clinical practice is research on patient preferences, patient characteristics that contribute to treatment response, and cultural influences on treatment process and outcome.
2. Students can benefit from exposure to the “episode paradigm.” The episode (or event) paradigm involves different forms of research methodology (e.g., task analyses, comprehensive process analyses, interpersonal process recall) that are based on the assumption that therapeutic change is neither linear nor the result of a gradual increment of therapeutic interventions (or of the implementation of these techniques). Researchers working within this paradigm have argued that traditional process-outcome methods have largely failed to confirm the positive impact of assumed mechanisms of change because these methods tend to (a) investigate mechanisms of change in randomly selected sessions (assuming that all sessions are equally meaningful or impactful), and (b) correlate the frequency of these mechanisms with outcome (assuming that more is always better, rather than recognizing that effective therapists adjust their level of intervention based on the needs of clients; Stiles, 1988). Instead, the episode paradigm is based on the assumption that change is frequently fostered by significant events or episodes that happen at specific (non-random) times during therapy. Within this paradigm, researchers have designed scientifically rigorous strategies to define (based on theory about mechanisms of change and/or clinical observations), identify, and analyze (using both qualitative and quantitative methods) (1) meaningful events (in terms of the client’s performance, the therapist’s tasks and responsiveness, and the interpersonal climate and transactions between them), (2) the context within which they take place, and (3) their relationship with therapeutic progress (see Greenberg & Pinsoff, 1986; Rice & Greenberg, 1984).
Principle 1d: There are examples of integrating these methods into a system of assessment and treatment oriented around particular clinical issues, such as potential bipolar disorder (Youngstrom, Freeman, & Jenkins, 2009) or attention-deficit/hyperactivity disorder (Evans & Youngstrom, 2006; Frazier & Youngstrom, 2006). These are useful as examples of moving from group-focused research to the application of results to individual cases, in addition to the specific content around each diagnosis.
Principle 3d: 1. The Task Force has compiled a list of readings and websites, including the Cochrane Library, EffectiveChildTherapy.org, and PsychologicalTreatments.org (the Division 12 treatments website). These websites are updated periodically, which will facilitate teaching the most current evidence-based information about treatment. Moreover, students can be introduced to these websites as an entry into life-long learning.
2. Journals that emphasize the integration of research with clinical knowledge include Journal of Consulting and Clinical Psychology, Psychological Assessment, Journal of Abnormal Psychology, Journal of Clinical Child & Adolescent Psychology, Behavior Therapy, Psychotherapy Research, Psychotherapy, and Evidence-Based Mental Health. As well, we suggest that doctoral students attend conferences where evidence-based therapies are promoted and taught (e.g., Association for Behavioral and Cognitive Therapies, Society of Behavioral Medicine). This list of journals and conferences providing high-quality information is not meant to be exhaustive, but rather to provide a few examples. Supervisors and trainers are encouraged to keep abreast of these and other outlets that provide high-quality information consistent with the evidence-based principles in this document, and to encourage their students to use them as part of ongoing and life-long learning.
o Original Oxford EBM group, founded by Dave Sackett, who launched the EBM movement in medicine
o Goal is to develop, teach and promote evidence-based health care through conferences, workshops and EBM tools so that all health care professionals can maintain the highest standards of medicine.
o McMaster group for EBM, headed by Gordon Guyatt, who coined the term EBM
o Offers resources to develop an EBM approach for different health care fields and organizes a workshop each year so that professionals can learn how to incorporate EBP into their training programs
o General overview of evidence-based practice
o Provides a concise summary of the history and principles behind the evidence-based practice movement, as well as a selection of links to other relevant websites
o Overview of evidence-based treatments, organized for providers and families
o A curated website providing concise reviews of the state of evidence for particular clinical issues, as well as links to resources and further information
o EBP tutorial
o A brief set of tutorials to help orient to EBP, introduce key concepts, and learn skills
o EBP tutorial
o A brief set of tutorials to help orient to EBP, introduce key concepts, and learn skills
o Website on Research-Supported Psychological Treatments
o The purpose of this website is to provide information about effective treatments for psychological disorders. Basic descriptions are provided for each psychological disorder and treatment. In addition, for each treatment, the website lists key references, clinical resources, and training opportunities.
o Provides a summary of evidence-based assessment and treatment for a variety of providers and for families
o TherapyAdvisor seeks to encourage and promote empirically-based or evidence-based practice by providing information on the psychosocial treatments which have empirical support for the treatment of specific disorders. Practitioner members have additional access to detailed information on each treatment including expert treatment summaries, key and recent references selected by experts, and links to treatment manuals, training materials, and training institutes available for each treatment.
o Division 53 and FIU will be providing a recorded speaker series on EBTs for children/adolescents
o Resources range from brief introductions and overviews to detailed workshops teaching core skills and strategies