Conducting research in clinical psychology practice: Barriers, facilitators, and recommendations

Kirsten V. Smith

1 Oxford Centre for Anxiety Disorders and Trauma, Department of Experimental Psychology, University of Oxford, UK

Graham R. Thew

The combination of clinical psychologists’ therapeutic expertise and research training means that they are in an ideal position to conduct high‐quality research projects. However, despite these skills and the documented benefits of research to services and service users, research activity in practice remains low. This article aims to give an overview of the advantages of, and difficulties in, conducting research in clinical practice.

We reviewed the relevant literature on barriers to research and reflected on our clinical and research experiences in a range of contexts to offer practical recommendations.

We considered factors involved in the planning, sourcing of support, implementation, and dissemination phases of research, and outlined suggestions to improve the feasibility of research projects in post‐qualification roles.

Conclusions

We suggest that research leadership is particularly important within clinical psychology to ensure the profession's continued visibility and influence within health settings.

Practitioner points

Clinical implications

  • Emerging evidence suggests that clinical settings that foster research are associated with better patient outcomes.
  • Suggestions to increase the feasibility of research projects in clinical settings are detailed.

Limitations

  • The present recommendations are drawn from the authors’ practical experience and may need adaptation to individual practitioners’ settings.
  • This study does not attempt to assess the efficacy of the strategies suggested.

There is a growing body of evidence that conducting research in clinical practice not only improves the clinical performance of the service (McKeon et al., 2013) but can also lead to improved physical health outcomes and survival rates (Nickerson et al., 2014; Ozdemir et al., 2015; Rochon, du Bois, & Lange, 2014). Clinical psychologists in the United Kingdom are predominantly trained in the ‘scientist‐practitioner model’, meaning that we theoretically have the skills both to deliver psychological therapies and to design, conduct, analyse, and interpret research (Holttum & Goble, 2006; Stricker, 2002). However, despite research output being a requirement of doctoral training, psychological research conducted in clinical practice post‐qualification is not commonplace (Mitchell & Gill, 2014; Morton, Patel, & Parker, 2008). In fact, it has been suggested that the modal number of publications for clinical psychologists, namely zero, has not improved in over twenty years (Barrom, Shadish, & Montgomery, 1988; Eke, Holttum, & Hayward, 2012; Norcross, Karpiak, & Santoro, 2005).

Clinical psychology trainees are required to produce a substantial and original piece of clinically‐relevant research as part of their training qualification. However, reports suggest that up to 75% of UK doctoral theses are left unpublished (Cooper & Turpin, 2007). One suggested explanation for these low publication rates is a lack of identification with the role of ‘researcher’ and rejection of the scientist‐practitioner model (Gelso, 1993; Newman & McKenzie, 2011). However, it seems important to broaden the conceptualization of the term ‘research activity’ beyond the production of peer‐reviewed publications to include consuming research (e.g., reading literature, reviewing guidelines, staying up to date with recent advances in the field). While not falling under the formal definition of research, service evaluation (designed and conducted solely to define or judge current care) and audit (designed and conducted to inform delivery of best care by comparing current care against a predefined standard) could also reasonably constitute research activity, given that they draw on similar skills (NHS Health Research Authority, 2014; see Table 1 for an overview of research types, their practical requirements, and general aims). Yet it has been suggested that even service evaluation and audit are not projects that clinical psychologists feel particularly comfortable undertaking (Cooper & Graham, 2009).

Table 1. Types of research activity in clinical psychology, requirements, aims, and potential team member involvement

Note. C = clinicians; M = managers; T = trainees; J = junior staff (e.g., assistant psychologists, research assistants, other junior staff members); A = administrative staff; E = external collaborators; S = statistical advisors; RCT = randomized controlled trial.

A recent study showed that Australian psychologists working in a large metropolitan public health setting reported higher perceived capacity to undertake research compared with other allied health professionals (Elphinston & Pager, 2015). However, psychologists also perceived their individual capacity to be greater than that of their team and overarching organization, which may suggest they do not feel research skills are sufficiently valued or harnessed by employers. Perhaps unsurprisingly, team research capacity was found to mediate the relationship between psychologists’ research skills and their current research activity. Consequently, psychologists working in teams where research training was encouraged, funds were allocated, and projects relevant to practice were supported were more likely to engage with research and employ their skills. This study is consistent with earlier research that found subjective norms (i.e., beliefs about how others would perceive one’s engagement in research) to be an important mediator between research environment and research intention (Eke et al., 2012; Holttum & Goble, 2006). These findings were likewise supported by a report on attitudes to research activity within the health and social care system in Ireland, which found that a lack of perceived skills, coupled with an organizational culture that did not value research, contributed to low research engagement (McHugh & Byrne, 2011). This underutilization of research training is troubling, as research remains a unique selling point of clinical psychologists and an opportunity to provide intellectual leadership and influence policy within the health care profession. One explanation might be that, in the United Kingdom at least, research in clinical practice is so rare that there is limited opportunity for its benefits to be realized by wider teams, perhaps feeding the undervaluation of these skills.

However, this only partly explains the low research output of psychologists in clinical practice, and unfortunately there is limited literature exploring the reasons for it. Barriers suggested by McHugh and Byrne (2011) include the prioritization of clinical roles, lack of protected time, and lack of appropriate funding. In fact, over 80% of their participants cited either a lack of time or clinical work pressures as a factor preventing research activity. A recent report found that much of the research conducted within the National Health Service (NHS) was unfunded (Mitchell & Gill, 2014), with earlier reports from the United States suggesting that as much as 40% of all research is carried out without adequate funding (Silberman & Snyderman, 1997) and that 60% of unfunded projects are carried out in researchers’ own time (Schroter, Tite, & Kassem, 2006). Previous research found that competence in applying for funding was rated by both research‐active and research‐inactive health professionals as their weakest skill; the authors suggested that insufficient practical experience due to limited funding opportunities may compound this lack of skill development (McHugh & Byrne, 2011). So it seems that we, as clinicians, face the difficult task of fitting research into limited time, with limited funds, often without the support or encouragement of our surrounding teams. Yet, as already mentioned, our research capacity enhances our professional visibility and influence within the field, as well as improving clinical performance and health outcomes.

In the light of the numerous benefits and difficulties, it is important to consider strategies that may facilitate research activity. We reflected on our clinical and research experiences in a range of contexts, aiming to outline key factors that can influence the successful set‐up and implementation of research in clinical practice. Relevant literature was consulted to consider the empirical support for these factors and to guide recommendations to overcome potential barriers.

Determinants of successful research – Recommendations from the field

Role specification

One factor that can make research projects easier to implement is having them ‘built in’ to overall job roles. Where psychologists are looking for post‐qualification positions or looking to change posts, it is worth considering how research components or specific projects fall within job descriptions. Having research activity included within the overall framework of roles and responsibilities for a post not only facilitates it happening, but also demonstrates something of the service's attitude towards this aspect of clinical psychologists’ skills. Where research is not mentioned, we recommend asking what opportunities might be available, as research is likely to be feasible in at least some form. If psychologists enquire more routinely about research opportunities within posts, this may contribute to research skills being more widely recognized as a key component of what the profession can offer.

We note that this approach does not apply exclusively to those seeking new posts, and would encourage psychologists to consider research opportunities within the context of job planning meetings, writing job descriptions for vacant posts, or annual appraisals.

Scope of research project

A related point is the choice of research project itself. The size and setting of the service may mean larger‐scale studies are not practical in terms of resources, and original research studies will require formal ethical review,1 which can be lengthy, making them a less practical option in some settings. However, in our experience the projects that are most difficult to implement in routine clinical settings are those where the impact of the findings may not be immediately apparent. While many may argue that the development of new knowledge is inherently valuable, clinical services must balance a number of competing priorities, meaning they can often only feasibly support projects that are likely to lead directly to improved service provision and/or service user benefit. As such, we recommend that clinicians in the first instance design research projects on the basis of client needs, and/or with a greater focus on service improvement, as these are more likely to be supported by services and clinical teams.

Managerial support

In our experience, research projects in qualified practice hinge greatly on managerial support. Having team leaders, ward managers, or heads of service engaged with the project appears to make them much more feasible, especially in the planning and development stages. We would encourage clinicians to approach managers at the outset of a research project, and to elicit their ideas, interests, and priorities to help shape the project and foster further collaboration. Carving out adequate time for research may be a delicate subject to discuss with managers, especially with a busy caseload and clinical responsibilities. However, given the benefits outlined earlier, and it being among the core skills of our profession, we would encourage decisive advocacy for protected research time.

Previous research has shown that in a sample of research‐active health professionals in the North West of Ireland, almost half (45%) reported having to conduct their research mostly or completely outside of working hours (Research and Education Foundation, 2004). We would argue that it is not reasonable to expect research activity to be subsumed into a schedule already at capacity, and doing so risks devaluing these skills within our profession. We would therefore suggest giving managers a clear summary of the project, which should include (1) a description of the current problem or unknown issue, and its possible implications; (2) a summary of the potential benefits to service users and the wider service should the project go ahead; (3) details of the methods to be used, including what time and resources are required, preferably with minimal impact on routine service provision; and (4) a timeline for the project and dissemination.

Making the most of research time

Finding the time to undertake research projects in the context of a busy clinical service is not straightforward. While we acknowledge that this can be hard to implement, as far as is practical we strongly recommend aiming to designate particular blocks of time in which to undertake research activities, and have found that an effective method to protect this time is to work elsewhere if possible. This helps to keep the research time more distinct and serves as a more concrete reminder of this for both the researcher and other staff members. This approach can also minimize distractions and interruptions, which can reduce perceived effectiveness (Kearns & Gardiner, 2007). Clinicians may also want to consider the use of tools such as shared calendars, which can further clarify to the wider team when research time has been allocated. If practical, having this scheduled on a fixed, regular day and time can help make research activity become a more established routine within the service.

It should be noted that it is not necessarily the case that psychologists’ research activity is fully separate from their clinical responsibilities. Some research projects, such as case studies or case series, service user interviews, or single‐case experimental designs, have much greater integration with routine clinical service provision and will therefore require less ‘distinct’ research time (e.g., see Kaur, Murphy, & Smith, 2016; Ladd, Luiselli, & Baker, 2009; Thew & Krohnert, 2015).

Project marketing

Research projects in clinical contexts will require a certain degree of marketing. Having sought and hopefully obtained managerial support, it is helpful to publicize the project, for example, through in‐house presentations, discussion with service users, and service newsletters, magazines, or social media accounts. We have found that projects benefit greatly from the extra visibility and, to some extent, legitimacy that this provides. The marketing approach needs to extend throughout the project to maintain this visibility, which can be achieved through giving brief updates on the status of the project, and taking the time to feed back the results, particularly to staff who may have been involved with recruiting participants or in other capacities. This is also critical to influencing the culture of a service to be more receptive to future research projects.

Funding

Some, although not all, projects will require at least some funding, for example, to purchase equipment or resources, to buy out part of a clinician's time, or to recruit a research assistant, and preparing a successful funding application in this competitive climate can be time‐intensive. While this can understandably be a barrier to research activity in some contexts, we would emphasize that funding is by no means required for a successful research project, particularly when there is interest and support from the immediate clinical team, including assistants and trainees, or a skilled wider network.

Where funding is being sought, we note that a number of services and trusts have some funds available to support new research projects, particularly those looking to innovate, or deliver more effective and efficient interventions for service users. We recommend working closely with local Research and Development departments, who are able to advise on funding opportunities, and on various aspects of developing and running projects generally. Many charitable organizations fund psychologist‐led projects (examples include the following: MQ: Transforming Mental Health through Research; British Heart Foundation; Marie Curie Cancer Care; Mind; OCD‐UK). At a broader level, agencies such as the National Institute for Health Research, the Wellcome Trust, and the Alzheimer's Society offer more structured programmes of funding to support clinicians in undertaking research projects linked to a clinical or academic institution.

Collaboration

While the research projects conducted as part of clinical training courses tend to be solo efforts with a small number of supervisors, post‐qualification research can place a greater emphasis on collaboration. This could be within or across services, and links between clinical services and academic institutions can often be productive. Here, clinicians can benefit from academics’ research expertise and supervision, while academics can benefit from clinicians’ practical experience and knowledge, along with potential links to service users interested in contributing to research studies (Lampropoulos et al., 2002). For example, involvement with academic departments could permit the independent evaluation of local clinical services and establish a protocol and methods for ongoing data collection. On a smaller scale, potential collaborations could include supervising the research projects of clinical trainees and junior postgraduate academics. While collaborations help to reduce the demands on an individual researcher, they can also serve to maintain the momentum of a research project, given that multiple people are invested in its completion.

It may also be the case that individuals are willing to assist with the project in a more informal capacity, such as helping with recruitment or general administration. It can be helpful to discuss in the early stages of projects the level of involvement different collaborators will have, and to work out the practical elements of how best to keep people informed and updated with what they might be required to do.

Deadlines and monitoring progress

Although there may be an estimated timescale for the project agreed at the outset, we have found that setting deadlines for different stages of the project can help maintain progress, and prevent the project being overshadowed or neglected in the face of new service‐level priorities or responsibilities. Obviously, a degree of flexibility will always be required, but working to an agreed schedule, and if possible having someone who is more external to the project monitoring its progress, such as a manager or mentor, can be helpful.

Dissemination

Dissemination of project findings can often be a somewhat neglected part of the research process (Cooper & Turpin, 2007), but it can play a powerful role in facilitating subsequent service improvements, research projects, and future funding applications. Failing to share and publicize project findings can mean people are unable to see their value and implications, which can hinder research projects from happening in the future.

Publication in peer‐reviewed journals is one effective route to share findings, but there are many others that should also be considered, including presenting at conferences, at team meetings, or directly to service leaders, service users, project participants, and where relevant, to those managing or funding services. To maximize dissemination effectiveness, it will be necessary to adapt the medium and language of your communications to suit a range of audiences. It may be possible to circulate written summaries or brief reports around local or regional professional networks and industry partnerships, and again making use of in‐house media/communication teams can facilitate this.

We note that for some larger projects, the time and effort invested in just obtaining the results can be significant, meaning that finding further time and/or motivation to devote to dissemination activities can be difficult. However, given that most projects involve collecting data from participants, who have given their time and energy to assist with the aims of the project, we arguably have a professional duty to make productive use of the findings and ensure that they are shared appropriately. We recommend including dissemination activity in the project timeline from the outset to avoid it being neglected or missed.

Feeling deskilled

Lastly, it is worth noting that for many psychologists, the idea of developing a research project may feel demanding or even daunting, and this may be the principal reason that research ideas do not get taken forward (Cooper & Graham, 2009). It is easy to feel that our research skills are no longer up to date, or that our projects will require too much time to be feasible.

Given that this may understandably encourage avoidance of research activity, and that as psychologists we all recognize that avoidant strategies are not the most useful in the long term, we have found it helpful to remember the following: First, research projects do not have to use complicated methodology and large samples in order to have scientific merit and useful implications. Second, research activity can be quite closely tied into routine clinical work, as described earlier. Third, seeking out continuing professional development (CPD) opportunities through workshops, conference attendance, and training activities can improve research skills and increase confidence. Fourth, no researcher knows how to do everything, and collaboration can be a powerful tool for learning new skills. Finally, psychologists already have a number of transferable skills from their clinical work, such as the ability to approach a problem logically and systematically, or the capacity to attend accurately and consider carefully what a client is saying; these are equally important and valuable within the research domain.

Despite a strong focus on research skills during clinical psychologists’ training, the evidence suggests that post‐qualification research activity within clinical settings is rare, even though there are tangible benefits to clients and services. While a lack of time to undertake research within clinical roles is perhaps the most obvious reason for this, we have outlined a number of other possible barriers and hope that some of our reflections and suggestions may prove useful to those clinicians who are considering undertaking research projects within their services.

Clinical psychologists’ combination of clinical expertise and research training means that they are in an ideal position to conduct high‐quality research projects that aim to better understand and intervene across a range of clinical issues. From a professional perspective, these research skills are perhaps one of the key features that distinguish clinical psychologists from many other health professionals. In a context of financial pressures and cuts to clinical services and training places, it is possible that greater use of these research skills in practice will help to ensure the continued appeal and future utility of clinical psychology.

Acknowledgements

The authors would like to thank Dr Belinda Graham for her helpful comments on this manuscript. This work was supported by the Wellcome Trust (102176); and the NIHR Biomedical Research Centre, based at Oxford University Hospitals NHS Trust, Oxford. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health.

1 Whether ethical review is required depends on the nature of the project and the participants involved. Ethical approval should be sought through the Integrated Research Application System (NHS REC), university review boards, or the Social Care Research Ethics Committee. In the case of independent practice, where practitioners may not have access to an ethical review process, they should be able to demonstrate that they have adhered to the Code of Ethics and Conduct and the Code of Human Research Ethics outlined by the British Psychological Society (British Psychological Society, 2006, 2014).

References

  • Barrom, C. P., Shadish, W. R., & Montgomery, L. M. (1988). PhDs, PsyDs, and real‐world constraints on scholarly activity: Another look at the Boulder Model. Professional Psychology: Research and Practice, 19(1), 93. https://doi.org/10.1037/0735-7028.19.1.93
  • British Psychological Society. (2006). Code of ethics and conduct. Leicester, UK: Author.
  • British Psychological Society. (2014). Code of human research ethics. Leicester, UK: Author.
  • Cooper, M., & Graham, C. (2009). Research and evaluation. In Beinart H., Kennedy P., & Llewellyn S. (Eds.), Clinical psychology in practice (pp. 46–58). Leicester, UK: British Psychological Society and Blackwell.
  • Cooper, M., & Turpin, G. (2007). Clinical psychology trainees’ research productivity and publications: An initial survey and contributing factors. Clinical Psychology & Psychotherapy, 14(1), 54–62. https://doi.org/10.1002/cpp.513
  • Eke, G., Holttum, S., & Hayward, M. (2012). Testing a model of research intention among U.K. clinical psychologists: A logistic regression analysis. Journal of Clinical Psychology, 68, 263–278. https://doi.org/10.1002/jclp.20860
  • Elphinston, R. A., & Pager, S. (2015). Untapped potential: Psychologists leading research in clinical practice. Australian Psychologist, 50(2), 115–121. https://doi.org/10.1111/ap.12102
  • Gelso, C. J. (1993). On the making of a scientist‐practitioner: A theory of research training in professional psychology. Professional Psychology: Research and Practice, 24, 468. https://doi.org/10.1037/0735-7028.24.4.468
  • Holttum, S., & Goble, L. (2006). Factors influencing levels of research activity in clinical psychologists: A new model. Clinical Psychology & Psychotherapy, 13, 339–351.
  • Kaur, M., Murphy, D., & Smith, K. V. (2016). An adapted imaginal exposure approach to traditional methods used within trauma‐focused cognitive behavioural therapy, trialled with a veteran population. The Cognitive Behaviour Therapist, 9(10), 1–11. https://doi.org/10.1017/S1754470X16000052
  • Kearns, H., & Gardiner, M. (2007). Is it time well spent? The relationship between time management behaviours, perceived effectiveness and work‐related morale and distress in a university context. Higher Education Research & Development, 26, 235–247. https://doi.org/10.1080/07294360701310839
  • Ladd, M. V., Luiselli, J. K., & Baker, L. (2009). Continuous access to competing stimulation as intervention for self‐injurious skin picking in a child with autism. Child & Family Behavior Therapy, 31(1), 54–60. https://doi.org/10.1080/07317100802701400
  • Lampropoulos, G. K., Goldfried, M. R., Castonguay, L. G., Lambert, M. J., Stiles, W. B., & Nestoros, J. N. (2002). What kind of research can we realistically expect from the practitioner? Journal of Clinical Psychology, 58, 1241–1264. https://doi.org/10.1002/jclp.10109
  • McHugh, P., & Byrne, M. (2011). Survey of the research activity, skills and training needs of Health and Social Care professionals in Ireland. Dublin, Ireland: Health Service Executive. Retrieved from http://hdl.handle.net/10147/202929
  • McKeon, S., Alexander, E., Brodaty, H., Ferris, B., Frazer, I., & Little, M. (2013). Strategic review of health and medical research: Better health through research. Canberra, ACT: Department of Health and Ageing. Retrieved from www.mckeonreview.org.au
  • Mitchell, A. J., & Gill, J. (2014). Research productivity of staff in NHS mental health trusts: Comparison using the Leiden method. Psychiatric Bulletin, 38(1), 19–23. https://doi.org/10.1192/pb.bp.113.042630
  • Morton, A., Patel, S., & Parker, J. (2008). Are we practitioners rather than scientists? A survey of research activity in a psychology department. Clinical Psychology Forum, 189, 32–36.
  • Newman, E. F., & McKenzie, K. (2011). Research activity in British clinical psychology training staff: Do we lead by example? Psychology Learning & Teaching, 10, 228–238. https://doi.org/10.2304/plat.2011.10.3.228
  • NHS Health Research Authority. (2014). Research Ethics Committees (RECs) – Health Research Authority. Retrieved from http://www.hra.nhs.uk/about-the-hra/our-committees/research-ethics-committees-recs/
  • Nickerson, A., Liddell, B. J., Maccallum, F., Steel, Z., Silove, D., & Bryant, R. A. (2014). Posttraumatic stress disorder and prolonged grief in refugees exposed to trauma and loss. BMC Psychiatry, 14(1), 106. https://doi.org/10.1186/1471-244X-14-106
  • Norcross, J. C., Karpiak, C. P., & Santoro, S. O. (2005). Clinical psychologists across the years: The division of clinical psychology from 1960 to 2003. Journal of Clinical Psychology, 61, 1467–1483.
  • Ozdemir, B. A., Karthikesalingam, A., Sinha, S., Poloniecki, J. D., Hinchliffe, R. J., Thompson, M. M., … Holt, P. J. (2015). Research activity and the association with mortality. PLoS One, 10(2), e0118253. https://doi.org/10.1371/journal.pone.0118253
  • Research and Education Foundation. (2004). Research activity survey of health professionals in the NWHB, 1999–2003. Retrieved from http://hdl.handle.net/10147/45912
  • Rochon, J., du Bois, A., & Lange, T. (2014). Mediation analysis of the relationship between institutional research activity and patient survival. BMC Medical Research Methodology, 14(1), 1. https://doi.org/10.1186/1471-2288-14-9
  • Schroter, S., Tite, L., & Kassem, A. (2006). Financial support at the time of paper acceptance: A survey of three medical journals. Learned Publishing, 19, 291–297. https://doi.org/10.1087/095315106778690689
  • Silberman, E. K., & Snyderman, D. A. (1997). Research without external funding in North American psychiatry. American Journal of Psychiatry, 154, 1159–1160. https://doi.org/10.1176/ajp.154.8.1159
  • Stricker, G. (2002). What is a scientist‐practitioner anyway? Journal of Clinical Psychology, 58, 1277–1283. https://doi.org/10.1002/jclp.10111
  • Thew, G. R., & Krohnert, N. (2015). Formulation as intervention: Case report and client experience of formulating in therapy. The Cognitive Behaviour Therapist, 8, e25. https://doi.org/10.1017/S1754470X15000641

Clinical Psychology History, Approaches, and Careers

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Adah Chung is a fact checker, writer, researcher, and occupational therapist. 

Carmen Martínez Banús / Getty Images

Clinical psychology is a specialty that integrates the science of psychology with the treatment of complex human problems. In addition to directly treating people for mental health concerns, the field of clinical psychology also supports communities, conducts research, and offers training to promote mental health for people of all ages and backgrounds.

This article discusses what clinical psychologists do, the history of the discipline, and the different approaches used today in treating mental health conditions.

What Is Clinical Psychology?

Clinical psychology is the  branch of psychology  concerned with assessing and treating mental illness, abnormal behavior, and psychiatric problems. This psychology specialty area provides comprehensive care and treatment for complex mental health problems. In addition to treating individuals, clinical psychology also focuses on couples, families, and groups.

History of Clinical Psychology

Early influences on the field of clinical psychology include the work of the Austrian psychoanalyst Sigmund Freud . He was one of the first to focus on the idea that mental illness was something that could be treated by talking with the patient, and it was the development of his talk therapy approach that is often cited as the earliest scientific use of clinical psychology.

American psychologist Lightner Witmer opened the first psychological clinic in 1896 with a specific focus on helping children who had learning disabilities. It was also Witmer who first introduced the term "clinical psychology" in a 1907 paper.

Witmer, a former student of  Wilhelm Wundt , defined clinical psychology as "the study of individuals, by observation or experimentation, with the intention of promoting change."

By 1914, 26 other clinics devoted to clinical psychology had been established in the United States. Today, clinical psychology is one of the most popular subfields and the single largest employment area within psychology.

Evolution During the World Wars

Clinical psychology became more established during the period of World War I as practitioners demonstrated the usefulness of psychological assessments. The American Association of Clinical Psychology was established in 1917, although just two years later it was absorbed into the American Psychological Association (APA) as its clinical section.

During World War II, clinical psychologists were called upon to help treat what was then known as shell shock, now referred to as post-traumatic stress disorder (PTSD).

The demand for professionals to treat the many returning veterans in need of care contributed to the growth of clinical psychology during this period.

At the start of the 1940s, the United States had no programs offering a formal clinical psychology degree. The U.S. Veterans Administration then set up several doctoral-level training programs, and by 1950 more than half of all Doctor of Philosophy (PhD)-level degrees in psychology were awarded in the area of clinical psychology.

Changes in Focus

While the early focus in clinical psychology had mainly been on science and research, graduate programs began adding additional emphasis on psychotherapy . In clinical psychology PhD programs, this approach is today referred to as the scientist-practitioner or Boulder Model.

Later, the Doctor of Psychology (PsyD) degree option emerged, which emphasized professional practice more than research. This practice-oriented doctorate degree in clinical psychology is known as the practitioner-scholar or Vail model.

The field has continued to grow tremendously, and the demand for clinical psychologists today remains strong. One survey found that the percentage of women and minorities in clinical psychology programs has grown over the last two decades. Today, around two-thirds of clinical psychology trainees are women and one-quarter are ethnic minorities.

Treatment Approaches in Clinical Psychology

Clinical psychologists who work as psychotherapists often utilize different treatment approaches when working with clients. While some clinicians focus on a very specific treatment outlook, many use what is referred to as an eclectic approach. This involves drawing on different theoretical methods to develop the best treatment plan for each individual client.

Some of the major theoretical perspectives within clinical psychology include:

Psychodynamic Approach

This perspective grew from Sigmund Freud's work; he believed that the unconscious mind plays a vital role in our behavior. Psychologists who utilize  psychoanalytic therapy  may use techniques such as free association to investigate a client's underlying unconscious motivations.

Modern psychodynamic therapy utilizes talk therapy to help people gain insight, solve problems, and improve relationships. Research has found that this approach to treatment can be as effective as other therapy approaches.

Cognitive Behavioral Approaches

This approach to clinical psychology developed from the behavioral and cognitive schools of thought. Clinical psychologists using this perspective will look at how a client's feelings, behaviors, and thoughts interact. 

Cognitive-behavioral therapy  (CBT) often focuses on changing thoughts and behaviors contributing to psychological distress. Specific types of therapy that are rooted in CBT include:

  • Acceptance and commitment therapy
  • Cognitive processing therapy
  • Dialectical behavior therapy
  • Rational emotive behavior therapy
  • Trauma-focused cognitive behavioral therapy
  • Mindfulness-based cognitive therapy

Humanistic Approaches

This approach to clinical psychology grew from the work of humanist thinkers such as Abraham Maslow and  Carl Rogers . This perspective looks at the client more holistically and is focused on such things as  self-actualization .

Some types of humanistic therapy that a clinical psychologist might practice include client-centered therapy , existential therapy, Gestalt therapy, narrative therapy, or logotherapy.

How to Become a Clinical Psychologist

In the United States, clinical psychologists usually have a doctorate in psychology and receive training in clinical settings. The educational requirements to work in clinical psychology are quite rigorous, and most clinical psychologists spend between four and six years in graduate school after earning a bachelor's degree .

Generally speaking, PhD programs are centered on research, while PsyD programs are practice-oriented. Students may also find graduate programs that offer a terminal master's degree in clinical psychology.

Before choosing a clinical psychology program, you should always check to be sure that the program is accredited by the APA. After completing an accredited graduate training program, prospective clinical psychologists must also complete a period of supervised training and an examination.

Specific licensure requirements vary by state, so you should check with your state's licensing board to learn more.

Students in the United Kingdom can pursue a doctorate-level degree in clinical psychology (DClinPsychol or ClinPsyD) through programs sponsored by the National Health Service.

These programs are generally very competitive and are focused on both research and practice. Students interested in enrolling in one of these programs must have an undergraduate degree in a psychology program approved by the British Psychological Society in addition to experience requirements.

Careers In Clinical Psychology

Clinical psychologists work in a variety of settings (hospitals, clinics, private practice, universities, schools, etc.) and in many capacities. All of them require these professionals to draw on their expertise in special ways and for different purposes.

Some of the job roles performed by those working in clinical psychology can include:

  • Assessment and diagnosis of psychological disorders , such as in a medical setting
  • Treatment of psychological disorders , including drug and alcohol addiction
  • Offering testimony in legal settings
  • Teaching, often at the university level
  • Conducting research
  • Creating and administering programs to treat and prevent social problems

Some clinical psychologists may focus on one of these or provide several of these services. For example, someone may work directly with clients who are admitted to a hospital for psychological disorders, while also running a private therapeutic office that offers short-term and long-term outpatient services to those who need help coping with psychological distress.

Clinical psychology is one of the most popular areas in psychology, but it's important to evaluate your interests before deciding if this area might be right for you. If you enjoy working with people and are able to handle stress and conflict well, clinical psychology may be an excellent choice.

The field of clinical psychology will continue to grow and evolve thanks to the changing needs of the population, as well as shifts in approaches to healthcare policy. If you're still unsure whether clinical psychology is right for you,  taking a psychology career self-test ​may help.

Roccella M, Vetri L. Adventures of clinical psychology .  J Clin Med . 2021;10(21):4848. doi:10.3390/jcm10214848

Benjamin LT Jr. A history of clinical psychology as a profession in America (and a glimpse at its future) .  Annu Rev Clin Psychol . 2005;1:1-30. doi:10.1146/annurev.clinpsy.1.102803.143758

Witmer L. Clinical psychology .  Am Psychol . 1996;51(3):248-251. doi:10.1037/0003-066X.51.3.248

Gee DG, DeYoung KA, McLaughlin KA, et al. Training the next generation of clinical psychological scientists: A data-driven call to action .  Annu Rev Clin Psychol . 2022;18:43-70. doi:10.1146/annurev-clinpsy-081219-092500

American Psychological Association. Doctoral degrees in psychology: How are they different, or not so different ?

Foley KP, McNeil CB. Scholar-Practitioner Model . In: Cautin RL, Lilienfeld SO, eds. The Encyclopedia of Clinical Psychology . Hoboken, NJ: John Wiley & Sons; 2015. doi:10.1002/9781118625392.wbecp532

Norcross JC, Sayette MA, Pomerantz AM. Doctoral training in clinical psychology across 23 years: Continuity and change .  J Clin Psychol . 2018;74(3):385-397. doi:10.1002/jclp.22517

Shedler J. The efficacy of psychodynamic psychotherapy .  Am Psychol. 2010;65(2):98-109. doi:10.1037/a0018378

Steinert C, Munder T, Rabung S, Hoyer J, Leichsenring F.  Psychodynamic therapy: As efficacious as other empirically supported treatments? A meta-analysis testing equivalence of outcomes .  Am J Psychiatry . 2017;174(10):943-953. doi:10.1176/appi.ajp.2017.17010057

Fenn K, Byrne M. The key principles of cognitive behavioural therapy . InnovAiT: Educ Inspir Gen Prac . 2013;6(9):579-585. doi:10.1177/1755738012471029

Block M. Humanistic Therapy . In: Goldstein S, Naglieri JA., eds. Encyclopedia of Child Behavior and Development . Boston, MA: Springer; 2011. doi:10.1007/978-0-387-79061-9_1403

U.S. Bureau of Labor Statistics. Occupational Outlook Handbook: Psychologists .

National Health Service. Clinical psychologist .

Carr A. Clinical Psychology: An Introduction . London: Routledge; 2012.

Trull TJ, Prinstein M. Clinical Psychology . Belmont, CA: Wadsworth; 2013.

By Kendra Cherry, MSEd


The Oxford Handbook of Clinical Psychology


4 Research Methods in Clinical Psychology

Philip C. Kendall, Department of Psychology, Temple University.

Jonathan S. Comer, Florida International University

  • Published: 18 September 2012

This chapter describes methodological and design considerations central to the scientific evaluation of treatment efficacy and effectiveness. Matters of design, procedure, measurement, data analysis, and reporting are examined and discussed. The authors consider key concepts of controlled comparisons, random assignment, the use of treatment manuals, integrity and adherence checks, sample and setting selection, treatment transportability, handling missing data, assessing clinical significance, identifying mechanisms of change, and consolidated standards for communicating study findings to the scientific community. Examples from the treatment outcome literature are offered, and guidelines are suggested for conducting treatment evaluations that maximize both scientific rigor and clinical relevance.

Central to research in clinical psychology is the evaluation of treatment outcomes. Research evaluations of the efficacy and effectiveness of therapeutic interventions have evolved from single-subject case histories to complex multimethod experimental investigations of carefully defined treatments applied to genuine clinical samples. The evolution is to be applauded.

In this chapter, we focus on how best to arrange these latter complex evaluations in a manner that maximizes both scientific rigor and clinical relevance. Although all of the ideals are rarely achieved in a single study, our discussions provide exemplars nonetheless. We encourage consistent attempts to incorporate these ideals into research designs, although we recognize that ethical and logistical constraints may compromise components of methodological rigor. We organize our chapter around the things that matter: (a) matters of design, (b) matters of procedure, (c) matters of measurement, (d) matters of data analysis, and (e) matters of reporting.

Matters of Design

To adequately assess the causal impact of a therapeutic intervention, clinical researchers use control procedures derived from experimental science. The objective is to separate the effects of the intervention from changes that result from other factors, which may include the passage of time, patient expectancies of change, therapist attention, repeated assessments, and simply regression to the mean. These extraneous factors must be “controlled” in order to have confidence that the intervention (i.e., the experimental manipulation) is responsible for any observed changes. To elaborate, we turn our attention to the selection of control conditions, random assignment, evaluation of response across time, and comparison of multiple treatments.

Selecting Control Condition(s)

Comparisons of persons randomly assigned to different treatment conditions are required to control for factors other than the treatment. In a “controlled” treatment evaluation, comparable persons are randomly placed into either the treatment condition (those who receive the intervention) or the control condition (those who do not). By comparing the changes evidenced by members of both conditions, the efficacy of therapy over and above the outcome produced by extraneous factors (e.g., passage of time) can be determined. However, deciding the nature of the control condition (e.g., no-treatment, waitlist, attention-placebo, standard treatment-as-usual) is not simple (see Table 4.1 for recent examples).

When comparison clients are assigned to a no-treatment control condition, they are administered the assessments on repeated occasions, separated by an interval of time equal in length to the therapy provided to those in the treatment condition. Any changes seen in the treated clients are compared to changes seen in the nontreated clients. When treated clients evidence significant improvements over nontreated clients, the treatment is credited with producing the changes. This no-treatment procedure eliminates several rival hypotheses (e.g., maturation, spontaneous remission, historical effects, regression to the mean). However, a no-treatment control condition does not guard against other potentially confounding factors, including client anticipation of treatment, client expectancy for change, and the act of seeing a therapist—independent of what specific treatment the therapist actually provided. Although a no-treatment control condition is sometimes useful in the earlier stages of evaluating a treatment, other control procedures are preferred.

Utilizing a waitlist condition—a variant of the no-treatment condition—provides some additional control. Clients in the waitlist condition expect that after a specified period of time they will be receiving treatment, and accordingly may anticipate changes due to this treatment, which may in turn affect the course of their symptoms. The changes that occur for wait-listed clients are evaluated at regular intervals, as are those of the clients who received therapy. If we assume the clients in the waitlist and treatment conditions are comparable (e.g., gender, age, ethnicity, severity of presenting problem, and motivation), then we can make inferences that the changes in the treated clients over and above those also manifested by the waitlist clients are likely due to the intervention rather than to any extraneous factors that were operative for both the treated and the waitlist conditions (e.g., expectations of change). The important demographic data are gathered so that statistical comparisons can be conducted to determine condition comparability. Waitlist conditions, like no-treatment conditions, are of less value once a treatment has already been evaluated against such relatively “inactive” comparisons.

There are potential limitations associated with waitlist controls. First, a waitlist client might experience a life crisis that requires immediate professional attention. For ethical purposes, the status of control clients should be monitored to ensure that they are safely able to tolerate the treatment delay. In the event of an emergency, the provision of professional services will compromise the integrity of the waitlist condition. Second, it is preferable that the duration of the control condition be the same as the duration of the treatment condition(s). Comparable durations help to ensure that any differential changes between the conditions would not be due to the differential passage of time. However, suppose an 18-session treatment takes 4–5 months to provide—is it ethical to withhold treatment for 4–5 months as a wait period (see Bersoff & Bersoff, 1999 )? With long waitlist durations, the probability of differential attrition arises, a situation that could have a compromising effect on study results. If rates of attrition from a waitlist condition are high, the sample in the control condition may be sufficiently different from the sample in the treatment condition, and no longer representative of the larger group (e.g., the smaller waitlist group at the end of the study now only represents clients who could tolerate and withstand a prolonged period without treatment).

No-treatment or waitlist controls provide initial evidence of treatment efficacy but are less important once a treatment has, in several evaluations, been found to be more effective than “inactive” control conditions. Attention-placebo (or nonspecific treatment) control conditions are an alternative to the waitlist control that rule out some threats to internal validity, and control for the effects that might be due simply to meeting with and getting the attention of a therapist. In addition, these participants receive a description of a treatment rationale (an explanation of the treatment procedures offered at the beginning of the intervention). The rationale provided to attention-placebo clients mobilizes an expectancy of positive gains. (For discussion of treatment elements separate from the proposed active components see Hollon & DeRubeis, 1981 ; Jacobson & Hollon, 1996 a, 1996b).

Attention-placebo conditions enable clinical researchers to identify the changes produced by specific therapeutic strategies over and above nonspecific strategies. For example, in a recent randomized clinical trial (RCT) (Kendall et al., 2008), children with anxiety disorders received cognitive-behavioral treatment (CBT; either individual or family CBT) or a manualized family education, support, and attention (FESA) condition. Individual and family-based CBT were found to be superior to FESA in reducing the children’s principal anxiety disorder. Given the nature of the FESA condition, one can infer that the gains associated with receiving CBT are not likely attributable to “common therapy factors” such as learning about anxiety and emotions, experience with an understanding therapist, and attention to, and opportunities to discuss, anxiety.

Despite the advantages of attention-placebo controls, they are not without limitations (Parloff, 1986). Attention placebos must be devoid of therapeutic techniques hypothesized to be effective, while at the same time instilling positive expectations in clients and providing professional contact. To offer such an intervention in the guise of effective therapy is acceptable when clients are fully informed in advance and sign informed consent forms acknowledging their willingness to take a chance on receiving either the active treatment or a psychosocial placebo condition. Even then, an attention-placebo condition may be difficult for the therapist to accomplish.

Methodologically, it is difficult to ensure that therapists who conduct attention-placebo conditions have the same degree of positive expectancy for client gains as do therapists conducting specific interventions (Kendall, Holmbeck, & Verduin, 2002; O’Leary & Borkovec, 1978). “Demand characteristics” would suggest that when therapists predict a favorable outcome, clients will tend to improve accordingly (Kazdin, 2003). Thus, therapist expectancies may not be equal for active and placebo conditions, reducing the interpretability of the findings. Similarly, clients in an attention-placebo condition may have high expectations at the start, but may grow disenchanted when no specific changes emerge. If study results suggest that a therapy condition evidenced significantly better outcomes than an attention-placebo control condition, it is important that the researcher evaluate clients’ perceptions of the credibility of the treatment and their expectations for change, to confirm that clients in the attention-placebo condition perceived the treatment to be credible and expected to improve.

The use of a standard treatment (treatment-as-usual) as a comparison condition allows the researcher to evaluate an experimental treatment relative to the intervention that is currently available and being applied (i.e., an existing standard of care). When the standard care intervention and the therapy under study have comparable durations of treatment and client and therapist expectancies, the researcher can test the relative efficacy of the interventions. For example, in a recent RCT (Mufson et al., 2004), depressed adolescents were randomly assigned to interpersonal psychotherapy modified for depressed adolescents (IPT-A) or to “treatment-as-usual” in school-based mental health clinics. Adolescents treated with IPT-A, compared to treatment-as-usual, showed greater symptom reduction and improvement in overall functioning. Given the nature of their comparison group, it can be inferred that IPT-A outperformed the existing standard of care for depressed adolescents in the community.

In standard treatment comparisons, it is important to ensure that both the standard (routine) treatment and the new treatment are implemented in a high-quality fashion (Kendall & Hollon, 1983 ). Using a standard treatment condition presents advantages over other conditions. Ethical concerns about no-treatment conditions are quelled, given that care is provided to all participants. Additionally, attrition is likely to be minimized and nonspecific factors are likely to be equated (Kazdin, 2003 ).

Random Assignment

After comparison conditions have been selected, procedures for assigning participants to conditions must be chosen. Random assignment ensures that every participant has an equal chance of being assigned to the active treatment condition or the control condition(s). Random assignment of participants to the active therapy or control conditions and random assignment to study therapists are essential steps toward achieving initial comparability between conditions. However, note that random assignment does not guarantee comparability across treatment conditions—one resultant group may be different on key variables (e.g., age, wealth, impairment) simply due to chance. Appropriate statistical tests can be applied to examine the comparability of participants across treatment conditions.
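The equal-chance logic of random assignment is easy to demonstrate in code. The sketch below (an illustrative Python example; the function name, fixed seed, and round-robin dealing are our own choices, not from the chapter) shuffles the pooled sample and then deals participants out to conditions:

```python
import random

def random_assignment(participant_ids, conditions=("treatment", "control"), seed=7):
    """Shuffle the pooled sample, then deal participants out round-robin
    so each person has an equal chance of landing in any condition."""
    rng = random.Random(seed)  # a fixed seed makes the allocation auditable
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}
```

Because the allocation is decided before any outcomes are observed and is reproducible from the seed, it can later be audited; baseline variables can still be compared statistically to verify that chance in fact produced comparable groups.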

Problems can arise when random assignment is not applied. Consider a situation in which participants do not have an equal chance of being assigned to the active and control condition. Suppose a researcher were to allow depressed participants to decide for themselves whether to participate in the active treatment or in a waitlist condition. If participants in the active treatment condition subsequently evidenced greater symptom reductions than waitlist participants, one would be unable to rule out the possibility that symptom differences could have resulted from pre-study differences between the participants (e.g., selection bias). Waitlist participants who elected to delay treatment may be individuals not ready to initiate work on their depression symptoms.

Random assignment does not absolutely assure comparability of conditions on all measures, but it does maximize the likelihood of comparability. An alternative procedure, randomized blocks assignment, or assignment by stratified blocks, involves matching prospective clients in subgroups that (a) contain clients that are highly comparable on key dimensions (e.g., initial severity) and (b) contain the same number of clients as the number of conditions. For example, if the study requires two conditions (a standard treatment and a new treatment), clients can be paired off so that each pair is highly comparable. The members in each pair are then randomly assigned to either condition, thus increasing the likelihood that each condition will contain relatively mirror-image participants while retaining the randomization factor. When feasible, randomized blocks assignment of clients to conditions can be a wise research strategy.
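The randomized blocks procedure described above can be sketched as follows (a minimal Python illustration under our own assumptions: blocks are formed by sorting on a single severity score, and leftover participants who do not fill a complete final block are left unassigned):

```python
import random

def randomized_blocks_assignment(participants, severities, n_conditions=2, seed=42):
    """Match participants into blocks on a key dimension (baseline severity),
    then randomize within each block so conditions contain comparable members."""
    rng = random.Random(seed)
    # Sort by severity so adjacent participants are maximally comparable
    ordered = [p for p, _ in sorted(zip(participants, severities), key=lambda x: x[1])]
    assignment = {}
    usable = len(ordered) - len(ordered) % n_conditions  # drop an incomplete final block
    for i in range(0, usable, n_conditions):
        block = ordered[i:i + n_conditions]
        conditions = list(range(n_conditions))
        rng.shuffle(conditions)  # the randomization step, within the matched block
        for person, condition in zip(block, conditions):
            assignment[person] = condition
    return assignment
```

With two conditions, each matched pair contributes one participant to each arm, so the arms end up with near-mirror-image severity distributions while the within-block assignment remains random.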

Evaluating Response Across Time

To evaluate the effect of a treatment, it is essential to first evaluate the level of each client’s functioning on the dependent variables before the intervention begins. Such pretreatment (or “baseline”) assessments provide key data to inform whether clients are comparable at the beginning of treatment (i.e., between-groups comparisons), and whether clients’ pretreatment levels of functioning differ significantly from functioning assessed at subsequent assessment points (i.e., within-groups comparisons).

Post-treatment assessments of clients are essential to examine the comparative efficacy of treatment versus control conditions. However, evidence of treatment efficacy immediately upon therapy completion may not be indicative of long-term success (maintenance). Treatment outcome may be appreciable at post-treatment but fail to exhibit maintenance of the effects at a follow-up assessment. It is highly recommended, and increasingly expected (Chambless & Hollon, 1998 ), that treatment outcome studies include a follow-up assessment. Follow-up assessments (e.g., 6 months, 1 year) are key to demonstrations of treatment efficacy and are a signpost of methodological rigor. For evidence of maintenance, the treatment must have produced results at the follow-up assessment that are comparable to those evident at post-treatment (i.e., improvements from pretreatment and an absence of detrimental change since post-treatment).

Follow-up evaluations can help to identify differential treatment effects. For example, the effects of two treatments may be comparable at the end of treatment, but one may be more effective in the prevention of relapse (see Greenhouse, Stangl, & Bromberg, 1989 , for discussion of survival analysis). When two treatments are comparable at post-treatment, yet one has a higher relapse rate, the knowledge gained from the follow-up evaluation is a valuable rationale for selecting one treatment over another. For example, Brown and colleagues ( 1997 ) reported on a comparison of CBT and relaxation training as treatments for depression in alcoholism. Using the average (mean) days abstinent and drinks per day as dependent variables, measured at pretreatment and at 3 and 6 months post-treatment, the authors established that, although both treatments produced comparable initial gains, the cognitive-behavioral treatment was superior to relaxation training in maintaining gains.

Follow-up evaluations may also detect continued improvement—the benefits of some interventions may accumulate over time, and possibly expand to other domains of functioning. Researchers and policy-makers have become increasingly interested in expanding intervention research to consider potential indirect effects on the prevention of secondary problems. We followed up individuals treated with a cognitive-behavioral treatment for childhood anxiety disorders roughly 7 years later (Kendall, Safford, Flannery-Schroeder, & Webb, 2002). These data indicated that a meaningful percentage of treated participants had maintained improvements in anxiety and that positive responders, as compared with less-positive responders, had reduced substance-use involvement at long-term follow-up (see also Kendall & Kessler, 2002). It is important to note that gains identified at follow-up are best attributed to the initial treatment only after one determines that the participants did not seek or receive additional treatments during the follow-up interval.

As we learn more about the outcomes of treatment, we are intrigued by speculations about the process that takes place in achieving these outcomes. Some researchers are considering therapy process and outcome as intertwined and are assessing change during the course of treatment (i.e., intratreatment) as well as at post-treatment and follow-up (e.g., Kazdin, Marciano, & Whitley, 2005; Kendall & Ollendick, 2004; Shirk, Gudmundsen, Kaplinski, & McMakin, 2008; Taft & Murphy, 2007). Repeated assessment of client symptoms and functional change suggests that the first several sessions of treatment constitute the period of most rapid positive change (Howard, Lueger, Maling, & Martinovich, 1993). However, change across several domains of functioning may be phasic and may require more extended treatment. Intratreatment assessments (see Lambert, Hansen, & Finch, 2001) not only permit a fine-grained mapping of the course of change in therapy, but also provide important clues (e.g., Jaycox, Foa, & Morral, 1998) to identify mediators (discussed later in this chapter) of positive or adverse outcomes.

Multiple Treatment Comparisons

To determine the comparative (or relative) efficacy and effectiveness of therapeutic interventions, researchers use between-groups designs with more than one active treatment condition. Between-groups designs provide direct comparisons of one treatment with one or more alternative treatments. Note that sample size considerations are influenced by whether the comparison is between a treatment and a control condition, or between one treatment and another treatment known to be effective (see Kazdin & Bass, 1989).

In multiple treatment comparisons, it is optimal when each client is randomly assigned to receive one and only one kind of therapy. The assignment of clients to conditions should result in the initial comparability of the clients receiving each intervention. As previously mentioned, a randomized block procedure, with participants blocked on an important variable (e.g., pretreatment severity), can be used. It is always wise to check the comparability of the clients in the different treatment conditions on other important variables (e.g., sociodemographic variables, prior therapy experience, treatment expectancies/preferences) before continuing with the evaluation of the intervention. If not all participants are available at the outset of treatment, such as when participants come from consecutive clinic admissions, then the comparability of conditions can be checked at several intervals as the therapy outcome study progresses toward completion.

Comparability across therapists administering the different treatments is essential. Therapists conducting each type of treatment should be comparable in (a) training, (b) professional and clinical experience, (c) expertise in the intervention, (d) allegiance with the treatment, and (e) expectation that the intervention will be effective. One method to control for therapist effects has each therapist conduct each type of intervention with at least one client per intervention. Another viable option is stratified blocking , which assures that each intervention is conducted by several comparable therapists. The first method has random assignment of therapists, but is preferred only when therapists are equally expert and positively disposed toward each intervention. For example, it would probably not be a valid test to ask a group of psychodynamic therapists to conduct both a CBT (in which their expertise is low) and a psychodynamic therapy (in which their expertise is high). As is often the case, it is wise to gather data on therapist variables (e.g., expertise, allegiance) and examine their relationships to outcomes.

Comparing alternative treatments requires that the intervention procedures across treatments be equated for salient variables such as (a) duration; (b) length, intensity, and frequency of contacts with clients; (c) credibility of the treatment rationale; (d) setting in which treatment is to be provided; and (e) degree of involvement of persons significant to the client. In some cases, these factors may be the basis for two alternative therapies (e.g., conjoint vs. individual marital therapy; or child- vs. family-based treatment). In such cases, the variable is the experimental contrast rather than a matter for control.

What is the best method of measuring change when two alternative treatments are being compared? Clearly, measures should not be differentially sensitive to one or the other treatment. The measures should (a) cover the range of functioning that is a target for change, (b) tap the costs and possible negative side effects, and (c) be unbiased with respect to the alternate interventions. Comparisons of therapies may be misleading if the assessments are not equally sensitive to the types of changes that are most likely caused by each type of intervention.

When comparing alternative treatments, the “expected efficacy” of each therapy based on prior studies requires consideration. Consider, for example, that two treatments are compared and that therapy A is found to be superior to therapy B. The question can then arise, was therapy A superior, or did therapy B fail to be efficacious in this instance? It would be desirable in demonstrating the efficacy of therapy A if the results due to therapy B reflected the level of efficacy typically found in earlier demonstrations of therapy B’s efficacy. Interpretations of the results of comparative studies are dependent on the level of efficacy of each therapy in relation to its expected (or standard) efficacy. Effect sizes are useful in making these comparisons and in reaching sound conclusions.
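Effect sizes for such comparisons are commonly expressed as a standardized mean difference. As one hedged illustration (Cohen's d with a pooled standard deviation is our choice of metric; the chapter mentions effect sizes without prescribing a formula):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups' outcome scores,
    using the pooled (sample) standard deviation as the scaling unit."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd
```

Expressing each therapy's observed change on this common scale allows its performance to be compared against the "expected efficacy" established in earlier trials, rather than relying only on within-study significance tests.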

Although the issues discussed apply, comparisons of psychological and psychopharmacological treatments (e.g., Dobson et al., 2008; Marcus et al., 2007; MTA Cooperative Group, 1999; Pediatric OCD Treatment Study Team, 2004; Walkup et al., 2008) present special issues. For example, how and when should placebo medications be used in comparison to or with psychological therapy? How should expectancy effects be addressed? How should differential attrition be handled? How is it best to handle intrinsic differences in professional contact across psychological and pharmacologic interventions? Follow-ups become especially important after the active treatments are discontinued. These questions are especially pertinent given that psychological treatment effects may persist after treatment, whereas the effects of medications may not persist when the medications are discontinued. (Readers interested in discussions of these issues are referred to Hollon, 1996; Hollon & DeRubeis, 1981; Jacobson & Hollon, 1996a, 1996b.)

Matters of Procedure

We now consider procedural matters related to (a) defining the independent variable (the use of manual-based treatments), (b) checking the integrity of the independent variable (treatment fidelity checks), (c) selecting a sample, and (d) considering the research setting and the transportability of treatment.

Defining the Independent Variable: Manual-based Treatments

It is essential that a treatment be adequately described and detailed in order to replicate an evaluation of the treatment, or to be able to show and teach others how to conduct the treatment. Accordingly, there is the need for the use of treatment manuals. Treatment manuals enhance internal validity and treatment integrity, and afford comparison of treatments across contexts and formats, while reducing confounds (e.g., differences in the amount of contact, type and amount of training, time between sessions). Therapist manuals facilitate training and contribute meaningfully to replication (Dobson & Hamilton, 2002 ; Dobson & Shaw, 1988 ).

Not all agree on the merits of manuals. Debate has ensued regarding the use of manual-based treatments versus a more variable approach typically found in practice (see Addis, Cardemil, Duncan, & Miller, 2006 ; Addis & Krasnow, 2000 ; Westen, Novotny, & Thompson-Brenner, 2004 ). Some argue that manuals limit therapist creativity and place restrictions on the individualization that the therapists use (see also Waltz, Addis, Koerner, & Jacobson, 1993 ; Wilson, 1995 ). Some treatment manuals appear “cook-bookish,” and some lack attention to the necessary clinical sensitivities needed for proper individualization and implementation, but our experience suggests that this is not the norm. An empirical evaluation from our laboratory found that the use of a manual-based treatment for child anxiety disorders (Kendall & Hedtke, 2006 ) did not restrict therapist flexibility (Kendall & Chu, 1999 ). Although it is not the goal of manual-based treatments to have practitioners perform treatment in a rigid manner, this misperception has influenced some practitioners’ openness to the use of manual-based interventions (Addis & Krasnow, 2000 ).

The proper use of manual-based therapy requires interactive training, flexible application, and ongoing supervision (Kendall & Beidas, 2007 ). Professionals cannot become proficient in the administration of therapy simply by reading a manual. As Barlow ( 1989 ) noted, effective use of manual-based treatments must be preceded by adequate training.

Several contemporary treatment manuals allow the therapist to attend to each client’s specific needs, concerns, and comorbid conditions without deviating from the treatment strategies detailed in the manual. The goal is to include provisions for standardized implementation of therapy while utilizing a personalized case formulation (Suveg, Comer, Furr, & Kendall, 2006 ). Importantly, using manual-based treatments does not eliminate the potential for differential therapist effects. Within the context of manual-based treatments, researchers are examining therapist variables (e.g., warmth, therapeutic relationship-building behaviors) that might be related to treatment outcome (Creed & Kendall, 2005 ; Karver et al., 2008 ; Shirk et al., 2008 ).

Checking the Integrity of the Independent Variable: Treatment Fidelity Checks

Quality experimental research includes checking the manipulated variable. In therapy outcome evaluations, the manipulated variable is typically treatment or a characteristic of treatment. By design, all clients are not treated the same. However, just because the study has been so designed does not guarantee that the independent variable (treatment) has been implemented as intended. In the course of a study—whether due to therapist variables, incomplete manual specification, poor therapist training, insufficient therapist monitoring, client demand characteristics, or simply error variance—the treatment that was assigned may not in fact be the treatment that was provided (see also Perepletchikova & Kazdin, 2005 ).

To help ensure that the treatments are indeed implemented as intended, it is wise to require that a treatment plan be followed, that therapists are trained carefully, and that sufficient supervision is available throughout. The researcher should conduct an independent check on the manipulation. For example, therapy sessions are audio- or videotaped, so that an independent rater can listen to/watch the tapes and conduct a manipulation check. Quantifiable judgments regarding key characteristics of the treatment provide the necessary check that the described treatment was indeed provided. Digital videotapes and audiotapes are inexpensive, can be used for subsequent training, and can be analyzed to answer other research questions. Tape recordings of the therapy sessions evaluated by outcome studies not only provide a check on the treatment within each separate study but also allow for a check on the comparability of treatments provided across studies. That is, the therapy provided as CBT in one clinician’s study could be checked to determine its comparability to other clinician-researchers’ CBT.

Procedures from a recently completed clinical trial from our research program comparing two active treatment conditions for child anxiety disorders against an active attention control condition (Kendall et al., 2008) can illustrate integrity checks. First, we developed a checklist of the content/strategies called for in each session by the respective manuals. A panel of expert clinicians served as independent raters who used the checklists to rate videotape segments randomly selected from 20% of cases. The panel of raters was trained on nonstudy cases until they reached an inter-rater reliability of .85 (Cohen’s κ). Once reliable, the panel used the checklists to indicate whether the appropriate content was covered for randomly selected segments representative of all sessions, conditions, and therapists. A ratio was computed for each coded session: the number of checklist items covered by the therapist relative to the total number of items that should have been included. Results indicated that, across the conditions, 85%–92% of intended content was in fact covered.
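The two quantities in this integrity-check procedure, inter-rater reliability (Cohen's κ) and the per-session adherence ratio, can be sketched as follows (illustrative Python implementations under our own assumptions; the chapter reports the statistics but does not prescribe code):

```python
def cohens_kappa(rater1, rater2):
    """Agreement between two raters' category labels, corrected for chance."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal rate, summed over categories
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

def adherence_ratio(items_covered, items_required):
    """Proportion of manual-specified checklist items the therapist delivered."""
    return len(set(items_covered) & set(items_required)) / len(items_required)
```

κ of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, so a training criterion of .85, as used in the trial described above, demands near-consistent coding before raters score study tapes.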

It is critical to also evaluate the quality of the treatment provided. A therapist may strictly adhere to the manual and yet fail to administer the therapy in an otherwise competent manner, or he or she may competently administer therapy while significantly deviating from the manual. In both cases, the operational definition of the independent variable (i.e., the treatment manual) has been violated, treatment integrity impaired, and replication rendered impossible (Dobson & Shaw, 1988 ). When a treatment fails to demonstrate expected gains, one can examine the adequacy with which the treatment was implemented (see Hollon, Garber, & Shelton, 2005 ). It is also of interest to study potential variations in treatment outcome that may be associated with differences in the quality of the treatment provided (Garfield, 1998 ; Kendall & Hollon, 1983 ). Expert judges are needed to make determinations of differential quality prior to the examination of differential outcomes for high- versus low-quality therapy implementation (see Waltz et al., 1993 ).

Sampling Issues

Choosing a sample that best represents the clinical population about which one wishes to make inferences requires careful consideration. Debate exists over the preferred samples for treatment outcome research. A selected sample refers to a sample of participants who may need services but who may otherwise only approximate clinically disordered individuals; a genuine clinical sample, by contrast, consists of actual clients who are seeking services. Consider a study investigating the effects of treatment X on depression. The researcher could use (a) a sample of clinically depressed clients diagnosed via structured interviews (genuine clinical sample), (b) a sample consisting of a group of adults who self-report dysphoric mood (an analogue sample), or (c) a sample of depressed persons after excluding cases with suicidal intent, economic stress, and family conflict (highly select sample). Members of this last sample may meet diagnostic criteria for depression but are nevertheless highly selected.

The benefits of using analogue or select samples may include a greater ability to control various conditions and minimize threats to internal validity, and from a practical standpoint researchers may find it easier to recruit these samples over genuine clinical samples. On the other hand, select and analogue samples compromise external validity—these are not the same people seen in typical clinical practice. With respect to depression, for instance, many question whether depression in genuine clinical populations compares meaningfully to self-reported dysphoria in adults (e.g., Coyne, 1994 ; Krupnick, Shea, & Elkin, 1986 ; Tennen, Hall, & Affleck, 1995 ; see also Ruscio & Ruscio, 2002, 2008). Researchers consider how the study results will be interpreted and generalized when deciding whether to use clinical, analogue, or select samples.

Researchers consider client diversity when deciding which samples to study. Historically, research supporting the efficacy of psychological treatments was conducted with predominantly European American samples—although this is rapidly changing (see Huey & Polo, 2008). One can question the extent to which efficacy findings from European American samples can be generalized to ethnic minority samples (Bernal, Bonilla, & Bellido, 1995; Bernal & Scharron-Del-Rio, 2001; Hall, 2001; Sue, 1998). Investigations have also addressed the potential for bias in diagnoses and in the provision of mental health services to ethnic minority patients (e.g., Flaherty & Meagher, 1980; Homma-True, Green, Lopez, & Trimble, 1993; Lopez, 1989; Snowden, 2003).

A simple rule is that the research sample should reflect the population to which the results will be generalized. To generalize to a minority/diverse population, one must study a minority/diverse sample. Any barriers to care must be reduced and outreach efforts employed to inform minorities of available services (see Sweeney, Robins, Ruberu, & Jones, 2005 ; Yeh, McCabe, Hough, Dupuis, & Hazen, 2003 ). Walders and Drotar ( 2000 ) provide guidelines for recruiting and working with ethnically diverse samples.

Once sample diversity is accomplished, statistical analyses can examine potential differential outcomes (see Arnold et al., 2003; Treadwell, Flannery-Schroeder, & Kendall, 1994). Grouping and analyzing research participants by ethnic status is one approach. However, this approach is simplistic because it fails to address variations in individual clients’ degree of ethnic identity. It is often the degree to which an individual identifies with an ethnocultural group or community, and not simply his or her ethnicity itself, that may moderate treatment outcome.

Research determines treatment efficacy, but it is not sufficient to demonstrate efficacy within a narrowly defined sample in a highly selective setting. The question of whether the treatment can be transported to other settings requires independent evaluation (Southam-Gerow, Ringeisen, & Sherrill, 2006). Treatment outcome studies conducted in some settings (settings in which clients may differ on important variables) may not generalize to other settings. Some have questioned whether the outcomes found at select research centers will transport to clinical practice settings. One should study, rather than assume, whether a treatment found to be efficacious within a research clinical setting will be efficacious in a clinical service setting (see Hoagwood, 2002; Silverman, Kurtines, & Hoagwood, 2004; Southam-Gerow et al., 2006; Weisz, Donenberg, Han, & Weiss, 1995; Weisz, Weiss, & Donenberg, 1992).

Closing the gap between clinical research and practice requires transporting effective treatments (getting “what works” into practice) and conducting additional research into those factors (e.g., client, therapist, researcher, service delivery setting; see Kendall & Southam-Gerow, 1995; Silverman et al., 2004) that may be involved in successful transportation. Fishman (2000) suggested that an electronic journal of case studies be assembled so that patient, therapy, and environmental variables can be collected/compiled from within naturalistic therapy settings. Although the methodology has flaws (Stricker, 2000), information technology–based approaches may facilitate more seamless integration of research and practice and foster new waves of outcome research.

Matters of Measurement

Assessing the dependent variable(s).

No single measure serves as the sole indicator of clients’ treatment-related gains. Rather, a variety of methods, measures, data sources, and sampling domains (e.g., symptomatic distress, functional impairment, quality of life) are used to assess therapy outcomes. A contemporary and rigorous study of the effects of therapy may use assessments of client self-report; client test/task performance; therapist judgments and ratings; archival or documentary records (e.g., health-care visit and costs, work and school records); observations by trained, unbiased, blinded observers; rating by significant people in the client’s life; and independent judgments by professionals. Outcomes have more compelling impact when seen by independent (blind) evaluators than when based solely on the therapist’s opinion or the client’s self-reports.

The multi-informant strategy , in which data on variables of interest are collected from multiple reporters (e.g., client, family members, peers) can be particularly important when assessing children and adolescents. Features of cognitive development may compromise youth self-reports, and children may offer what they believe to be the desired responses. Thus, in RCTs with youth, collecting additional data from key adults in children’s lives who observe them across different contexts (e.g., parents, teachers) is valued. However, because emotions and mood are partially internal phenomena, some symptoms may be less known to parents and teachers, and some observable symptoms may occur in situations outside the home or school.

An inherent concern with multi-informant assessment is that discrepancies among informants are to be expected (Comer & Kendall, 2004 ; Edelbrock, Costello, Dulcan, Conover, & Kalas, 1986 ). Research indicates low to moderate concordance rates among informants in the assessment of children and adolescents (Achenbach, McConaughy, & Howell, 1987 ; De Los Reyes & Kazdin, 2005 ). For example, cross-informant agreement in the assessment of childhood mood/anxiety can be low (Comer & Kendall, 2004 ; Grills & Ollendick, 2003 ).

A multimodal strategy relies on multiple inquiries to evaluate an underlying construct of interest. For example, assessing family functioning may include family members completing self-report forms on their perceptions of family relationships, as well as conducting structured behavioral observations of family members interacting (to later be coded by independent raters). Statistical packages can integrate data obtained from multimodal assessment strategies. The increasing availability of handheld communication devices and personal digital assistants allows researchers to incorporate experience sampling methodology (ESM), in which people report on their emotions and behavior in the actual situation ( in situ ). These ESM data provide naturalistic information on patterns in day-to-day functioning.

Treatment evaluations use multiple targets of assessment. For example, one can measure overall psychological adjustment, specific interpersonal skills, the presence of a diagnosis, self-report mood, cognitive functioning, life environment, vocational status, and the quality of interpersonal relationships. No one target captures all, and using multiple targets facilitates an examination of therapeutic changes when changes occur, and the absence of change when interventions are less beneficial.

Broadly speaking, evaluation of therapy-induced change can be appraised on two levels: the specifying level and the impact level (Kendall, Pellegrini, & Urbain, 1981 ). The specifying level refers to the exact skills, cognitive or emotional processes, or behaviors that have been modified during treatment (e.g., examining the number of positive spousal statements generated during a specific marital relationship task). In contrast, the impact level refers to the general level of functioning of the client (e.g., absence of a diagnosis, functional status of the client). A compelling demonstration of beneficial treatment would include change that occurs at both the level of specific discrete skills and behaviors, and the impact level of generalized functioning in which the client interacts differently within the larger environmental context.

Assessing multiple domains of functioning provides a comprehensive evaluation of treatment, but it is rarely the case that a treatment produces uniform effects across the domains assessed. Suppose treatment A, relative to a control condition, improves depressed clients’ level of depression, but not their overall psychological well-being. In an RCT designed to evaluate improved level of depression and psychological well-being, should treatment A be deemed efficacious if only one of two measures found gains? De Los Reyes and Kazdin ( 2006 ) propose the Range of Possible Changes model, which calls for a multidimensional conceptualization of intervention change. In this spirit, we recommend that researchers conducting RCTs be explicit about the domains of functioning expected to change and the relative magnitude of such expected changes. We also caution consumers of the treatment outcome literature against simplistic dichotomous appraisals of treatments as efficacious or not.

Matters of Data Analysis

Contrary to popular misguided perceptions, data do not “speak” for themselves. Data analysis is an active process in which we extract useful information from the data we have collected in ways that allow us to make statistical inferences about the larger population that a given sample was selected to represent. Although a comprehensive statistical discussion is beyond the present scope (the interested reader is referred to Jaccard & Guilamo-Ramos, 2002a, 2002b; Kraemer & Kupfer, 2006; Kraemer, Wilson, Fairburn, & Agras, 2002), in this section we discuss four areas that merit consideration in the context of research methods in clinical psychology: (a) handling missing data and attrition, (b) assessing clinical significance (i.e., the persuasiveness of outcomes), (c) mechanisms of change (i.e., mediators and moderators), and (d) cumulative outcome analyses.

Handling Missing Data and Attrition

Given the time-intensive and ongoing nature of RCTs, not all clients who are assigned to treatment actually complete their participation in the study. A loss of research participants (attrition) may occur just after randomization, prior to post-treatment evaluation, or during the follow-up interval. Increasingly, clinical scientists are analyzing attrition and its predictors and correlates to elucidate the nature of treatment dropout, understand treatment tolerability, and enhance the sustainability of mental health services in the community (Kendall & Sugarman, 1997; Reis & Brown, 2006; Vanable, Carey, Carey, & Maisto, 2002). However, from a research methods standpoint, attrition can be problematic for data analysis, such as when there are large numbers of noncompleters or when attrition varies across conditions (Leon et al., 2006; Molenberghs et al., 2004).

No matter how diligently researchers work to prevent attrition, data will likely be lost. Although attrition rates vary across studies, Mason (1999) estimated that most researchers can expect nearly 20% of their sample to withdraw or be removed from a study before it is completed. To address this matter, researchers can conduct and report two sets of analyses: (a) analyses of outcomes for the treatment completers and (b) analyses of outcomes for all clients who were included at the time of randomization (i.e., the intent-to-treat sample). An analysis of completers involves the evaluation of only those who completed treatment and examines what the effects of treatment are when someone completes its full course. Treatment dropouts, treatment refusers, and clients who fail to adhere to treatment schedules would not be included in these outcome analyses. In such cases, reports of treatment outcome may be somewhat high because they represent the results for only those who adhered to and completed the treatment. Intent-to-treat analyses, a more conservative approach to addressing missing data, require the evaluation of outcomes for all participants involved at the point of randomization. Proponents of intent-to-treat analyses will say, “once randomized, always analyzed.”

When conducting intent-to-treat analyses, the method used to handle missing endpoint data requires consideration, because different methods can produce different outcomes. Delucchi and Bostrom (1999) summarized the effects of missing data on a range of statistical analyses. Researchers address missing endpoint data in one of several ways: (a) last observation carried forward (LOCF), (b) substituting pretreatment scores for post-treatment scores, (c) multiple imputation methods, and (d) mixed-effects models.

The following example illustrates these different methods. Suppose a researcher conducts a smoking cessation trial comparing a 12-week active treatment (treatment A) to a 12-week waitlist control condition, with mean number of daily cigarettes used over the course of the previous week as the dependent variable, and with four assessment points: pretreatment, week 4, week 8, and post-treatment. A LOCF analysis assumes that participants who attrit remain constant on the outcome variable from their last assessed point through the post-treatment evaluation. If a participant drops out at week 9, the data from the week 8 assessment would be substituted for their missing post-treatment assessment data. A LOCF approach can be problematic, however, because the last data collected may not be representative of the dropout participant’s ultimate progress or lack of progress at post-treatment, given that participants may change after dropping out of treatment (e.g., cigarette use may abruptly rise upon dropout, reversing initially assessed gains). The use of pretreatment data as post-treatment data (a conservative and not recommended method) simply inserts pretreatment scores for cases of attrition as post-treatment scores, assuming that participants who attrit make no change from their initial baseline state.
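Under a hypothetical smoking-cessation trial like the one described above, the LOCF substitution can be sketched as follows (all values are invented for illustration):

```python
import numpy as np
import pandas as pd

# Mean daily cigarettes at four assessment points; NaN marks
# assessments missed after dropout.
data = pd.DataFrame(
    {"pre":   [22.0, 30.0, 18.0],
     "week4": [17.0, 26.0, 15.0],
     "week8": [12.0, np.nan, 14.0],   # participant p1 dropped out at week 5
     "post":  [9.0, np.nan, np.nan]}, # participant p2 dropped out at week 9
    index=["p0", "p1", "p2"],
)

# Last observation carried forward: fill each missing assessment with
# the most recent observed value for that participant (row-wise fill).
locf = data.ffill(axis=1)
```

Here p1’s week-4 value (26) and p2’s week-8 value (14) are carried forward to post-treatment, illustrating the method’s assumption that dropouts remain constant after their last assessment.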

Critics of the LOCF and the pretreatment data substitution methods argue that these crude methods introduce systematic bias and fail to take into account the uncertainty of post-treatment functioning (see Leon et al., 2006). Increasingly, journals are calling for missing data imputation methods to be grounded in statistical theory and to incorporate the uncertainty regarding the true value of the missing data. Multiple imputation methods impute a range of values for the missing data (incorporating the uncertainty of the true values of missing data), generating a number of nonidentical datasets (typically five is considered sufficient; Little & Rubin, 2002). After the researcher conducts analyses on the nonidentical datasets, the results are pooled and the resulting variability addresses the uncertainty of the true value of the missing data. Alternatively, mixed-effects modeling, which relies on linear and/or logistic regression to address missing data in the context of random (e.g., participant) and fixed effects (e.g., treatment, age, sex) (see Hedeker & Gibbons, 1994, 1997; Laird & Ware, 1982), can be used (see Neuner et al., 2008 for an example). Mixed-effects modeling may be particularly useful in addressing missing data if numerous assessments are collected throughout a treatment trial (e.g., weekly symptom ratings).
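The pooling step of multiple imputation, in which analyses of the nonidentical datasets are combined, is conventionally done with Rubin’s rules: the pooled estimate is the mean of the per-dataset estimates, and the total variance adds the between-imputation variance to the average within-imputation variance. A minimal sketch with invented numbers:

```python
import math

def pool_rubin(estimates, variances):
    """Pool point estimates and their variances from m imputed
    datasets using Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                  # pooled point estimate
    w = sum(variances) / m                     # within-imputation variance
    b = sum((q - qbar) ** 2
            for q in estimates) / (m - 1)      # between-imputation variance
    total_var = w + (1 + 1 / m) * b            # Rubin's total variance
    return qbar, math.sqrt(total_var)          # estimate and standard error

# e.g., a treatment-effect estimate from five imputed datasets
est, se = pool_rubin([4.8, 5.1, 4.9, 5.3, 4.9],
                     [0.40, 0.42, 0.39, 0.41, 0.40])
```

Note that the pooled standard error exceeds what any single dataset’s variance alone would suggest, which is precisely how the procedure incorporates uncertainty about the missing values.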

Given a lack of consensus regarding the most appropriate way to address missing data in RCTs, we encourage researchers—if it is possible for noncompleting participants to be contacted and evaluated at the time when the treatment protocol would have ended—to contact and reassess participants. This method controls for the passage of time, because both dropouts and treatment completers are evaluated over time periods of the same duration. If this method is used, however, it is important to determine whether dropouts sought and/or received alternative treatments in the interim.

Clinical Significance: Assessing the Persuasiveness of Outcomes

The data produced by research projects designed to evaluate the efficacy of therapy are submitted to statistical tests of significance. The mean scores for participants in each condition are compared, the within-group and between-group variability is considered, and the analysis produces a numerical figure, which is then checked against critical values. An outcome achieves statistical significance if the magnitude of the mean difference is beyond what could have resulted by chance alone (typically defined by convention as p <.05). Statistical analyses and statistical significance are essential for therapy evaluation because they inform us that the degree of change was likely not due to chance. However, statistical tests alone do not provide evidence of clinical significance .

Sole reliance on statistical significance can lead to perceiving differences (i.e., treatment gains) as potent when in fact they may not be clinically significant. For example, imagine that the results of a treatment outcome study demonstrate that mean Beck Depression Inventory (BDI) scores are significantly lower at post-treatment than pretreatment. An examination of the means, however, reveals only a small but reliable shift from a mean of 29 to a mean of 24. Given large sample sizes, this difference may well achieve statistical significance at the p <.05 level (i.e., a difference this large would be unlikely to arise from chance alone), yet perhaps be of limited practical significance. At both pre- and post-treatment, the scores are within the range considered indicative of clinical levels of depressive distress (Kendall, Hollon, Beck, Hammen, & Ingram, 1987), and such a magnitude of change may have little effect on a person’s perceived quality of life (Gladis, Gosch, Dishuk, & Crits-Christoph, 1999). Moreover, statistically meager results may disguise meaningful changes in client functioning. As Kazdin (1999) put it, sometimes a little can mean a lot, and vice versa.
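The BDI example can be made concrete. With a large sample, a five-point mean shift is overwhelmingly significant statistically even though both means may remain in a clinically distressed range (the sample size and SD below are invented; the normal approximation to the t distribution is reasonable at this sample size):

```python
import math

# Illustrative summary statistics: BDI means of 29 (pre) and 24 (post),
# SD of 10, with 200 cases per assessment point, treated as two groups.
mean1, mean2, sd, n = 29.0, 24.0, 10.0, 200

# Independent-groups t statistic from summary statistics
t = (mean1 - mean2) / (sd * math.sqrt(1 / n + 1 / n))  # t = 5.0

# With df = 398 the t distribution is close to standard normal, so a
# two-tailed p-value can be approximated via the complementary error
# function: p = 2 * (1 - Phi(t))
p = math.erfc(t / math.sqrt(2))  # far below .05
```

The p-value is tiny, yet both means sit above commonly used clinical cutoffs for the BDI, which is exactly the gap between statistical and clinical significance that the text describes.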

Clinical significance refers to the meaningfulness or persuasiveness of the magnitude of change (Kendall, 1999 ). Whereas tests of statistical significance address the question “Were there treatment-related changes?” tests of clinical significance address the question “Were the treatment-related changes convincing and meaningful?” In the treatment of a depressive disorder, for example, clinically significant changes would have to be of the magnitude that, after therapy, the person no longer suffered from debilitating depression. Specifically, this can be made operational as changes on a measure of the presenting problem (e.g., depressive symptoms) that result in the client’s being returned to within normal limits on that same measure. Several approaches for measuring clinically significant change have been developed, two of which are normative sample comparison and reliable change index .

Normative comparisons

Clinically significant improvement can be identified using normative comparisons (Kendall & Grove, 1988 ), a method for operationalizing clinical significance testing. Normative comparisons (Kendall & Grove, 1988 ; Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999 ) can be conducted in several steps. First, the researcher selects a normative group for post-treatment comparison. Given that several well-established measures provide normative data (e.g., the Beck Depression Inventory, the Child Behavior Checklist), investigators may choose to rely on these preexisting normative samples. However, when normative data do not exist, or when the treatment sample is qualitatively different on key factors (e.g., age, socioeconomic status), it may be necessary to collect one’s own normative data.

In typical research, when using statistical tests to compare groups, the investigator assumes that the groups are equivalent (null hypothesis) and wishes to find that they are not (alternate hypothesis). However, when the goal is to show that treated individuals are equivalent to “normal” individuals on some factor (i.e., are indistinguishable from normative comparisons), traditional hypothesis-testing methods are inadequate. To circumvent this problem, one uses an equivalency testing method (Kendall, Marrs-Garcia, et al., 1999) that examines whether the difference between the treatment and normative groups is within some predetermined range. Used in conjunction with traditional hypothesis testing, this approach allows for conclusions about the equivalency of groups (see, e.g., Jarrett, Vittengl, Doyle, & Clark, 2007; Kendall et al., 2008; Pelham et al., 2000; Westbrook & Kirk, 2007, for examples of normative comparisons), thus testing whether post-treatment cases are within a normative range on the measure of interest.
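Equivalency testing is commonly operationalized as two one-sided tests (TOST): the treated-versus-normative difference must be shown to be both greater than a lower margin and less than an upper margin. The sketch below is a generic large-sample TOST using a normal approximation and invented numbers, not the specific procedure of Kendall, Marrs-Garcia, et al. (1999):

```python
import math
from statistics import NormalDist

def tost_equivalence(mean_tx, mean_norm, sd_pooled, n_tx, n_norm, delta):
    """Two one-sided tests: is the treated group's mean within +/- delta
    of the normative group's mean? Returns the TOST p-value
    (equivalence is supported when it falls below alpha)."""
    se = sd_pooled * math.sqrt(1 / n_tx + 1 / n_norm)
    diff = mean_tx - mean_norm
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + delta) / se)  # test H0: diff <= -delta
    p_upper = nd.cdf((diff - delta) / se)      # test H0: diff >= +delta
    return max(p_lower, p_upper)

# e.g., treated mean 10.2 vs. normative mean 9.5 on a symptom scale,
# pooled SD 6, n = 120 per group, equivalence margin of half an SD
p = tost_equivalence(10.2, 9.5, 6.0, 120, 120, delta=3.0)
```

With these invented values the TOST p-value falls below .05, supporting the conclusion that the treated group is within the prespecified normative range.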

The reliable change index

Another method for examining clinically significant change is the Reliable Change Index (RCI; Jacobson, Follette, & Revenstorf, 1984; Jacobson & Truax, 1991). The RCI is a difference score (post- minus pretreatment) divided by the standard error of the difference (calculated from the reliability of the measure), and it can be used to determine the number of clients who move from a dysfunctional to a normative range. The RCI is influenced by the magnitude of change and the reliability of the measure (for a reconsideration of the interpretation of RCI, see Hsu, 1996). The RCI has been used in clinical psychological research, although its originators point out that it has at times been misapplied (Jacobson, Roberts, Berns, & McGlinchey, 1999). When used in conjunction with reliable measures and appropriate cutoff scores, it can be a valuable tool for assessing clinical significance.
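The RCI calculation follows directly from the Jacobson and Truax (1991) formula; the scores, normative SD, and reliability below are invented for illustration:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) Reliable Change Index. |RCI| > 1.96
    indicates change unlikely to reflect measurement error alone."""
    sem = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)           # standard error of the difference
    return (post - pre) / s_diff

# e.g., a BDI score dropping from 29 to 14, with a pretreatment SD of
# 7.5 and test-retest reliability of .90
rci = reliable_change_index(pre=29, post=14, sd_pre=7.5, reliability=0.90)
```

Here |RCI| exceeds 1.96, so the change would be classified as reliable; whether it is also clinically significant depends additionally on whether the post-treatment score crosses an appropriate cutoff into the normative range.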

Concluding comments on clinical significance

Although progress has been made regarding the operationalization of clinical significance, some debate exists over how to improve its measurement (see Beutler & Moleiro, 2001 ; Blanton & Jaccard, 2006 ; Jensen, 2001 ). Whereas some researchers propose more advanced methods of normative comparison and analysis (e.g., using multiple normative samples), others suggest that clinical significance remain as a simple, client-focused, and practical adjunct to statistical significance results (Follette & Callaghan, 1996 ; Martinovich, Saunders, & Howard, 1996 ; Tingey, Lambert, Burlingame & Hansen, 1996 ).

Evaluations of statistical and clinical significance are most informative when used in conjunction with one another, and it is becoming more common for reports of RCTs to include evaluations of both. Statistically significant improvements are not equivalent to “cures,” and clinical significance is a complementary, not a substitute, evaluative strategy. Statistical significance is required to document that changes were beyond those that could be expected due to chance alone—yet, it is also useful to consider whether the changes returned dysfunctional clients to within normative limits on the measures of interest.

Evaluating Mechanisms of Change: Mediators and Moderators of Treatment Response

When evaluating treatment efficacy, it is of interest to identify (a) the conditions that dictate when a treatment is more or less effective, and (b) the processes through which a treatment produces change. Addressing such issues necessitates the specification of moderator and mediator variables (Baron & Kenny, 1986; Holmbeck, 1997; Kraemer et al., 2002; Shadish & Sweeney, 1991). A moderator is a variable that delineates the conditions under which a given treatment is related to an outcome. Conceptually, moderators identify on whom and under what circumstances treatments have different effects (Kraemer et al., 2002). Functionally, a moderator is a variable that influences either the direction or the strength of a relationship between an independent variable (treatment) and a dependent variable (outcome). For example, if a given treatment were found to be more effective with women than with men, gender would be considered a moderator of the association between treatment and outcome. A mediator, on the other hand, is a variable that serves to explain the process by which a treatment impacts on an outcome. Conceptually, mediators identify how and why treatments have effects (Kraemer et al., 2002). If an effective treatment for child conduct problems was found to impact on the parenting behavior of mothers and fathers, which in turn were found to have a significant impact on child problem behavior, then parent behavior would be considered to mediate the treatment-to-outcome relationship (provided certain statistical criteria were met; see Holmbeck, 1997). Let’s take a closer look at each of these notions.

Treatment moderators help clarify for clinicians (and other consumers of the treatment outcome literature) which clients might be most responsive to a particular treatment (and for which clients alternative treatment might be sought). They have historically received more attention in the research literature than mediators of effectiveness. Moderator variables that have received the most attention include client age, client ethnicity, client gender, problem type, problem severity, therapist training, mode of delivery (e.g., individual, group, family), setting, and type and source of outcome measure (e.g., Dimidjian et al., 2006 ; Kolko, Brent, Baugher, Bridge, & Birmaher, 2000 ; McBride, Atkinson, Quilty, & Bagby, 2006 ; Owens et al., 2003 ; Shadish & Sweeney, 1991 ; Weisz, Weiss, Han, Granger, & Morton, 1995 ).

How does one test for the presence of a moderator effect? A moderator effect is an interaction effect (Holmbeck, 1997 ) and can be evaluated using multiple regression analyses or analyses of variance (ANOVA). When using multiple regression, the predictor (e.g., treatment vs. no treatment) and proposed moderator (e.g., age of client) are main effects and are entered into the regression equation first, followed by the interaction of the predictor and the moderator. Alternatively, if one is only interested in testing the significance of the interaction effect, all of these terms can be entered simultaneously (see Aiken & West, 1991 ). If one is using ANOVA, the significance of the interaction between two main effects is tested in an analogous manner: a moderator, like an interaction effect, documents that the effects of one variable (e.g., treatment) are different across different levels of another variable (i.e., the moderator).

The presence of a significant interaction tells us that there is significant moderation (i.e., that the association between the treatment variable and the outcome variable differs significantly across different levels of the moderator). Unfortunately, it tells us little about the specific conditions that dictate whether or not the treatment is significantly related to the outcome. For example, if a treatment-by-age interaction effect is significant in predicting treatment-related change, we know that the effect of the treatment for older clients differs from the effect of the treatment for younger clients, but we do not know whether the treatment effect is statistically significant for either age group. One would not yet know, based on the initial significant interaction effect, whether the relationship between treatment and outcome was significant for the older group, the younger group, or both groups.

Thus, when testing for moderation of treatment effects, statistically significant interactions must be further scrutinized. One such post-hoc probing approach is to plot and test the significance of simple slopes of regression lines for high and low values of the moderator variable (Aiken & West, 1991 ; Kraemer et al., 2002 ). Alternatively, one can test the significance of simple main effects via ANOVA procedures when the predictor (e.g., treatment vs. no treatment) and moderator (e.g., gender) are both categorical variables.
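The regression test for moderation, entering main effects and their product term, can be sketched with simulated data; the data-generating model and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treatment = rng.integers(0, 2, n).astype(float)  # 0 = control, 1 = treatment
age = rng.normal(40, 10, n)
age_c = age - age.mean()                         # center the moderator

# Invented data-generating model with a true treatment-by-age interaction
outcome = (2.0 + 1.0 * treatment + 0.05 * age_c
           + 0.10 * treatment * age_c + rng.normal(0, 1, n))

# OLS with intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), treatment, age_c, treatment * age_c])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
b_interaction = coefs[3]  # nonzero => treatment effect varies with age
```

A significant interaction coefficient indicates moderation; as the text notes, one would then probe the simple slopes at different moderator values to see for whom the treatment effect holds.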

A mediator is that variable that specifies the process through which a particular outcome is produced. The mediator effect elucidates the mechanism by which the independent variable (e.g., treatment) is related to outcome (e.g., treatment-related changes). Thus, mediational models are inherently causal models, and in the context of an experimental design (i.e., random assignment), significant mediational pathways are suggestive of causal relationships. As noted by Collins, Maccoby, Steinberg, Hetherington, and Bornstein (2000), studies of parenting interventions inform us not only about the effectiveness (or lack thereof) of such interventions, but also about causal relations between potential parenting mediators and child outcomes. For example, Forgatch and DeGarmo (1999) administered a parent training treatment to a sample of recently divorced mothers (as well as controls) and found that treatment was associated with positive (or less-negative) changes in parenting behavior—and that changes in parenting behavior were linked with changes in child behavior. This work not only provides preliminary evidence for the utility of a particular treatment approach, but also demonstrates that a prospective (and perhaps causal) link exists in the direction of parenting impacting on child outcome.

When testing for mediational effects, the researcher is usually interested in whether a variable “mediates” the association between a treatment and an outcome, such that the mediator accounts for (i.e., attenuates) part or all of this association. To test for mediation, one examines whether the following are significant: (1) the association between the predictor (e.g., treatment) and the outcome, (2) the association between the predictor and the mediator, and (3) the association between the mediator and the outcome, after controlling for the effect of the predictor. If these three conditions are first met, one then examines (4) whether the predictor-to-outcome effect is less after controlling for the mediator. A corollary of the first condition is that there initially should be a significant relationship between the treatment and the outcome for a mediator to serve its mediating role. If the treatment and outcome are not significantly associated, there is no effect to mediate. Such a bivariate association between treatment and outcome is not required for moderated effects.

The three prerequisite conditions for testing mediational effects can be tested with three multiple regression analyses (Baron & Kenny, 1986 ). The significance of the treatment-to-outcome path (condition 1 above) is examined in the first regression. The significance of the treatment-to-mediator path (condition 2) is examined in the second regression. Finally, the treatment and mediator variable are simultaneously employed as predictors (via simultaneous entry) in the third equation, where the outcome is the dependent variable. Baron and Kenny ( 1986 ) recommend using simultaneous entry (rather than hierarchical entry) in this third equation, so that the effect of the mediator on the outcome is examined after controlling for the treatment and the effect of the treatment on the outcome is examined after controlling for the mediator (borrowing from path analytic methodology; Cohen & Cohen, 1983). The significance of the mediator-to-outcome path in this third equation is a test of condition 3. The relative effect of the treatment on the outcome in this equation (when the mediator is controlled) in comparison to the effect of the treatment on the outcome in the first equation (when the mediator is not controlled) is the test of the fourth condition. Specifically, the treatment should be less associated with the outcome in the third equation than was the case in the first equation (i.e., the association between treatment and the dependent variable is attenuated in the presence of the proposed mediator variable).
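As a concrete illustration, the three regressions can be carried out with ordinary least squares on simulated trial data. The sketch below is Python (NumPy only); the data-generating values (a treatment effect of 0.6 on the mediator, a mediator effect of 0.5 on the outcome) and the helper name `ols_slopes` are illustrative choices, not values or code from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated trial: random assignment (0 = control, 1 = treatment),
# a mediator influenced by treatment, an outcome influenced by the mediator.
treatment = rng.integers(0, 2, n).astype(float)
mediator = 0.6 * treatment + rng.normal(0, 1, n)
outcome = 0.5 * mediator + rng.normal(0, 1, n)

def ols_slopes(y, *predictors):
    """Return OLS coefficients (intercept first) for y regressed on predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Equation 1: treatment -> outcome (total effect; condition 1).
c = ols_slopes(outcome, treatment)[1]
# Equation 2: treatment -> mediator (condition 2).
a = ols_slopes(mediator, treatment)[1]
# Equation 3: treatment and mediator entered simultaneously (conditions 3 and 4).
c_prime, b = ols_slopes(outcome, treatment, mediator)[1:]

print(f"total effect c = {c:.3f}")
print(f"a path = {a:.3f}, b path = {b:.3f}")
print(f"direct effect c' = {c_prime:.3f} (attenuated relative to c)")
```

Note that with linear OLS the total effect decomposes exactly as c = c′ + ab, so the attenuation examined in condition 4 equals the indirect effect ab.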

Consider the following example: Within a cognitive-behavioral treatment for childhood anxiety disorders, what changes within the clients mediate the identified positive outcomes? To test for mediation, Kendall and Treadwell ( 2007 ) computed three regression equations for each dependent variable. In the first, it was established that treatment condition (CBT) predicted the dependent variable (e.g., change on an established anxiety measure). The second equation established that treatment condition predicted the proposed mediator (i.e., changes in children’s self-statements during the trial). In the third equation, it was established that changes in children’s self-statements (i.e., the proposed mediator) independently predicted the dependent variable. Finally, the mediational hypothesis was confirmed when the independent variable (treatment) no longer significantly predicted the dependent variable when change in self-statements was entered into the equation. This study (Kendall & Treadwell, 2007 ) provided support that change in children’s self-talk mediates the effects of cognitive-behavioral treatment for childhood anxiety.

How much reduction in the total effect is necessary to support the presence of mediation? Some researchers have reported whether the treatment-to-outcome effect drops from significance (e.g., p <.05) to nonsignificance (e.g., p >.05) after the mediator is introduced into the model. This strategy may be flawed, however, because a drop from significance to nonsignificance may occur, for example, when a regression coefficient drops from .28 to .27, but may not occur when it drops from .75 to .35. In other words, it is possible that significant mediation has not occurred even when the treatment-to-outcome effect drops from significance to nonsignificance after taking the mediator into account. On the other hand, it is also possible that significant mediation has occurred even when the statistical test of the treatment-to-outcome effect remains significant after taking the mediator into account. Thus, it is recommended that reports of mediational tests also include a significance test of whether the drop in the treatment-to-outcome effect is itself statistically significant when accounting for the impact of the proposed mediator (see MacKinnon & Dwyer, 1993 ; Sobel, 1988 for details).
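One widely used significance test of the indirect effect ab is the Sobel test. A minimal Python sketch follows; the function name `sobel_z` and the coefficient and standard-error values are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z for the indirect effect a*b.
    a: treatment-to-mediator coefficient, se_a: its standard error;
    b: mediator-to-outcome coefficient (controlling for treatment), se_b: its SE."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Illustrative (hypothetical) coefficients and standard errors:
z = sobel_z(a=0.60, se_a=0.10, b=0.50, se_b=0.12)
print(f"Sobel z = {z:.2f}")  # |z| > 1.96 suggests a significant indirect effect
```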

Cumulative Outcome Analyses: From Qualitative Reviews to Meta-Analytic Evaluations

The literature examining the outcomes of diverse therapies is vast, and there is a need to integrate what we have learned in a systematic, coherent, and meaningful manner. Several major cumulative analyses have undertaken the challenging task of reviewing and reaching conclusions with regard to the effects of psychological therapy. Some of the reviews are strictly qualitative and are based on subjective conclusions, whereas others have used tabulations of the number of studies favoring one type of intervention versus competing interventions (e.g., Beutler, 1979 ; Luborsky, Singer, & Luborsky, 1975 ). This approach uses a “box score” summary of the findings, with reviewers comparing rates of treatment success to draw conclusions about outcomes. Still other reviewers have used multidimensional analyses of the impact of potential causal factors on therapy outcome: meta-analysis (Smith & Glass, 1977 ).

Meta-analytic procedures provide a quantitative, accepted, and respected approach to the synthesis of a body of empirical literature. Literature reviews are increasingly moving away from the qualitative summary of studies to the quantitative analysis of the reported findings of the studies (e.g., Cooper & Hedges, 1994 ; Cooper & Rosenthal, 1980 ; Durlak, 1999 ; Rosenthal, 1984 ). By summarizing the magnitude of overall relationships found across studies, determining factors associated with variations in the magnitude of such relationships, and establishing relationships by aggregate analysis, meta-analytic procedures provide more systematic, exhaustive, objective, and representative conclusions than do qualitative reviews (Rosenthal, 1984 ). To understand the effects of psychological treatments, as well as the factors associated with variations in these effects, meta-analysis is a preferred tool with which to inform funding decisions, service delivery, and public policy.

Meta-analytic techniques are highly informative because they synthesize findings across multiple studies by converting the results of each investigation into a common metric (e.g., the effect size). The outcomes of different types of treatments can then be compared with respect to the aggregate magnitude of change reflected in such statistics across studies. The effect size is typically derived by computing the difference between the reported means of the treatment group and control group at post-treatment, then dividing this difference by the pooled standard deviation of the two groups (Durlak, 1995 ). The more rigorous scientific journals now require authors to include effect sizes in their reports.
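The standardized mean difference just described might be computed as in the following Python sketch; the function name `effect_size` and the means, standard deviations, and sample sizes are invented for illustration.

```python
import math

def effect_size(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference at post-treatment:
    (treatment mean - control mean) / pooled standard deviation."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical post-treatment anxiety scores (lower = better), so the
# negative sign indicates an advantage for the treatment group.
d = effect_size(mean_tx=12.0, sd_tx=4.0, n_tx=30,
                mean_ctrl=16.0, sd_ctrl=5.0, n_ctrl=30)
print(f"d = {d:.2f}")
```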

Assuming that one has decided to conduct a meta-analytic review, what are the steps involved in conducting a meta-analysis? After determining that a particular research area has matured to the point at which a meta-analysis is possible and the results of such an analysis would be of interest to the field, one conducts a literature search. Multiple methods of searching are often used (e.g., computer database searches, reviews of reference sections from relevant articles, sending a table of studies to be included to known experts in the area to review for potential missing citations). A word of advice to the meta-analyzer: Do not rely solely on computer searches, because they routinely omit several important studies.

A decision that often arises at this point is whether studies of varying quality should be included (Kendall, Flannery-Schroeder, & Ford, 1999 ; Kendall & Maruyama, 1985 ). On the one hand, one could argue that studies of poor quality should not be included in the review, since such studies would not ordinarily be used to draw conclusions about the effectiveness of a given psychological therapy. On the other hand, decisions concerning whether a study is of poor versus good quality are often not straightforward. A study may have certain exemplary features and other less desirable features. By including studies that vary in quality, one can examine whether certain “quality” variables (e.g., select vs. genuine clinical cases) are associated with differential outcomes. For example, in a recent meta-analysis (Furr, Comer, Edmunds, & Kendall, 2008 ), studies were rated in terms of their methodological rigor: one point for addressing missing data, one point for including appropriate comparison groups, one point for using psychometrically sound measures, etc. The researcher can then examine the extent to which methodological quality is related to results.

Coding the results of specific studies is an important part of a meta-analysis. Decisions need to be made regarding what types of variables will be coded and how inter-rater reliability among coders will be assessed. For example, in a study that examined the outcomes of a psychological therapy, one might code the nature of the intervention, whether the treatment was conducted in clinically representative conditions (Shadish, Matt, Navarro, & Phillips, 2000 ), the number of sessions, the types of participants, the diagnoses of the participants, the age range, the gender distribution, the therapy administration method (e.g., group vs. individual), the qualifications of the therapists, the various features of the research design, and types of outcomes. Once variables such as these have been coded, the effect sizes are then computed. The methods employed to compute effect sizes should be specified. Another consideration is whether effect sizes will be weighted (for example, based on the sample sizes of the studies reviewed, methodological rigor of studies, etc.). Using sample size to weight study findings has historically been employed in meta-analyses as a way to approximate the reliability of findings (i.e., larger samples would expectedly yield more reliable estimates than smaller samples). However, researchers are increasingly weighting studies by inverse variance weights (i.e., 1/SE², where SE is the standard error), rather than sample size, as this provides a more direct weighting of study findings by reliability. By weighting by inverse variance, the researcher is weighting by precision: the smaller the SE, the more precise the effect size, and consequently the more heavily one wants to count that effect size when aggregating it with other effect sizes.
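The inverse-variance weighting just described amounts to a couple of lines of arithmetic; in this Python sketch the standard errors are hypothetical values chosen so that the weights come out as round numbers.

```python
# Weighting by precision: w = 1 / SE^2. Halving the standard error
# quadruples a study's weight in the aggregate effect size.
standard_errors = [0.50, 0.25, 0.125]  # hypothetical SEs for three studies
weights = [1 / se**2 for se in standard_errors]
print(weights)  # [4.0, 16.0, 64.0]
```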

After computing the effect sizes and inverse variance weights across studies, and then computing an overall weighted mean effect size (and confidence interval) based on the inverse variance weights associated with each effect size, the researcher evaluates the adequacy of the mean effect size in representing the entire distribution of effects via homogeneity testing (i.e., homogeneity statistic, Q). This consists of comparing the observed variability in the effect size values with the estimate of variance that is expected from subject-level sampling error alone (Lipsey & Wilson, 2000 ). A stem-and-leaf plot can also be useful in determining the distribution of effect sizes. Often a researcher will specifically hypothesize that effect sizes will be significantly heterogeneous, given that multiple factors (e.g., sample characteristics, study methodology, etc.) can systematically exert influences on documented treatment effects. If the distribution is not found to be homogeneous, the studies likely estimate different population mean effect sizes, and alternative procedures are required that are beyond the scope of this chapter (see Lipsey & Wilson, 2000 ).
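Putting these pieces together, the weighted mean effect size, its confidence interval, and the homogeneity statistic Q can be computed as in the Python sketch below; the five effect sizes and standard errors are invented for illustration.

```python
import math

# Hypothetical effect sizes and standard errors from five studies.
effects = [0.30, 0.45, 0.25, 0.60, 0.40]
ses = [0.10, 0.15, 0.12, 0.20, 0.08]

weights = [1 / se**2 for se in ses]  # inverse variance weights
w_sum = sum(weights)

# Inverse-variance-weighted mean effect size and 95% confidence interval.
mean_es = sum(w * es for w, es in zip(weights, effects)) / w_sum
se_mean = math.sqrt(1 / w_sum)
ci = (mean_es - 1.96 * se_mean, mean_es + 1.96 * se_mean)

# Homogeneity statistic Q: weighted squared deviations of the effects from
# the mean, referred to a chi-square distribution with k - 1 degrees of freedom.
Q = sum(w * (es - mean_es)**2 for w, es in zip(weights, effects))

print(f"weighted mean ES = {mean_es:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Q = {Q:.2f} on {len(effects) - 1} df")
```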

The merits of integration and summation of the results of related outcome studies are recognized, yet some cautions must be exercised in any meta-analysis. As noted earlier, one must check on the quality of the studies, eliminating those that cannot contribute meaningful findings due to basic inadequacies (Kraemer, Gardner, Brooks, & Yesavage, 1998 ). Consider the following: Would you accept the recommendation that one treatment approach is superior to another if the recommendation was based on inadequate research? Probably not. If the research evidence is methodologically unsound, it is insufficient evidence for a recommendation; it remains inadequate as a basis for either supporting or refuting treatment recommendations, and therefore it should not be included in cumulative analyses. If a study is methodologically sound, then regardless of the outcome, it must be included.

Caution is paramount in meta-analyses in which various studies are said to provide evidence that treatment is superior to controls. The exact nature of the control condition in each specific study must be examined, especially in the case of attention-placebo control conditions. This caution arises from the indefinite definition of attention-placebo control conditions. As has been noted, one researcher’s attention-placebo control condition may be serving as another researcher’s therapy condition! Meta-analyzers cannot tabulate the number of studies in which treatment was found to be efficacious in relation to controls without examining the nature of the control condition.

Currently, major efforts are being made to identify and examine those psychological treatments that can be considered empirically supported. These efforts take a set of “criteria” that have been proposed as required for a treatment to be considered empirically supported and review the reported research literature in search of studies that can be used to meet the criteria. Such reviews (e.g., Baucom, Shoham, Mueser, Daiuto, & Stickle, 1998 ; Compas, Haaga, Keefe, Leitenberg, & Williams, 1998 ; DeRubeis & Crits-Christoph, 1998 ; Kazdin & Weisz, 1998 ; Weisz, Jensen-Doss, & Hawley, 2006 ; Weisz, McCarty, & Valeri, 2006 ) and reactions to the approach (e.g., Beutler, 1998 ; Borkovec & Castonguay, 1998 ; Garfield, 1998 ; Goldfried & Wolfe, 1998 ) document not only that this approach is being applied, but also that there are treatments that meet the criteria of having been supported by empirical research.

Matters of Reporting

Communicating study findings to the scientific community is the final stage of conducting an evaluation of treatment. A well-constructed and quality report will discuss findings in the context of previous related work (e.g., discussing how the findings build on and support previous work; discussing the ways in which findings are discrepant from previous work and why this may be the case), as well as consider limitations and shortcomings that can direct future theory and empirical efforts in the area.

When preparing a quality report, the researcher provides all of the relevant information for the reader to critically appraise, interpret, and/or replicate study findings. Historically, there have been some inadequacies in the reporting of RCTs (see Westen et al., 2004 for a critique of past practices). In fact, inadequacies in the reporting of RCTs can result in bias in estimating the effectiveness of interventions (Moher, Schulz, & Altman, 2001 ; Schulz, Chalmers, Hayes, & Altman, 1995 ). To maximize transparency in the reporting of RCTs, an international group of epidemiologists, statisticians, and journal editors developed a set of consolidated standards of reporting trials (i.e., CONSORT; see Begg et al., 1996), consisting of a 22-item checklist of study features that can bias estimates of treatment effects, or that are critical to judging the reliability or relevance of study findings, and consequently should be included in a comprehensive research report. A quality report will address each of these 22 items. For example, the title and abstract are to include how participants were allocated to interventions (e.g., randomly assigned), the methods must clearly detail eligibility criteria (i.e., inclusion/exclusion criteria) and how the sample size was determined, the procedures must indicate whether or not evaluators were blind to treatment assignment, and baseline demographic characteristics must be included for all participants. Importantly, participant flow must be characterized at each stage. The researcher reports the specific numbers of participants randomly assigned to each treatment condition, who received treatments as assigned, who participated in post-treatment evaluations, and who participated in follow-up evaluations (see Figure 4.1 for an example from Kendall et al., 2008 ). It has become standard practice for scientific journals to require a CONSORT flow diagram.

When the researcher has prepared a quality report that he or she deems is ready to be communicated to the academic community, the next decision is where to submit the report. When communicating the results of a clinical evaluation to the scientific community, the researcher should only consider submitting the report of their findings to a peer-reviewed journal. Publishing the outcomes of a study in a refereed journal (i.e., one that employs the peer-review process) signals that the work has been accepted and approved for publication by a panel of qualified and impartial reviewers (i.e., independent scientists knowledgeable in the area but not involved with the study). Consumers should be highly cautious of studies published in journals that do not place manuscript submissions through a rigorous peer-review process.

Example of flow diagram used in reporting to depict participant flow at each stage of a study. From Kendall, P. C., Hudson, J.L., Gosch, E., Flannery-Schroeder, E., & Suveg, C. (2008). Cognitive-behavioral therapy for anxiety disordered youth: A randomized clinical trial evaluating child and family modalities. Journal of Consulting and Clinical Psychology, 76 , 282–297. Reprinted with permission of the publisher, the American Psychological Association (APA).

Although the peer-review process slows down the speed with which one is able to communicate study results (much to the chagrin of the excited researcher who just completed an investigation), it is nonetheless one of the indispensable safeguards that we have to ensure that our collective knowledge base is drawn from studies meeting acceptable standards. Typically, the review process is “blind,” meaning that the authors of the article do not know the identities of the peer-reviewers who are considering their manuscript. Many journals employ a double-blind peer-review process, in which the identities of study authors are also not known to the peer-reviewers.

Having reviewed pertinent matters of design, procedure, measurement, data analysis, and reporting, one recognizes that no single study, even with optimal design and procedures, can answer the relevant questions about the efficacy and effectiveness of therapy. Rather, a series and collection of studies, with varying approaches, is necessary. The criteria for determining empirically supported treatments have been proposed, and the quest for identification of such treatments continues. The goal is for the research to be rigorous, with the end goal being that the most promising procedures serve professional practice and those in need of services.

Therapy outcome research plays a vital role in facilitating a dialogue between scientist-practitioners and the public and private sector (e.g., Department of Health and Human Services, insurance payers, policy-makers). Outcome research is increasingly being examined by both managed care organizations and professional associations with the intent of formulating practice guidelines for cost-effective psychological care that provides maximal service to those in need. There is the risk that psychological science and practice will be co-opted and exploited in the service only of cost-containment and profitability: Therapy outcome research must retain scientific rigor while enhancing the ability of practitioners to deliver effective procedures to individuals in need.

Achenbach, T. M., McConaughy, S. H., & Howell, C. T. ( 1987 ). Child/adolescent behavioral and emotional problems: Implication of cross-informant correlations for situational specificity.   Psychological Bulletin, 101, 213–232.

Addis, M., Cardemil, E. V., Duncan, B., & Miller, S. ( 2006 ). Does manualization improve therapy outcomes? In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practices in mental health (pp. 131–160). Washington, DC: American Psychological Association.

Addis, M., & Krasnow, A. ( 2000 ). A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals.   Journal of Consulting and Clinical Psychology, 68, 331–339.

Aiken, L. S., & West, S. G. ( 1991 ). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Arnold, L. E., Elliott, M., Sachs, L., Bird, H., Kraemer, H. C., Wells, K. C., et al. ( 2003 ). Effects of ethnicity on treatment attendance, stimulant response/dose, and 14-month outcome in ADHD.   Journal of Consulting and Clinical Psychology, 71, 713–727.

Barlow, D. H. ( 1989 ). Treatment outcome evaluation methodology with anxiety disorders: Strengths and key issues.   Advances in Behavior Research and Therapy, 11, 121–132.

Baron, R. M., & Kenny, D. A. ( 1986 ). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations.   Journal of Personality and Social Psychology, 51, 1173–1182.

Baucom, D., Shoham, V., Mueser, K., Daiuto, A., & Stickle, T. ( 1998 ). Empirically supported couple and family interventions for marital distress and adult mental health problems.   Journal of Consulting and Clinical Psychology, 66, 53–88.

Begg, C. B., Cho, M. K., Eastwood, S., Horton, R., Moher, D., Olkin, I., et al. ( 1996 ). Improving the quality of reporting of randomized controlled trials: The CONSORT statement.   JAMA, 276, 637–639.

Bernal, G., Bonilla, J., & Bellido, C. ( 1995 ). Ecological validity and cultural sensitivity for outcome research: Issues for the cultural adaptation and development of psychosocial treatments with Hispanics.   Journal of Abnormal Child Psychology, 23, 67–82.

Bernal, G., & Scharron-Del-Rio, M. R. ( 2001 ). Are empirically supported treatments valid for ethnic minorities? Toward an alternative approach for treatment research.   Cultural Diversity and Ethnic Minority Psychology, 7, 328–342.

Bersoff, D. M., & Bersoff, D. N. ( 1999 ). Ethical perspectives in clinical research. In P. C. Kendall, J. Butcher, & G. Holmbeck (Eds.), Handbook of research methods in clinical psychology (pp. 31–55). New York: John Wiley & Sons.

Beutler, L. ( 1979 ). Toward specific psychological therapies for specific conditions.   Journal of Consulting and Clinical Psychology, 47, 882–897.

Beutler, L. ( 1998 ). Identifying empirically supported treatments: What if we didn’t?   Journal of Consulting and Clinical Psychology, 66, 37–52.

Beutler, L. E., & Moleiro, C. ( 2001 ). Clinical versus reliable and significant change.   Clinical Psychology: Science and Practice, 8, 441–445.

Blanton, H., & Jaccard, J. ( 2006 ). Arbitrary metrics in psychology.   American Psychologist, 61, 27–41.

Borkovec, T., & Castonguay, L. ( 1998 ). What is the scientific meaning of empirically supported therapy?   Journal of Consulting and Clinical Psychology, 66, 136–142.

Brown, R. A., Evans, M., Miller, I., Burgess, E., & Mueller, T. ( 1997 ). Cognitive-behavioral treatment for depression in alcoholism.   Journal of Consulting and Clinical Psychology, 65, 715–726.

Chambless, D. L., & Hollon, S. D. ( 1998 ). Defining empirically supported therapies.   Journal of Consulting and Clinical Psychology, 66, 7–18.

Cohen, J., & Cohen, P. ( 1983 ). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Collins, W. A., Maccoby, E. E., Steinberg, L., Hetherington, E. M., & Bornstein, M. H. ( 2000 ). Contemporary research on parenting: The case for nature and nurture.   American Psychologist, 55, 218–232.

Comer, J. S., & Kendall, P. C. ( 2004 ). A symptom-level examination of parent-child agreement in the diagnosis of anxious youths.   Journal of the American Academy of Child and Adolescent Psychiatry, 43, 878–886.

Compas, B., Haaga, D., Keefe, F., Leitenberg, H., & Williams, D. ( 1998 ). Sampling of empirically supported psychological treatments from health psychology: Smoking, chronic pain, cancer, and bulimia nervosa.   Journal of Consulting and Clinical Psychology, 66, 89–112.

Cooper, H., & Hedges, L. V. ( 1994 ). The handbook of research synthesis . New York: Russell Sage.

Cooper, H. M., & Rosenthal, R. ( 1980 ). Statistical versus traditional procedures for summarizing research findings.   Psychological Bulletin, 87, 442–449.

Coyne, J. C. ( 1994 ). Self-reported distress: Analog or ersatz depression?   Psychological Bulletin, 116, 29–45.

Creed, T. A., & Kendall, P. C. ( 2005 ). Therapist alliance-building behavior within a cognitive-behavioral treatment for anxiety in youth.   Journal of Consulting and Clinical Psychology, 73, 498–505.

De Los Reyes, A., & Kazdin, A. E. ( 2005 ). Informant discrepancies in the assessment of childhood psychopathology: A critical review, theoretical framework, and recommendations for further study.   Psychological Bulletin, 131, 483–509.

De Los Reyes, A., & Kazdin, A. E. ( 2006 ). Conceptualizing changes in behavior in intervention research: The range of possible changes model.   Psychological Review, 113, 554–583.

Delucchi, K., & Bostrom, A. ( 1999 ). Small sample longitudinal clinical trials with missing data: A comparison of analytic methods.   Psychological Methods, 4, 158–172.

DeRubeis, R., & Crits-Christoph, P. ( 1998 ). Empirically supported individual and group psychological treatments for adult mental disorders.   Journal of Consulting and Clinical Psychology, 66, 27–52.

Dimidjian, S., Hollon, S. D., Dobson, K. S., Schmaling, K. B., Kohlenberg, R. J., & Addis, M. E. ( 2006 ). Randomized trial of behavioral activation, cognitive therapy, and antidepressant medication in the acute treatment of adults with major depression.   Journal of Consulting and Clinical Psychology, 74, 658–670.

Dobson, K. S., & Hamilton, K. E. ( 2002 ). The stage model for psychotherapy manual development: A valuable tool for promoting evidence-based practice.   Clinical Psychology: Science and Practice, 9, 407–409.

Dobson, K. S., Hollon, S. D., Dimidjian, S., Schmaling, K. B., Kohlenberg, R. J., Gallop, R. J., et al. ( 2008 ). Randomized trial of behavioral activation, cognitive therapy, and antidepressant medication in the prevention of relapse and recurrence in major depression.   Journal of Consulting and Clinical Psychology, 76, 468–477.

Dobson, K. S., & Shaw, B. ( 1988 ). The use of treatment manuals in cognitive therapy: Experience and issues.   Journal of Consulting and Clinical Psychology, 56, 673–682.

Durlak, J. A. ( 1995 ). Understanding meta-analysis. In L. G. Grimm, & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 319–352). Washington, DC: American Psychological Association.

Durlak, J. A. ( 1999 ). Meta-analytic research methods. In P. C. Kendall, J. N. Butcher, & G. N. Holmbeck (Eds.), Research methods in clinical psychology (pp. 419–429). New York: John Wiley & Sons.

Edelbrock, C., Costello, A. J., Dulcan, M. K., Conover, N. C., & Kalas, R. ( 1986 ). Parent-child agreement on psychiatric symptoms assessed via a structured interview.   Journal of Child Psychology and Psychiatry, 27, 181–190.

Fishman, D. B. ( 2000 ). Transcending the efficacy versus effectiveness research debate: Proposal for a new, electronic “Journal of Pragmatic Case Studies.”   Prevention and Treatment, 3, Article 8.

Follette, W. C., & Callaghan, G. M. ( 1996 ). The importance of the principle of clinical significance—defining significant to whom and for what purpose: A response to Tingey, Lambert, Burlingame, and Hansen.   Psychotherapy Research, 6, 133–143.

Flaherty, J. A., & Meagher, R. ( 1980 ). Measuring racial bias in inpatient treatment.   American Journal of Psychiatry, 137, 679–682.

Forgatch, M. S., & DeGarmo, D. S. ( 1999 ). Parenting through change: An effective prevention program for single mothers.   Journal of Consulting and Clinical Psychology, 67, 711–724.

Furr, J. M., Comer, J. S., Edmunds, J., & Kendall, P. C. ( 2008 , November). Disasters and Youth: A Meta-Analytic Examination of Posttraumatic Stress. Paper presented at the annual meeting of the Association for Behavioral and Cognitive Therapies, Orlando, FL.

Garfield, S. ( 1998 ). Some comments on empirically supported psychological treatments.   Journal of Consulting and Clinical Psychology, 66, 121–125.

Gladis, M. M., Gosch, E. A., Dishuk, N. M., & Crits-Christoph, P. ( 1999 ). Quality of life: Expanding the scope of clinical significance.   Journal of Consulting and Clinical Psychology, 67, 320–331.

Goldfried, M., & Wolfe, B. ( 1998 ). Toward a more clinically valid approach to therapy research.   Journal of Consulting and Clinical Psychology, 66, 143–150.

Greenhouse, J., Stangl, D., & Bromberg, J. ( 1989 ). An introduction to survival analysis: Statistical methods for analysis of clinical trial data.   Journal of Consulting and Clinical Psychology, 57, 536–544.

Grills, A. E., & Ollendick, T. H. ( 2003 ). Multiple informant agreement and the Anxiety Disorders Interview Schedule for Parents and Children.   Journal of the American Academy of Child and Adolescent Psychiatry, 42, 30–40.

Hall, G. C. N. ( 2001 ). Psychotherapy research with ethnic minorities: Empirical, ethnical, and conceptual issues.   Journal of Consulting and Clinical Psychology, 69, 502–510.

Havik, O. E., & VandenBos, G. R. ( 1996 ). Limitations of manualized psychotherapy for everyday clinical practice.   Clinical Psychology: Science and Practice, 3, 264–267.

Hedeker, D., & Gibbons, R. D. ( 1994 ). A random-effects ordinal regression model for multilevel analysis.   Biometrics, 50, 933–944.

Hedeker, D., & Gibbons, R. D. ( 1997 ). Application of random-effects pattern-mixture models for missing data in longitudinal studies.   Psychological Methods, 2, 64–78.

Hoagwood, K. ( 2002 ). Making the translation from research to its application: The je ne sais pas of evidence-based practices.   Clinical Psychology: Science and Practice, 9, 210–213.

Hollon, S. D. ( 1996 ). The efficacy and effectiveness of psychotherapy relative to medications.   American Psychologist, 51, 1025–1030.

Hollon, S. D., & DeRubeis, R. J. ( 1981 ). Placebo-psychotherapy combinations: Inappropriate representation of psychotherapy in drug-psychotherapy comparative trials.   Psychological Bulletin, 90, 467–477.

Hollon, S. D., Garber, J., & Shelton, R. C. ( 2005 ). Treatment of depression in adolescents with cognitive behavior therapy and medications: A commentary on the TADS project.   Cognitive and Behavioral Practice, 12, 149–155.

Holmbeck, G. N. ( 1997 ). Toward terminological, conceptual, and statistical clarity in the study of mediators and moderators: Examples from the child-clinical and pediatric psychology literatures.   Journal of Consulting and Clinical Psychology, 65, 599–610.

Homma-True, R., Greene, B., Lopez, S. R., & Trimble, J. E. ( 1993 ). Ethnocultural diversity in clinical psychology.   Clinical Psychologist, 46, 50–63.

Howard, K. I., Lueger, R., Maling, M., & Martinovich, Z. ( 1993 ). A phase model of psychotherapy.   Journal of Consulting and Clinical Psychology, 61, 678–685.

Hsu, L. ( 1996 ). On the identification of clinically significant changes: Reinterpretation of Jacobson’s cut scores.   Journal of Psychopathology and Behavior Assessment, 18, 371–386.

Huey, S. J., & Polo, A. J. ( 2008 ). Evidence-based psychosocial treatments for ethnic minority youth.   Journal of Clinical Child and Adolescent Psychology, 37, 262–301.

Jaccard, J., & Guilamo-Ramos, V. ( 2002 a). Analysis of variance frameworks in clinical child and adolescent psychology: Issues and recommendations.   Journal of Clinical Child and Adolescent Psychology, 31, 130–146.

Jaccard, J., & Guilamo-Ramos, V. ( 2002 b). Analysis of variance frameworks in clinical child and adolescent psychology: Advanced issues and recommendations.   Journal of Clinical Child and Adolescent Psychology, 31, 278–294.

Jacobson, N. S., Follette, W. C., & Revenstorf, D. ( 1984 ). Psychotherapy outcome research: Methods for reporting variability and evaluating clinical significance.   Behavior Therapy, 15, 336–352.

Jacobson, N. S., & Hollon, S. D. ( 1996 a). Cognitive-behavior therapy versus pharmacotherapy: Now that the jury’s returned its verdict, it’s time to present the rest of the evidence.   Journal of Consulting and Clinical Psychology, 74, 74–80.

Jacobson, N. S., & Hollon, S. D. ( 1996 b). Prospects for future comparisons between drugs and psychotherapy: Lessons from the CBT-versus-pharmacotherapy exchange.   Journal of Consulting and Clinical Psychology, 64, 104–108.

Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. ( 1999 ). Methods for defining and determining the clinical significance of treatment effects. Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300–307.

Jacobson, N. S., & Traux, P. ( 1991 ). Clinical significance: A statistic approach to defining meaningful change in psychotherapy research.   Journal of Consulting and Clinical Psychology, 59, 12–19.

Jarrett, R. B., Vittengl, J. R., Doyle, K., & Clark, L. A. ( 2007 ). Changes in cognitive content during and following cognitive therapy for recurrent depression: Substantial and enduring, but not predictive of change in depressive symptoms.   Journal of Consulting and Clinical Psychology, 75, 432–446.

Jaycox, L., Foa, E., & Morral, A. ( 1998 ). Influence of emotional engagement and habituation on exposure therapy for PTSD.   Journal of Consulting and Clinical Psychology, 66, 185–192.

Jensen, P. S. ( 2001 ). Clinical equivalence: A step, a misstep, or just a misnomer?   Clinical Psychology: Science and Practice, 8, 436–440.

Karver, M., Shirk, S., Handelsman, J. B., Fields, S., Crisp, H., Gudmundsen, G., & McMakin, D. ( 2008 ). Relationship processes in youth psychotherapy: Measuring alliance, alliance-building behaviors, and client involvement.   Journal of Emotional and Behavioral Disorders, 16, 15–28.

Kazdin, A. E. ( 1999 ). The meanings and measurement of clinical significance.   Journal of Consulting and Clinical Psychology, 67, 332–339.

Kazdin, A. E. ( 2003 ). Research design in clinical psychology, 4th ed . Boston, MA: Allyn and Bacon.

Kazdin, A. E., & Bass, D. ( 1989 ). Power to detect differences between alternative treatments in comparative psychotherapy outcome research.   Journal of Consulting and Clinical Psychology, 57, 138–147.

Kazdin, A. E., Marciano, P. L., & Whitley, M. K. ( 2005 ). The therapeutic alliance in cognitive-behavioral treatment of children referred for oppositional, aggress, and antisocial behavior.   Journal of Consulting and Clinical Psychology, 73, 726–730.

Kazdin, A. E., & Weisz, J. R. ( 1998 ). Identifying and developing empirically supported child and adolescent treatments.   Journal of Consulting and Clinical Psychology, 66, 19–36 .

Kendall, P. C. ( 1999 ). Introduction to the special section: Clinical Significance.   Journal of Consulting and Clinical Psychology, 67, 283–284.

Kendall, P. C., & Beidas, R. S. ( 2007 ). Smoothing the trail for dissemination of evidence-based practices for youth: Flexibility within fidelity.   Professional Psychology: Research and Practice, 38, 13–20.

Kendall, P. C., & Chu, B. ( 1999 ). Retrospective self-reports of therapist flexibility in a manual-based treatment for youths with anxiety disorders.   Journal of Clinical Child Psychology, 29, 209–220.

Kendall, P. C., Flannery-Schroeder, E., & Ford, J. ( 1999 ). Therapy outcome research methods (330–363). In P. C. Kendall, J. Butcher, & G. Holmbeck (Eds.). Handbook of research methods in clinical psychology (2nd ed.). New York: Wiley.

Kendall, P. C., & Grove, W. ( 1988 ). Normative comparisons in therapy outcome.   Behavioral Assessment, 10, 147–158.

Kendall, P. C., & Hedtke, K. A. ( 2006 ). Cognitive-behavioral therapy for anxious children, 3rd edition . Ardmore, PA: Workbook Publishing. www.WorkbookPublishing.com

Kendall, P. C., & Hollon, S. D. ( 1983 ). Calibrating therapy: Collaborative archiving of tape samples from therapy outcome trials.   Cognitive Therapy and Research, 7, 199–204.

Kendall, P. C., Hollon, S., Beck, A. T., Hammen, C., & Ingram, R. ( 1987 ). Issues and recommendations regarding use of the Beck Depression Inventory.   Cognitive Therapy and Research, 11, 289–299.

Kendall, P. C., Holmbeck, G., & Verduin, T. L. ( 2002 ). Methodology, design, and evaluation in psychotherapy research. In M. Lambert, A. Bergin, & S. Garfield (Eds.), Handbook of psychotherapy and behavior change, 5th ed. New York: Wiley.

Kendall, P. C., Hudson, J. L., Gosch, E., Flannery-Schroeder, E., & Suveg, C. ( 2008 ). Cognitive-behavioral therapy for anxiety disordered youth: A randomized clinical trial evaluating child and family modalities.   Journal of Consulting and Clinical Psychology, 76, 282–297.

Kendall, P. C., & Kessler, R. C. ( 2002 ). The impact of childhood psychopathology interventions on subsequent substance abuse: Policy implications, comments, and recommendations.   Journal of Consulting and Clinical Psychology, 70, 1303–1306.

Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. ( 1999 ). Normative comparisons for the evaluation of clinical significance.   Journal of Consulting and Clinical Psychology, 67, 285–299.

Kendall, P. C., & Maruyama, G. ( 1985 ). Meta-analysis: On the road to synthesis of knowledge?   Clinical Psychology Review, 5, 79–89.

Kendall, P. C., & Ollendick, T. H. ( 2004 ). Setting the research and practice agenda for anxiety in children and adolescence: A topic comes of age.   Cognitive and Behavioral Practice, 11, 65–74.

Kendall, P. C., Pellegrini, D. S., & Urbain, E. S. ( 1981 ). Approaches to assessment for cognitive-behavioral interventions with children. In P. C. Kendall & S. D. Hollon (Eds.), Assessment strategies for cognitive-behavioral interventions . New York: Academic Press.

Kendall, P. C., Safford, S., Flannery-Schroeder, E., & Webb, A. ( 2002 ). Child anxiety treatment: Outcomes in adolescence and impact on substance use and depression at 7.4-year follow-up.   Journal of the Consulting and Clinical Psychology, 72, 276–287.

Kendall, P. C., Southam-Gerow, M. A. ( 1995 ). Issues in the transportability of treatment: The case of anxiety disorders in youth.   Journal of Consulting and Clinical Psychology, 63, 702–708.

Kendall, P. C., & Sugarman, A. ( 1997 ). Attrition in the treatment of childhood anxiety disorders.   Journal of Consulting and Clinical Psychology, 65, 883–888.

Kendall, P. C., & Treadwell, K. R. H. ( 2007 ). The role of self-statements as a mediator in treatment for youth with anxiety disorders.   Journal of Consulting and Clinical Psychology, 75, 380–389.

Kolko, D. J., Brent, D. A., Baugher, M., Bridge, J., & Birmaher, B. ( 2000 ). Cognitive and family therapies for adolescent depression: Treatment specificity, mediation, and moderation.   Journal of Consulting and Clinical Psychology, 68, 603–614.

Kraemer, H. C., Gardner, C., Brooks, J., & Yesavage, J. ( 1998 ). Advantages of excluding underpowered studies in meta-analysis: Inclusionist versus exclusionist viewpoints.   Psychological Methods, 3, 23–31.

Kraemer, H. C., & Kupfer, D. J. ( 2006 ). Size of treatment effects and their importance to clinical research and practice.   Biological Psychiatry, 59, 990–996.

Kraemer, H. C., Wilson, G. T., Fairburn, C. G., & Agras, W. S. ( 2002 ). Mediators and moderators of treatment effects in randomized clinical trials.   Archives of General Psychiatry, 59, 877–883.

Krupnick, J., Shea, T., & Elkin, I. ( 1986 ). Generalizability of treatment studies utilizing solicited patients.   Journal of Consulting and Clinical Psychology, 54, 68–78.

Laird, N. M., & Ware, J. H. ( 1982 ). Random-effects models for longitudinal data.   Biometrics, 38, 963–974.

Lambert, M. J., Hansen, N. B., & Finch, A. E. ( 2001 ). Patient-focused research: Using patient outcome data to enhance treatment effects.   Journal of Consulting and Clinical Psychology, 69, 159–172.

Leon, A. C., Mallinckrodt, C. H., Chuang-Stein, C., Archibald, D. G., Archer, G. E., & Chartier, K. ( 2006 ). Attrition in randomized controlled clinical trials: Methodological issues in psychopharmacology.   Biological Psychiatry, 59, 1001–1005.

Lipsey, M. W., & Wilson, D. B. ( 2000 ). Practical meta-analysis. Applied social research methods series (Vol. 49). Thousand Oaks, CA: Sage Publications.

Little, R. J. A., & Rubin, D. ( 2002 ). Statistical analysis with missing data , 2nd ed. New York: Wiley.

Lopez, S. R. ( 1989 ). Patient variable biases in clinical judgment: Conceptual overview and methodological considerations.   Psychological Bulletin, 106, 184–204.

Luborsky, L., Singer, B., & Luborsky, L. ( 1975 ). Comparative studies of psychotherapy.   Archives of General Psychiatry, 32, 995–1008.

MacKinnon, D. P., & Dwyer, J. H. ( 1993 ). Estimating mediated effects in prevention studies.   Evaluation Review, 17, 144–158.

Marcus, S. M., Gorman, J., Shea, M. K., Lewin, D., Martinez, J., Ray, S. et al. ( 2007 ). A comparison of medication side effect reports by panic disorder patients with and without concomitant cognitive behavior therapy.   American Journal of Psychiatry, 164, 273–275.

Martinovich, Z., Saunders, S., & Howard, K. ( 1996 ). Some comments on “Assessing clinical significance. ” Psychotherapy Research, 6, 124–132.

Mason, M. J. ( 1999 ). A review of procedural and statistical methods for handling attrition and missing data.   Measurement and Evaluation in Counseling and Development, 32, 111–118.

McBride, C., Atkinson, L., Quilty, L. C., & Bagby, R. M. ( 2006 ). Attachment as moderator of treatment outcome in major depression: A randomized control trial of interpersonal psychotherapy versus cognitive behavior therapy.   Journal of Consulting and Clinical Psychology, 74, 1041–1054.

Moher, D., Schulz, K. F., & Altman, D. ( 2001 ). The CONSORT Statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials.   JAMA, 285, 1987–1991.

Molenberghs, G., Thijs, H., Jansen, I., Beunckens, C., Kenward, M. G., Mallinckrodt, C., & Carroll, R. ( 2004 ). Analyzing incomplete longitudinal clinical trial data.   Biostatistics, 5, 445–464.

MTA Cooperative Group. ( 1999 ). A 14-month randomized clinical trial of treatment strategies for attention-deficit/hyperactivity disorder.   Archives of General Psychiatry, 56, 1088–1096.

Mufson, L., Dorta, K. P., Wickramaratne, P., Nomura, Y., Olfson, M., & Weissman, M. M. ( 2004 ). A randomized effectiveness trial of interpersonal psychotherapy for depressed adolescents.   Archives of General Psychiatry, 61, 577–584.

Neuner, F., Onyut, P. L., Ertl, V., Odenwald, M., Schauer, E., & Elbert, T. ( 2008 ). Treatment of posttraumatic stress disorder by trained lay counselors in an African refugee settlement: A randomized controlled trial.   Journal of Consulting and Clinical Psychology, 76, 686–694.

Ogden, T., & Hagen, K. A. ( 2008 ). Treatment effectiveness of parent management training in Norway: A randomized controlled trial of children with conduct problems.   Journal of Consulting and Clinical Psychology, 76, 607–621.

O’Leary, K. D., & Borkovec, T. D. ( 1978 ). Conceptual, methodological, and ethical problems of placebo groups in psychotherapy research.   American Psychologist, 33, 821–830.

Owens, E. B., Hinshaw, S. P., Kraemer, H. C., Arnold, L. E., Abikoff, H. B., Cantwell, D. P., et al. ( 2003 ). Which treatment for whom for ADHD? Moderators of treatment response in the MTA.   Journal of Consulting and Clinical Psychology, 71, 540–552.

Parloff, M. B. ( 1986 ). Placebo controls in psychotherapy research: A sine qua non or a placebo for research problems?   Journal of Consulting and Clinical Psychology, 54, 79–87.

Pediatric OCD Treatment Study (POTS) Team. ( 2004 ). Cognitive-behavior therapy, sertraline, and their combination for children and adolescents with obsessive-compulsive disorder: The Pediatric OCD Treatment Study (POTS) randomized controlled trial.   JAMA, 292, 1969–1976.

Pelham, W. E., Jr., Gnagy, E. M., Greiner, A. R., Hoza, B., Hinshaw, S. P., Swanson, J. M., et al. ( 2000 ). Behavioral versus behavioral and psychopharmacological treatment in ADHD children attending a summer treatment program.   Journal of Abnormal Child Psychology, 28, 507–525.

Perepletchikova, F., & Kazdin, A. E. ( 2005 ). Treatment integrity and therapeutic change: Issues and research recommendations.   Clinical Psychology: Science and Practice, 12, 365–383.

Rapee, R. M., Abbott, M. J., & Lyneham, H. J. ( 2006 ). Bibliotherapy for children with anxiety disorders using written materials for parents: A randomized controlled trial.   Journal of Consulting and Clinical Psychology, 74, 436–444.

Reis, B. F., & Brown, L. G. ( 2006 ). Preventing therapy dropout in the real world: The clinical utility of videotape preparation and client estimate of treatment duration.   Professional Psychology: Research and Practice, 37, 311–316.

Rosenthal, R. ( 1984 ). Meta-analytic procedures for social research. Beverly Hils, CA: Sage.

Ruscio, A. M., & Ruscio, J. ( 2002 ). The latent structure of analogue depression: Should the Beck Depression Inventory be used to classify groups?   Psychological Assessment, 14, 135–145.

Ruscio, J., & Ruscio, A. M. ( 2008 ). Categories and dimensions: Advancing psychological science through the study of latent structure.   Current Directions in Psychological Science, 17, 203–207.

Shadish, W. R., & Sweeney, R. B. ( 1991 ). Mediators and moderators in meta-analysis: There’s a reason we don’t let dodo birds tell us which psychotherapies should have prizes.   Journal of Consulting and Clinical Psychology, 59, 883–893.

Shadish, W. R., Matt, G. E., Navarro, A. M., & Phillips, G. ( 2000 ). The effects of psychological therapies under clinically representative conditions: A meta-analysis.   Psychological Bulletin, 126, 512–529.

Shirk, S. R., Gudmundsen, G., Kaplinski, H., & McMakin, D. L. ( 2008 ). Alliance and outcome in cognitive-behavioral therapy for adolescent depression.   Journal of Clinical Child and Adolescent Psychology, 37, 631–639.

Shulz, K. F., Chalmers, I., Hayes, R. J., & Altman, D. G. ( 1995 ). Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in clinical trials.   JAMA, 273, 408–412.

Silverman, W. K., Kurtines, W. M., & Hoagwood, K. ( 2004 ). Research progress on effectiveness, transportability, and dissemination of empirically supported treatments: Integrating theory and research.   Clinical Psychology: Science and Practice, 11, 295–299.

Smith, M.L., & Glass, G.V. ( 1977 ). Meta-analysis of psychotherapy outcome studies.   American Psychologist, 32, 752–760.

Snowden, L. R. ( 2003 ). Bias in mental health assessment and intervention: Theory and evidence.   American Journal of Public Health, 93, 239–243.

Sobel, M. E. ( 1988 ). Direct and indirect effects in linear structural equation models. In J. S. Long (Ed.), Common problems/proper solutions: Avoiding error in quantitative research (pp. 46–64). Beverly Hills, CA: Sage.

Southam-Gerow, M. A., Ringeisen, H. L., & Sherrill, J. T. ( 2006 ). Integrating interventions and services research: Progress and prospects.   Clinical Psychology: Science and Practice, 13, 1–8.

Stricker, G. ( 2000 ). What is a scientist-practitioner anyway?   Journal of Clinical Psychology , 58, 1277–1283.

Sue, S. ( 1998 ). In search of cultural competence in psychotherapy and counseling.   American Psychologist, 53, 440–448.

Suveg, C., Comer, J. S., Furr, J. M., & Kendall, P. C. ( 2006 ). Adapting manualized CBT for a cognitively delayed child with multiple anxiety disorders.   Clinical Case Studies, 5, 488–510.

Sweeney, M., Robins, M., Ruberu, M., & Jones, J. ( 2005 ). African-American and Latino families in TADS: Recruitment and treatment considerations.   Cognitive and Behavioral Practice, 12, 221–229.

Taft, C. T., & Murphy, C. M. ( 2007 ). The working alliance in intervention for partner violence perpetrators: Recent research and theory.   Journal of Family Violence, 22, 11–18.

Tennen, H., Hall, J. A., & Affleck, G. ( 1995 ). Depression research methodologies in the Journal of Personality and Social Psychology : A review and critique. Journal of Personality and Social Psychology, 68 , 870–884.

Tingey, R. C., Lambert, M. J., Burlingame, G. M., & Hansen, N. B. ( 1996 ). Assessing clinical significance: Proposed extensions to method.   Psychotherapy Research, 6, 109–123.

Treadwell, K., Flannery-Schroeder, E. C., & Kendall, P. C. ( 1994 ). Ethnicity and gender in a sample of clinic-referred anxious children: Adaptive functioning, diagnostic status, and treatment outcome.   Journal of Anxiety Disorders, 9, 373–384.

Vanable, P. A., Carey, M. P., Carey, K. B., & Maisto, S. A. ( 2002 ). Predictors of participation and attrition in a health promotion study involving psychiatric outpatients.   Journal of Consulting and Clinical Psychology, 70, 362–368.

Walders, N., Drotar, D. ( 2000 ). Understanding cultural and ethnic influences in research with child clinical and pediatric psychology populations. In D. Drotar (Ed.), Handbook of research in pediatric and clinical child psychology (pp. 165–188).

Walkup, J.T., Albano, A.M., Piacentini, J., Birmaher, B., Compton, S.N., et al. ( 2008 ). Cognitive behavioral therapy, sertraline, or a combination in childhood anxiety.   New England Journal of Medicine, 359, 1–14.

Waltz, J., Addis, M. E., Koerner, K., & Jacobson, N. S. ( 1993 ). Testing the integrity of a psychotherapy protocol: Assessment of adherence and competence.   Journal of Consulting and Clinical Psychology, 61, 620–630.

Weisz, J., Donenberg, G. R., Han, S. S., & Weiss, B. ( 1995 ). Bridging the gap between laboratory and clinic in child and adolescent psychotherapy.   Journal of Consulting and Clinical Psychology, 63, 688–701.

Weisz, J. R., Jensen-Doss, A., & Hawley, K. M. ( 2006 ). Evidence-based youth psychotherapies versus usual clinical care: A meta-analysis of direct comparisons.   American Psychologist, 61, 671–689.

Weisz, J. R., McCarty, C. A., & Valeri, S. M. ( 2006 ). Effects of psychotherapy for depression in children and adolescents: A meta-analysis.   Psychological Bulletin, 132, 132–149.

Weisz, J. R., Weiss, B., & Donenberg, G. R. ( 1992 ). The lab versus the clinic: Effects of child and adolescent psychotherapy.   American Psychologist, 47, 1578–1585.

Weisz, J. R., Weiss, B., Han, S. S., Granger, D. A., & Morton, T. ( 1995 ). Effects of psychotherapy with children and adolescents revisited: A meta-analysis of treatment outcome studies.   Psychological Bulletin, 117, 450–468.

Westbrook, D., & Kirk, J. ( 2007 ). The clinical effectiveness of cognitive behaviour therapy: Outcome for a large sample of adults treated in routine practice.   Behaviour Research and Therapy, 43, 1243–1261.

Westen, D., Novotny, C., & Thompson-Brenner, H. ( 2004 ). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials.   Psychological Bulletin, 130, 631–663.

Wilson, G. T. ( 1995 ). Empirically validated treatments as a basis for clinical practice: Problems and prospects. In S. C. Hayes, V. M. Follette, R. D. Dawes, & K. Grady (Eds.), Scientific standards of psychological practice: Issues and recommendations (pp. 163–196). Reno, NV: Context Press.

Yeh, M., McCabe, K., Hough, R. L., Dupuis, D., & Hazen, A. ( 2003 ). Racial and ethnic differences in parental endorsement of barriers to mental health services in youth.   Mental Health Services Research, 5, 65–77.


Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses

Hypotheses are statements that predict the results of an investigation and can be supported or disproved by the data collected.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of that difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and the results are analysed, psychologists must accept one hypothesis and reject the other.

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.
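The accept/reject decision described above can be sketched in code. The following is an illustrative Python example using entirely made-up scores for two conditions and a simple permutation test (one of several ways to test for a significant difference); the data and condition labels are hypothetical.

```python
import random

random.seed(42)

# Hypothetical memory-test scores for two conditions (illustrative data only).
condition_a = [12, 15, 14, 16, 13, 17, 15, 14]   # e.g. revised with music
condition_b = [10, 11, 13, 9, 12, 10, 11, 12]    # e.g. revised in silence

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(condition_a) - mean(condition_b)

# Permutation test: under the null hypothesis, the group labels are
# interchangeable, so shuffle the pooled scores many times and count how
# often a difference at least as large arises by chance.
pooled = condition_a + condition_b
n_a = len(condition_a)
n_permutations = 10_000
count = 0
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed_diff):   # two-tailed (non-directional) test
        count += 1

p_value = count / n_permutations
if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject the null, accept the alternative hypothesis")
else:
    print(f"p = {p_value:.4f}: retain the null hypothesis")
```

The 0.05 cut-off is the conventional significance level; a two-tailed test is used here because the comparison is non-directional.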

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalizability means the extent to which findings can be applied to the larger population of which the sample was a part.

  • Volunteer sample: where participants pick themselves through newspaper adverts, noticeboards, or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: when a system is used to select participants, picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling: when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling: when researchers find a few participants, and then ask them to find participants themselves, and so on.
  • Quota sampling: when researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed.
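Several of the sampling techniques above can be sketched in a few lines of Python. The population, subgroup labels, and sample sizes below are made up purely for illustration.

```python
import random

random.seed(1)

# Hypothetical target population of 100 numbered people (labels are placeholders).
population = [f"person_{i}" for i in range(100)]
sample_size = 10

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, where
# N = population size / sample size.
n = len(population) // sample_size
systematic_sample = population[::n]

# Stratified sampling: select from each subgroup in proportion to its size.
strata = {"employed": population[:70], "unemployed": population[70:]}
stratified_sample = []
for group in strata.values():
    k = round(sample_size * len(group) / len(population))
    stratified_sample.extend(random.sample(group, k))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Each technique here returns a sample of ten, but note that only the random and stratified samples give every eligible person a chance of selection on each draw.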

Variables

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects: participants’ experiences during the experiment may change them in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
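Random allocation and counterbalancing can both be illustrated with a short Python sketch; the participant labels and group sizes below are hypothetical.

```python
import random

random.seed(7)

participants = [f"p{i}" for i in range(1, 21)]  # 20 hypothetical participants

# Independent design: random allocation to one of two groups.
random.shuffle(participants)
group_a, group_b = participants[:10], participants[10:]

# Repeated measures design with counterbalancing: half the participants
# complete condition A first and half complete condition B first, so any
# order effects are spread evenly across the two conditions.
orders = {}
for i, p in enumerate(sorted(participants)):
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

a_first = sum(1 for order in orders.values() if order == ["A", "B"])
print(f"{a_first} participants do A first, {len(orders) - a_first} do B first")
```

Shuffling before slicing gives every participant an equal chance of ending up in either group, which is exactly what randomization is meant to guarantee.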

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Lab experiments are conducted in a controlled environment. The experimenter manipulates the IV while controlling extraneous variables as tightly as possible.
  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated; it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case Studies

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology, and among the best-known ones carried out were by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer the score is to -1 or +1, the stronger the relationship between the variables. The value can be positive, e.g. +0.63, indicating a positive correlation, or negative, e.g. -0.63, indicating a negative correlation.
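As a rough illustration of how a correlation coefficient is computed, here is a pure-Python sketch of Pearson’s r (a close relative of Spearman’s rho) applied to made-up paired scores; the variables and data are entirely hypothetical.

```python
import math

# Hypothetical paired scores: hours of sleep and mood rating for 8 people.
sleep = [5, 6, 7, 8, 6, 9, 4, 7]
mood = [4, 5, 6, 8, 5, 9, 3, 7]

def pearson_r(x, y):
    """Pearson correlation coefficient: a value between -1 and +1."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Sum of cross-products of deviations from each mean.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(sleep, mood)
print(f"r = {r:.2f}")  # close to +1 here: a strong positive correlation
```

In practice a library routine such as SciPy’s `pearsonr` or `spearmanr` would also report a significance level alongside the coefficient.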


A correlation between variables, however, does not automatically mean that the change in one variable causes the change in the other. A correlation only shows whether there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

Structured interviews

A fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics he/she feels are relevant and answer in their own way. Follow-up questions are posed in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire's other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical problems of deception and lack of informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. Participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e., unclear wording) or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher gets a floor effect: none of the participants can score well or complete the task, so all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.
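Floor and ceiling effects can be checked for in pilot data by seeing what proportion of scores pile up at the extremes of the scale. The following is a minimal sketch; the 50% threshold, function name, and data are arbitrary choices for illustration:

```python
# Sketch: flagging possible floor/ceiling effects by checking how many
# scores sit at the scale's extremes. The 0.5 threshold is arbitrary.

def flag_effects(scores, min_score, max_score, threshold=0.5):
    n = len(scores)
    at_floor = sum(s == min_score for s in scores) / n
    at_ceiling = sum(s == max_score for s in scores) / n
    if at_floor >= threshold:
        return "possible floor effect"
    if at_ceiling >= threshold:
        return "possible ceiling effect"
    return "no obvious floor/ceiling effect"

# Invented pilot scores on a 0-10 task that was too hard:
print(flag_effects([0, 0, 0, 1, 0, 2], min_score=0, max_score=10))  # → possible floor effect
```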

Research Design

In cross-sectional research , a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period. In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency, if a particular measurement is repeated and the same result is obtained then it is described as being reliable.

  • Test-retest reliability : assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
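Test-retest reliability is commonly quantified as the correlation between the scores from the two occasions. A sketch using invented scores and a hand-rolled Pearson correlation (the variable names are hypothetical):

```python
# Sketch: test-retest reliability as the Pearson correlation between
# scores obtained on two occasions. All scores below are invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 12, 15, 18, 20, 22]  # six participants, first testing occasion
time2 = [11, 12, 14, 19, 19, 23]  # same participants, second occasion

print(round(pearson_r(time1, time2), 2))  # → 0.98
```

A coefficient close to +1, as here, indicates the measure produced very similar answers on both occasions, i.e., it is reliable.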

Meta-Analysis

A meta-analysis begins like a systematic review: an aim is identified, and research studies that have addressed similar aims/hypotheses are searched for. The results of the included studies are then statistically combined.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of data than any single study.

Weaknesses: Research designs can vary across the included studies, so they are not truly comparable.
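The core statistical step in combining studies is pooling their effect sizes, typically weighting each study by the inverse of its variance so that more precise studies count for more. A minimal fixed-effect sketch; all effect sizes and variances below are invented:

```python
# Sketch: fixed-effect meta-analytic pooling via inverse-variance weights.
# The study effect sizes and variances are invented for illustration.

studies = [
    {"effect": 0.40, "variance": 0.04},  # hypothetical study 1
    {"effect": 0.25, "variance": 0.02},  # hypothetical study 2
    {"effect": 0.55, "variance": 0.08},  # hypothetical study 3
]

# More precise studies (smaller variance) receive larger weights.
weights = [1 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)

print(round(pooled, 3))  # → 0.336
```

Note the pooled estimate sits closest to study 2's effect, because its small variance gives it the largest weight.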

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, the originality of the findings, the validity of the original research findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing their work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g., reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test appear, ‘on the face of it’, to measure what it is supposed to measure? This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we typically use p < 0.05 (as it strikes a balance between the risks of Type I and Type II errors), but p < 0.01 is used in tests where errors could cause harm, such as introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
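The meaning of the p < 0.05 criterion can be illustrated by simulation: when the null hypothesis really is true, a test at this level should falsely reject it (a Type I error) about 5% of the time. A sketch using a simple z-test on simulated data (the sample size and number of trials are arbitrary choices):

```python
# Sketch: under a true null hypothesis, a test at p < 0.05 falsely
# rejects about 5% of the time. We simulate this with a z-test on
# samples drawn from a population where the null really is true.
import random
import math

random.seed(42)
z_crit = 1.96           # two-tailed critical value for alpha = 0.05
n, trials = 30, 2000
false_rejections = 0

for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # true mean is 0
    z = (sum(sample) / n) * math.sqrt(n)             # z-test, known sd = 1
    if abs(z) > z_crit:
        false_rejections += 1                        # a Type I error

print(round(false_rejections / trials, 3))  # close to alpha = 0.05
```

Lowering the criterion to p < 0.01 (z_crit ≈ 2.58) would reduce the Type I error rate, at the cost of more Type II errors, which is the trade-off described above.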

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, providing full information may lead them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent, or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants fully understand what they have agreed to.
  • Deception involves deliberately misleading participants or withholding information, and should only be used when approved by an ethics committee. Participants should be fully debriefed after the study, but debriefing cannot turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • This can cause bias, as those who stay may be more obedient, and some may not withdraw because they were given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • All participants should have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity is not always possible, as it is sometimes possible to work out who the participants were.



The Science of Psychology

5 Experimental and Clinical Psychologists

Learning Objectives

  • Define the clinical practice of psychology and distinguish it from experimental psychology.
  • Explain how science is relevant to clinical practice.
  • Define the concept of an empirically supported treatment and give some examples.

Who Conducts Scientific Research in Psychology?

Experimental psychologists.

Scientific research in psychology is generally conducted by people with doctoral degrees (usually the  doctor of philosophy [Ph.D.] ) and master’s degrees in psychology and related fields, often supported by research assistants with bachelor’s degrees or other relevant training. Some of them work for government agencies (e.g., doing research on the impact of public policies), national associations (e.g., the American Psychological Association), non-profit organizations (e.g., National Alliance on Mental Illness), or in the private sector (e.g., in product marketing and development; organizational behavior). However, the majority of them are college and university faculty, who often collaborate with their graduate and undergraduate students. Although some researchers are trained and licensed as clinicians for mental health work—especially those who conduct research in clinical psychology—the majority are not. Instead, they have expertise in one or more of the many other subfields of psychology: behavioral neuroscience, cognitive psychology, developmental psychology, personality psychology, social psychology, and so on. Doctoral-level researchers might be employed to conduct research full-time or, like many college and university faculty members, to conduct research in addition to teaching classes and serving their institution and community in other ways.

Of course, people also conduct research in psychology because they enjoy the intellectual and technical challenges involved and the satisfaction of contributing to scientific knowledge of human behavior. You might find that you enjoy the process too. If so, your college or university might offer opportunities to get involved in ongoing research as either a research assistant or a participant. Of course, you might find that you do not enjoy the process of conducting scientific research in psychology. But at least you will have a better understanding of where scientific knowledge in psychology comes from, an appreciation of its strengths and limitations, and an awareness of how it can be applied to solve practical problems in psychology and everyday life.

Scientific Psychology Blogs

A fun and easy way to follow current scientific research in psychology is to read any of the many excellent blogs devoted to summarizing and commenting on new findings. Among them are the following:

  • Research Digest, http://digest.bps.org.uk/
  • Talk Psych, http://www.talkpsych.com/
  • Brain Blogger, http://brainblogger.com/
  • Mind Hacks, http://mindhacks.com/
  • PsyBlog, http://www.spring.org.uk

You can also browse to http://www.researchblogging.org , select psychology as your topic, and read entries from a wide variety of blogs.

Clinical Psychologists

Psychology is the scientific study of behavior and mental processes. But it is also the application of scientific research to “help people, organizations, and communities function better” (American Psychological Association, 2011) [1] . By far the most common and widely known application is the clinical practice of psychology — the diagnosis and treatment of psychological disorders and related problems. Let us use the term  clinical practice  broadly to refer to the activities of clinical and counseling psychologists, school psychologists, marriage and family therapists, licensed clinical social workers, and others who work with people individually or in small groups to identify and help address their psychological problems. It is important to consider the relationship between scientific research and clinical practice because many students are especially interested in clinical practice, perhaps even as a career.

The main point is that psychological disorders and other behavioral problems are part of the natural world. This means that questions about their nature, causes, and consequences are empirically testable and therefore subject to scientific study. As with other questions about human behavior, we cannot rely on our intuition or common sense for detailed and accurate answers. Consider, for example, that dozens of popular books and thousands of websites claim that adult children of alcoholics have a distinct personality profile, including low self-esteem, feelings of powerlessness, and difficulties with intimacy. Although this sounds plausible, scientific research has demonstrated that adult children of alcoholics are no more likely to have these problems than anybody else (Lilienfeld et al., 2010) [2] . Similarly, questions about whether a particular psychotherapy is effective are empirically testable questions that can be answered by scientific research. If a new psychotherapy is an effective treatment for depression, then systematic observation should reveal that depressed people who receive this psychotherapy improve more than a similar group of depressed people who do not receive this psychotherapy (or who receive some alternative treatment). Treatments that have been shown to work in this way are called empirically supported treatments .

Empirically Supported Treatments

An empirically supported treatment is one that has been studied scientifically and shown to result in greater improvement than no treatment, a placebo, or some alternative treatment. These include many forms of psychotherapy, which can be as effective as standard drug therapies. Among the forms of psychotherapy with strong empirical support are the following:

  • Acceptance and commitment therapy (ACT). For depression, mixed anxiety disorders, psychosis, chronic pain, and obsessive-compulsive disorder.
  • Behavioral couples therapy. For alcohol use disorders.
  • Cognitive behavioral therapy (CBT). For many disorders including eating disorders, depression, anxiety disorders, etc.
  • Exposure therapy. For post-traumatic stress disorder and phobias.
  • Exposure therapy with response prevention.  For obsessive-compulsive disorder.
  • Family-based treatment. For eating disorders.

For a more complete list, see the following website, which is maintained by Division 12 of the American Psychological Association, the Society for Clinical Psychology: http://www.div12.org/psychological-treatments

Many in the clinical psychology community have argued that their field has not paid enough attention to scientific research—for example, by failing to use empirically supported treatments—and have suggested a variety of changes in the way clinicians are trained and treatments are evaluated and put into practice. Others believe that these claims are exaggerated and the suggested changes are unnecessary (Norcross, Beutler, & Levant, 2005) [3] . On both sides of the debate, however, there is agreement that a scientific approach to clinical psychology is essential if the goal is to diagnose and treat psychological problems based on detailed and accurate knowledge about those problems and the most effective treatments for them. So not only is it important for scientific research in clinical psychology to continue, but it is also important for clinicians who never conduct a scientific study themselves to be scientifically literate so that they can read and evaluate new research and make treatment decisions based on the best available evidence.

  • American Psychological Association. (2011). About APA . Retrieved from http://www.apa.org/about ↵
  • Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2010). 50 great myths of popular psychology . Malden, MA: Wiley-Blackwell. ↵
  • Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). (2005). Evidence-based practices in mental health: Debate and dialogue on the fundamental questions . Washington, DC: American Psychological Association. ↵

An academic degree earned through intensive study of a particular discipline and the completion of a set of research studies that contribute new knowledge to the academic literature.

The diagnosis and treatment of psychological disorders and related problems.

A treatment that has been shown through systematic observation to lead to better outcomes when compared to no-treatment or placebo control groups.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

CCRPS Clinical Research Training

What is a Clinical Research Psychologist?

Clinical psychology research is a specialization within clinical research: the study of behavioral and mental health. In many ways, it is as important to the nation's health and well-being as medical research.

In the same way that medical scientists work to understand the prevention, genesis, and spread of various diseases, clinical research psychologists conduct rigorous psychological research studies to understand, prevent, and treat psychological conditions as they apply to individuals, couples, families, cultures, and diverse communities.

Empirical results gathered from psychological research studies guide practitioners in developing effective interventions and techniques that clinical psychologists employ: proven, reliable results that improve lives, mend troubled relationships, manage addictions, and help manage and treat a variety of other mental health issues. Clinical psychology integrates science with practice and produces a field that encourages a robust, ongoing process of scientific discovery and clinical application.

Clinical research psychologists integrate the science of psychology and the treatment of complex human problems with the intention of promoting change. The four main goals of psychology are to describe, explain, predict and control the behavior and mental processes of others. This approach allows clinical researchers to accomplish their goals for their psychological studies, which is to describe, explain, predict, and in some cases, influence processes or behaviors of the mind. The ultimate goal of scientific research in psychology is to illustrate behaviors and give details on why they take place.

Clinical psychologists work largely in health and social care settings including hospitals, health centers, community mental health teams, Child and Adolescent Mental Health Services (CAMHS) and social services. They often work as part of a team with other health professionals and practitioners.

Salary and Education

The mean annual salary of a clinical psychologist is about $69,000; however, those with doctoral degrees can earn salaries of $116,343 or more. This industry is highly stable and growing, as psychological research becomes more important to various other industries.

If you want to become a clinical research psychologist, you need a master’s or doctorate degree. In these graduate programs, you will be trained in how to navigate this large body of research. In addition, many clinical psychology students are able to make significant contributions to the field during their education by assisting in labs and learning valuable field knowledge.

Research in clinical psychology is vast, containing hundreds if not thousands of topics. By engaging in research, we investigate new ways to understand the human mind and develop solutions to enrich the lives of others; many students stay current and up to date with psychology research at universities and research labs across the world.

Take courses from CCRPS and learn more on how to become a clinical research professional.

Clinical Research Coordinator Training

Pharmacovigilance Certification

CRA Training

ICH-GCP Training

Clinical Trials Assistant Training

Advanced Clinical Research Project Manager Certification

Advanced Principal Investigator Physician Certification

Medical Monitor Certification


Department Of Psychology and Neuroscience

Our Commitment to Research

Our faculty ranks include accomplished researchers who have made substantial contributions to the advancement of science in our field. The Department of Psychology and Neuroscience is consistently ranked in the top four departments in the College in terms of the external grant funding that is generated by the faculty and we were recently ranked 1st in the National Science Foundation survey for total Research and Development Funding. Read more about our reputation and accomplishments .

Developmental Research

Behavioral and Integrative Neuroscience

The primary focus of Behavioral and Integrative Neuroscience is to understand the biological basis of behavior. A central research theme in our program is to understand biological mechanisms underlying drug addiction including neural mechanisms mediating drug-seeking behaviors, pharmacological mechanisms underlying the analgesic and rewarding properties of abused substances, and the effects of abused substances on the immune system. In this regard, faculty conduct research using state-of-the-art experimental approaches including sophisticated behavioral assays, molecular and cell biology techniques, neuroimaging, electrophysiology, electrochemistry, and molecular genetic tools. Read more about our faculty’s research interests .

Clinical Psychology

Faculty research in Clinical Psychology encompasses a wide range of topics, ranging from the causes of psychological disorders and the psychological mechanisms underlying symptoms, to applied domains involving the development and evaluation of intervention and assessment instruments. Several major themes emerge in faculty research, including the study of the nature of psychopathology, the effectiveness of psychotherapy interventions, prevention, and therapy, health psychology, interpersonal relationships, and the study of ethnic minority issues related to psychological health. Read more about our faculty’s research interests .

Cognitive Psychology

Cognitive Psychology is the scientific study of mental processes underlying behavior, a broad research area encompassing the study of attention, perception, memory, language, reasoning and problem-solving. These research areas are strongly represented by the faculty of the Cognitive Psychology Program along with research on cognitive aging and cognitive neuroscience. The primary research method is behavioral experimentation with humans (both college-aged adults and healthy older adults). Other important methods use neuroimaging technologies (such as fMRI and ERP), which allow us to correlate neural processing with cognitive function, eye-tracking technologies, which allow for the study of reading and visual perception, and experimentation with special neurological populations, whose cognitive deficits can inform our understanding of the effects of brain damage on normal cognition. Read more about our faculty’s research interests .

Developmental Psychology

Faculty in the Developmental Psychology Program are interested in many of the same cognitive and social phenomena as our colleagues in other Programs, but with a specific focus on how these phenomena emerge, stabilize, and change as a function of maturation and experience. Thus, we conduct research on infancy, childhood, and adolescence, using either cross-sectional designs in which age is an independent variable or longitudinal designs in which individuals are assessed over days, months, years, or generations. We imbed our research within the physiological context of neural, hormonal, and physical maturation, and within environmental contexts such as the parent-child dyad, the family, the classroom, and the community. Read more about our faculty’s research interests .

Quantitative Psychology

Internationally known for research and training in quantitative and statistical methods for psychological research, our Quantitative Psychology faculty have methodological interests in measurement theory, survey methods, and methods for analysis of correlational data. Our methodological research focuses on education diversity, structure of personality, issues in health psychology, statistical techniques for the study of change over time, and development and adaptation of modeling and analysis tools that are suited to evaluating linear and nonlinear dynamical systems models. All of our faculty are active in interdisciplinary substantive research in a variety of fields, including educational testing, substance abuse, child development, developmental psychopathology, and diversity in education. In this work, these faculty bring their quantitative and substantive expertise to enhance design and analysis of empirical research projects. Read more about our faculty’s research interests .

Social Psychology

Faculty research in Social Psychology encompasses many core areas of the field, with a particular strength in affective processes as they relate to well-being, attitudes and stereotyping, and social judgment and decision-making. Faculty research interests are broad and include attitudes and attitude change, decision making, social cognition, emotions, stereotyping and prejudice, interpersonal processes, and group interactions. Read more about our faculty’s research interests .


Stony Brook University


Clinical Psychology Overview


The Doctoral Program in Clinical Psychology

The Stony Brook Ph.D. program in Clinical Psychology began in 1966. Based on chair rankings in US News and World Report, it has been ranked among the very top clinical programs in the United States for the past several decades, and it has a long tradition of strong publication rates by both faculty and graduates (Mattson et al., 2005; Roy et al., 2006). In the 2020 US News and World Report rankings , the Stony Brook Clinical Psychology doctoral program was ranked 3rd in the country. The clinical program was among the first in the country to espouse the behavioral tradition in clinical psychology. Currently, the program retains its behavioral roots, but has evolved to encompass a broader set of perspectives that are oriented around an empirical approach to clinical psychology. Our goal is to graduate clinical scientists who approach psychological problems from an evidence-based perspective and who are also skilled clinicians. As such, students receive research and clinical training in a broad range of approaches. Our program is most suited to students who are interested in pursuing academic and research-related careers.

Accreditation

The program is accredited by PCSAS (Psychological Clinical Science Accreditation System) through 2030. In addition, the program is a member of the PCSAS Founder's Circle. PCSAS provides rigorous, objective, and empirically based accreditation of Ph.D. programs in scientific clinical psychology. Its goal is to promote superior science-centered education and training in clinical psychology, increase the quality and number of clinical scientists contributing to the advancement of public health, and enhance the scientific knowledge base for mental and behavioral health care. PCSAS accreditation is in line with our program’s commitment to a clinical science training model.

The program is accredited (inactive) by the APA CoA (Commission on Accreditation, American Psychological Association, 750 First Street, NE, Washington, DC 20002-4242, Phone: 202-336-5979). The program was most recently accredited by APA CoA in the spring of 2018, and, at the time, received full accreditation until 2028.

Statement On Diversity

The Psychology Department and the clinical program respect and value diversity. We view diversity broadly, including (but not limited to) age, race, ethnicity, national origin, gender, gender identity, sexual orientation, socioeconomic status, religion, and ability status. Diversity in our student body is an important priority and contributes to the strength of our department. Our Diversity Committee, composed of faculty and students, is dedicated to promotion of awareness, support, and dialogue with regard to all aspects of diversity in research and clinical training.

The research interests of the core faculty center on depressive disorders (child, adolescent, adult), anxiety disorders (child, adolescent, adult), autism spectrum disorders, personality, child maltreatment, close relationship functioning (e.g., discord and aggression among couples, romantic competence among adolescents and adults, relationship education), lesbian, gay, and bisexual issues (among youth and adults), emotion regulation processes (e.g., cognitive, interpersonal, neurobiological), and emotion and attention processes in normal and pathological conditions.

Research and Clinical Facilities

Departmental: Faculty maintain active laboratories for research and graduate training (see individual faculty pages for further description). Clinical facilities include the Krasner Psychological Center (KPC) and its affiliate, the Anxiety Disorders Clinic, which are training, research, and service units that provide psychological services and consultation to the community and campus, and the University Marital Therapy Clinic, which provides consultation, assessment, and therapy for couples and individuals in the community experiencing relationship difficulties and serves as a center for research evaluation of couples.

Campus: Collaborative relationships exist with the Department of Psychiatry, the University Counseling and Psychological Services, and the Center for Prevention and Outreach, where students can engage in research and clinical activities.

Off-campus: Affiliations have been established with numerous agencies on Long Island and in the surrounding areas, which provide opportunities for clinical externships and research collaboration.

Program Requirements

Official program requirements are detailed on our Program Requirements page. More generally, the program is designed to provide students with competencies in research, clinical work, and teaching through coursework, research mentoring, and clinical supervision. Students follow a program of coursework through their first 3 to 4 years in the program that includes courses pertaining to the foundations of clinical psychology (e.g., psychopathology, assessment, and intervention), research methods and statistics, and ethics. Students are also required to take courses in other areas of psychology to increase breadth of training.

Students become actively involved in a research lab upon arrival in the program and are required to complete two projects by the end of their third year in order to advance to candidacy, which is followed by the doctoral dissertation. Virtually all students present papers at major professional conferences and publish at least one (and often many) papers during the course of their graduate training.

Clinical training, under the supervision of area faculty, begins in the second year of the program in our Krasner Psychological Center (KPC) and can continue until the internship year. Prior to internship, many students choose to complete externships at local agencies and hospitals in addition to their training in the KPC. Throughout the program, students often work as teaching assistants and are required to complete at least two semesters of substantial direct instruction of undergraduates, which involves lecturing in undergraduate classes. Students typically complete the program, including the internship year, in 6 years. For more information on time to completion, see Student Admissions, Outcomes, and Other Data on this website.

Admission to the Program

Please visit the Clinical Program’s Admissions FAQs for information about applying to the program. This document provides our application and admission policies, and our recommendations for preparing your application. We encourage applicants to prepare their application accordingly.

IMPORTANT! APPLICANTS FOR FALL 2022 ADMISSIONS AND BEYOND: If you are admitted to our program for Fall 2022 and choose to attend, you will graduate from a program that is accredited only by PCSAS. You will not graduate from an APA accredited program, nor will any subsequent entering classes.

The program typically receives over 300 applications (and recently many more) and has an entering class of 4 to 8 students. For information on characteristics of accepted applicants see Student Admissions, Outcomes, and Other Data on this website.

In line with the Psychology Department’s value of diversity, the clinical program encourages applications from a diverse range of applicants, including (but not limited to) applications from people of different ages, races, ethnicities, national origins, gender identities, sexual orientations, socioeconomic statuses, religions, and ability statuses.

As a member of the Council of University Directors of Clinical Training (CUDCP), the Clinical Psychology program at Stony Brook University adheres to CUDCP’s policies and guidelines for graduate school admissions, offers, and acceptances. For additional information about these policies, please visit this page.

Psychology GRE Test for Clinical Psychology Admissions:

Neither the GRE general test nor the Psychology subject test is required for application or admission to the program.  In fact, to ensure fairness in our application review process, we do not accept general or subject test GRE scores as part of your application. Even if you have taken these exams, please do not include your scores on your CV or supplementary materials.

The Clinical Program has an outstanding placement record. Of all students graduating since 2004, the vast majority are in positions in which they function as clinical scientists (e.g., academic or research positions, research post-docs, clinical settings that involve research and/or the provision and dissemination of evidence-based approaches to treatment). Our students’ careers typically emphasize the scientific generation of new knowledge (in the form of research engagement, publishing, presenting, etc.) and the widespread dissemination of such knowledge (in the form of teaching, mentoring, supervision, consultation, and program and policy development). Our students are also exceptionally well trained in science-based clinical practice, and their careers typically include service provision.


Physician Assistant Career Paths: Clinical Practice vs. Clinical Research

Whether you’re already a practicing physician assistant (PA) or are still studying to become a PA, you’re likely aware that there is more than one exciting career pathway available to you. While 93.7 percent of physician assistants choose clinical practice after completing their studies and certification, others may decide to pursue a career in clinical research. Some PAs even do both, participating in clinical research trials within a practice setting in conjunction with caring for a panel of patients. 

What to Expect as a PA in Clinical Research

Clinical research trials are critical to the advancement of medicine. Not only are they used to develop and test potentially lifesaving treatments, but they can also be designed to further the understanding of disease processes and short- and long-term outcomes. 

Physician assistants employed in clinical research participate in a variety of ways—from recruiting, screening, and retaining participants to implementing trial protocols. They often serve as clinical research coordinators, and may even advance to co-investigator and principal investigator positions.

Principal—or primary—investigators (also known as PIs) are responsible for running clinical trials according to study protocols, keeping accurate records, and reporting adverse effects. Co-investigators (also known as sub-investigators) work under the PI and complete many of the same duties. They also supervise the clinical research coordinators who are responsible for monitoring trial-related activities and the clinical research associates who are part of the research team. 

Caring for patients who are enrolled in clinical trials is an increasingly common role for physician assistants as well. These PAs provide patient care to study participants in much the same way they would within clinical practice.

How to Become a Clinical Research PA

Many physician assistants who work in clinical research learn through on-the-job training. Often, their entrance into the field is through a position working with a physician or on a medical team that’s involved in clinical trials. This may happen directly after certification or after years in clinical practice. Either way, knowledge of the clinical research lifecycle, an understanding of biostatistics, or a strong desire to learn more in these areas will be helpful.

If you think you may want to advance to the level of co-investigator or primary investigator, earning a doctoral degree may give you an advantage over others seeking promotion to these leadership positions. Look for PA doctorate programs with a focus on clinical research.

It can also be helpful to connect with other physician assistants who are already involved in research. These PAs can give you a firsthand account of their experience and may be willing to refer you for available positions and serve as mentors. 

Other ways to find your first opportunity include searching for PA jobs that combine clinical practice with clinical trials. And, if you’re still a PA student, you can pursue opportunities within your school’s academic research lab or at local offices of research attached to your academic center. 

Pros and Cons of Choosing Both Clinical Research and Clinical Practice as a PA

Because clinical research trials often test developing therapies and explore the forefront of medical physiology, physician assistants who choose to work in both clinical research and clinical practice have a serious advantage over their counterparts who focus on clinical practice alone. 

These PAs gain a direct understanding of how treatment is evolving, and they can incorporate that knowledge into their clinical practice. Rather than waiting for a medical body (such as the American Medical Association) to put together new treatment guidelines years down the line, they’re able to treat their patients according to the latest discoveries now.

Of course, there are also potential drawbacks when working in both research and practice. These include the possibility that your research work may not be factored into your clinical salary (resulting in more work for the same pay), and the need to cover unbudgeted travel costs for attending conferences if you’re asked to present your research. 

Research Assistant Position @ Children's National Hospital, Washington, DC

The ADHD & Learning Differences Program is hiring up to two full-time clinical research assistant/coordinator positions to work with the director to coordinate federally funded research studies and community engagement activities. If you are interested in resilience, digital mental health interventions, and/or school/community implementation and looking for ways to prepare for graduate school, consider becoming part of a collaborative clinical research team under the supervision of Melissa Dvorsky, PhD at Children’s National Hospital in Washington, DC. The research assistant will be joining a highly productive team consisting of faculty, two postdoctoral fellows, doctoral level graduate students, and undergraduate students.

Responsibilities

  • Assist in the development and implementation of school/community-based research.
  • Coordinate a randomized clinical trial and/or longitudinal developmental studies, including school and participant recruitment, diagnostic assessment, and outcome monitoring.
  • Support community partners including school mental health providers in implementing evidence-based practices in their settings.
  • Work with the DC, MD, and VA community to support programs that ensure all youth the opportunity to reach their highest level of mental health and well-being.
  • Enroll participants, administer, and track data collection.
  • Coordinate data management and analysis.
  • Supervise and train undergraduate volunteers and delegate lab responsibilities.
  • Assist with writing peer-reviewed publications and preparing conference presentations and grant applications.
  • Work with the Institutional Review Board (IRB) to maintain research ethical compliance.
  • Contribute to projects focused on promoting youth mental health equity so that all individuals have access to quality mental health care regardless of race, ethnicity, linguistic literacy/proficiency, gender, socioeconomic status, sexual orientation, disability status, or geographical location.

Requirements

  • Bachelor’s degree with previous experience in the conduct of clinical research and/or behavioral interventions
  • Excellent organizational, interpersonal, and communication skills
  • Two-year minimum commitment
  • Reliable transportation
  • Evening hours for occasional participant enrollment

To apply, please complete the following survey:  https://bit.ly/ADHDRA  

Please contact Amanda Steinberg with any questions: [email protected]

Review of applications will begin immediately.

